Pelagic Thoroughbreds - quickfox
https://www.newcriterion.com/issues/2019/2/pelagic-thoroughbreds
======
hprotagonist
“pelagic”, for those wondering, means “relating to the open seas”.
i’m mostly familiar with the term from “pelagic zone”, which is the upper
stratum of the water column where most fish live.
------
CoryOndrejka
Flying Cloud is a wonderful innovation story that connects to the larger story
of Matthew Maury, who used US Navy datasets to transform how ships navigated.
Gave a talk about this at the US Naval Academy a few years ago and built a
bunch of visualizations, since NOAA still hosts the data sets:
[http://ondrejka.net/history/2014/02/28/maury.html](http://ondrejka.net/history/2014/02/28/maury.html)
------
twic
> Trade with China is as old as the republic itself, blossoming initially out
> of Salem, Massachusetts, and then later usurped by New York–based merchants.
No Transcontinental Railroad, no West Coast ports, no Panama Canal - so did
they sail from Boston or New York, along the length of the Americas, through
the Strait of Magellan, and then the long way across the Pacific to China?
That's quite a trip. Or was there a portage in Central America somewhere?
The wikipedia article on US - China trade in the era prior to that of the
clippers is rather interesting:
[https://en.wikipedia.org/wiki/Old_China_Trade](https://en.wikipedia.org/wiki/Old_China_Trade)
I had no idea that America bought opium from Turkey, or that ginseng was grown
in Appalachia for export to China in the early 19th century!
~~~
madhadron
Not through the straits. Around Cape Horn! An old British admiral started his
career as a seaman on one of the last of the big square riggers that made that
voyage in the 1920's, and brought along one of the earliest handheld video
cameras. It's amazing footage ([https://www.amazon.com/Around-Cape-Horn-
Johnson-Sailing/dp/B...](https://www.amazon.com/Around-Cape-Horn-Johnson-
Sailing/dp/B000W8MMO2)). It gives an amazing view of just how huge the waves
are sailing there.
That wasn't the China trade, though. That voyage was bringing bat guano from
Chile as fertilizer.
~~~
twic
Online - i am definitely not keen to do this trip:
[https://archive.org/details/IrvingMcClureJohnsonAroundCapeHo...](https://archive.org/details/IrvingMcClureJohnsonAroundCapeHornOriginalFootageFromOnboardThePekingFilmedIn1929)
~~~
davidp
Superb video, and in the public domain. Thanks so much for the link!
------
Qwertystop
Title left me thinking of horse-breeding (on boats) in international waters,
presumably for legal or tax reasons.
Ask HN: Why is “My Bathroom Mirror is Smarter Than Yours” being posted so much? - davelnewton
This has been posted so many times, possibly dozens. Why? How?
Is someone trying to game this Medium story for some reason? If so, why does HN keep allowing it to appear over and over?
(14 times at last count, with tweaky URL settings to deliberately bypass HN's dupe posting filter.)
======
ocdtrekkie
Medium seems to have some URL cruft at the end of it that is unique for
different users, and it doesn't seem like HN knows how to dedupe that.
~~~
gus_massa
Duplication is a usual problem with Medium stories.
In this case the story is interesting enough to submit and gain a few points,
but not enough to get to the front page, so it's not easy to link to the
previous main discussion and try to keep all the comments there.
The "Banned by Tesla (I)" and "Banned by Tesla (II)" stories have a similar
problem. I think one of them was luckier, but I'm not paying too much
attention because I think it's a pointless discussion.
I remember a few previous cases where one submission was very successful (front
page and many points+comments) and it was unintentionally resubmitted many
many many times: "The resolution of the Bitcoin experiment", "The Sad State of
Web Development", "Paul Graham Is Still Asking to Be Eaten"
~~~
ocdtrekkie
Medium seems a common enough share source these days that it might be worth HN
looking into filtering out the junk at the end of the URL for deduplication
checking.
~~~
dang
It's on our list.
------
nkijak
Don't be jelly
------
echolima
I'm sure he has invested some very serious cash into something he should not
be spending that much time looking at, and now he wants a return on investment
by building a readership; cue the submission bots/friends/family to spread the
word.
~~~
davelnewton
I wish I had that many friends; this is nuts.
Show HN: Get anonymous feedback from your colleagues - ali_ibrahim
Hi everyone,
We have developed a platform to help tech professionals solicit anonymous feedback from colleagues and discover tech content relevant to their skills. Check us out at www.pleasantfish.com
We'll be super happy if you find it useful!
======
GFK_of_xmaspast
Colleagues as in co-workers? How anonymous could that possibly be?
~~~
ali_ibrahim
Yes! By colleagues we mean both former and current coworkers. We try three
things:
1. Allowing coworkers to write anonymous private messages to the user.
2. Asking for feedback on the user's technical skills through a rating system.
Raters rate users as beginner, intermediate, expert, or advanced.
3. Four short questionnaires with 7 multiple-choice questions each, so users
get reviewed on their qualities. These questionnaires cover their
professionalism, collaboration, leadership, and interpersonal skills.
The data collected from points 2 and 3 above is then aggregated based on the
colleague relationship (current coworkers carry more weight than former ones)
and presented to the user in an easy-to-understand graphical format that
identifies their strengths and weaknesses. Throughout, the identity of the
person who rated is never revealed; we just tell the user that he has been
rated by one of his coworkers at one of the companies he has worked at and
listed on his profile.
On top of that, for each quality and skill we have assembled a list of
articles curated from top sources so that the user can improve those skills.
He can also subscribe to other skills he is interested in learning, and they
are made available to him in his personalized feed.
Hope that helps!
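For illustration only, a minimal Python sketch of the kind of weighted
aggregation described above; the 0.7/0.3 weights and the category names are
made up for the example, not the product's actual values:

    # Hypothetical rating aggregation weighted by coworker relationship.
    RELATIONSHIP_WEIGHT = {"current": 0.7, "former": 0.3}  # assumed weights

    def aggregate(ratings):
        """ratings: list of (relationship, score) pairs, score on a 1-4 scale."""
        total = sum(RELATIONSHIP_WEIGHT[rel] * score for rel, score in ratings)
        weight = sum(RELATIONSHIP_WEIGHT[rel] for rel, _ in ratings)
        return total / weight if weight else None

    print(aggregate([("current", 4), ("former", 2), ("former", 3)]))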
A Taxonomy of Internet Chum (2015) - firloop
https://www.theawl.com/2015/06/a-complete-taxonomy-of-internet-chum/
======
Bartweiss
> _Clicking on a chumlink — even one on the site of a relatively high-class
> chummer, like nymag.com — is a guaranteed way to find more, weirder, grosser
> chum. The boxes are daisy-chained together in an increasingly cynical, gross
> funnel; quickly, the open ocean becomes a sewer of chum._
This seems like a particularly interesting point.
Presumably 'chum' ought to be higher-impact than the source page, so as to
beat out "Related Articles" links, other open tabs, or leaving the computer.
(After all, you just read an article of mental impact X, so you're someone who
cares about stories of >X value.)
But there's a limit on how fast you can ramp up - you can't go straight to sex
and death without provoking whiplash and disgust. So we get the weird
progression that's come to define the internet; the outbound links for a given
page are always weirder than the page itself.
Hence "the weird part of Youtube". Hence the 4chan -> Reddit -> Buzzfeed
progression by which content is generated in strange spots, then sanitized for
mass consumption. And hence the bizarre sponsored-content funnel: stock news
leads to stock tips leads to pyramid schemes leads to "BUY GOLD!" Either you
cash out somewhere (some of those sponsored links go to products, not
'stories'), or you stick to news and teaser sites, and head arbitrarily far
down the rabbit hole.
~~~
hyperpape
I believe there's also a reputation issue. Established brands typically don't
want their brand associated with disreputable things. So once your site is
peddling chum, the New York Times doesn't want to advertise, even if you'll
offer them cheap rates.
This is apparently why the popup ad was first created:
[https://www.theatlantic.com/technology/archive/2014/08/adver...](https://www.theatlantic.com/technology/archive/2014/08/advertising-
is-the-internets-original-sin/376041/?single_page=true).
------
mattkevan
Although a few years old, this article gets more relevant all the time as this
kind of chumvertising gets incorporated into ‘native’ ads.
Chum seems to go in phases, and always for the worse. A while ago it was
‘Dermatologists hate her!’, recently it’s ‘What $celebrity looks like now will
shock you!’
Like the recent debate about chum kids’ YouTube videos, it’s probably
automatically generated based on what gets the most ‘engagement’.
------
jasode
Internet chum has a lot in common with paper-based chum like tabloid
newspapers:
[https://www.google.com/search?tbm=isch&q=national+enquirer+s...](https://www.google.com/search?tbm=isch&q=national+enquirer+star+globe+tabloids)
I see shared techniques of exclamation marks in headlines and showing human
faces in distress...
- faces caught mid-expression with anger and mouths open like wolves showing
fangs (Michelle Obama, Tom Cruise, Dr. Phil)
- Angelina Jolie crying
- photos of celebrities in caskets
In contrast, People Magazine still has some exclamation marks but a lot less
of it than the tabloids:
[https://www.google.com/search?q=people+magazine+covers&sourc...](https://www.google.com/search?q=people+magazine+covers&source=lnms&tbm=isch)
(But many would consider People, US, Cosmopolitan, etc to be "chum" as well.)
------
camtarn
Note: if you have trypophobia (fear/disgust of irregularly spaced holes -
especially in organic things) watch out for the images in this article. The
example of a 'chumbox' (grid of spammy links) includes an image which made me
feel a bit ill, and because it's repeated all down the article, I had to just
stop reading :(
~~~
grkvlt
The act of translating some random noun ('hole' in this case) and the word
'fear' into Greek does not automatically mean that a recognised mental
disorder involving fear of that particular thing exists. In fact, Wikipedia
cites several sources that explain this is merely a 'proposed' mental disorder
[0] and is not officially recognised.
0.
[https://en.wikipedia.org/wiki/Trypophobia](https://en.wikipedia.org/wiki/Trypophobia)
~~~
fenwick67
The fear of any particular thing is recognized in the DSM as a "specific
phobia". Irrational fear of cotton balls, holes in a pattern, the color orange
etc. all fit under this umbrella.
~~~
grkvlt
OK, that's a good point, but I still think in this case it's better to simply
say that people have "a phobia of holes or objects with a pattern of holes"
rather than translating the word holes into Greek to create a complicated new
word for no real reason except to sound 'medical' and impressive.
------
PaulHoule
What I don't get is that there is so little diversity of Chum. It seems like
the same 6 advertisers are funding the whole thing.
~~~
0xCMP
And also, what do they actually get from this? Affiliate links? How do they
make money?
~~~
duskwuff
Probably some combination of:
* "Funnels" to affiliate marketing products (like the diet pills referenced in some of the ads).
* Similarly, funnels to sign up for marketing email lists.
* Driving traffic to other pages with more lucrative advertisements on them, which is every bit as circular as it sounds.
~~~
watmough
Agreed. Sad little boxes of anti-aging pills regularly appear at the door for
my mother-in-law, who is a prime target for these old people / skin thing
funnels.
------
wffurr
I found myself wondering what the NY Magazine art critic would think about
having "chum" at the end of their piece, and then decided they would say
"those ads pay my salary, artistic integrity is for schmucks".
On Hiring and FizzBuzz - scrabble
http://topherlandry.wordpress.com/2014/08/31/on-hiring-and-fizzbuzz/
======
thebear
I honestly don't know if it's the mathematician in me, or if I'm just a cranky
person, or if I actually have a valid point, but if I were given the FizzBuzz
task the way it is worded here, I would reject it on the grounds that it is
self-contradictory:
'for multiples of three print “Fizz” instead of the number [...]. For numbers
which are multiples of both three and five print “FizzBuzz”'
So for the number 15, for example, I am being asked to print "Fizz" instead of
the number, but I am also asked to print "FizzBuzz". That cannot be done. A
valid way of formulating the problem would be:
Write a program that prints the numbers from 1 to 100. But for multiples of
three _that are not multiples of five_ print “Fizz” instead of the number, and
for multiples of five _that are not multiples of three_ print “Buzz” instead
of the number. For numbers which are multiples of both three and five print
“FizzBuzz” instead of the number.
An alternate way of saying it would be:
Write a program that prints the numbers from 1 to 100, except for numbers that
are multiples of three or five. For multiples of three print “Fizz”. For
multiples of five print “Buzz”.
(Edit: no, the above doesn't work, because it would allow you to print
"BuzzFizz" for numbers like 15. You'd still have to add, 'For numbers which
are multiples of both three and five, print “FizzBuzz”.' So it's really just
the clause "instead of the number" in the original formulation that causes the
contradiction.)
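For what it's worth, a minimal Python sketch of the non-contradictory
formulation, checking the "multiple of both" case first so it never conflicts
with the plain Fizz/Buzz cases:

    # FizzBuzz per the reformulated (non-contradictory) statement.
    for n in range(1, 101):
        if n % 15 == 0:
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)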
~~~
mcv
It's still a good test. I wouldn't hire you if you insist on making a problem
out of something you understand perfectly well.
------
chrisbennet
I'm in the camp who thinks that fizzbuzz is a "do you know the modulus
operator?" test.
I use the modulus operator several times a week but I realize that the
software world is a pretty big place and people do _not_ all know the same
things that I do because they don't have any reason to.
For example: I never have cause to use a hash table. It's not something
applicable to the stuff I work on. A world where hash tables aren't useful is
probably unimaginable to some of you. I can assure you, I'm not an idiot - my
mother had me tested ;-) I just work in a different corner of the programming
space than the mainstream developer.
------
raymondh
FWIW, Trello's FizzBuzz has more meat to it than this article suggests :-)
~~~
scrabble
It really is an excellent FizzBuzz, and I feel like it gets to the root of
what FizzBuzz should be accomplishing -- does someone understand basic
programming constructs like loops, and are they able to analyze and solve the
problem?
The people that I've shown it to in person tend to overthink it on first go,
going so far as to talk about rainbow tables to solve the hash. In fact, if
you spend the time to look at the hashing algorithm itself it's much easier
(and is also a terrible "hash.")
------
mcv
You can even implement FizzBuzz without being aware of the modulo operator.
It's not that hard to write your own test that does the same thing.
~~~
JoeAltmaier
Sure you could have two counters, one initialized to 3 and one to 5, that are
decremented each time thru the loop. When they reach zero print Fizz or Buzz
and reinitialize.
Or you could create a finite list, initialize each member to its index, then
go through by 3's and 5's and replace a number with Fizz or Buzz, or if not a
number just add it.
Or you could create a program that generates print statements that print the
same, then run that.
Or you could write a scraper that searches the web for 'fizzbuzz solution' and
print the text on that page.
Or...
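A minimal Python sketch of the first (two-counter) variant, for concreteness:

    # FizzBuzz with countdown counters instead of the modulus operator.
    fizz, buzz = 3, 5
    for n in range(1, 101):
        fizz -= 1
        buzz -= 1
        out = ""
        if fizz == 0:
            out += "Fizz"
            fizz = 3
        if buzz == 0:
            out += "Buzz"
            buzz = 5
        print(out or n)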
~~~
mcv
Or you take the number, divide it by 3, round down, multiply by 3, and if it's
the same as the original, you print Fizz. Same effect as the modulo operator,
but without the modulo operator.
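That check, as a one-line Python sketch:

    # Divisibility test via floor division instead of modulo.
    def divides(d, n):
        return (n // d) * d == n

    print(divides(3, 15), divides(3, 16))  # True False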
Gender bias in open source: Pull request acceptance by gender [Preprint] - jasonhoyt
https://peerj.com/preprints/1733/
======
kriro
I like how this paper is constructed. They form a bunch of hypotheses and test
them and aren't afraid to be wrong. Rather refreshing since the typical papers
I read are more of the "here's our successful test" variety. I feel like a
supplementary qualitative analysis (code quality) would be helpful. My
personal hypothesis is that women self-select and the pool of female
contributes has a higher average skill than the pool of male contributors.
I have no evidence to back this up but by gut says that it's still socially
harder for women to become developers and as a result the ones that "survive"
tend to be better on average. While this is a bias I'd argue that the major
reason (cause) for the acceptance/rejection of pull requests is the quality of
said request which would argue against a bias in the acceptance. I'm not sure
how to phrase it well but in summary I think there's a social bias against
women in programming (as a career) but I suspect code quality is the main
cause of accept/reject decisions and there is no bias there.
------
hamax
Was this flagged into obscurity while I was reading the paper? I can't find it
on the first five pages anymore. Pretty sad.
~~~
stalled
There was a more successful repost 6 hours later that got some discussion:
[https://news.ycombinator.com/item?id=11074587](https://news.ycombinator.com/item?id=11074587)
Tablet dollars: Android passed Apple for first time in Q3 - arunitc
http://tech.fortune.cnn.com/2013/11/15/apple-ipad-huberty-idc/
======
codelust
Notable point being this part:
"Working from the tracking numbers IDC released two weeks ago".
Extrapolation, like this, has very limited use and is vaguely indicative at
best and nowhere close to being reliable for anything other than fancy-
sounding talking points.
~~~
bigdubs
/ useful for click-bait headlines.
------
drzaiusapelord
Not too long ago, android buyers were regularly shouted down on HN with
"There's no tablet market, there's only an ipad market" when discussing how
fun and usable the N7 is.
Interesting how things have turned out. I guess a low-cost quality 7-inch
tablet was something the market was demanding but whatever cargo cultism
Jobs/Cook and Ive subscribed to made this an impossibility for a long time.
I personally have an N10 and love it. My main use case is downloading
torrents; something I can't do on iOS without a jailbreak.
------
jasonrr
The data presented in this article is extremely suspect. The methodology for
both the collection of shipment information and the extrapolation to revenue
are secret (or at least not reported here -- see the update at the bottom of
the story). As a community that is concerned with accuracy and precision, I
think we should be bothered by this. Instead it's a launchpad for arguments
based on conjecture and personal anecdote.
------
gress
What does this even mean? Most Android tablets are cheap video players sold in
China.
It's great that Android is enabling these devices to be built, but comparing
the revenue of low end video players with iPads is nonsensical.
Android is a free OS that can be used to make any kind of embedded appliance.
iOS is the operating system used in Apple's mobile computers.
How is it surprising or informative that Apple's share of consumer electronics
revenue in general is lower than 50%?
------
josefresco
I love the update at the bottom which basically says "we have no idea how they
got these numbers"
------
bluedino
Since Android phones have out-sold iPhones, it shouldn't be surprising that
the same happens with Android tablets.
But what are the numbers when it comes to web traffic from those devices and
app purchases etc?
~~~
notatoad
If you're looking for a way to say "iOS is still winning", then yes, there's
vastly more web traffic generated by iOS than by android. but that's beside
the point. Android tablets are mainstream now. John Gruber's old article [1]
"there is no tablet market, there's only an iPad market" is obviously no
longer true.
~~~
Samuel_Michon
It depends on what you mean by a ‘tablet market’. On the hardware side, sure,
there are plenty of companies that sell Android tablets (albeit often at a
loss). On the software side however, I don’t think there’s a tablet market,
there’s just an iPad market. Owners of Android tablets don’t buy apps and many
of those tablets don’t run recent versions of Android which makes it very hard
to create the same quality apps that are available for iPad.
------
pazimzadeh
So Amazon, Samsung, Google, HP, ASUS, and all other companies that make
Android tablets now make more money together than Apple does on the iPad?
~~~
jsight
Yes, that is the assertion in the article.
~~~
ctdonath
That is the assertion of the _title_ , but the _body_ of the article (correct
me if I'm wrong) refers to total _units_ , not noting that (A) it took Amazon,
Samsung, Google, HP, ASUS, and all other companies that make Android tablets
lumped together to out-sell essentially a single product[1] from Apple, (B)
most of those devices were low-price low-profit-margin units as contrasted
with Apple making some 50% profit off each sale, and (C) how many of those
units were relegated to minor usage (or disused entirely in short order) for
minor purposes (video player, occasional games) vs "heavy use" (broadly
speaking; see other comment about actual network traffic).
My snide "it's Friday and this coffee was too strong" side envisions
comparison of the aggregated productivity & vitality of our intrepid hero
against waves of zombies.
------
Bud
Apple's investors, I can assure you, take precisely zero "comfort" in Apple
leading in revenue. They care about profit. By that metric, the only metric
that really counts, Apple is dominating.
------
einehexe
There should be a law that manufacturers have to take back their cheap tablets
and phones for recycling so they don't end up a toxic pile in a landfill.
~~~
dangrossman
Why must there be a law? What major tablet market doesn't already have free
electronics recycling available? In the US, most big-box electronics stores
take items for recycling from any manufacturer. Given the very high price
floors seen in used tablet markets (eBay, Craigslist, etc), I doubt all that
many are being thrown out in the first place.
~~~
Bud
This is only because Apple basically invented the entire tablet market in
2010, just a few years ago.
This will change quite rapidly, starting approximately now, as various tablets
begin to become more and more obsolete.
How Secure your iOS Apps are? - subhransu
http://www.slideshare.net/arya.subhransu/hacking-and-securing-ios-apps-part-1
======
subhransu
I will be talking more on the security part in our next iOS developer meet-up
in Singapore.
If you are in Singapore on September 13th, drop by and say hello to us.
<https://www.facebook.com/events/340285926062221/>
How to avoid picking the wrong technology just because it's cool - scarhill
https://blog.bradfieldcs.com/you-are-not-google-84912cf44afb
======
AznHisoka
I used to work at Standard and Poors. They do media monitoring for lots of
financial companies.
When I joined, I thought they were using state-of-the-art tools to monitor millions
of sources.
Turns out they just had a team of thousands in India that manually visited
certain websites to check for press releases, transcripts, earnings reports,
etc and push them to a database if they found them.
~~~
sharemywin
wonder if that could be crawled and save money.
~~~
dismantlethesun
Accuracy. Every solution I've seen that relies on automatic crawling will
eventually have a parsing error when someone changes their sentence structure
of a press release.
It's not so obvious when you're looking at the breaking releases for a few
stocks or companies, but historical records have at least 1 error per stock
per year.
~~~
greggyb
So split your stream:
1. Data matching expectations (you do have a definition of correct, right?)
2. Log for manual review -> manual inserts or correction and placed into queue for (1)
Monitor (2). When inserts start trending up, it may be time to update your
processing logic.
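A minimal Python sketch of that split, assuming a validate() predicate you'd
define for your own data:

    # Split a crawl stream into records that match expectations and records
    # that need manual review; a rising review count signals parser drift.
    def process(records, validate, store, review_queue):
        flagged = 0
        for rec in records:
            if validate(rec):            # your definition of "correct"
                store(rec)
            else:
                review_queue.append(rec)
                flagged += 1
        return flagged                   # monitor this number over time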
~~~
flukus
I came up with a similar idea for a company several years ago where we had a
team of people doing data entry from faxed documents. I wanted to build
something that would do all the OCR it could and then display it to users to
verify, which should have been a 10 times efficiency increase, not to mention
speed and accuracy.
The idea was rejected, they wanted either a perfect solution or nothing. I
don't know why, but for some reason the idea of computers removing humans is
acceptable to management, while computers augmenting humans wasn't.
------
Animats
The author's acronym is silly, but it's a real problem. Soylent liked to
blither about their "infrastructure", for a product that sells a few times per
minute. They could be using CGI scripts on a low-end hosting service and it
would work fine.
Wikipedia is some MySQL databases with read-only slaves front-ended by Nginx
caches and load balancers. That seems to get the job done. Wikipedia is the
fifth busiest web site in the world.
Netflix's web site (not the playout system) was originally a bunch of Python
programs.
The article mentions a Postgres query that required a full table scan. If
you're doing many queries that require a full table scan, you're doing
something wrong. That's what indices are for.
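To make that last point concrete, a small sketch using SQLite rather than the
Postgres case from the article: the same query goes from a full scan to an
index search once the index exists.

    # Full table scan vs. index lookup, shown via EXPLAIN QUERY PLAN.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, payload TEXT)")
    db.executemany("INSERT INTO events (user_id, payload) VALUES (?, ?)",
                   [(i % 1000, "x") for i in range(100_000)])

    query = "SELECT * FROM events WHERE user_id = 42"
    print(db.execute("EXPLAIN QUERY PLAN " + query).fetchall())  # full scan of events

    db.execute("CREATE INDEX idx_events_user ON events(user_id)")
    print(db.execute("EXPLAIN QUERY PLAN " + query).fetchall())  # search using the index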
~~~
squeaky-clean
I remember reading a scaling-out article from some startup. Some of the things
felt a little over-engineered, some were impressive, some seemed wrong. But
then they get to the point where they brag about their scale, and the metric
they used was that they can handle thousands of requests.... per day.
~~~
aqme28
[https://engineering.hellofresh.com/scaling-hellofresh-api-
ga...](https://engineering.hellofresh.com/scaling-hellofresh-api-
gateway-7d40be55450f)
~~~
tnolet
This is hilarious and sad at the same time. However, most of these write ups
are aimed at attracting talent. Even more, some tech stacks are deliberately
built to attract talent when the core domain is just too simple or boring. "We
serve user subscriptions and recipe data from an SQL database using Rails"
just doesn't sound as snappy as the infra-porn on the blog.
~~~
taneq
Isn't that kind of thing a real red flag for the kind of talent you'd want to
attract? If someone told me they'd built a GPU compute cluster for their phpbb
based social club forum, I'd think they were an idiot and not want to work
with them.
~~~
keganunderwood
I am sorry if I'm completely off base but I'm still thinking about the danluu
page on options v cash which quotes this
[https://news.ycombinator.com/item?id=11200296](https://news.ycombinator.com/item?id=11200296)
Maybe startups don't want the absolute best of the best but rather the best of
the gullible?
Edit: and/or poor
~~~
taneq
So you're thinking that kind of technical mark-missing is the startup
equivalent of the typos and other glaring errors in email scams? They're to
weed out the people smart enough to be a problem?
------
Kiro
Nothing I've built has ever needed anything more than a $5 Digital Ocean
droplet and one of my services gets around a thosand requests a second at
peak. Purely anecdotal and I'm not doing anything CPU intensive but I really
feel startups are overdoing their infrastructure.
~~~
sidlls
It isn't just startups. And I agree with another commenter: it seems like
resume driven development.
~~~
__jal
There's also a weird peer-pressure involved. Overheard a conversation a while
back that summarizes it nicely - someone was talking about a scheduling system
they've used for years, and mentioned it was written in Perl. Another
participant guffawed, and after the requisite Perl-bashing, the original
person allowed that, yes, even though it worked fine, they should rewrite it.
No idea what company that was, but I'd love to work in a place where that was
the most pressing concern on my plate.
~~~
thehardsphere
Were all of these people competent?
Honest question. I have only seen the "it's written in X, so therefore it must
be re-written in something nicer even though it is working" thinking from
incompetent people who were just trying to take ownership of something they
didn't quite fully understand. Though I have never seen it in an appropriately
functioning commercial setting; if management is competent, they'll
immediately recognize the high costs with no concrete benefit and say no.
It's one thing to say "we have to re-write this because it uses Java applets,
and Java applets are problematic because Oracle is dropping support for them,
so our customers are going to be screwed soon if we don't do something." It's
another thing to say "we have to re-write this because it's in Perl because
Perl is something I don't like."
~~~
ProblemFactory
I've seen this situation multiple times, and yes the developers involved were
competent. They were even well-meaning, and wanted to build something for the
benefit of the company, not just their resumes.
I think the tendency to over-engineer and over-polish comes mostly from
getting too invested in one particular project or task. The developers have
"professional pride" \- they want to deliver software that has good
architecture, high test coverage, easy to understand and maintain code,
reliable, scalable, etc.
This means competent developers are very tempted to continue working on a
project as long as there are possible improvements to it, even if these
improvements do not make business sense. Nobody wants to admit that "cron job
that fails once per month" is a sufficient solution when they can see a better
solution, and go work on the next hacky cron job instead.
------
dismantlethesun
Working in the D.C. area has given me a high tolerance for acronyms and
backronyms (seriously: P.R.O.T.E.C.T. Act stands for "Prosecutorial Remedies
and Other Tools to end the Exploitation of Children Today").
U.N.P.H.A.T does raise a smile to my face for trying, but if the author is
reading, I'd suggest you change it to a prescriptive paragraph where the first
word in each sentence becomes a letter in the acronym (e.g. B.A.M.C.I.S).
====
Here's my best try:
UNPHAT:
Understand the problem.
Nominate multiple solutions.
Prepare by reading relevant research papers.
Heed the historical context.
Appraise advantages versus disadvantages.
Think!
~~~
dismantlethesun
Drat, it's too late to edit my comment and take it out, but "Consider
candidate solution" wasn't meant to be included in the acronym. It was part of
my brainstorming.
~~~
dang
Ok we took that out for you.
~~~
dismantlethesun
Thanks.
------
pram
A lot of this just sounds like Resume Driven Development, not people thinking
they're Google or Amazon.
~~~
OpenDrapery
I was thinking the same thing. I wonder how much of it is due to the way we
make devs work and the hours we make them keep.
For example, I'd be more than happy to use the same old, tried and true,
boring tools to just get the job done, if it meant that I could then go play
golf or otherwise not be in the office.
But if you insist that I be in my seat 8 hours a day regardless of workload,
then goddamn let's take this shiny new tool for a spin!
Do I want my resume to show that I used the same tool for every job for the
last ten years? Or do I want it show some new hotness?
The industry and employers are as much to blame for this as the engineers, if
not more. When you use middleman firms to find your employees, and all they
understand is buzzwords, well then guess what game the devs are gonna play?
~~~
allcentury
Fellow golfer and tinkerer - we are a product of the job markets. Hot
employers typically want new and shiny on the resume in addition to
fundamentals, seems like everyone is playing the same game...
~~~
collyw
I get the feeling that my skillset is becoming outdated.
Fact is I know Django inside out, and plenty of Python libraries. It's very
rare that I will find something that requires me to learn a new language or
tech (I will likely get things done a fair bit faster using the tech that I do
know). Anything else feels like resume driven development.
------
lacampbell
I am fortunate, in that I got a lesson in not over-engineering things very
early in my career.
My first programming job was a 3 month contract at the maintenance department
of an international airport. They had a bunch of information in large,
unwieldy ERP system and wanted to automatically generate job sheets for the
different maintenance crews. So I did the simplest thing possible - I
generated an excel file from the ERP system, then using that file as input, I
outputted different excel worksheets for the different crews.
It was a very plain GUI app that had one or two buttons. I remember being a bit
worried that it wasn't nearly fancy enough for 3 months work, but everyone
seemed pretty happy with it.
Later on I found out that - before me - they had hired an experienced software
developer who had worked on the same problem for 6 months, and at the end of
that 6 months had apparently not produced a solution. I had done the dumbest,
simplest thing - not because I had any insight or wisdom, but because it was
really the only thing I had the skills to do. But I delivered.
It was a brilliant, accidental first lesson in not over-engineering.
~~~
gaius
_I generated an excel file from the ERP system, then using that file as input,
I outputted different excel worksheets for the different crews._
As a complete aside, you might be surprised how far you can go with Excel
these days. Do you know it has a built-in in-memory columnar database now? You
can have millions and millions of rows of data in there that you can use in
tables and charts completely independently of the size of the grid. Pull back
a huge chunk of data from the DB and slice and dice it to your heart's content
locally.
I look at people buying expensive "business intelligence solutions" and I
think, it's right there on your PC all along and you don't even know it...
~~~
collyw
The problem is people using Excel for everything that it shouldn't be used
for.
~~~
gaius
"Throwaway" Python code winds up becoming part of real systems all the time,
but we don't blame Python for that
------
joshribakoff
I've seen this in action. Using code generators to convert XML configuration
to a few API end points. Or using a DSL/rules engine because you don't want to
write code. Or having APIs that hit other APIs ad infinitum when the whole
thing runs on one server because "micro services are the only right way". The
result was we spent time gluing together what was already a monolith
disguising as microservices, rather than adding features the customers wanted
More recently I had to solve time drift on 1000s of devices. The problem was
someone installed puppet to manage those devices which uses NTP. The devices
are behind firewalls so if they block the puppet master or mess with SSL
puppet doesn't even phone home. Or worse it gets incorrect time from NTP peers
on the network. The solution was to throw out the shiny tool "puppet" and just
call "date". Puppet and NTP are great in theory for getting time down to the
millisecond but totally backfired when some devices were off by over 24 hours.
For our purposes as long as all devices were within 5 minutes we were good.
The irony was after disabling NTP puppet just started it again. And we
couldn't use puppet to fix that since 50% of our users had it blocked. No
other choice but to throw out puppet and start over from scratch. The guy who
spent months setting up puppet was not happy.
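A rough Python sketch of that kind of fix; the reference URL, the drift
threshold, and the use of the HTTP Date header are all assumptions here, not
necessarily what the real script did:

    # Coarse clock sync: if local time drifts more than 5 minutes from a
    # reference server's HTTP Date header, reset it with plain old `date`.
    import subprocess
    from datetime import datetime, timezone
    from email.utils import parsedate_to_datetime
    from urllib.request import urlopen

    REFERENCE_URL = "http://example.com/"   # hypothetical reference endpoint
    MAX_DRIFT_SECONDS = 300

    server_time = parsedate_to_datetime(urlopen(REFERENCE_URL).headers["Date"])
    drift = abs((datetime.now(timezone.utc) - server_time).total_seconds())
    if drift > MAX_DRIFT_SECONDS:
        # needs root; sets the system clock to the server's UTC time
        subprocess.run(["date", "-u", "-s", server_time.strftime("%Y-%m-%d %H:%M:%S")],
                       check=True)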
~~~
liveoneggs
the real issue is why the firewalls were randomly blocking puppetmaster and/or
ntp and why the puppet ssl stuff stopped working (apparently randomly?)
Everyone involved sounds like they need a lot more experience.
~~~
joshribakoff
With all due respect, you don't know the real issue. Your response is the same
thing the guy who installed Puppet said to me... just have them unblock it.
Our sales pitch is "these devices use plain http and will work behind your
corporate firewall". The blockage wasn't an issue that could be solved, it was
our whole business model to work around the blocks by using simple http instead
of https, proxying everything through our IP, and things like that.
Even the puppet documentation says not to run a puppet master when you have
devices that are behind firewalls or limited network. The guy who added puppet
apparently didn't read that.
I wasn't the one who decided the business model just the guy who fixed it to
work as advertised while dealing with the pressure of everything crashing &
burning. You're right no one had experience but thats not the point.
My point was that the fancier tools sometimes just add new issues without
solving your real issue. Despite my lack of experience I solved the time drift
using a linux built in "date" to set the date time. It didn't account for
network lag like NTP, and an NTP developer would probably laugh at my
solution, but now all devices are accurate to within a few minutes & that
particular problem was solved. So don't always go for the most complex tool is
all I'm saying.
For what its worth I do plan to bring back puppet but run it in "puppet agent"
(offline) mode. We'll using custom scripts to copy in new puppet configs so
puppet does not need to phone home.
~~~
liveoneggs
I would love to get into it more deeply as you continue to supply details! :)
distributing hardware outside of your network and using puppet in
master/client mode is obviously a bad idea, just like having any dependency is
difficult to manage (sometimes like NTP)
However, clocks will drift. Consider ntpdate in a cron or an easier-to-manage
sntp client vs ntpd, which is a little nutty.
So the point is that a tool like puppet, only properly configured, is probably
a great asset for your use case of distributing hardware, as it can help keep
things working as expected.
~~~
joshribakoff
Yes Puppet solved one problem... How do I add a cron to all devices, and retry
it if it failed without adding it twice to devices where it worked. Puppet is
amazing. It solved that problem....
But then it created a whole new world of problems since it violated our
business model to have it phone home. Thanks for the suggestions on NTP. We'll
likely add features that do require more accurate time in the future & your
suggestions will probably come in handy!
------
wwweston
> Don’t even start considering solutions until you Understand the problem.
> Your goal should be to “solve” the problem mostly within the problem domain,
> not the solution domain.
I'd guess that _this guideline alone_ would stop 2/3 adoptions of JS SPA
frameworks (and 4/5 Angular adoptions!) if followed.
~~~
BigJono
At least the SPA frameworks themselves have a reasonably common legitimate
use. The tooling around them is the major problem. Most projects I've worked
on, even complex ones, could comfortably trim from 100 dependencies down to 10
and have the developers working on them be an order of magnitude more
productive.
People wilfully wrestle with thousands of functions worth of APIs every day
and don't even notice the immense slowdown it's causing them. It's especially
bad in React land, which is ironic seeing as Sebastian Markbage at Facebook
has an excellent talk about reducing API surface area.
------
merb
Rule 1: don't use any modern javascript framework.
------
marktam264
I'm typing this impromptu, but this article seems to be a quick-and-dirty,
informal incarnation of the Architectural Tradeoff Analysis Method
([https://en.m.wikipedia.org/wiki/Architecture_tradeoff_analys...](https://en.m.wikipedia.org/wiki/Architecture_tradeoff_analysis_method)).
------
yumaikas
I suppose another point worth bringing up is that hardware has made some
pretty strong advances in recent years, especially with SSDs being widely available.
Stuff like that has raised the ceiling on what a single box can do compared to
2000 and earlier, when Google was building MapReduce at first.
~~~
jethro_tell
Pretty sure that's in the article is it not?
------
NTDF9
"Don't use tensorflow to predict everything"
------
dlwdlw
Many people may not work at Google scale, but many would probably like to work
at Google.
------
dang
We temporarily replaced this article's baity title with the text's more
accurate self-description.
If someone would care to suggest a good title—i.e. accurate, neutral, and
preferably drawn from the language of the article itself—we can change it
again.
~~~
stickfigure
FWIW, I think the original title was pretty good. I have had the unfortunate
experience of screaming pretty much exactly those words (albeit with "Google"
replaced by a different company, which one of my CEO's advisors came from).
EDIT: how about combine the two?
_You Are Not Google: Another "Don't Cargo Cult" Article_
~~~
nemild
I like your proposed title, "Another 'Don't Cargo Cult' article" on its own
seems dismissive, when the content seems quite useful for many engineers (the
acronym need more work, though).
~~~
dang
Ok, fair point. I've made up a title, even though we hate to do that, because
I can't find any phrase in the article that neutrally summarizes it.
~~~
ethbro
New title seems fair. (Also, welcome to the dark side of the editing force,
etc etc)
------
komali2
Regarding "UNPHAT": Is this... serious? Does the author genuinely hope that we
will use this acronym as a means to help guide our technology choosing
decisions? Is it not their creation and is just something I wasn't aware of
yet?
Finally, do these forced acronyms ever help anybody else out there? I mean
seriously, the "N" standing for "eNumerate?" The "P" standing for "Paper,"
which barely correlates to the actual meaning "consider a candidate solution."
Seems to me just saying "apply a principle of unfattening your technology
decisions" would be a hell of a lot easier to remember.
~~~
paulddraper
I doubt it. I think it needs to be three or four letters. (E.g. Always Be
Closing. Keep It Simple, Stupid.)
---
Trying my hand:
- understand the DOMAIN
- find the OPTIONS
- research a CANDIDATE
- know the HISTORY
- consider the ADVANTAGES
- apply deliberate THOUGHT
DOCHAT
~~~
jaclaz
Maybe simpler:
DOE
Don't Over Engineer
(which more or less brings us back to KISS principle)
~~~
ChuckMcM
I like DOE because the inverse is E (Engineer). It illustrates an ongoing
challenge in technology where the first question isn't "What capabilities
should our resulting systems have? And what constraints are there on our
implementation?" (which would be engineering a solution) instead we get the
question "What other systems out there seem to solve this problem?", or worse
"What other systems have similar inputs and outputs to the ones we have and
want?"
~~~
jaclaz
Also there is this other question (as I see it):
What CAN this (pre-chosen) _something_ (insert here _hardware_ or _tool_ or
_programming language_ or _library_ ) do?
Let's use ALL (or most of) these functionalities! (because we CAN)
Losing sight of the actual question which should be "What is actually needed"?
------
gtirloni
This is just a "my technology stack is better than yours" post like countless
others we see daily. Sorry to dismiss it so abruptly but it gets tiring.
~~~
gtirloni
I think people downvoting tend to ignore the fact that the proposed "optimal"
solutions for non-Google companies were at some point novelties themselves. If
the same logic is applied we'd be using CICS app servers, IMS, and buying
terminals.
At some point things change, the new normal changes, etc. The shift we are
seeing in some areas also contributes to finally accepting the realities of
distributed systems.
~~~
icebraining
The people are probably downvoting because you have missed the point of the
article. Nowhere does it say that one shouldn't change things, or adopt new
solutions.
“Don’t Move to Vancouver”: Why I Changed My Mind After 6 Months - notastartup
http://webcache.googleusercontent.com/search?q=cache:cWVAfdoXLvgJ:www.anabellebf.com/dont-move-to-vancouver-why-i-changed-my-mind-after-6-months/+&cd=1&hl=en&ct=clnk
======
notastartup
I lived here for 19 years and she hits every point perfectly. What the fuck am
I still doing here?
LoRaWAN Energy Performance and Ambient Energy Harvesting - c0n5pir4cy
https://www.stream-technologies.com/whitepapers/lorawan-energy-performance-and-ambient-energy-harvesting/
======
c0n5pir4cy
Full disclosure: I work for the company that is hosting the whitepaper.
~~~
1001101
This is great stuff, thanks for sharing! I just looked at your website, and
I'm trying to figure out how energy harvesting fits into your business model.
Just lowering the coefficient of friction for deploying LoRa networks? Where
are you trying to take this?
~~~
c0n5pir4cy
Hi there, the member of our team who wrote this article joined us as part of a
Knowledge Transfer Partnership with a local university.
While not directly related to our business model, we gain a lot in terms of
understanding LoRaWAN networks; getting more people to deploy LoRa networks
like you mentioned is also a plus.
------
gus_massa
[Metacomment: Most articles in this area are about a technology that in the
future will supposedly be efficient enough to charge your cell phone, notebook
(or electric car :) ). This one is more serious, with experiments using current
technology; give it a try.]
The Case for Getting Rid of Borders - mhb
http://www.theatlantic.com/business/archive/2015/10/get-rid-borders-completely/409501/?single_page=true
======
venomsnake
Only if the people that come in are willing to assimilate into the culture
that is the host. There are good reasons to keep some worldviews out of one's
borders if they clash with the place's values.
Early employees take the most risk today - gyre007
https://medium.com/@tikhon/founders-it-s-not-1990-stop-treating-your-employees-like-it-is-523f48fe90cb#.c6u481pwn
======
jalopy
Great, great article.
I'd like to add: This, I think, is one of the big reasons why startups prefer
hiring young people (ie, recent grads) - young people just don't know any
better. They have the barest idea (if any) what dilution, ratchets, preferred
participations, etc does to their already minuscule equity package.
"OMG I'm getting 70,000 shares!" is what I thought about my first startup.
Wasn't even offered (and didn't bother to think about) anything else.
Perhaps I'm projecting too much of my ignorance back then on newly minted
grads now, but it's safe to say there's a lack of experience with the myriad of
different ways things can (and, with 99% likelihood, will) devalue the work I'm
willing to put into a company at 80+ hour weeks.
The most inane argument I hear from founders nowadays is "we just got funding,
so we're de-risked". Nice try. Just cause you sold someone with money to burn
(VCs have a bias to action - "gotta get that IRR to our LPs in 10 years!")
does _not_ mean you've de-risked anything. Proper de-risking comes from
finding a real product-market fit, with achievable financials metrics that
pave the way to real profitability. Anything else is just greater fool theory
- hoping a greater fool comes around and buys the company's story.
EDIT: Member notacoward has a great comment about rank and file employees also
not appreciating the back-end commitment usually required at an acquirer. In
the highly "fortunate" event where the startup is acquired, there's usually at
least a 2-3 year commitment after that fact to get liquidity. This is
_assuming_ liquidity is even available! With the ever-telescoping horizon to
an IPO for even the "unicorns", I wouldn't be surprised if most rank and file
employees are committed to 6, 7, even 8 years to achieve full (diluted) value
of their option packages.
~~~
oldmanjay
It's fair to note there's a significant amount of risk in hiring recent grads,
because largely they aren't great at actual engineering, just coding.
~~~
myth_buster
Isn't the armed forces model relevant here, which also happens to fan out:
"Experienced" soldiers work on the strategy while
new recruits work in the trenches.
~~~
crdoconnor
If you're developing software and there's a lot of "trench work" that usually
means you're doing it quite badly.
~~~
manigandham
How so? Doesn't this completely depend on the actual product being built?
------
notacoward
Nice piece. Many good points, but my favorite was this one.
"they need to spend years at the acquirer for whatever the founders and m&a
department decide behind closed doors."
That's an under-appreciated risk. Of the ten startups I worked at, two went
this route. For one, I'm pretty sure I was the last person to leave
voluntarily; I had trouble finding someone to take my resignation letter
because they were all in negotiations. Some of my coworkers ended up working
for Symantec. At the other one, a bunch of us ended up working for EMC. At
least that worked out OK financially, but it was _not not not_ a choice any of
us would have made for ourselves. Not a one. We'd all had that option before,
and not taken it. Since acquisition is a far more common kind of outcome than
IPO, even for "successful" exits, that's worth thinking about.
Employees have always shared more of the risk than founders and investors
would like to admit. That's part of the package, and I was OK with that for a
long time. Nowadays, it seems like the share of risk is even larger and the
share of success even smaller. I guess it's still worth it as a career-
building move, if you're that way inclined (Google doesn't look bad on a
resume either), but as a way to make good money it's becoming kind of a bad
deal.
~~~
jerguismi
I dunno about the amount of risk, but at least the risk is quite well-defined
for the employees from the start. You get your salary each month, and maybe
something on top of that. You might learn something in your job, or maybe not.
Founders have to start with their own savings/debt, and their salary in the
beginning is usually zero. Of course the situation changes if they get
investors etc. But I
would say that the risks are really not that easily comprehensible for
founders.
~~~
rphlx
In my experience _talented_ early employees tend to take a tremendous amount
of insufficiently-compensated risk. The founders can of course lay them off at
any time, dilute them, give them paychecks that bounce, outsource their jobs
after a year or two, sell so much equity that employees stay underwater or
barely in the money, etc. The founders take a lot of risk themselves, but they
are in control, and they usually reward themselves for it. Whereas volumes
could be written about early employees at successful companies who maybe got a
solid bay area house downpayment out of their stock options, after working 60+
hours a week for ten years.
------
twostorytower
I have to respectfully disagree with this article. While it may be spot on in
some scenarios, it couldn't be further away in others.
As a co-founder, I didn't take a salary until a year into the startup (same
with my other two co-founders). Even when we received our accelerator funding,
all of it went towards our first hire's (an engineer) salary and operating
expenses. At this point, our risk was significantly higher, not slightly. If
the company didn't succeed, I can tell you our first hire was next in line for
a cushy market job, and he was being actively poached, not us.
After we graduated the accelerator we raised a ~$1M seed round. We hired two
more early team members at market salaries. Each of the co-founders was taking
a $33K salary. Why? We wanted the budget to hire great people. So no, they
definitely did not take a similar pay cut. In fact, it's increasingly hard for early
stage startups to hire good talent at less than market rates because there are
plenty of amazing startups hiring above market. Our risk at this stage was
even higher, because failing would burn most of our bridges with our new
investors (maybe a couple wouldn't hold it against us), where as if our
engineers went on to start something, no investor would think twice about
their history working at a failed VC-funded startup.
We didn't increase our salaries again until we were generating revenue. Even
now, ~four years in, I'm taking $20K less than the starting salary for a
junior person in the role I have. While I want to increase that a little more
as our revenue grows, I don't think it's fair to take a market salary at our
stage.
I'm not complaining, but to say being an early employee is a rotten deal is
unfair. If our startup goes under, I definitely have the more rotten deal. It
only looks like I had the better deal if we succeed.
And if you want to start a startup, I encourage it, that's the only way you'll
know how truly hard it is.
NOTE: This comment is a rehash of a comment I made on a similar statement. I
am reposting it because I feel like it addresses this and sheds a little light
on the other side of things.
~~~
rubicon33
I think your situation differs significantly from the one the author is
depicting. You sound like you treat your employees correctly, offering them
market rate compensation instead of a carrot on a stick.
~~~
twostorytower
It's honestly an engineer's market. Nobody is forcing them to take these
below-market jobs for crappy options. If they keep taking them, nothing will
change. If you would rather have market salary and little to no equity, most
founders are willing to accommodate. But I guarantee that years later when the
startup is worth something or exits they complain that they got a shitty deal
(when they weren't willing to take on the risk).
------
Osiris
I agree with this. I work at a well-funded startup, but despite the funding,
most of us are working for below market wages. I'm employee 22 and my options
are about 0.02% of outstanding shares.
It's disappointing to me to see so much disparity between the compensation for
C-level and VP-level versus engineering. Without engineering, there would be
no product to sell and nothing for investors to invest in. Ideas are nice, but
implementation is hard.
~~~
copsarebastards
> It's disappointing to me to see so much disparity between the compensation
> for C-level and VP-level versus engineering. Without engineering, there
> would be no product to sell and nothing for investors to invest in. Ideas
> are nice, but implementation is hard.
It's exactly what you'd expect when programmers are willing to sell their
birthright for lentil stew.
Frankly, most C-level executives at a small startups are glorified secretaries
taking care of the paperwork for the people who provide most of the actual
value. They get paid more because they've played the social game well enough
to persuade engineers to work for them instead of the other way around.
~~~
anindyabd
I work at a big company, and I also consider most C-level executives here to
be glorified secretaries. All they ever do is write emails, stare at some
charts, and yell at engineers to write code faster. What's most annoying is
that sometimes they pretend to understand engineering challenges -- they learn
some jargon and start throwing it around. That makes things worse; a little
knowledge is a dangerous thing.
We engineers are nothing but servants to these people.
~~~
mbesto
> _We engineers are nothing but servants to these people._
Which is funny you say that because apparently the "SV elite" think quite the
opposite:
[https://twitter.com/sama/status/641281287660007424](https://twitter.com/sama/status/641281287660007424)
~~~
gaius
Career advice from a VC is given with the VC's interest in mind, not yours.
~~~
mbesto
Exactly my point :)
------
mbesto
One thing that's not discussed enough in the context of employees compensation
(especially early ones) is prestige. Many people join startups because of the
promise that that company will become the next Google. They see peers who
follow the same logic (Marissa Mayer, the PayPal Mafia, any PM at Google in
the early 2000's, etc) and see what fame, riches and following those people
gain as a result of being part of the early days of a massively successful
company. This is also why YC companies categorically can attract better
talent, because even if the company fails, you can still tell your next
potential employer you worked at a "YC backed startup". It's almost the
equivalent of a Stanford degree. So, if you're an employee of a company that
is already vetted by a number (i.e. amount of funding) of well-known backer
(i.e. YC), you're risk is massively reduced given the insane amount of
recruiting opportunities available.
~~~
birken
Just as a quick aside of the prestige thing. It is a lot easier to get
"prestige" by being a founder of a barely successful startup than it would be
to be an early employee of an extremely successful one. The whole startup
ecosystem is very friendly to founders in this respect.
And when it comes to recruiting, you are much better off getting a job at a
large prestigious company like Google, Facebook, Amazon than you would be at a
random startup. The quality of engineering at a random startup, even YC
startups, is extremely low when compared to large prestigious companies. This
is due to the fact that the engineering talent at startups is generally below
that of employees at (Google|Facebook|Amazon|etc), and startups are
incentivized to ship stuff quickly and not necessarily work on engineering
quality software. If your goal is to found a startup, then maybe you are
better off working at another startup to pick up a more diverse set of skills.
But if your goal is to get hired as an engineer, a big prestigious company is
way way better than a random startup to have on your resume.
------
cubano
> Back in the day, founders would go into debt to buy a hard drive. Some even
> mortgaged their homes to keep things afloat.
I can remember how my first company, Magicomm, bought two 25MHz 386's in 1988
after we released our first BBS-based search engine, and we literally went
into pretty severe debt (i.e. no paychecks for a month, living off ramen), so
this isn't just bullshit.
It really was like that.
I still remember our first "partnership" offer (not sure what its called
now)...."free" office space and $5k for 51% of the company from a local guy
who owned a shady call center. I had to beg my partner not to take it, too.
My dad ended up letting us borrow a few $k and gave us a closet to work out of
at his medical office.
------
sbov
Note that the major risk for an employee isn't that the startup fails in 2, 3,
or even 6 months. The risk is that 5 years down the line their stock ends up
being worth 0-249k, but they gave up 250k in salary in the same time period.
We're also in the situation that we likely have no way to assess whether the
risk we're taking is a good one. Especially if the company takes on investors.
Especially as the years go by - it can be difficult to figure out when it's
best for us to cut our losses.
An individual employee might not be taking on more risk than a founder. But in
aggregate they might be.
~~~
shostack
Outside of evaluating the risk, are there an good calculators out there for
doing the math on this with various exit scenarios?
~~~
sbov
Not that I know of. As a regular employee you probably don't even have access
to enough information to do the calculation yourself.
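For anyone who wants to run their own scenarios anyway, here is a back-of-the-envelope sketch in Python. Every input below (the salary gap, grant size, dilution, strike cost, exit values and their probabilities) is a made-up assumption to be replaced with your own numbers, not a real valuation model:

    # Back-of-the-envelope: forgone salary vs. expected option value.
    # All numbers are illustrative assumptions, not data.
    salary_gap_per_year = 50_000   # market salary minus startup salary
    years = 4                      # vesting period
    ownership = 0.001              # 0.1% of fully diluted shares at grant
    dilution_kept = 0.6            # fraction of ownership left after later rounds
    strike_cost = 10_000           # cost to exercise the options

    # (assumed exit value, assumed probability of that outcome)
    scenarios = [
        (0,             0.70),
        (50_000_000,    0.20),
        (250_000_000,   0.08),
        (1_000_000_000, 0.02),
    ]

    forgone = salary_gap_per_year * years
    expected = sum(p * max(v * ownership * dilution_kept - strike_cost, 0)
                   for v, p in scenarios)
    print(f"forgone salary over {years} years: ${forgone:,.0f}")
    print(f"expected option value:             ${expected:,.0f}")

With these particular inputs the expected option value is a small fraction of the forgone salary; change the inputs and the conclusion changes, which is exactly why not having access to the underlying numbers is a problem.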
------
austenallred
> Employees take the most risk today. Not the investors or the founders — it’s
> the employees.
I agree that in many places employees should be given more options and better
compensation. I also agree that many founders don't realize how large the
opportunity cost for talented people can be. But I also think that statement
is categorically false.
Most founders I know work for several months (or years) for zero pay, and then
pay themselves the minimum amount possible while the company is growing. The
founders have opportunity cost too, and if the company fails they get nothing,
too. Using the superlative that employees are taking the _most_ risk is often
simply not true.
~~~
geebee
But is this true relative to the reward? It of course depends greatly on the
individual. A senior SE who walks away from an offer at Google or Netflix to
work for a startup could be forgoing as much as 100k a year in salary. Five
years in that dev might have banked a half mil.
Yes, the founders are giving up a lot too, but the equity they receive may be
vastly higher.
It all depends on the numbers, but adjusted for reward and equity? Yeah, by that
measurement I'd say it's certainly possible that an early engineer is taking on
the worst risk to reward ratio.
Lastly keep in mind that a founder who does not have strong tech skills may
not be giving up as much in potential salary even if he or she works for
"free". Again this all depends on the individual.
------
sixtypoundhound
Or said differently, what do you bring to the table as a founder that I (as a
highly talented employee) cannot immediately replicate on my own?
This gap has been getting very narrow lately.
\- Money? Not really; if we assume I'm coder #1, the cash cost of launching a
product/service is <$100,000, well within the range of many mid-career folks
with savings.
\- Relationships? eh, Linkedin and Google can connect me with many people -
most of whom would like an alternative source.
\- Business model idea? Oh please, it's most likely been done before in an
adjacent segment and well documented; actually, I'm not interested in a model
that hasn't been.
\- Technical / Process knowledge? um, that's why we're talking...
So yeah, if your goal is to take my contribution, give me 1% equity and keep
20%, it's gonna be a difficult conversation...
------
Alex3917
The typical founder spends maybe 6 months building a prototype on nights and
weekends, then another six months full time without any pay, and then another
six months at least at minimum wage. And then they usually never get beyond
that, and are just out all those hundreds of thousands of dollars they would
have otherwise earned.
As an early startup employee you might only get 1% as much equity as the
founders, but you're also only taking 1% as much risk unless you're working
for vastly less than market rates.
~~~
ryanSrich
Source? Most startup founders pay themselves a market rate salary after
funding[1] (even if it's seed).
1\.
[https://docs.google.com/spreadsheet/ccc?key=0AgrWVeoG5divdE8...](https://docs.google.com/spreadsheet/ccc?key=0AgrWVeoG5divdE81a2wzcHYxV1pacWE1UjM3V0w0MUE&usp=drive_web#gid=1)
~~~
lmeyerov
Data is the plural of anecdote:
"The founders whose companies die usually only earn small salaries. Before
being admitted to Y Combinator, founders usually live off savings or taking
loans. During the Y Combinator program, they use a one-off seed investment
from Y Combinator of US$120,000 to pay living and business expenses. If they
go on to receive angel investment, they can pay themselves about $50,000 per
year. With venture capital funding, this tends to increase to about US$100,000
per year"
(From the section on "What about the companies that died", namely, the case
for most founders.)
------
kzhahou
In the recent Square IPO, it showed that Jack Dorsey has an ownership stake
worth $1.5 billion, while the (estimated) 1000 employees have an average stake
of roughly $300K.
Think about what you can do with one and a half BILLION dollars. Now think how
far $300K will get you in the bay area.
This is completely typical and representative of the disparity between founder
and employee equity.
~~~
harryh
The math that you did in the Square IPO thread was completely and totally
wrong. Any conclusions you drew from your results are based on a false
understanding of reality.
1\.
[https://news.ycombinator.com/item?id=10389397](https://news.ycombinator.com/item?id=10389397)
~~~
kzhahou
Ok, please post the correct math, then we can compare and see who got closer.
Jack's percentage is spelled out in the S-1, along with the top execs,
directors, and investors.
If you have any better data, including median equity and other percentiles for
non-founders, that'd be very useful. Thanks.
~~~
harryh
The "13 people" you thought owned 61.5% of the company were VCs who did not
own those shares as individuals but as managers of their VC funds.
~~~
kzhahou
I didn't mention exec/directors in my original comment above, only Jack at
$1.5B and average employee at $300K. Which of those is totally and completely
wrong, and why?
Are you saying that employees retained the full remaining 38.5% ?
~~~
harryh
Yes, you have massively miscalculated the % of the company owned by employees.
On a further note, even once you get that number right, it's silly to talk about
the average employee stake from that number. You're dealing with a power law
here where talking about an average makes very little sense.
------
dpeck
no, we don't.
There are plenty of reasons to argue for more equity but risk is not one of
them. If you're good at what you do you can get a job tomorrow anywhere and
there is near zero stigma about being associated with a failed startup unless
you're on the founding side AND you've failed multiple times.
~~~
davidw
The salient point in the article is about opportunity cost. When you take a
100K startup job, and could have been making 200K somewhere else, that's a
very real loss. Now, those are made up numbers, and you need to plug in real
ones to make an accurate assessment, but that's his point.
~~~
dpeck
right, and that is completely valid to discuss.
I think it does a disservice to the other problems and discussion points to
toss everything under "risk".
~~~
x0x0
But what would you call it besides risk? ie you're gambling (risk) that your
opportunity cost will be made whole (and hopefully even exceeded!) via a
liquidity event.
~~~
dpeck
I'm not denying that it is risk, I argue that it doesn't allow for the
subtleties and nuances (opportunity cost vs responsibility increase vs
positive/negative association after exit/failure).
There are a lot of different variables involved and a great many of them boil
down to calculated gambles on the part of the employee and employers. Those
are worthy of discussion and there are plenty of discussions to be had around
those. But simply calling it "risk" and saying you need more of the upside is
so simplified it's nearly meaningless and impossible to have a conversation
about with everyone involved having the same idea about what is being
discussed.
~~~
davidw
There are certainly non-monetary benefits to working at startups, as well as
costs. It certainly makes sense to include those as part of the calculation,
but money is not a small part of that, either.
------
rbranson
Founders probably aren't able to hire because they can't get enough money from
investors to pay market salaries or because their company just plain sucks.
Maybe they think that because they've convinced investors that they're onto
something that it means prospective hires won't know better?
Put yourself in these people's shoes. Even if your company has a chance to be
worth a billion dollars, you're going to have to do better than a quarter
point. After several rounds of dilutions, a seed-stage employee with a quarter
point of even a $1B company would end up with ~$1.5M. You've got to be
offering people a 10-100X opportunity of what they'd get working at BigCos to
get good talent, not 50% more.
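A rough sketch of where a figure like that comes from, with the per-round dilution picked purely for illustration and taxes, strike price, and liquidation preferences ignored:

    # Illustrative only: 0.25% at seed, diluted ~20% in each of two later rounds.
    stake = 0.0025
    for _ in range(2):      # assumed Series A and Series B dilution
        stake *= 0.80
    exit_value = 1_000_000_000
    print(f"remaining stake: {stake:.4%}, payout: ${stake * exit_value:,.0f}")
    # -> remaining stake: 0.1600%, payout: $1,600,000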
Corollary: if you are giving engineers at your pre-series-A company less than
a point and 50% of market salary, you are probably not hiring great talent.
~~~
walshemj
And $1.5M isn't that much. At Poptel in the UK, at one point all the employees
were worth around $1m each.
Now, if only .coop had taken off (and we hadn't been screwed by ICANN) and the
co-op had bought us out ;-)
I know someone who just retired from BT who has over $0.5 million in stock from
his various share-save schemes.
------
MattRogish
I think it depends a lot on the market you're in, how you've funded your
company, and where you live.
A founder of a bootstrapped, remote company in Iowa has far more risk than
almost any engineer in NYC or SF. Relatively speaking, if you are a not-
terrible engineer in SF/NYC you can have multiple jobs competing for you
within a day of your company going out of business.
A non-well-off or not-well-connected founder has comparatively few
opportunities to "fail up" when their company goes out of business. I'm a
founder of a bootstrapped, remote company, in a non tier-1 tech city. I'm the
lowest paid person in our company (of all engineers). If we go out of
business, our folks will have jobs in days. I won't. Does that mean I have
more risk? In a certain dimension, yes. In others, definitely not.
Of course, nothing in my post suggests that early engineers shouldn't be paid
reasonable salaries or have reasonable options. They should. But I don't think
risk has anything to do with it - common decency does.
------
draw_down
Makes sense to me. I have no clue why people take shit salaries in exchange
for a tiny slice of a company that will probably fail and, even if it
succeeds, still won't give them a down payment on a house.
------
mrkurt
What's especially troubling is the difference between early engineers and
early exec type hires. Engineers tend to get marginal salaries and not enough
options. Early marketing, sales, ops hires tend to get more than _all_ the
engineers, even if they join post series A as person number 40.
------
mrdrozdov
So what you're saying is that more people should start companies? Maybe you're
on to something...
~~~
kzhahou
He is calling something out as unfair, and your response is that more people
should be unfair?
~~~
mrdrozdov
I don't think that what the article describes is a balance of fairness, rather
an observation that employees are taking a lot of risk! The message to me is
that you should probably think twice before joining a company. This is not a
one-sided battle. Employees clearly have the power to start their own company,
avoid risk, and reap the same rewards that so many founders are achieving.
Over time, if enough people are starting companies, then risk will have to be
shifted back in the direction of founders and becoming an employee will become
more appealing.
~~~
kzhahou
The opening paragraph:
> Why are we still using old 1990’s cap tables and the same tiny option grants
> for employees as we did back then? Is that fair? To whom? Is it the right
> thing to do? I don’t think so.
We're talking about fairness. Not the abstract elementary-school concept, but
simply that employee equity compensation should not be wildly disproportionate
to their risk (and, this is not counter-balanced by their salary or perks).
------
msoad
I joined a startup that went through an IPO and experienced how those options
can be worthless; funny enough, my options are negative at the current stock
price! Now, if I decide to join a startup company, I would just value the
options at precisely $0.00.
------
awicklander
Everything written about here is in relation to VC funded companies. For
people building companies without outside investment, and who pay employees
with money, none of this is true at all and in fact sounds nonsensical.
------
cryoshon
Have you considered forming a tech workers union? Collective bargaining would
be very strong.
~~~
ThrustVectoring
A professional guild - like lawyers, accountants, or actuaries - is a much
better structure, IMO.
~~~
bpicolo
Plus nerdy folk love guilds.
------
seansmccullough
Unless you're a founder, joining a startup is a terrible deal. There's no two
ways about it.
~~~
harryh
Because everyone should have the exact same risk tolerance as you. And if they
don't, they're wrong. There's no two ways about it.
~~~
seansmccullough
Both the odds and rewards are much smaller than most people assume. You can
take all the risks you want - if the rewards don't justify it, it's a bad
deal.
~~~
harryh
I agree with you that some people probably overestimate those two things.
But I also think that other people tend to lump all startups together with the
assumption that the risk is more or less the same across all of them. In
reality risk/reward ratio certainly varies by at least 10x across
opportunities. Consider the difference between a seed stage company with no
users and a product that doesn't even work yet vs a Series B company with
significant traction and 10s of millions in the bank. These can be hugely
different situations.
But, even if you ignore this, I come back to the concept of risk tolerance. No
matter what #s you choose, there are certainly people out there who by dint of
personality, age, or financial situation are perfectly willing to take on a
lot of risk.
------
blizkreeg
All fair points and something that should change, especially (imo) the clause
around options expiry as well as %age.
I have just one honest (not coming from a place of sneer) question: did the OP
follow his own advice in the startup he founded (Parse and/or Scribd)?
------
JonFish85
If that were true, wouldn't you expect real estate prices around San Francisco
not to be rising as fast as they are? Something is driving up wages around the
startup hot-spots, isn't that a sign that there really isn't that much risk?
~~~
sundaeofshock
Nope.
The Bay Area is home to some major tech employers (Google, Apple, Oracle,
Cisco, Salesforce, Intel, etc), and is also a major outpost for other big
names (Microsoft, Samsung, Sony, etc). There are also a number of major
employers in high-compensation jobs (finance, bio-tech, law). Even among
startups, there is a big difference between a scrappy little company of three
folks and a company like Uber.
Bottom line: there is lots of money floating around the Bay Area. As much as
we might like to believe otherwise, the amount in small start-ups is just not
that much.
~~~
deegles
There's also a lot of overseas money being invested in real estate, driving up
the prices. The same is happening in Seattle.
------
jerguismi
> if adding a particular engineer to the team increases the company’s value by
> 10% overnight
Btw, how do you prove that? For me hiring seems more difficult than that, you
start to grasp the value of an employee after 6 months or so.
~~~
dannyr
Take the case of Instagram, they hired several Android Engineers. They added 1
million new users on the 1st day the app was released.
~~~
jerguismi
Yeah, take this one company and generalize it, makes perfect sense. There are
a bazillion companies and only one Instagram.
~~~
dannyr
Wow. Pretty hostile.
Where did I generalize?
I was just giving an example that you can consider.
------
oldmanjay
I think I get the point, but I really had to work to keep reading it. The tone
is relatively difficult to want to absorb without already being wholeheartedly
on board with the premise.
The main problem I have is that it troubles me to feel a sense of agreement
when the rhetoric is so emotionally manipulative, because parsimoniously,
_I've been manipulated_ into the agreement.
------
kzhahou
Ok, let's see some numbers!
Founders, post how much % you have.
Employees, post your employee # and your %.
Let's see if this disparity exists.
------
alpb
Reminds me of a post by Sam Altman: [http://blog.samaltman.com/employee-
equity](http://blog.samaltman.com/employee-equity)
------
mcnamaratw
Huh. Well written. I've been a founder and not a startup employee, and I found
myself paying attention.
------
rco8786
A lot of hyperbole in this article. The overarching message might be true but
the article basically reads as one sarcastic complaint to me.
Developer and Social Activism - rahulrrixe
https://medium.com/p/4b31cbd4ce3c
How a small contribution using technology can make a big difference
======
rahulrrixe
How can a small contribution using technology make a big difference? We
developers have a major role to play in society.
Ask HN: What is a good tech business model to start during a recession? - numakerg
In the WeWTF article that's currently on the front page, Scott Galloway
mentions that the firms he started during recessions were more successful for
these reasons:
1. An easier time finding talent
2. Easier to control costs
3. Getting immediate feedback, because clients/customers held their purse
strings closed
What type of business model do you think would be more successful in an
economic downturn?
For example, I think services where users altruistically pay to support
creators whose media is largely free to consume (e.g. Patreon, Twitch)
wouldn't yield as much revenue because people have less disposable income. I
can't really think of a model that would do well under these conditions. Of
course I could look up Scott's past work, but that would be cheating.
======
tlb
Ideally, you want to start the business just before the bottom of the business
cycle and have a large market share during the next boom. Many companies
started in 2007-2008 managed to hire great people at the beginning and are
bringing in huge revenue today.
Many products can be sold as cost savers during the trough, and rake in huge
profits during the boom. Cloud computing is an example -- people start using
it to avoid buying servers but during boom times they over-provision.
------
hourislate
> What type of business model do you think would be more successful in an
economic downturn?
The type that was successful during the good times. A good business that
solves customer problems or provides value will survive in all kinds of
economic conditions.
Scott is connected. His resources are endless. Unless you're rich or have
incredibly rich friends or investors it isn't going to happen.
------
jppope
Historically people go to the movies during recessions... but I wouldn't count
on that this next time around
------
verdverm
Something that helps a company do more with less. It's probably less about
model and more about product.
------
koseikusi
Dollar stores. Buy Dollar Tree (ticker: DLTR) or invest in Wish.
Google ML/AI Comic - jacquesm
https://cloud.google.com/products/ai/ml-comic-1/
======
siliconc0w
Annnnd Martha still doesn't have an ML-solvable business problem identified
with a large enough curated dataset to actually create a useful model.
~~~
baron_harkonnen
That's not a problem at all! In the fortune 500 there are plenty of companies
with "problems" that used to be solved with boring things like "averages" and
"business logic", but now you can replace those things with LSTMs and Deep NLP
models and get half the performance with several orders of magnitude more
complexity! The best part is the people building these systems have none of
that annoying "engineering background" baggage, which means they don't worry
about stupid stuff like support and maintenance, or even basic debugging: if
the model breaks you just build a brand new one!
~~~
jorblumesea
As an added bonus, people with these skills cost x3 as much as a standard
engineer. HR will love it!
------
lame88
Comically absent in this description of ML which includes hard technology like
NLP and actual use cases like self-driving cars is the elephant in the room of
advertising and surveillance. It's just like Andrew Ng's machine learning
course on Coursera - lists all these uses of machine learning....except mining
user information for advertising and other purposes. If anything, it's buried
under "image recognition" and "recommender systems". And yet it's what brings
in the dough. Pretty telling that the overwhelming majority of this
technology's current application is too unpalatable to acknowledge.
------
MattRix
Looks like Scott McCloud worked on this. I highly recommend his book
"Understanding Comics".
~~~
nestorD
It appears that it is not the first time he works with them, he also worked on
their chrome comics posted elsewhere in this thread.
------
blowski
Before clicking, I wondered if this was going to be a comic produced by ML.
Has such a thing been done?
~~~
aliljet
I wondered exactly the same thing and scrolled to the end for the human
credits:
Script by Dylan Meconis, Scott McCloud, Syne Mitchell
Art by Dylan Meconis
Color by Jenn Manley Lee
Japanese localization by Kaz Sato, Mariko Ogawa
Produced by the Google Comics Factory (Allen Tsai, Alison Lentz, Michael Richardson)
~~~
tylerhou
Even GPT-2 can’t create long, coherent stories; I doubt that such an AI which
can explain things and draw useful pictures exists.
------
bepvte
[https://federated.withgoogle.com/](https://federated.withgoogle.com/) this is
another fun one
------
rapind
I'm picturing this comic as a tattered poster on the wall of an abandoned
shell of a factory where the last human rebels live 200 years in the future
after post-AI fallout.
~~~
jacquesm
With a bunch of 'wanted' posters from a.d. 2026 next to it.
------
sertaco
If I had seen this five years ago, I would have found it cool, but it is a
hard sell now. So many similar projects are around now for conveying the
basics of ML mixed with a pinch of fun elements.
~~~
movedx
I don't think anyone is trying to sell you anything, friend.
Learning and educating come in different shapes, sizes, flavours, and many
people learn in different ways and at different paces. This is just another
way others can learn about a complex topic.
------
endianswap
Ah yes, the common scenario of engineers getting time-and-a-half for their
overtime work.
~~~
jessaustin
It's a good expectation to include subtly in something that will be read by a
variety of people. The Overton window on this won't be moved quickly, but it's
nice to think that it might be moved...
------
snek
Good explanation of machine learning somewhere under the layers of
condescending rhetoric and marketing. I went into this expecting something
like the chrome comic and boy was that underwhelming.
------
axiom92
Reminds me of Logicomix!
([https://www.amazon.com/gp/product/1596914521/](https://www.amazon.com/gp/product/1596914521/))
------
iancarroll
The Chrome comic is pretty iconic I think:
[https://www.google.com/googlebooks/chrome/](https://www.google.com/googlebooks/chrome/)
------
mistrial9
I like this, especially since it seems to make a fair case regarding CNN and
Deep Learning (not solve-alls).. looking forward to the second part.
------
happy-go-lucky
It's a great intro. It reminds me of the official scikit-learn tutorial I
worked on a while ago.
------
haberdasher
Mel is Bezos?
Hotels Hammered by Coronavirus Offer 14-Day Quarantine Packages - djsumdog
https://www.wsj.com/articles/hit-by-coronavirus-slowdown-hotels-try-catering-to-the-quarantined-11584624502
======
jobigoud
What about the staff? Are they still doing room service? What about when the
guests leave, to the hospital or after the 14 days? Who disinfects the room?
------
ThePowerOfFuet
[http://archive.is/Vu92k](http://archive.is/Vu92k)
------
kwhitefoot
Use [https://github.com/iamadamdev/bypass-paywalls-
firefox/blob/m...](https://github.com/iamadamdev/bypass-paywalls-
firefox/blob/master/README.md) to read paywalled articles.
~~~
ThePowerOfFuet
That doesn't help anyone on mobile.
This does: [http://archive.is/Vu92k](http://archive.is/Vu92k)
~~~
kwhitefoot
Firefox tells me bad certificate domain.
If I tell it to accept the risk I get a 403 Forbidden from cloudflare.
------
doodlebugging
I haven't read the paywalled article yet but the headline caught my attention
because this looks like a great way to maintain some occupancy while
everything is adjusting to Covid-19.
The $2 million penalty clause - chaostheory
http://weblog.infoworld.com/gripeline/archives/2008/12/tom_offers_us_t.html?source=rss
======
jamess
Ha! A wonderful example of why we have laws regarding unfair contract terms. I
can't believe they found a lawyer either ignorant or optimistic enough to put
this in. The remedy is out of all proportion with the injury (which, as far as
I can tell is non-existent) so the contract is unenforceable.
~~~
noonespecial
_I can't believe they found a lawyer either ignorant or optimistic enough to
put this in._
I sure can. My only question is: Did he bite his pinky while adding the clause
and pronounce it "meeeelion"?
It's strange to think that, as a lawyer, you can write a 'program' in legalese
to run on 'the legal system' and collect your pay without ever having to run
it. Your client finds out later (much) if it will run or not! Wish my code
worked that way.
Functional Geometry - b-man
http://www.frank-buss.de/lisp/functional.html
======
jcl
Neat. It might be interesting to try this with DrScheme, since it can display
images inline in the repl.
<http://docs.plt-scheme.org/quick/>
<http://docs.plt-scheme.org/teachpack/image.html>
~~~
b-man
In the same spirit I would recommend that you try section 2.2.4[1] of the SICP
textbook using The SICP Picture Language [2] which can be found here [3].
That section is based on the same original paper, and it is very nice to check
on the book's explanation using a modern environment such as DrScheme.
[1] [http://mitpress.mit.edu/sicp/full-text/book/book-
Z-H-15.html...](http://mitpress.mit.edu/sicp/full-text/book/book-
Z-H-15.html#%_sec_2.2.4)
[2] [http://planet.plt-scheme.org/package-
source/soegaard/sicp.pl...](http://planet.plt-scheme.org/package-
source/soegaard/sicp.plt/2/1/planet-docs/sicp-manual/index.html)
[3] [http://planet.plt-
scheme.org/display.ss?package=sicp.plt&...](http://planet.plt-
scheme.org/display.ss?package=sicp.plt&owner=soegaard)
------
mccutchen
Thank you so much! I first encountered this article back in 2004 or 2005. I
just remembered it again about a month ago, and have been fruitlessly Googling
for it off and on since then. You've really made my day!
~~~
mccutchen
And, I just did a quick port to Python: <http://gist.github.com/220038>
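For anyone curious what such a port looks like without following the link, here is a minimal sketch of the core idea in Python: painters as functions from a frame to line segments, combined with higher-order functions. The names and the frame representation are illustrative guesses, not taken from the gist or the paper:

    # A painter is a function: frame -> list of line segments in absolute coords.
    # A frame is (origin, edge1, edge2), each a 2-tuple vector.

    def add(a, b): return (a[0] + b[0], a[1] + b[1])
    def sub(a, b): return (a[0] - b[0], a[1] - b[1])
    def scale(s, v): return (s * v[0], s * v[1])

    def coord_map(frame):
        origin, e1, e2 = frame
        return lambda p: add(origin, add(scale(p[0], e1), scale(p[1], e2)))

    def segments_painter(segs):
        def painter(frame):
            m = coord_map(frame)
            return [(m(a), m(b)) for a, b in segs]
        return painter

    def transform(painter, new_origin, corner1, corner2):
        def transformed(frame):
            m = coord_map(frame)
            o = m(new_origin)
            return painter((o, sub(m(corner1), o), sub(m(corner2), o)))
        return transformed

    def beside(p, q):
        left = transform(p, (0, 0), (0.5, 0), (0, 1))
        right = transform(q, (0.5, 0), (1, 0), (0.5, 1))
        return lambda frame: left(frame) + right(frame)

    def below(p, q):
        bottom = transform(p, (0, 0), (1, 0), (0, 0.5))
        top = transform(q, (0, 0.5), (1, 0.5), (0, 1))
        return lambda frame: bottom(frame) + top(frame)

    # A trivial picture: the two diagonals of the unit square.
    cross = segments_painter([((0, 0), (1, 1)), ((0, 1), (1, 0))])
    unit_frame = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
    print(beside(cross, below(cross, cross))(unit_frame))

Feeding the returned segments to any 2-D plotting library reproduces the beside/below compositions from the article.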
------
youngian
The GIMP has a Scheme-based console that would be pretty suited to this sort
of thing.
------
stevesmith155
Reminds me of Escher. Very cool.
U.S. officials believe Iran is behind recent cyberattacks - evo_9
http://security.blogs.cnn.com/2012/10/15/u-s-officials-believe-iran-is-behind-recent-cyberattacks/?hpt=hp_t1
======
MaysonL
Talk about burying the lede.
Here's the last sentence: _"The unit was developed in response to American and
Israeli cyberattacks on the Iranian nuclear enrichment plant at Natanz."_
Star Simpson speaks out about how MIT treated her in LED case at Logan - kramarao
https://medium.com/@starsandrobots/understandably-cause-for-alarm-1f0929be0615
======
tehwebguy
> So, I went directly to the best place I could think of: the very first place
> I walked to after I was let free, was the Office of the President at MIT.
> (In loco parentis, right?) But I was stopped at the door. She wouldn’t see
> me or talk to me. Liability, and all. The potential cost of giving me any
> legal advice or talking to her directly about anything, would simply be too
> great. So MIT found the protection it sought, while I did not.
Bummer, it seems college sometimes prepares you for the real world by turning
its back on you just like the real world does.
> Star Simpson’s actions were reckless and understandably created alarm at the
> airport. — MIT News Office, Sep 21 20
Ouch, fuck MIT
~~~
lsc
Yeah, MIT kind of has a rep[1] for being particularly unfriendly to students
in legal trouble. Really, it doesn't seem like a very good place for people
who want to push the envelope.
It seems like something a student ought to consider, I mean, that MIT seems to
be more concerned about it's reputation with the legal community than with
it's reputation with students.
[1][https://en.wikipedia.org/wiki/Aaron_Swartz](https://en.wikipedia.org/wiki/Aaron_Swartz)
------
gravypod
Is this (
[http://i2.wp.com/boingboing.net/images/cfa4827569_20070921de...](http://i2.wp.com/boingboing.net/images/cfa4827569_20070921device3.jpg?w=315)
) the hoodie that Star Simpson was wearing? I could understand the police
being a little suspicious. If you are going to do any electronic project, you
need to hide anything other than your display/LEDs.
I could understand MIT distancing itself from what would have been a PR
s__tshow because of how obviously threatening that device looks.
If anyone is wondering where I found this image, it can be located here (
[http://boingboing.net/2007/09/21/mit-student-
arrested.html](http://boingboing.net/2007/09/21/mit-student-arrested.html) ).
It seems hard to find a photo of this with any article on the subject. It took
a bit of digging to find that few-year-old BoingBoing story. This link also
contains more details on the incident that were not covered in the story.
~~~
lsc
I guess I fail to see how that is any more obviously threatening than, say,
gluing a bunch of legos to my shirt. I mean, it's weird, and if the point of
security is to make us all try not to be weird, that's one thing... but I'm
not sure how "weird" has anything to do with "threatening"
~~~
gravypod
To us a breadboard is just a piece of plastic; to the common person it is
something scary.
Would you take a suitcase with a breadboard, exposed wires, and a 9V battery
through security at an airport? How about higher-security areas than that?
There is a very, very big difference between what was worn and something that
one would associate with a light-up sweater.
When I heard the story I assumed this (
[http://cdn.shopify.com/s/files/1/0070/8002/products/g513a-ch...](http://cdn.shopify.com/s/files/1/0070/8002/products/g513a-christmas-
sweaters-with-lights_grande.jpeg) ) was what was being worn.
There is a big difference between exposed wires, batteries, and blinking LEDs
stuck together on a breadboard, and Legos. When going to an area with elevated
tension, security, and crazed guards the gap between "weird" and "threatening"
is quickly closed.
~~~
lsc
>When going to an area with elevated tension, security, and crazed guards the
gap between "weird" and "threatening" is quickly closed.
Yeah, that's kind of my point. Incidents like this aren't about safety, they
are about conformity.
------
IanDrake
Looking at pics of what she was wearing, I think it's sad that she still
doesn't realize she was reckless.
I'm not saying she deserved everything that happened to her, but wearing a
breadboard with a cluster of lights and a 9v hanging off your chest, at an
airport, might be the very definition of reckless.
Ok, I get it, you're a maker. That doesn't give you the right to scare the
shit out of people. Would it be ok for air-soft fans to start a shooting war
at an airport and expect people to understand?
~~~
mquander
It's not super easy to figure out what things other people might be scared of
if it never occurred to you to be scared of them yourself. If I saw someone
walking around with a bunch of electronics draped over themselves it wouldn't
cross my mind to be afraid of them; as a result, I wouldn't have thought twice
about doing it (until this and the other more recently publicized case of
freakouts.)
It's not as if there's a high school civics lecture on the topic of Strange
American Fears, and parents don't tell their kids not to have something that
looks like a bunch of messy electronics, so I'm not sure where she is supposed
to go and figure this out. As a result, it's hard for me to say she did
something wrong.
~~~
Nadya
_> It's not super easy to figure out what things other people might be scared
of if it never occurred to you to be scared of them yourself._
I don't agree with this claim. In fact, I think it's extremely easy to
identify what might scare other people. So much so that it can be reduced to a
single question:
"Is this outside the norm?"
Doesn't matter if it's full body tattoos, 30 body piercings, electronics
plastered all over you, you're lit up like an X-mas tree, wearing a full-body
suit, wearing a balaclava, what have you. Do you see other people doing it?
No? Chances are you're going to raise suspicion and suspicion not only _can_
cause fear but I argue it _will_ cause fear.
Normality is a social comfort zone. Nobody bats an eye at anyone who isn't
standing out from the crowd.
Now before anyone tries to wage some sort of moral war against _me_ for
stating _how things are_. I don't pass any judgement on if this is "good" or
"bad" behavior. However, there is an evolutionary explanation for this:
"People who don't fit with your community are outsiders. Outsiders can be
friend or foe. Be suspicious of them."
Nobody would be scared of what they were wearing if they were at a Hackathon
or some place where "this is normal". But they were at an airport. That isn't
normal for an airport.
~~~
DougMerritt
> Doesn't matter if it's full body tattoos, 30 body piercings, electronics
> plastered all over you
So not only do you agree she deserved to be arrested, you also claim that
anyone with tattoos and/or piercings who goes to an airport should also be
arrested?
Unbelievable.
Civil liberties should not hinge on looking just like everyone else, following
the herd, never daring to be creative, etc.
~~~
Nadya
I'm not sure how you read my post and got that message out of it without
purposefully being dishonest.
_> Civil liberties should not hinge on looking just like everyone else,
following the herd, never daring to be creative, etc._
You even explicitly went out of your way to ignore what I said.
_> Now before anyone tries to wage some sort of moral war against me for
stating how things are. I don't pass any judgement on if this is "good" or
"bad" behavior. _
Take your moral war elsewhere, because I'm not interested in this discussion.
I'm stating how things are. _Not how they "should" be_. Not how _you want_
them to be. Not how _I want_ them to be. How they _currently are_. "What they
are" and "what they should be" are not the same thing. Am I being patronizing
enough to make my point crystal clear?
_> never daring to be creative_
This is a strawman. Dare to be creative at hackathons and art conventions -
not an airport. Ever heard the phrase "time and place"?
~~~
DougMerritt
> Dare to be creative at hackathons and art conventions - not an airport. Ever
> heard the phrase "time and place"?
Remove tattoos and piercings at the airport, really?
> stating how things are
Like hell. Your view of things 100% implies that Star was in fact culpable for
her own arrest, which isn't true, so I don't believe your disclaimers that you
are just neutrally commenting.
> without purposefully being dishonest...
> ...Am I being patronizing enough to make my point crystal clear?
That you're being an ass? Sure.
~~~
Nadya
_> Remove tattoos and piercings at the airport, really?_
Why are you so stuck on one of several examples? Open carrying is legal in
many states. Go for a workout outside a police station while open carrying,
let me know how that works out. Just do some jumping jacks across the street.
Nothing _illegal_ but it will certainly draw some unwanted attention from the
police!
_> Like hell._
It's what happened and why it happened. She wore something outside of the
ordinary, grabbed unwanted attention, and was arrested. Which part of that is
a false statement?
~~~
DougMerritt
> She wore something outside of the ordinary, grabbed unwanted attention, and
> was arrested. Which part of that is a false statement?
If that's all you had said, then sure, that's just the facts.
My interpretation of your _tone_ was that you were unsympathetic to Star,
while I felt and feel great sympathy to her, and outrage towards the people
who mistreated her.
Somehow a lot of idiots in the world have gotten the idea that digital
electronics resemble a bomb, which is deeply retarded considering that it only
resembles a timer, with no sign of an explosive.
I fault the idiots, not Star, and I am surely going to be upset with anyone
who seems unsympathetic to what happened to her.
Racket on the Playstation 3? It's Not What you Think [video] - vmmenon
http://www.youtube.com/watch?v=oSmqbnhHp1c
======
thefreeman
It should be noted that the speaker is one of the lead developers at Naughty
Dog, likely one of the greatest game studios of all time. Definitely recommend
a watch.
------
kevingadd
The speaker actually begins talking about racket and how they use it at about
6 minutes into the video. Here's a link:
[https://www.youtube.com/watch?v=oSmqbnhHp1c&t=6m0s](https://www.youtube.com/watch?v=oSmqbnhHp1c&t=6m0s)
Interesting to hear how they replaced GOAL in their toolchain. The people I've
met who used GOAL in the past talked at length about how much they miss it.
~~~
aktau
Parent is referring to GOAL/GOOL [1], Lisps that were developed internally at
Naughty Dog for game scripting (the first iteration for the Crash Bandicoot
series, the second one for the Jak & Daxter era). It is said that these were
very powerful systems that could compile to some very mean PS1/PS2 assembly.
If you read about how they had to optimize Crash Bandicoot to be able to give
that "wow"-factor at their first E3, blazing past all other PS1 games and
setting a new standard, you'll know what I mean.
There's a very interesting blog series about it [2].
[1]:
[http://en.wikipedia.org/wiki/Game_Oriented_Assembly_Lisp](http://en.wikipedia.org/wiki/Game_Oriented_Assembly_Lisp)
[2]: [http://all-things-andy-gavin.com/2011/02/02/making-crash-
ban...](http://all-things-andy-gavin.com/2011/02/02/making-crash-bandicoot-
part-1/)
~~~
z3phyr
Couldn't Sony just open source it? I am very interested to get a feel for it.
Making floating point math highly efficient for AI hardware - probdist
https://code.fb.com/ai-research/floating-point-math/
======
grandmczeb
Here's the bottom line for anyone who doesn't want to read the whole article.
> Using a commercially available 28-nanometer ASIC process technology, we have
> profiled (8, 1, 5, 5, 7) log ELMA as 0.96x the power of int8/32 multiply-add
> for a standalone processing element (PE).
> Extended to 16 bits this method uses 0.59x the power and 0.68x the area of
> IEEE 754 half-precision FMA
In other words, interesting but not earth shattering. Great to see people
working in this area though!
~~~
jhj
At least 69% more multiply-add flops at the same power iso-process is nothing
to sneeze at (we're largely power/heat bound at this point), and unlike normal
floating point (IEEE or posit or whatever), multiplication, division/inverse
and square root are more or less free power, area and latency-wise. This is
not a pure LNS or pure floating point because it is a hybrid of "linear"
floating point (FP being itself hybrid log/linear, but the significand is
linear) and LNS log representations for the summation.
Latency is also a lot less than IEEE or posit floating point FMA (not in the
paper, but the results were only at 500 MHz because the float FMA couldn't
meet timing closure at 750 MHz or higher in a single cycle, and the paper had
to be pretty short with a deadline, so couldn't explore the whole frontier and
show 1 cycle vs 2 cycle vs N cycle pipelined implementations).
The floating point tapering trick applied on top of this can help with the
primary chip power problem, which is moving bits around, so you can solve more
problems with a smaller word size because your encoding matches your data
distribution better. Posits are a partial but not complete answer to this
problem if you are willing to spend more area/energy on the encoding/decoding
(I have a short mention about a learned encoding on this matter).
A floating point implementation that is more efficient than typical integer
math but in which one can still do lots of interesting work is very useful too
(providing an alternative for cases where you are tempted to use a wider bit
width fixed point representation for dynamic range, or a 16+ bit floating
point format).
~~~
grandmczeb
The work is definitely great and I have no doubt we'll see new representations
used in the future. But at least on the chip I work on, this would be a <5%
power improvement in the very best case. For the risk/complexity involved, I
would hope for a lot more.
------
moflome
Not sure why this isn't getting more votes, but it's a good avenue of research
and the authors should be commended. That said, this approach to optimizing
floating point implementations has a lot of history at Imagination
Technologies, ARM and similar low-power inferencing chipset providers. I
especially like the Synopsys ASIP Design [0] tool which leverages the open-
source (although not yet IEEE ratified) LISA 2.0 Architecture Design Language
[1] to iterate on these design issues.
Interesting times...
[0] [https://www.synopsys.com/dw/ipdir.php?ds=asip-
designer](https://www.synopsys.com/dw/ipdir.php?ds=asip-designer) [1]
[https://en.wikipedia.org/wiki/LISA_(Language_for_Instruction...](https://en.wikipedia.org/wiki/LISA_\(Language_for_Instruction_Set_Architecture\))
------
Geee
A bit off-topic, but I remember some studies about 'under-powered' ASICs, ie.
running with 'lower-than-required' voltage and just letting the chip fail
sometimes. I guess the outcome was that you can run with 0.1x power and get
0.9x of correctness. Usually chips are designed so that they never fail and
that requires using substantially more energy than is needed in the average
case. If the application is probabilistic or noisy in general, additional
'computation noise' could be allowed for better energy efficiency.
~~~
david-gpu
That sounds awful for verification, debugging, reproducibility and safety-
critical systems. Imagine this in a self-driving car. Scary.
~~~
TomMarius
Well, you could simply not use these in a self-driving vehicle.
------
dnautics
Wow! It's kind of a weird feeling to see some research I worked on get some
traction in the real world!! The ELMA lookup problem for 32-bit could be fixed
by using the posit standard, which just has "simple" adders for the section
past the Golomb-encoded section, though you may have to worry about spending
transistors on the barrel shifter.
~~~
jhj
The ELMA LUT problem is in the log -> linear approximation to perform sums in
the linear domain. This avoids the issue that LNS implementations have had in
the past, which is in trying to keep the sum in the log domain, requiring an
even bigger LUT or piecewise approximation of the sum and difference non-
linear functions.
This is independent of any kind of posit or other encoding issue (i.e. it has
nothing to do with posits).
(I'm the author)
~~~
dnautics
Thanks for your work!! (And citing us ofc)
Do you think there might be an analytic trick that you could use for higher
size ELMA numbers that yields semiaccurate results for machine learning
purposes? Although to be honest I still think with a kuslich FMA and an extra
operation for fused exponent add (softmax e.g.) you can cover most things
you'll need 32 bits for with 8
~~~
jhj
I've thought of that, but the problem is that it needs to linearly interpolate
between the more accurate values, and depending upon how finely grained the
linear interpolation is, you would need a pretty big fixed point multiplier to
do that interpolation accurately.
If you didn't want to interpolate with an accurate slope, and just use a
linear interpolation with a slope of 1 (using the approximations 2^x ~= 1+x
and log_2(x+1) ~= x for x \in [0, 1)), then there's the issue that I discuss
with the LUTs.
In the paper I mention that you need at least one more bit in the linear
domain than the log domain (i.e., the `alpha` parameter in the paper is 1 +
log significand fractional precision) for the values to be unique (such that
log(linear(log_value)) == log_value) because the slope varies significantly
from 1, but if you just took the remainder bits and used that as a linear
extension with a slope of 1 (i.e., just paste the remainder bits on the end,
and `alpha` == log significand fractional precision), then
log(linear(log_value)) != log_value everywhere. Whether or not this is a real
problem is debatable though, but probably has some effect on numerical
stability if you don't preserve the identity.
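A toy numeric check of that uniqueness claim. This only simulates the rounding, not the hardware datapath, and the bit width is an arbitrary choice for the demo:

    # Count distinct linear codes produced by the 2^f log-domain significand
    # values when the linear fraction keeps `alpha` bits (round to nearest).
    # With f = 5, alpha = f produces collisions while alpha = f + 1 keeps
    # every value distinct.

    def distinct_linear_codes(f, alpha):
        codes = set()
        for k in range(2 ** f):
            x = k / 2 ** f              # log2 of the significand, in [0, 1)
            linear = 2 ** x - 1.0       # linear fractional part, in [0, 1)
            codes.add(round(linear * 2 ** alpha))
        return len(codes)

    f = 5  # log-domain fractional bits (arbitrary)
    for alpha in (f, f + 1):
        print(f"alpha = {alpha}: {distinct_linear_codes(f, alpha)} distinct "
              f"codes out of {2 ** f} log values")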
Based on my tests I'm skeptical about training in 8 bits for general problems
even with the exact linear addition; it doesn't work well. If you know what
the behavior of the network should be, then you can tweak things enough to
make it work (as people can do today with simulated quantization during
training, or with int8 quantization for instance), but generally today when
someone tries something new and it doesn't work, they tend to blame their
architecture rather than the numerical behavior of IEEE 754 binary32 floating
point. There are some things even today in ML (e.g., Poincaré embeddings) that
can have issues even at 32 bits (in both dynamic range and precision). It
would be a lot harder to know what the problem is in 8 bits when everything is
under question if you don't know what the outcome should be.
This math type can and should also be used for many more things than neural
network inference or training though.
~~~
nestorD
> It would be a lot harder to know what the problem is in 8 bits when
> everything is under question if you don't know what the outcome should be.
I might have a solution for that : I work on methods to both quantify the
impact of your precision on the result and locate the sections of your code
that introduced the significant numerical errors (as long as your numeric
representation respects the IEEE standard).
However, my method is designed to test or debug the numerical stability of
code, not to be used in production (as it impacts performance).
~~~
jhj
None of the representations considered in the paper (log or linear posit or
log posit) respect the IEEE standard, deliberately so :)
~~~
nestorD
You drop denormals and change the distribution, but do you keep the 0.5 ULP
(round to nearest) guarantee from the IEEE standard? And are your rounding
errors exact numbers in your representation (can you build Error-Free
Transforms)?
~~~
jhj
For (linear) posit, what the "last place" is varies. Versus a fixed-size
significand, there is no 0.5 ulp guarantee. If you are in the regime of full
precision, then there is a 0.5 ulp guarantee. The rounding also becomes
logarithmic rather than linear in some domains (towards 0 and +/\- inf), in
which case it is 0.5 ulp log scale rather than linear, when the exponent scale
is not 0.
For my log numbers under ELMA (with or without posit-ing), the sum of 2
numbers alone cannot be analyzed in a simple ulp framework I think, given the
hybrid log/linear nature. Two numbers summed are both approximated in the
linear domain (to 0.5 ulp linear domain, assuming alpha >= frac + 1), then
summed exactly, but conversion back to the log domain when done is
approximate, to 0.5 ulp in the log domain. But the result is of course not
necessarily 0.5 ulp in the log domain. Multiplication, division and square
root are always the exact answer however (no rounding). The sum of two log
numbers could of course also be done via traditional LNS summation, in which
case there is <0.5 ulp log domain error.
Kulisch accumulation throws another wrench in the issue. Summation of many log
domain numbers via ELMA will usually be way more accurate than 0.5 (log
domain) ulp rounding via LNS traditional summation techniques, because the
compounding of error is minimized, especially when you are summing numbers of
different (or slightly different) magnitudes. Kulisch accumulation for linear
numbers is of course exact, so the sum of any set of numbers rounded back to
traditional floating point is accurate to 0.5 ulp.
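A small illustration of that exactness argument, simulated in Python with rational arithmetic standing in for the wide fixed-point accumulator. The 10-bit toy significand and the input values are arbitrary choices for the demo:

    # Compare rounding after every addition (ordinary FP behaviour) with a
    # Kulisch-style accumulator: sum exactly, round once at the end.
    from fractions import Fraction
    import math

    SIG_BITS = 10  # toy significand width, an assumption for the demo

    def round_to_format(x):
        if x == 0:
            return 0.0
        e = math.floor(math.log2(abs(x)))
        scale = 2.0 ** (SIG_BITS - e)
        return round(x * scale) / scale

    small = round_to_format(1e-4)
    values = [1.0] + [small] * 20000      # true sum is about 3.0

    running = 0.0                          # round after every add
    for v in values:
        running = round_to_format(running + v)

    exact = sum(Fraction(v) for v in values)   # exact accumulation
    print("round after every add :", running)                        # stuck near 1.0
    print("exact sum, round once :", round_to_format(float(exact)))  # about 3.0

Each small addend is below half an ulp of the running sum in this toy format, so the rounded-every-time version never moves, while the exact accumulator recovers the correct total.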
------
sgt101
For those interested the general area I saw a good talk about representing and
manipulating floating point numbers in Julia at CSAIL last week by Jiahao
Chen. The code with some good documentation is on his github.
[https://github.com/jiahao/ArbRadixFloatingPoints.jl](https://github.com/jiahao/ArbRadixFloatingPoints.jl)
------
davmar
caveat: i haven't finished reading the entire FB announcement yet.
google announced something along these lines at their AI conference last
september and released the video today on youtube. here's the link to the
segment where their approach is discussed:
[https://www.youtube.com/watch?v=ot4RWfGTtOg&t=330s](https://www.youtube.com/watch?v=ot4RWfGTtOg&t=330s)
------
moltensyntax
> Significands are fixed point, and fixed point adders, multipliers, and
> dividers on these are needed for arithmetic operations... Hardware
> multipliers and dividers are usually much more resource-intensive
It's been a number of years since I've implemented low-level arithmetic, but
when you use fixed point, don't you usually choose a power of 2? I don't see
why you'd need multiplication/division instead of bit shifters.
~~~
jhj
Multiplication or division by a power of 2 can be done by bit shift assuming
binary numbers represent base-2 numbers; i.e. not a beta-expansion
[https://en.wikipedia.org/wiki/Non-
integer_representation](https://en.wikipedia.org/wiki/Non-
integer_representation) where binary numbers are base-1.5 or base-sqrt(2) or
base-(pi-2) or whatever (in which case multiplication or division by powers of
1.5 or sqrt(2) or (pi-2) could be done via bit shift).
But when multiplying two arbitrary floating point numbers, your typical case
is multiplying base-2 numbers not powers of 2, like 1.01110110 by 1.10010101,
which requires a real multiplier.
General floating point addition, multiplication and division thus require
fixed-point adders, multipliers and dividers on the significands.
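To make that concrete, here is the same significand arithmetic done two ways on integer-coded significands in Python; the 8-bit fraction width is arbitrary:

    # Significands stored as unsigned integers with 8 fractional bits: 1.xxxxxxxx
    FRAC = 8

    def to_fixed(x):            # encode a value in [1, 2) as a fixed-point integer
        return round(x * (1 << FRAC))

    def to_float(i):
        return i / (1 << FRAC)

    a = to_fixed(0b101110110 / 256)   # 1.01110110 in binary
    b = to_fixed(0b110010101 / 256)   # 1.10010101 in binary

    # Multiplying by a power of two really is just a shift:
    print(to_float(a << 2))           # a * 4

    # Multiplying two arbitrary significands needs a genuine multiplier:
    product = (a * b) >> FRAC         # double-width product, truncated back
    print(to_float(a) * to_float(b), to_float(product))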
------
saagarjha
I find it interesting that they were able to find improvements even on
hardware that is presumably optimized for IEEE-754 floating point numbers.
~~~
nestorD
It is a trade-off: they find improvements by losing precision where they
believe it is not useful for their use case.
There should be a day every year for donating money to free apps - jcslzr
======
seekingcharlie
Really?
I mean, of all the things that one could donate money to...
------
aurora72
A similar thing might be done for e-books, too, because e-books sooner or
later end up on torrent networks and become available for free.
------
chrismcb
Or... 365 days...
Release day economics - tbassetto
http://uniformmotion.tumblr.com/post/9659997039/release-day-economics
======
gravitronic
I always like music industry posts showing up on HN as it's my view that the
startup industry is in some ways like the music industry of years past.
Competition is fierce. Most start out working on their (startup or music)
product part-time until the product becomes popular enough that it is a "hit".
At that point the entrepreneur may be lucky to end up funded ("signed") which
will help their ability to pursue their product full time and spend the
marketing dollars to reach a larger audience.
Thankfully it's unlikely the VC/entrepreneur relationship will become as
twisted and exploitative as the major music industry's relationship with
artists as being funded does not suddenly unlock access to an entire set of
verticals inaccessible to unsigned artists. Or does it?
~~~
rasmusrygaard
I think the main difference between the two is that entrepreneurs have a
viable alternative in working for an established company (if we can forget
about the lack of jobs for a minute). Yes, musicians can work day jobs too,
but there are hardly as many openings for professional musicians as there are
for professional programmers. The startup analogy is valid, but for recording
artists, taking the leap is often in direct conflict with whatever pays the
bills.
------
alex1
I didn't see any mention of songwriting royalties, which can be very
significant if they also write their own music.
The songwriter/composer of a song (not a recording of a song, but the actual
melody, lyrics) gets a performance royalty each time a song is played in
"public" (internet and broadcast radio, in the elevator, at a bar, etc). This
is the royalty BMI, ASCAP, and SESAC collect. If the song is recorded and sold
by someone, the songwriter gets mechanical royalties for each unit sold. If
memory serves me correctly, the compulsory rate right now is 8 cents per unit
sold. If the song in any form (recorded, sung by a drunk dude, etc) is used in
something like a movie or a TV show, the songwriter gets a synchronization
royalty. I've seen sync royalties range from $5,000 to $250,000. Songwriters
are usually signed to publishing companies. Publishing companies are mostly
owned by record labels themselves (or their parent companies). Publishing
companies take a cut of the songwriters' royalties, but not as big as the cut
record labels take from recording artists. I've seen rates ranging from 10% to
30%.
On the other hand, the recording artist gets pretty much whatever the record
label decides to give them as described in the recording contract. The label
will own the song recordings, not the artist. Recording artists (and record
labels) do _not_ get any royalties for public performance. Yes, when a song is
played on FM radio, the record label doesn't get a penny. The _only_ exception
to this is when the song is played on "interactive" services on the Internet.
This is the royalty Sound Exchange collects. In those cases, both the
songwriter and whoever owns the copyright to the sound recording (the record
label) get royalties. The main sources of revenue for the label are from these
royalties, and from selling the sound recording in stores and online. They
take a large chunk (60-70%) of this and distribute the rest to the song's
recording artist, producer, etc.
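To put rough numbers on those splits, here is a toy Python sketch. Every
figure in it (the $7 wholesale price, 10 tracks, the 20% publisher cut, the
65% label cut) is an assumption for illustration, not a real contract, and it
ignores the many deductions that show up in practice:

    # Toy per-unit split of an album sale using the ballpark rates above.
    wholesale_per_unit = 7.00      # assumed label receipts per album sold
    tracks_per_album = 10
    mechanical_per_track = 0.091   # statutory rate (recalled as ~8c above)

    publisher_cut = 0.20           # within the 10-30% range mentioned
    mechanicals = tracks_per_album * mechanical_per_track
    songwriter_take = mechanicals * (1 - publisher_cut)

    label_cut = 0.65               # within the 60-70% range mentioned
    recording_pool = wholesale_per_unit - mechanicals
    artist_take = recording_pool * (1 - label_cut)

    print(f"songwriter ${songwriter_take:.2f}, artist/producer "
          f"${artist_take:.2f}, label ${recording_pool * label_cut:.2f}")

With these guesses the songwriter ends up with well under a dollar per album
and the label keeps the largest single share, which is the point being made.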
~~~
burrokeet
the compulsory rate in the US right now is 9.1 cents for songs up to 5 minutes
in length, and 24 cents for ringtones.
syncs right now go from free to maybe 50k, unless it is a massive song (think
the Beatles) in a massive campaign or feature. The average network TV sync
right now is prob around 5 grand all-in, meaning 2.5k goes to the owner of the
sound recording copyright (the label or artist) and 2.5k goes to the owner of
the composition (the songwriter(s) or publisher(s)).
Music publishers can take anywhere from 10% (for an admin deal) to 50% (for a
co-publishing deal). Bigger percentages involve advances (recoupable payments
against future royalties), but also much longer terms (5-10+ years).
Songwriters don't get publishing deals unless their songs are being performed
or sold, and there are lots and lots of indie publishers out there, along with
the majors as you mention.
In many territories outside of the US, artists and master recording copyright
owners do get paid for public performance - in the UK for example PPL is one
society that pays some of these rightsholders for public performance.
~~~
zcrar70
Just to add a little more detail:
> Music publishers can take anywhere from 10% (for an admin deal) to 50% (for
> a co-publishing deal). Bigger percentages involve advances (recoupable
> payments against future royalties), but also much longer terms (5-10+
> years).
Note this is for the publishing royalties only (not performance rights)
And on @alex1's post:
> The label will own the song recordings, not the artist.
This would depend on the contract, though it's true that in most cases today
the label would own the recording.
In some cases the artists choose to sub-license the recording to the record
company, in which case he/she/they retain rights to the recording.
Finally, as is well known now artists often get an advance from the record
company on signing a contract. This advance however is deductible from any
earnings the artist would receive. Sometimes the advance is used to pay for
the recording or equipment or even to finance a tour (the tours are usually
not financed by the labels, aside from the 360 arrangements someone else
mentioned.)
No-one has mentioned the artist manager fees - I'm not sure of what the
figures for that are, but I think they range from 10% up to 50% (of the
advance) in some very rare cases.
In short, in most cases making a living as a musician/recording artist is hard
to impossible. Many semi-successful indie bands don't earn much more than a
minimum wage job, with perhaps similar long-term prospects. If you make it
big, you're rich but anything else is not a great existence. Oh, and the
record companies often struggle too (both majors and indie these days.)
------
burrokeet
If the band members are also the authors/composers of their recorded material,
they will receive slightly more than what is suggested here, since they will
also be due mechanical and/or performance royalties from various services.
That being said, it's the best/worst time to be a musical artist - you can
distribute yourself online (the biggest music marketplace) and receive a MUCH
larger chunk of the revenue than ever before. At the same time, you are only
going to sell 24 albums, because it takes a label or label-like organization
to sell records, and because of the former point, you are now participating in
a marketplace with a selection of products an order of magnitude greater than
a decade ago.
~~~
unohoo
>>it takes a label or label-like organization to sell records
with the new digital distribution methods and physical media almost on the
decline, record labels hardly 'sell records' any more. They just do a huge
marketing push for the artist.
If indie musicians can figure out a way to market themselves, they really
wouldn't need a record label.
~~~
burrokeet
sorry but i disagree - that huge marketing push is what sells the records -
that is what labels do, whether they are majors or one-person indie labels.
"marketing themselves" is a pipe dream that has only worked for a tiny
percentage of indie artists - a huge industry has built up around this effort,
and just like the major labels made lots of money off of their artists, this
industry is making lots of money off these indie artists, just exploiting
economies of scale of artists instead of consumers.
~~~
ivancdg
No question that this is true.
And there are a lot of people making a lot of money on selling 'self-marketing
opportunities' to independent musicians.
But to compare that industry ('huge'?), in terms of revenue, to what the
majors did in their hey-day is an exaggeration.
The model of selling services to indie musicians has not yet surpassed the
model of selling music to customers.
But perhaps that's where we're headed? That's a depressing thought.
------
ivancdg
Spotify is such a rip-off for musicians; I love the way these guys highlighted
that in a light-hearted way. But the economics of the alternatives are not
going to make anyone rich, either.
For future reference: if you sign-up via CDBaby, there is a one-time fee of
$35 (or $55 if they create your bar-code) and you are set-up with iTunes,
Amazon, etc, without a yearly fee. Even though Derek's gone it's still a good
deal.
Also, you should seriously consider contacting Magnatune. If they like your
music you can be on all of those platforms for free. And John Buckman is a
very nice guy. Et en plus il parle le français comme toi et moi.
~~~
iand
Even though they look like ripoffs with their tiny payments Spotify and other
subscription models have a few advantages for the artist over download models.
For a start the artist gets paid even if the listener is just trying the
track, hates it and never listens again.
The main advantage though is the open ended payment model. The OP should
compare the 20 year revenue for each track. I listen to some tracks from 1996
every week and spent the period 1996-2001 listening to them multiple times a
day. I'm not unusual (just getting old! :)
~~~
earbitscom
There are definitely albums I have listened to so many times that the
streaming payments would outperform the purchases, but I've also bought some
albums more than once, too!
Most people, and most albums, though, are not going to outperform those
economics. Simply put, $5-10 a month for access to 15M tracks is a joke and a
big loss for the industry. I look forward to the labels realizing it and
walking away.
~~~
danielsoneg
Yeah, but nobody listens to 15M tracks. They listen to a small subset of those
tracks, and that subset differs from person to person.
Heck, for fun, let's say you were to listen to music 8 hours a day every day
for a month (~30 days) -
8hrs * 30 days = 240hrs
(240hrs * 60min/hr) / 3min/song = 4800 songs.
Basically, you're paying $5-10 for a maximum of 4800 songs - or, between $.001
and $.002 per track, if you listen constantly.
Or, from the other side: Spotify's premium is $10/mo, which comes with no ads.
They pay $.003 out to each band - let's just pretend they follow Amazon's
'agency' policy of a 70/30 split, so Spotify's making roughly $.001 on each
song, and it's costing them $.004 total for that song (hypothetical - just
stick with me). If we assume they're not losing money, then an average $10/mo
user must listen to at most 2500 songs per month, and if Spotify's costs are
higher than $.001 per track, the number goes down.
Point is, the industry's providing _ACCESS_ to 15M tracks, but they're only
having to deliver ~2500/mo - but that's a different bundle of 2500 songs for
each user. It MAY make sense from their end just to call it 'unlimited' and
rely on the fact that the user can't consume music fast enough to really upset
the economics for them.
(Incidentally, if you were to decide to listen to each of those 15M songs
once, you'd wind up paying:
15,000,000 Songs * 3min/song = 45,000,000 min
45,000,000 min = 750000 hrs = 31250 days ~= 1027 months
1027 months * 10/mo = $10,270
The record labels, then, value their entire collection of music at $10,270 -
if you only listen once!)
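If you want to play with these numbers yourself, here is the same arithmetic
as a small Python sketch - the listening hours, the $.003 payout and the $.004
all-in cost are the hypothetical values above, not Spotify's real terms:

    # Rough sketch of the listening-economics arithmetic above.
    hours_per_day, days, minutes_per_song = 8, 30, 3
    songs_per_month = hours_per_day * days * 60 // minutes_per_song  # 4800

    for monthly_fee in (5, 10):
        per_track = monthly_fee / songs_per_month
        print(f"${monthly_fee}/mo, listening constantly: "
              f"${per_track:.4f} per track")       # $0.0010 to $0.0021

    cost_per_play = 0.004   # assumed payout plus delivery overhead
    print(f"break-even ceiling at $10/mo: {10 / cost_per_play:.0f} plays")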
~~~
earbitscom
I think the _access_ to 15M tracks for free, $5, or $10, makes them seem
pretty worthless. That's my issue.
~~~
iand
So having access to 5 billion web pages for a few dollars a month makes them
seem equally worthless?
Not sure this is any different to being able to listen to any radio station in
the country for free. I don't think that devalues music.
~~~
earbitscom
I am not sure how I feel about the first. On the second, it's very different.
You don't get to pick what you want to hear, when you want to hear it. So, you
discover something new on the radio, if you want it, you go buy it. You don't
just sit and wait for it to come on again. With Spotify, you hear something
you like, you have no reason at all to support the artist with a purchase, and
you'd have to listen a ton of times for them to make any money.
------
physcab
Consider for a moment this alternate viewpoint. What if submitting a song to
<insert music distribution service> was kinda like submitting a blog to
<insert blog service>. You don't get paid for blogging, but if you produce
enough good content, you can create an audience and then sell them other
things later on. Smart bloggers give out their content for free, then charge
for premium services and products like consulting, books, podcasts,
screencasts, merch, etc. Seems like the same model could be applied to
independent artists as well. Then the questions become, which platform can you
use to stay connected with your fans? Which platform will allow you to upsell
other services for which you can make real money on? Which platform allows you
to publish your content effortlessly to a potentially limitless audience?
~~~
earbitscom
Plenty of artists do that, but some prefer to keep their content behind a pay
wall, to use the same analogy. Both are viable models and should be respected.
~~~
zcrar70
I'm not sure how much giving content away for free is a viable model. It
depends on the meaning of viable; it isn't viable financially, but it can be a
worthwhile sacrifice if you think that more people reading your content is
going to mean more people are going to pay for it. More often than not, that's
not the case though; a lot of the content we access is free, and the author
won't get remunerated for it.
This is great for consumers, but it makes it a lot less interesting for
producers. I'm not sure yet what the impact of that is going to be, but I
suspect that it could mean a decrease in the quality of content overall, which
would be detrimental to everyone.
------
ChuckMcM
I like these posts as well, as its a window on the economics of their
information content (in this case music).
They didn't mention how long it took them to come with this album, but since
the web site says they added a drummer at the end of 2010 and this album was
done in April of '11 we will call it 4 months work of three gentlemen best
case, and if they really only finished it here at the end of August it would
be 9 months. If we use the outside estimate of 9 months, and these guys had
'regular' jobs, let's say they would have earned $60K/year each, or about
$90K/year each with benefits, so call it $67.5K each for the 9 months. Note
the numbers here are just guesses, I know they are in Europe and may have
access to other healthcare options.
So had they worked at this mythical job they would have earned $67.5K * 3 or
$203K. They opted instead to spend that time making an album so now, 9 months
later instead of $203K in value they have this album with 9 songs on which
they own the copyright for the next 75 years. Its an interesting exercise to
compare that 'foregone' revenue for the possible future value of the album.
They can make as many copies of this album as they want and sell it for what
ever they can get. Now they state that Spotify pays them .003 euros/play,
Deezer .006 euros/play. Let's say it averages out to .0045 euros/play. To keep
everything in dollars, 1 euro => $1.43 according to Google, so .0045 euros =>
about 0.64 cents.
The question one can ask is this "Would they have been better off working for
9 months? Or making this album?" We can assume that as soon as they release
the album they gave up music forever and went back to a 9-5 job at $60K/yr.
(or not but that would be one way to look at it). In financial terms, when
does this album they created give them $203K of value back?
At 0.64 cents/play, that is about 31.5M plays. Over the life of the copyright
of 75 years, that is roughly 420K plays per year on average, or about 1,150
plays per day. So if they had about 1,150 Spotify/Deezer fans who each played
one of those nine songs every day, they would earn back exactly as much money
as they had 'not made' by not working 9-5. Conversely they would have to sell
29,000 albums on Amazon or 22,500 albums on iTunes to earn back the same
amount of money they would have made.
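(For anyone who wants to tweak those assumptions, the same back-of-the-envelope
fits in a few lines of Python; the salary, payout-per-play and copyright-term
figures are just the guesses above, nothing more:)

    # Back-of-the-envelope breakeven for the album, using the rough
    # figures above. All inputs are assumptions, not real accounting.
    foregone_pay_usd = 67_500 * 3        # 9 months of pay for 3 people
    eur_per_play = (0.003 + 0.006) / 2   # average of the two payout rates
    usd_per_play = eur_per_play * 1.43   # ~$0.0064, i.e. 0.64 cents

    breakeven_plays = foregone_pay_usd / usd_per_play
    plays_per_day = breakeven_plays / 75 / 365   # spread over the copyright

    print(f"breakeven plays: {breakeven_plays:,.0f}")            # ~31.5M
    print(f"plays per day over 75 years: {plays_per_day:,.0f}")  # ~1,150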
So a couple of things that are also important. First, they don't have to do
anything to manufacture copies of the album. And secondly, their time is
available to add another album to this 'stream'. (if the financial analysis of
making this one pans out).
What this illustrates is that music is about the long tail, not the up front.
If you make back all your investment in making an album in the first year,
then your 10 year rate of return will be better than any other investment you
could possibly make. What is more you can keep feeding albums into the system
at what is your marginal cost of living (eating, thinking, composing,
recording). This multiplies your revenue stream going forward.
The record companies used to play an interesting game with musicians, it
worked like this:
Give us the 75 year rights to this music and we'll pay you a big chunk of
change right now.
Now the criminality was that the record companies created accounting systems
which obfuscated additional revenue to the point of not paying the artist
anything. However in this world it's quite different. If these guys turn into
a 'huge success' and sell a million copies of their album on Amazon they are
going to make nearly $5M on a $270K investment. In the past they might get
$50K in 'upfront royalties' and then never see any of that $5M.
One thing they might do is sell the 'rights' to this album for $203,000. They
are revenue neutral at that point and if the album does poorly they are
protected from 'losing' money but if it does well they don't stand to gain
from that. Risk arbitrage: it's what VCs do, it is what music companies do, it's
what you and I do when we fill up our gas tank at half full rather than wait
until the car is empty.
Being a musician is hard work. And early on when you are finding your voice
and your fans, its not very profitable (in fact if you don't love doing it you
shouldn't because if you die early all you will have to show for it will be
memories of creating that music.) However on the flip side, down the road, it
can be hugely profitable with _little if any additional investment_. You
develop a following and your numbers get better, no need to go out and cut
down additional vinyl trees :-) or schedule another "pressing" of your album.
It is this sea change that musicians need to understand, if you don't 'sign'
with a label you are keeping control of your profits and managing the risk
yourself. If you do sign with a label you can probably get more money up front
but you don't benefit from the upside. Distributors make money on leveraging
things like PR where it costs the same to promote 5 different albums at radio
stations as it does to promote one. They work to amp the distribution so that
they make more money. As a musician/owner you can do that but it's not as
efficient. The better news for musicians is that the long tail money ends up in
their pockets if they keep the rights, people underestimate that but it can
get to be serious cash.
It will be interesting to follow these guys as they develop to see how it
works out.
~~~
alex_c
I don't have any numbers, but I strongly suspect music sales tend to drop off
pretty quickly: a big splash (if you're lucky) that will quickly slow down to
a trickle. So IF you make back all your investment in the first year
(obviously not guaranteed), the rest may still be fairly small. If you don't
make it back in the first year, you might never make it back.
~~~
ethank
Average drop off is around 60% week over week. This did not used to be the
case however. Your window for selling is about 3 weeks right now unless you
miraculously have a "deep" record with a lot of singles.
But that is expensive to market.
~~~
zcrar70
> But that is expensive to market.
Exactly - the estimate above just counts the time to make the album towards
the cost, but there are many additional costs to add to that: from a financial
perspective, the cost of pressing CDs, making sleeves, any marketing costs
(making posters, paying for designers, buying ad space, perhaps hiring a
marketing person), hiring a plugger (someone who plugs your record to radio
stations, magazines, etc. for plays or reviews). For someone self-releasing,
the time to do all that themselves (plus some minimal fixed costs, e.g.
printing, pressing CDs for sales at gigs, etc.) would need to be accounted
for.
Finally, there would still need to be some minimal admin around the publishing
to make sure the author rights are protected. I'm not sure there's a DIY route
for this other than setting up your own publishing company and getting someone
to administer it (but there may be.) This would also take time and/or reduce
earnings.
------
pherk
Seems like a very tough business to be in. I wonder how upcoming bands manage
to make it through, given that most people in the band are pretty much
committed to it full time.
~~~
njharman
I would not consider this a business. In fact, the drive to monetize art is at
the root of many problems with copyright expansion, culture privatization, and
art quality.
~~~
gankit
How else would artists get paid?
~~~
shabble
The usual argument is either through patronage/sponsorship by some entity with
the money to spare that enjoys their art, or by working in another field and
making their art essentially as a hobby.
Whether that is a good thing or not is a much more complicated question, and I
suspect we'd have a lot less technically skilled artworks if there was no way
for an artist to develop those skills in their primary profession.
Sponsorship and patronage may be the way to go, but that risks the possibility
of discouraging artists from producing any works that may offend their
sponsor. The similarity to academia with grant funding and tenured
professorship is quite clear.
What is probably a novel approach is essentially the pay-whatever-you-like, or
"distributed patronage" movements that have been occurring more and more
recently. The problem then is shifted to gaining popularity/mind-share
sufficient to fund the artist.
------
ivancdg
The Earbits guys (frighteningly prolific bloggers) wrote about Spotify
recently:
[http://blog.earbits.com/online_radio/spotify-replaces-
piracy...](http://blog.earbits.com/online_radio/spotify-replaces-piracy-and-
purchases/)
"The service may do a good job fighting illegal file sharing but it also does
a great job of eliminating any motivation to buy an album that you can listen
to through the service."
In Europe Spotify's been available for a while. I was in on the beta when
their catalog was a lot more restricted, and it was already impressive. With
the majors on board, it's hard to see the freight train stopping.
How can one reconcile how wonderful it is for consumers with the payment
statements that make us musicians cringe? I think of it as all-you-can-eat
iTunes for very little per month; the recent competitors/alternatives pale in
comparison.
They deserve a lot of credit for building a workable model that makes iTunes
look like a rip off (I hate that software).
But they further dilute the value of recorded music, which is a huge paradigm
change for the music industry that will ruin the viability of many musicians.
Perhaps we can re-educate the public to value music again by taking a pledge
to pay for it, à la pg-patents? Something tells me this new, 'recorded music ~
free' paradigm is here to stay.
~~~
gergles
To anyone who had a computer in the past 5 years, recorded music is not worth
anything. Sorry, but that's just the way it is.
If you want to make a pledge to keep paying for buggy whips, go right ahead.
I'm sure there are people who would argue that buggy whips have intrinsic
value -- but the market for a buggy whip right now is basically nil.
Same thing with recorded music. If you want to make money as a musician, you
don't make it through recordings, you make it through extortionate "public
performance" licenses, by doing concerts (and selling $30 t-shirts), or by
offering experiences that people can't get elsewhere (pay $50 a year and get
access to my website where I post about my tour and post unreleased samples
and occasionally mail you a trinket, or whatever.)
I also don't understand the undertone of righteous indignation at Spotify's
existence. I can listen to the radio, where songs are played gratis. I can
record those songs (legally!) for my own personal use as much as I want. The
only difference with Spotify is that I don't physically push "record", and
that's the kind of semantic difference only a lawyer would love.
~~~
zcrar70
> To anyone who had a computer in the past 5 years, recorded music is not
> worth anything. Sorry, but that's just the way it is.
It wasn't always that way, and it doesn't need to stay that way either. If no-one
values the music, then maybe it will; if people do value music, then maybe it
won't.
> I also don't understand the undertone of righteous indignation at Spotify's
> existence. I can listen to the radio, where songs are played gratis
The difference is that radio play was used to promote albums, which people
then bought. Recording a song on the radio came with many disadvantages: DJ
interruptions, missing the start/end of the song, lower sound quality, no
album art etc.
With Spotify, there's no need to purchase the album, as there are no such
disadvantages, the whole album is usually online, and you can play songs
whenever you want to listen to them, not when the DJ feels like playing them.
This makes it less economically interesting to be an artist. The righteous
indignation against spotify is probably due to the fact that artists actually
make very little money out of their content, whereas the spotify owners are
probably going to make a lot of money out of the artists' content.
------
frewsxcv
If I listen to only non-RIAA signed bands on Spotify (which I do), how exactly
am I supporting major record labels?
~~~
2arrs2ells
The implication seems to be that the major labels get some fixed percentage of
Spotify's revenues. I have no idea if this is true or not.
~~~
a3camero
They do indeed: [http://www.bloomberg.com/news/2011-07-14/spotify-wins-
over-m...](http://www.bloomberg.com/news/2011-07-14/spotify-wins-over-music-
pirates-with-labels-approval-correct-.html)
In addition to the percentage cut, they're also shareholders.
~~~
ethank
Yes, the labels get minimums and breakage if they are not reached.
Besides, non-RIAA acts often use RIAA companies for catalog management,
publishing and/or distribution. Even Radiohead was distributed by Sony.
------
neeleshs
Zero knowledge about the music industry here, how about a subscription based
startup? I even have a name for it - asongamonth.com. Any signed up solo
artist/band promises at least a song per month and you as a listener pay half
a dollar or a dollar a month as subscription per solo artist/band. You can
choose to pay for only the bands you like, and switch them whenever you want to.
~~~
ethank
So.... Columbia House?
------
spatten
I was hoping to see some numbers from emusic in there. I've been paying my
monthly subscription for years, and I've always been curious as to how the
payout split goes.
Does anyone have a link / source / info on this?
~~~
burrokeet
eMusic has a fairly low payout compared to other services offering DPDs
(digital phonographic downloads aka an mp3 file) - in the range of 10 to 30
cents a track depending on a number of circumstances. On the other hand, they
generally do good volume (often number 3 after iTunes and Amazon) and you can
look at not distributing on eMusic as an opportunity cost - i.e., persons have
paid already for a subscription on eMusic, so they are unlikely to take
additional money and buy your music elsewhere if it is not available on
eMusic.
eMusic's real fail is that they are one of the very few DSPs (internet music
retailers) that only account quarterly... almost everyone else is monthly.
------
runn1ng
"..., it costs us 35 EUR/year to keep an album on iTunes, Spotify, and Amazon"
Why is that? How much do you have to pay Apple, Amazon or Spotify to sell/play
your music?
~~~
leviathant
Chances are they're using something like Tunecore where you sign up once, and
they redistribute to various music services. You still retain whatever rights
you have, but you collect your income from Tunecore after they aggregate it
from Apple, Amazon, Spotify, et al. IIRC, Tunecore does not take a percentage
of each sale, but has a yearly fee to keep your music listed using their
services.
------
Valien
Now listening to a new band on Spotify. Thanks. Hope the meager cents add up
from hundreds or thousands of users.
~~~
DrCatbox
Don't worry, many more meager cents will add up in the record labels'
pockets, and Spotify's, than in this band's!
~~~
antonp
Very sad, but true : [http://www.informationisbeautiful.net/2010/how-much-do-
music...](http://www.informationisbeautiful.net/2010/how-much-do-music-
artists-earn-online/)
Having played in a band myself I certainly do end up with a bitter-sweet
aftertaste when consuming music on Spotify. I've got a paid subscription and
I'm absolutely loving it!
It just doesn't make sense: music is such an integral part of our lives yet
the people who drive it end up being exploited in a blatant way.
"Don't hate the player. Hate the game." comes into mind when seeing the linked
infographic... I just hope to see the rules change in my lifetime.
------
stevewillows
The author paid too much to press those CDs.
~~~
ivancdg
He probably did it in France; it is much more expensive here to do stuff like
that than in, say, the UK or Germany.
They're way behind the US in terms of competitive pricing for factory-produced
goods.
I ordered promo CDs pressed in California in 2009.
They were pressed in Taiwan with Japanese machines.
I received them 6 days later in Mountain View...at half the price of France,
great quality. Incredible.
| {
"pile_set_name": "HackerNews"
} |
The Service Mesh: What Engineers Need to Know - scott_s
https://servicemesh.io/
======
ajessup
One reason for the explosive interest in service mesh over the last 24 months
that this article glosses over is that it's deeply threatening to a range of
existing industries, which are now responding.
Most immediately to API gateways (eg. Apigee, Kong, Mulesoft), which provide
similar value to SM (in providing centralized control and auditing of an
organization's East-West service traffic) but implemented differently. This is
why Kong, Apigee, nginx etc. are all shipping service mesh implementations now
before their market gets snatched away from them.
Secondly to cloud providers, who hate seeing their customers deploy vendor-
agnostic middleware rather than use their proprietary APIs. None of them want
to get "Kubernetted" again. Hence Amazon's investment in the very Istio-like
"AppMesh" and Microsoft (who already had "Service Fabric") attempt to do an
end run around Istio with the "Service Mesh Interface" spec. Both are part of
a strategy to ensure if you are running a service mesh the cloud provider
doesn't cede control.
Then there's a slew of monitoring vendors who aren't sure if SM is a threat
(by providing a bunch of metrics "for free" out of the box) or an opportunity
to expand the footprint of their own tools by hooking into SM rather than
require folks to deploy their agents everywhere.
Finally there's the multi-billion dollar Software Defined Networking market -
who are seeing a lot of their long term growth and value being threatened by
these open source projects that are solving at Layer 7 (and with much more
application context) what they had been solving for at Layer 3-4. VMWare NSX
already have a SM implementation (NSX-SM) that is built on Istio and while I
have no idea what Nutanix et al are doing I wouldn't be surprised if they
launched something soon.
It will be interesting to see where it all nets out. If Google pulls off the
same trick that they did with Kubernetes and creates a genuinely independent
project with clean integration points for a wide range of vendors then it
could become the open-source Switzerland we need. On the other hand it could
just as easily become a vendor-driven tire fire. In a year or so we'll know.
~~~
streetcat1
This is a good overview. However, I think the reason we see so many service
mesh variations is that the core tech - namely Envoy - contains all the
"hard" tech (the data plane), while creating a "service mesh" basically comes
down to creating a management layer on top of it.
Another interesting note is that Google did NOT cede control over Istio to
CNCF.
~~~
jacques_chester
> _Envoy, contains all the "hard" tech (the data plane) while creating a
> "service mesh", basically comes down to creating a management layer on top
> of it._
I'd argue this is backwards. Envoy has a fairly tightly defined boundary with
relatively strong guarantees of consistency given by hardware -- each instance
is running on a single machine, or in a single pod, with a focus on that
machine or pod.
The control plane is dealing with the nightmare of good ol' fashioned
distributed consistency, with a dollop of "update the kernel's routing tables
quickly but not _too_ quickly" to go with it. It's "simple" insofar as you
don't need to be good at lower-level memory efficiency and knowing shortcuts
that particular CPUs give you. But that's detail complexity. The control plane
faces dynamic complexity.
------
cbsmith
I'm going to sound like an old man but...
What amuses me about this is back in the day everyone thought the Mach guys
were crazy for thinking things like network routing and IPC services should be
implemented in user space... and others mocked the OSI model's 7 layers as
overly complex (e.g. RFC3439's "layering considered harmful").
Now we've moved all our network services onto a layer 7 protocol (HTTP), and
we've discovered we need to reinvent layers we skipped over on top of it.
We're doing it all in user space with comparatively new and untested
application logic, somehow forgetting that this can be done far more
efficiently and scalably with established and far more sophisticated
networking tools... if only we'd give up on this silly notion that everything
must go over HTTP.
~~~
taneq
Network-over-network is just another Inner Platform Effect.
~~~
jonahx
Wonderful term. I've been aware of the phenomenon for a while but not its
name.
Link for others:
[https://en.wikipedia.org/wiki/Inner-
platform_effect](https://en.wikipedia.org/wiki/Inner-platform_effect)
~~~
Bombthecat
Oh wow! I had a customer who was really bad at it. All! their software was
affected by it in one way or another.
Now I have a name for that at least :)
------
lycidas
At my company, we were migrating all our apps to a kubernetes + istio platform
over the past couple of months and my advice is this - don't use a service
mesh unless you really, really need to.
We initially chose Istio because it seemed to satisfy all our requirements
and more - mTLS, sidecar authz, etc - but configuring it turned out to be a
huge pain. Things like crafting a non-superadmin pod security policy for it,
trying to upgrade versions via helm, and trying to debug authz policies took
up a non-trivial amount of time. In the end, we got everything working but I
probably wouldn't recommend it again.
It's funny that I was at kubecon last week and there was a start up whose
value prop was hassle-free istio and the linkerd people stressed that they
were less complex than istio.
~~~
snupples
I would go as far as to say I think the vast majority of people don't need a
specialized service mesh. We unfortunately started with Linkerd and it
actually is the cause of most reliability/troubleshooting issues. I don't
think lack of complexity is actually a good selling point for it, because it's
inherently more complex than not using a service mesh.
Istio may appear more complex but that's because it has a superior abstraction
model and supports greater flexibility. We're beginning to migrate from
Linkerd to Istio at this point. I had the same initial frustrations with
podsecuritypolicy (and linkerd suffers from the same), but istio-cni solves
the superuser problem, and I believe even the istio control plane is now much
more locked down in the latest release.
However if I had my way I would be telling every team they don't need service
mesh. We don't have any particular service large and complex enough to really
take advantage of its advertised features.
~~~
bogomipz
>"We unfortunately started with Linkerd and it actually is the cause of most
reliability/troubleshooting issues"
Would you mind elaborating on what those Linkerd issues are/were that were
affecting reliability and troubleshooting?
~~~
williamallthing
I'm also curious about this (author here btw). The majority of people we see
coming to Linkerd today are coming _from_ Istio. They get the service mesh
value props, but want Linkerd's simplicity and lower operational overhead.
Would love some more details, especially GitHub issues.
------
tick_tock_tick
My favorite use of this kind of system is to manage TLS and ACLs. The service
itself can be extremely dumb and just expose a Unix socket.
The 10:1 ratio of microservices to developers sounds like hell, though; that's
just too much to reason about.
------
tracer4201
Good article - I must admit I’m vaguely familiar with the concept and this
read certainly gave me some new insights.
One meta call out on the writing - I read and scrolled at least 30% through
the page on my iPhone until the author explained why I should care about a
service mesh, i.e. what problems it tries to simplify or solve.
It seems to me there are some strong use cases here, but it’s only worth your
while if you’re operating at sufficient scale.
For instance, if my team at some FAANG scale company is responsible for
vending the library that provides TLS or log rotation or <insert cross
cutting/common use case here>, and it requires some non trivial on boarding
and operational cost, migrating to this kind of architecture longer term where
these concerns are handled out of the box may be beneficial.
Still - it doesn’t mean the service owners are off the hook. They still need
to tune their retry logic, or confirm the proxy is configured to call the
correct endpoints (let's say my service is a client of another service B and for
us, B has a dedicated fleet because of our traffic patterns). This is an
abstraction. Abstractions have cost.
Trust but verify.
The trap people fall into is, “Here’s a new technology or concept. Let’s all
flock to it without considering the costs.”
------
theamk
It’s a pity “fat clients” are dismissed so quickly. I think that when your
tech stack is uniform enough to use them, they can provide much more that
service meshes, and do it faster as well. After all, why does “service is
down” and “service is sending nonsense” have to be handled via completely
different paths?
~~~
omeze
The main problem with fat clients is that for polyglot architectures (which
most large companies that end up building a service mesh evolve into over
time) you have to maintain a fat client library for every language. You can
get very far leveraging existing tools like gRPC that codegens fatty clients
for you but the quality of tooling is very uneven depending on the language of
choice. By pushing all of this into the network layer you skip all of that.
~~~
theamk
Right, polyglot architectures have no choice, but this text talks about “5
person startups” as well. Surely they can keep the set of languages limited?
Plus, it’s not either/or situation. A fat clients for Go + Node.js; and a
proxy for all others. This way your core logic can enjoy increased
introspection / more speed / higher reliability; while special purpose
services get a proxy which allow interoperability.
------
jayd16
As someone who's familiar with the API gateway pattern, is it fair to say this
is just another API gateway for internal services? Seems like it is but it's
also described in an extremely convoluted way with 'control planes' and such.
~~~
hardwaresofton
The service mesh is a bit different from an API gateway -- in its current
most popular implementations (linkerd[0] & Istio[1]), there are basically
small programs that run next to _each individual instance_ of the programs you
want to run. Linkerd has been around for a while and IMO there weren't _that_
many companies that were at a scale where they needed it (I didn't see it
deployed that often), but it's basically that same concept, but on a more
granular level -- if you delegate all your requests to some intermediary, then
the intermediaries can deal with the messy logic and tracing so your program
doesn't have to.
A better way to describe is "smart pipes, dumb programs". Imagine that all
your circuit-breaking/retry/etc robustness logic was moved into another
process that happened to be running right next to the program actually doing
the work.
You can have both an API gateway _and_ a service mesh deployment -- for
example Kong's Service Mesh[2] works this way. They're saying stuff like
"inject gateway functionality in your service", but that only make sense if
you sent literally every request (whether intra-service or to/from the outside
world) through the gateway. _Maybe_ that's how some people used Kong but I
don't think everyone thought of API gateways as a place to send every single
request through. You'll have a Kong API gateway at the edge _and_ the kong
proxies (little programs that you send all your requests through) next to
every compute workload.
[0]: [https://linkerd.io/](https://linkerd.io/)
[1]: [https://istio.io/](https://istio.io/)
[2]: [https://konghq.com/solutions/service-
mesh/](https://konghq.com/solutions/service-mesh/)
~~~
jayd16
Hmm, is the assumption that, because you're deploying an instance of the mesh
as close to the application as possible, you don't need robust logic between
the application and the service mesh? I can buy that I suppose.
~~~
hardwaresofton
Yes kind of -- except not in between the application and the service mesh,
it's between application and application.
Imagine that for every application there is _one_ small binary that runs and
serves _all_ its traffic, like a chauffeur. Your application stops talking to
the outside world completely and sends all messages to the small chauffeur
binary -- which then talks to _other_ chauffeurs, over the network.
Keeping with the chauffeur analogy, there is a "head office" which calls the
chauffeurs on CB radio at regular intervals that lets them know which cars go
where and how to start them/etc.
"head office" => "control plane"
"chauffeur" => "side-car proxy"/"data plane"
In the end what this means for your application is that you just make calls to
external services (whether your own or others) and since _all_ your
communication goes through this other binary, you get monitoring, traffic
shaping, enhanced security, and robustness for free.
Another interesting feature is that if the side-car proxy can actually
_understand_ your traffic, it can do even more advanced things. For example
you can prevent `DELETE`s from being sent to Postgres instances at the
_network_ level.
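A toy sketch of what that looks like from the application's point of view.
The localhost port, env var and Host-header convention here are made up for
illustration - real meshes like Istio and Linkerd typically intercept traffic
transparently via iptables, so the app often doesn't change at all:

    # "Dumb app, smart pipe": the app never dials remote hosts directly;
    # it hands every call to a local sidecar, which handles retries,
    # mTLS, load balancing and metrics.
    import os
    import requests

    SIDECAR = os.environ.get("SIDECAR_PROXY", "http://127.0.0.1:15001")

    def call_service(service_name: str, path: str) -> requests.Response:
        # The app only names the logical service; the sidecar (fed routes
        # by the control plane) picks a healthy instance to send it to.
        return requests.get(f"{SIDECAR}/{path}",
                            headers={"Host": service_name})

    resp = call_service("billing", "invoices/42")
    print(resp.status_code)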
------
peterwwillis
Every part of a service mesh could be baked into operating systems so that all
this extra technology was just there by default. This would put a fair amount
of start-ups out of business, but it would also mean a lot less people having
to be hired to set up and maintain all this stuff. Devs could just... develop
software, with a clear view into how their apps run at scale. And Ops wouldn't
have to custom-integrate 100 different services.
This is really the future of distributed parallel computing, but we're still
just bolting it on rather than baking it in.
------
reilly3000
I'm evaluating using AWS App Mesh at the moment. We're a really small team so
we're choosing Fargate vs Kubernetes- mainly because we don't have need of
nodes nor want to deal with them.
The appeal of App Mesh for us was initially around using it to facilitate
canary deployments. AWS Code Deploy does a nice job with Blue / Green
deployments and that may suffice for us, but it doesn't support canary for
Fargate. Is that enough reason to add the additional complexity in our stack?
Not sure, looking for input.
Also, much of the documentation is focused on K8s. I'm murky on how to
implement an internal namespace for routing. Most of what I've seen is like
myenv.myservice.svc.cluster.local but it's not clear to me that using that
pattern is needed in the context of Fargate.
Consistent observability is valuable, but again Fargate can do that pretty
well- it just doesn't mandate access logging so that would be left to the app
itself.
We want to implement OIDC on the edge for some services, but App Mesh doesn't
support that yet as other meshes like Ambassador, Gloo, and Istio seem to.
Since App Mesh doesn't really act as a front-proxy on AWS, we'll still be
using ALB to handle auth which is fine, I think. I get mixed messages about
the need for JWT validation, but if so, that would need to be implemented in
the app level with ALB fronting it.
Can anybody help me find resources to sort this out? I've been through the
`color-teller` example time and time again, but it still leaves lots of open
questions about how to structure a larger project and handle deployments
effectively.
~~~
hardwaresofton
> The appeal of App Mesh for us was initially around using it to facilitate
> canary deployments. AWS Code Deploy does a nice job with Blue / Green
> deployments and that may suffice for us, but it doesn't support canary for
> Fargate. Is that enough reason to add the additional complexity in our
> stack? Not sure, looking for input.
Maybe you should write a script for this? It sounds like you're about to take
on a _lot_ of complexity for just the ability to do canary deployments when
you could probably hack up a script in a day or two.
> We want to implement OIDC on the edge for some services, but App Mesh
> doesn't support that yet as other meshes like Ambassador, Gloo, and Istio
> seem to. Since App Mesh doesn't really act as a front-proxy on AWS, we'll
> still be using ALB to handle auth which is fine, I think. I get mixed
> messages about the need for JWT validation, but if so, that would need to be
> implemented in the app level with ALB fronting it.
JWTs are only required for client-side identity tokens (you can use opaque ids
and other kinds of stuff for backends) -- it seems like you're also at the
same time looking for something to take authentication off your hands? App
Mesh doesn't do that AFAIK, it's _only_ the service<->service communication
that it's trying to solve.
I think it might be a good idea to make a concise list of what you're trying
to accomplish here; it seems kind of all over the place. From what I can tell
it's:
- Ability to do Canary deployments
- The ability to shape traffic to services (?)
- Observability, with access logging
- AuthN via OIDC at the edge
A lot of meshes do the above list of things, but the question of whether it's
worth adopting one just to get the pieces you don't have already (which is
only #2 really, assuming you scripted up #1), is a harder question.
~~~
shubha-aws
> Namespaces: In order to identify the versions of services for routing, you
> need independent virtual nodes and routes in a virtual router. You can reuse
> the DNS names or use Cloud Map names with metadata to identify the
> versions/virtual nodes.
> OIDC at ingress: App Mesh does not do this yet; ALB / API Gateway is needed
> for this. App Mesh has this on the roadmap.
> Resources: You can reach the App Mesh team with specific questions on the
> App Mesh roadmap GitHub and we can help.
------
solatic
Re. "fat client" libraries:
> Sure, it only worked for JVM languages, and it had a programming model that
> you had to build your whole app around, but the operational features it
> provided were almost exactly those of the service mesh.
The thing is, all of our microservices communicate with each other using
Kafka. Envoy has an issue open for Kafka protocol support [1], but it's a
fundamentally difficult issue because adopting Kafka forces you to build out
"fat client" code and building a network intercept that can work with pre-
existing Kafka client code is non-trivial. On observability, Kafka produces
its own metrics.
Granted, Kafka doesn't offer the same level of control. But Kafka does offer
incredible request durability guarantees. We don't have "outages" - we have
increased processing latency, and Istio/Envoy and other service meshes can't
offer that because they do not replicate and persist network requests to disk.
[1]
[https://github.com/envoyproxy/envoy/issues/2852](https://github.com/envoyproxy/envoy/issues/2852)
------
reissbaker
Opinionated read, but interesting. That being said, Linkerd wasn't the first
service mesh — SmartStack predates it by three years. [1] Although they didn't
use the (then-nonexistent) "service mesh" term at the time, it pioneered the
concept of userspace TCP proxies configured by a control plane management
daemon. I doubt the Linkerd folks are unaware of it, so it was a surprising
omission.
[1]: [https://medium.com/airbnb-engineering/smartstack-service-
dis...](https://medium.com/airbnb-engineering/smartstack-service-discovery-in-
the-cloud-4b8a080de619)
------
golover721
While nobody ever seems to want to hear it, the vast majority of companies
utilizing service meshes and k8s are wasting huge amounts of time and money on
things they don’t need.
Unfortunately these technologies are at peak hype so everyone seems to be
implementing them for their small to medium CRUD apps. But people get very
sensitive if you try and point it out.
------
Animats
How many transactions per second before you need all that stuff? If you're not
in the top 100 sites, it seems unnecessary.
~~~
tptacek
It's not as much about load as it is about complexity; it starts to make sense
when you hit some threshold number of internal services, regardless of the
amount of traffic you're doing. You use a service mesh to factor out network
policy and observability from your services into a common layer.
~~~
thom
What is the threshold above which a service needs to exist at all, over a
module in an existing codebase?
~~~
cpitman
The point at which you have multiple teams working on the same codebase, and
their velocity is suffering from communication overhead and missteps.
~~~
koffiezet
A few remarks:
* Codebase should be defined as "the platform", where one team will most
likely never look at the code of another team's microservices.
* These communication problems and overhead start the moment you go from 2 to
3 or more teams.
* The term "team" in this context should be interpreted very broadly. One dev
working alone on a microservice should be considered "a team".
Also, things mentioned in the article: you don't want to implement TLS,
circuit breakers, retries, ... in every single microservice. Keep them as
simple as possible. Adding stuff like that creates bloat very quickly.
------
djohnston
This is quite interesting. I used to work in more devopsy kind of roles but at
the current gig it has been almost entirely removed from my purview. It's
impressive to step away for a few years and return to see so many changes, but
the article laid out the concepts in an easy to understand manner.
------
adolph
If one were to implement a service mesh of microservices wouldn’t the services
need to be versioned similar to how the packages used by a microservice are
version-pinned?
~~~
dodobirdlord
Sort of, but only for major versions, and it's preferable to bake that sort of
thing into the API itself. The API exposed by a microservice should only ever
be updated in backwards compatible ways unless you can verify that you have no
callers, which is hard. New functionality should be introduced using backwards
compatible constructs like adding fields to JSON or protobuf. Breaking changes
go in a new API. This is easily managed conceptually by having the
microservice expose version information as part of the API. A FooService might
define "v1/DestroyFoo" and "v2/DestroyFoo" with different calling contracts.
Perhaps v1 was eventually consistent and returns a completion token that can
be used with a separate "v1/CheckFooDeletionStatus", but now with v2 the
behavior has been made strongly consistent and there is no
"v2/CheckFooDeletionStatus". The v2 of the API can thus be thought of really
as a separate API that happens to be exposed by the same microservice, and
pre-existing callers can continue to call the (perhaps now inefficient) v1
API.
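A minimal sketch of what that FooService might look like. The Flask framing,
route names and token handling are illustrative assumptions, not anything
from a real service:

    # Hypothetical FooService exposing v1 and v2 of the same API side by
    # side, so pre-existing v1 callers keep working.
    from flask import Flask, jsonify

    app = Flask(__name__)
    pending = {}  # token -> status, for the eventually consistent v1 API

    @app.route("/v1/DestroyFoo/<foo_id>", methods=["POST"])
    def destroy_foo_v1(foo_id):
        token = f"del-{foo_id}"
        pending[token] = "IN_PROGRESS"  # deletion happens asynchronously
        return jsonify({"completion_token": token}), 202

    @app.route("/v1/CheckFooDeletionStatus/<token>")
    def check_foo_deletion_status(token):
        return jsonify({"status": pending.get(token, "UNKNOWN")})

    @app.route("/v2/DestroyFoo/<foo_id>", methods=["POST"])
    def destroy_foo_v2(foo_id):
        # v2 deletes synchronously, so no separate status-check endpoint
        return jsonify({"deleted": foo_id}), 200

Callers pinned to v1 keep their completion-token workflow, while new callers
can adopt the strongly consistent v2 route at their own pace.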
| {
"pile_set_name": "HackerNews"
} |
PyPy 1.9 Released - makeramen
http://morepypy.blogspot.it/2012/06/pypy-19-yard-wolf.html
======
mark_l_watson
That is impressive performance, and steady progress. I have a renewed interest
in Python since all the work for a new customer is in Python. I usually use
Ruby, Clojure, Common Lisp, and Java - but, I am finding Python to be
perfectly acceptable.
~~~
j-b
I'm curious to know what about Python is 'acceptable' in comparison to Ruby?
Does Ruby have capabilities that you find better?
~~~
mark_l_watson
I decided 6 or 7 years ago that I "needed a scripting language" and used
Python for about a half year. I then tried Ruby and found it more to my
personal tastes, mostly because of blocks. No disrespect intended re: Python
| {
"pile_set_name": "HackerNews"
} |
Why a FILE-BASED dependency manager rocks for C/C++ - MordodeMaru
http://blog.biicode.com/file-based-cpp-dependency-manager/
======
thewolas
Because it was about f __*n time! Almost all new languages ship out with one
as default. And C /C++ is the most used language in the world (or almost)
------
cza
Read here and find out why biicode rocks:
[http://docs.biicode.com/biicode/biicode.html#basic-
concepts](http://docs.biicode.com/biicode/biicode.html#basic-concepts)
------
jeff_abrahamson
I think biicode is trying to do (better) what autotools, cmake, and SCons do.
So comparing to them might be useful. And if it is not trying to do replace
those tools, explaining that might be helpful.
~~~
drodri
Not really, it is not trying to replace any build system. In fact it itself
uses CMake as a build system, because we ourselves were users of CMake and we
like it, but mainly because CMake is by far the most popular build system for C/C++
projects, especially multi-platform and open source projects. We don't say
that other tools are not good, they are excellent tools too. But we cannot
manage (at least now) to offer integration with them, so we chose just to use
CMake. So biicode uses it as transparently as possible, allowing the user to
configure things with the typical syntax we are used to with CMake in
CMakeLists.txt files. You can use "configure_file", set "cmake_cxx_flags" or
configure your "CTests".
Probably the post fails to explain that what biicode does is to generate basic
xxx_vars.cmake files that contains useful variables about the project, and
also a CMakeLists.txt (in case it is not defined by the user) which has some
biicode macros that help to define the build. You might be able to read about
it here:
[http://docs.biicode.com/c++/building.html](http://docs.biicode.com/c++/building.html)
What biicode tries to do is to complement, and fill the voids that build
systems were not designed for:
- Storing code in a central server repository, later retrieving code from the
repository, and handling dependencies per project, not per system.
- Managing different versions of dependencies, managing conflicts of
dependencies and allowing conflict resolution.
- Allowing easy discovery, retrieval and updating of dependencies. Offering
web access to the code.
These are more or less the typical features of dependency managers (such as
pip/PyPI, NPM...). Biicode tries to go one step further by almost completely
eliminating packaging: the user does not have to think about libraries,
setups, installs, updates... and this can be done thanks to the file-based
approach.
I hope it is better explained now. Thanks very much for your comment, do you
think it is worth editing the post with these ideas? Any further suggestions
are very welcome.
------
Asimo
such goodies are always useful ! thanks !
------
ruymanfm
It looks good
------
LuisAparicio
it's amazing
------
juanfont
Cool.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Should I go into management despite of loving to code? - NumberCruncher
Hi there! For the last 3 years I worked for a big telco as a senior data scientist, and I was rehired by the VP of IT at one of my former employers. He did this because he used to work for the same big telco in the same department as I did (before we got to know each other) and he was looking for someone who knows certain systems of the telco and who could rebuild them at his company. So far, so good. I joined one of his teams 3 months ago and for the first time in my life I really enjoy what I am doing. I can do what I want, how I want, and don't have to take care of business BS. This week my team lead resigned and the VP asked me whether I could imagine taking over his position and my team.
The point is that he made clear to me that “he makes sure that I do not have time for coding if I take the position”, which would kill the part of my job I enjoy the most. On the other hand I am 37, and this is not only the first but quite probably the last time I get an offer like this. I know a lot of people who would kill to get into management. It’s somehow like the Jewish dilemma: pork for free.
Is anybody out there who went from coding into management without screwing it up and discovered his passion for the mundane management tasks?
======
venkasub
Management is not as easy as it sounds. Leading people and setting strategy
take a lot of experience. The company's culture around taking calculated risks
should gel with your thought process when you are higher up. You are also
responsible for the people who report to you and need to make sure that all
are 'taken care of'.
Being in management has nothing to do with coding; you should be a good
manager of time/tasks, so that you can afford to code at least a few hours
every week, if you dig that sort of thing.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: When do you containerize new projects? - jamesmaniscalco
When you spin up a new project, how far do you go into containerization and similar abstraction before you actually start writing code?
I am planning on starting a new Django app after a few years out of "the game". The last sites I worked on were Rails and Django projects deployed on Heroku back in about 2014, before Docker took off. Now in 2020, I have a sense that I "should" be using Docker or Kubernetes in my deployment scheme, but I'm not sure how much I should worry about containerization before I start writing code. I would like to get started on the codebase quickly, but I also don't want to incur the technical debt of a containerization retrofit.
So, HN readers, when do you containerize a new project?
======
trcollinson
I am very good at containerizing and deploying projects. So I usually do that
part immediately because I have patterns for it and it takes less than an hour
to get started. This is how I start a new project:
1) I use Gitlab. So I make a quick gitlab-ci.yml and have a few steps. Test,
Build, and Deploy. Test might just run one fake test to start. Build might do
nothing much. Deploy might do nothing much at first.
2) Stick the project into a docker container. So create a simple Dockerfile.
Make sure I can run the app from within the Dockerfile. Make sure the
Dockerfile builds from Gitlab from the gitlab-ci.yml (which is the build
step).
3) Deploy the app to AWS. I like ECS but maybe it's a lambda and I deploy it
to Lambda. Just depends on what I am building. I update the gitlab-ci.yml to
do that (this is the deploy step).
4 - forever) Code and only update the gitlab-ci.yml when I need to.
Honestly, the whole thing really takes less than an hour and I never have to
worry about build and deploy after that. I say do it early. Also, don't over
complicate it. Especially if you are using Django.
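To make that concrete, here's a rough sketch of the kind of gitlab-ci.yml I'm describing, for the Django case. Treat it as a starting point rather than my exact config: the image tags, the registry variables, and the ECS cluster/service names are placeholders you'd swap for your own, and the AWS credentials are assumed to come from CI variables.

    stages:
      - test
      - build
      - deploy

    test:
      stage: test
      image: python:3.8
      script:
        - pip install -r requirements.txt
        - python manage.py test        # one trivial test is enough at first

    build:
      stage: build
      image: docker:19.03
      services:
        - docker:19.03-dind
      variables:
        DOCKER_TLS_CERTDIR: ""         # simplest dind setup; adjust for your runner
      script:
        - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
        - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
        - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

    deploy:
      stage: deploy
      image:
        name: amazon/aws-cli
        entrypoint: [""]               # override the image entrypoint so GitLab can run a shell
      script:
        # placeholder cluster/service names; this just forces ECS to pull the new image
        - aws ecs update-service --cluster my-cluster --service my-service --force-new-deployment
      only:
        - master

The Dockerfile side is equally small (a Python base image, install requirements, run your WSGI server), and once this is in place you really only touch the pipeline when something about it needs to change.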
~~~
jamesmaniscalco
Interesting, thanks. Do you recommend Gitlab over Github because of the built-
in CI/CD?
~~~
trcollinson
I happen to be a huge fan of Gitlab, and that is one of the reasons yes. I
don't think Github is bad by any means, but I like the Gitlab roadmap.
There are a number of really good CICD systems for Github. If you like Github,
pick it and go for it. Just get started and don't spend more than an hour
thinking about the whole thing. That's where the waste really comes in.
Spending all of that time thinking instead of just doing.
------
ashconnor
> So, HN readers, when do you containerize a new project?
When my Heroku bill becomes a concern.
If you choose Kubernetes, then you will end up wasting time yak-shaving that
could be spent writing your Django app.
| {
"pile_set_name": "HackerNews"
} |
The real reason why developers are awkward - swombat
http://trogger.com/discussions/the-real-reason-why-developers-are-awkward
======
pmjordan
Maybe this is true for some people. I can't really find myself in this
article, though. I'm pretty sure I remember finding certain everyday social
situations difficult when I was little and had never touched code. I'd have to
say the same for a hacker friend who I've known since age 6 (!) - he "didn't
quite fit in" from the start.
Programming hasn't made me awkward. Instead, I've taught myself how to deal
with situations, making me much _less_ awkward. (to the point of being pretty
normal and hardly awkward at all)
And as the reverse of the article, I find hours of social interaction (mostly
with people other than friends) extremely exhausting. Unlike the author, I
don't warm up, I just feel drained and want some time to myself to regenerate.
~~~
kwamenum86
<http://www.theatlantic.com/doc/200303/rauch>
~~~
pmjordan
Awesome. I'm forwarding that to my girlfriend.
_"It has even learned, by means of brain scans, that introverts process
information differently from other people (I am not making this up)."_
Just like the rest of the article, this part certainly rings true for me. The
fact that I don't think in terms of words seems to baffle people. I always
thought this might be because I was raised bilingually, so my thought process
might be happening on some kind of superset of the two languages. Maybe,
however, it's this that the author alludes to and has no connection with
language? It'd have been nice if he had covered more of this angle.
~~~
edu
I was raised bilingual (Catalan and Spanish) and now I think I'm fluent in
English, and I find myself thinking in all three languages (mostly Catalan and
English) at different moments.
What is weirder is that I'm more or less social depending on the language
I have to use, being much more social talking in English than in Spanish or
Catalan. I think it may be due to the fact that the English classes I took
were very participatory and I was usually forced to talk. I really don't
know.
(And as for the article: right now, after 5 days of vacation, back at work, and
after 6 hours of coding, I'm really looking forward to going out and socializing a
little bit more (I'm pretty happy that I have a very good environment at work), but
on Sunday, after spending all day with people, I was eager to sit in front of my
computer and code for hours.)
------
ktharavaad
Programming makes me weird because even when I'm not coding, I'm thinking about
my code all the time, and as a result I'd rather live in my own head
than interact with the people around me.
However, it's also during these times (not in front of the computer) that I often
think of the most brilliant refactorings, algorithms, and ideas for my code. So,
strange as it sounds, socializing helps me to code.
~~~
atas
"So strange as it sounds, socializing helps me to code."
Too bad it doesn't work the other way too...or maybe it does?
~~~
access_denied
It could make you better at UID.
------
swombat
I have to agree with the thesis of this mini-article. The mind-set of
productive coding (for me) is extremely anal and abrasive. The computer
doesn't care about niceties, it only cares about correctness, and so to feed
it, it is more important to express things exactly and correctly than nicely.
I often find myself being a lot more brutal in my communications if I'm in the
middle of, or emerging from, a coding session, than the rest of the time.
~~~
quan
I have the same experience as well. Whenever I'm at my most productive with coding
and have to leave for lunch with my coworkers, I find myself just sitting
there, not participating in the conversation at all. The reverse is also true:
it often takes me a substantial amount of time to get back to my most efficient
coding mode after going out.
I think the reason is more than just that my mind is immersed in the problem.
Even when we work on the same team and discuss the same technical problem, I
still find it difficult to engage in the conversation. As a bilingual speaker
it's always awkward for me to switch between English and my native language,
especially if I spend a long duration using one exclusively.
My guess is developers wouldn't feel awkward if we could socialize in machine
language. That would also save tons of time spent switching in and out of the
awkward mode.
------
matthewking
Being a developer gives you an endless source of learning; there's always
something new to experiment with. For me that meant that in my late teens I
was often messing about with code and reading books whilst my friends were
calling to say they were bored; as a result they'd look to social interaction
to solve their boredom, including going out to parties etc.
I think that's the start of the divide. When you do eventually pop your head
up from your laptop, your friends have all developed better social skills
than you, so you're instantly out of place and on the back foot in highly
social situations such as parties and nights out.
I recognise that I lack certain social skills required to flourish in big
crowds and groups, but I think it's just a matter of forcing myself to attend
events and be more open with people, and it'll improve. If you do something all
the time, provided you're a good learner, it _should_ get easier.
------
plinkplonk
I am not sure I agree with the "developing makes you awkward" idea. I don't
know if there is any scientific basis to the introversion/extroversion axis,
but I've had great success in building up an extrovert persona that I am
completely comfortable in, though all the tests I've taken puts me strongly in
the "introvert" end of the spectrum.
The Myers-Briggs test for example gives me an INXJ profile (with a very strong
"I" score). About a decade ago, I figured out that "becoming" more of an
extrovert could really balance my life out, and now it is something I can
switch on and off at will, to the point where (a) I am equally comfortable in
either mode and (b) people who have seen my "extrovert" side can't believe I
am perfectly happy sitting alone in a corner and coding for a few days or
weeks if that is what the situation warrants, or alternatively partying for 12
hours straight. I don't know if this is relevant, but I am extremely
comfortable with public speaking and quite enjoy theatre (performance) and
music (performance).
I wonder if this whole "personality" thing isn't very fluid (and thus
hackable). Just one anecdotal data point against the "development makes you
awkward" idea.
------
sown
I started out awkward from a young age.
Programming just suits me.
------
puzzle-out
I'm not a hacker, but I work with an at-times very awkward programmer, who often
sends out emails which are a monument to pedantry. Reading this article makes
me more understanding. On the flip side, should one be worried if they are
working with a programmer who is not awkward, then?
~~~
mechanical_fish
_should one be worried if they are working with a programmer who is not
awkward, then?_
Part of the article's theme is that the awkwardness comes and goes. When the
author is not programming he doesn't feel awkward at all.
It's also important to realize the range of human variation. There are
programmers who can code at top speed while carrying on a continuous patter.
They're not common, but they exist. There are programmers who do all their
work with an IRC session chugging along in an adjacent window. I can't seem to
cope with that, myself -- I can't focus when messages are scrolling by in my
peripheral vision. I also can't focus when there's music playing, but other
people can't code _without_ music.
~~~
LogicHoleFlaw
I find that when I'm in the middle of an intense coding session I become
abrasive and distracted from others' perspective. If you interrupt me when I'm
working on a problem, I will lose it! In several senses. I find that I can't
work when there is music playing - generally when I start working I have music
quite loud but as I get more immersed in the problem it becomes quieter and
quieter until I mute it completely.
On the other hand I'm comfortable in social situations so long as I have some
time for that mental intensity to dissipate. I just need a buffer to switch
modes.
------
christofd
It probably matters how you program. If you spend more time doing research,
then you are not so much in danger of wasting time with hacking away at stuff
in the hope that enduring trial and error will solve the problem.
I try not to spend that much time actually coding stuff and probably spend
more time working stuff out on paper and talking to people before using a
computer.
~~~
christofd
I guess in my heart I'm not really a hacker. But computers are good at getting
things done.
------
wglb
Disagree. Been a hard-core programmer for 43 years and an occasional manager.
My friends think of me as being as social as anybody, as do I.
------
plesn
Social interaction requires you to relate to others' experiences. When I'm
often in the zone, I not only feel more distant but also have less to say, since
I usually read less then and don't go out much.
Moreover, I often feel dissatisfied with myself when I don't have tangible
results at work. Then I feel shy and can't enjoy the moment. People see that
(and especially girls, I think!).
------
mannicken
Programming made me over-rationalize my surroundings in a sociopathic, House-like
way. I don't find it particularly hard to not be awkward, but in most cases I
don't see why I should bother.
Being awkward helps me deal with (unwanted) attention and loads of bullshit
that most people try to unload on me for some reason.
------
redcap
I'm working from a functional spec and just getting used to the business at my
new job. Because the spec sometimes isn't very clear I have plenty of
opportunities to talk to my supervisors to ask them to clarify things.
Of course this is very different from shooting the breeze over a drink or
lunch.
------
ssharp
These articles are mostly boring and they seem to constantly pop up. I think
programming tends to attract less social people because you can do so much on
your own. It's also a brainy activity and a lot of smart people are not
particularly social.
It's a genetic fact that all personalities are different. Some people are more
naturally social than others. That may lead them to careers other than
programming. However, this shouldn't excuse the developer from being socially
awkward. Humans are naturally social creatures and learning to be social
should not be something that is ignored. I think people who are more social
are happier and lead more fulfilling lives. While you may enjoy sitting
behind a screen for 12 hours a day, I think you'd be a lot happier spending 8
hours behind the screen and the other 4 interacting with people.
~~~
LogicHoleFlaw
_people who are more social are more happy and lead more fulfilling lives_
Wow, that's a bigoted extrovert perspective. Really, I'm just fine being alone
with myself and my thoughts. Sure, company is nice from time to time but no
amount of training will change the fact that introverts find other people
draining.
I took a class on public speaking once, and the instructor mentioned that I
was the most personable speaker she'd ever known. I still need plenty of alone
time to recharge after dealing with people for any extended length of time.
Don't underestimate the calmness, power, and meditative qualities of
introversion and self-acceptance.
~~~
oz
Hallelujah. You can find good articles in this vein at sengifted.org
------
biohacker42
It's not as bad for me, but I do sense the same things. After a long day of
coding it takes me about 30 minutes of conversation to get back into talking.
------
timothychung
Shouldn't we make programming more social? Something like programming 2.0. :-)
~~~
timothychung
I wonder why I get a vote down for this comment.
Programming 2.0 is happening as open source development. Just because I
presented my point in a casual way does not mean my comment is meaningless and
negative to the community.
------
travers
If people were logical and did what I said we would get along just fine.
| {
"pile_set_name": "HackerNews"
} |
Apple Store Update - aeolus42
http://store.apple.com/
======
therealarmen
Assuming that the online store makes up 10% of total sales, Apple is losing
approximately $35,000 per second that their website is down.
| {
"pile_set_name": "HackerNews"
} |
What's the best way to meet a technical co-founder? - ronnwer
Hi what's the best way to find a technical co-founder?
======
zv
Here <http://programmermeetdesigner.com/> Some forums (Joel, etc).
On a side note: I'm interested in serious projects. Mail me paavels@gmail.com
~~~
ronnwer
where r u from?
------
sidmitra
<http://www.techcofounder.com/> is a pretty decent place.
My contact details on my profile too.
| {
"pile_set_name": "HackerNews"
} |
GDPR: Privacy and data protection in mobile applications - _o_
https://www.enisa.europa.eu/publications/privacy-and-data-protection-in-mobile-applications
======
_o_
"Moreover, the document focuses on the concept of privacy by design and tries
to make it more clear, especially for mobile app developers. Approaches to
privacy and data protection by design and by default are presented that help
translate the legal requirements into more tangible engineering goals that
developers are more comfortable with. In particular, the concepts of data
protection goals and privacy design strategies are discussed in general terms,
while providing concrete examples from the mobile app development
perspective."
This is the part that was missing on the web: mobile applications are
breaching users' privacy to an extent unavailable to web pages. A typical
Android application has less code than the tracking and advertising
frameworks that are used in it, not to mention Google Play and Google
services. This document sheds some light from the mobile application development
perspective and provides some guidelines.
Actually, I think the greatest and most meaningful battle in the context of GDPR
will be fought in the field of mobile applications.
| {
"pile_set_name": "HackerNews"
} |
A platform full of opportunities for students - siddhartharora
https://gradbee.com
======
mrfregg
"Connect with India's best students". Well that's a pity.
~~~
siddhartharora
The website is in development; we have implemented mobile verification across
the globe.
~~~
mrfregg
So it's not limited to Indian students as stated in the recruiters section?
| {
"pile_set_name": "HackerNews"
} |
Airbnb and San Francisco - betadreamer
http://blog.samaltman.com/airbnb-and-san-francisco
======
callmeed
_" Unfortunately, a lot of other people have problems paying their rent or
mortgage. 75% of Airbnb hosts in San Francisco say that their income from
Airbnb helps them stay in their homes, and 60% of the Airbnb income goes to
rent/mortgage and other housing expenses."_
C'mon Sam, you can do better than this. Just about everyone's income helps
them pay their rent or mortgage regardless of where it comes from. And let's
not pretend (a) parkinson's law doesn't exist or (b) people allocate specific
income sources to specific expenses. If you make more, you spend more. If you
have to pay your rent on the 1st, you write a check from your bank account–you
don't pull cash out of your "AirBnB income" envelope.
And, BTW, I've never rented an AirBnB in SF from a host who wasn't (a) a young
professional who could afford to live there or (b) someone 40+ who clearly
had lived in SF a long time and bought prior to the spike in prices. These
statistics and this stance just don't compute for me.
Look, I love AirBnB and I think it's a great service. But it's just that–the
best short-term/vacation rental service. Nothing more. I'm a little tired of
them (and their apologists) acting like they're some kind of cultural
juggernaut.
I always knew they'd have huge political forces to answer to (my friends and I
would often wager who was more likely to succumb to governments: Uber or
AirBnB). They're going to have enforce bed taxes. They're going to have to
police municipal laws, HOA regulations, and more. And after it all shakes out,
maybe this isn't as profitable of a business as people thought (for both
AirBnB and hosts).
Of course, like Uber, they'll likely take the lobbyist route to fight this
(maybe they already have). But this is a case where a little humility would go
a long way IMO. Would it be so hard for AirBnB or Sam to say "yeah, there's a
housing issue and we might even be part of the cause. so let's work together
to find a solution or compromise."?
~~~
sama
I'm not arguing that people allocate specific income to specific expenses,
just that people need and deserve more income.
I think Airbnb is well aware of the housing issue and more than willing to
work together on solutions.
~~~
justizin
> I think Airbnb is well aware of the housing issue and more than willing to
> work together on solutions.
I'm sorry, that's just not true, and Sam, I admire a lot of what you've said
on a lot of issues, but I cannot believe that you are sincere in this article
overall, unless you are just naive.
Any property owner in San Francisco has always been able to go to the Planning
Commission and request a conditional use permit to turn their home into a bed
and breakfast, and as far as we can tell, no such request has ever been
denied.
Renters in San Francisco do not in almost any case I have ever heard of have
the right to sublet our apartments, even to additional roommates, without the
approval of our landlords, and with good reason.
If a person has lived in San Francisco for some time, and loses their ability
to earn enough to pay their rent, it is abusive and narcissistic for that
person to believe that simply by having the keys to a place that already does
not belong to them, they can - with no consideration for their neighbors -
turn their home into a business. A hotel business, no less, which _kind_ of
foists upon them the responsibilities of travel guides, as we read in an
article this morning.
Now, as demand for housing in an area is shooting up, individual renters are
leveraging that in a way that does not contribute to the cost of maintaining
property, putting increased pressure on other tenants, and disallowing the
landlord from actually satisfying that demand.
All that aside, AirBnb knows what the fuck they are doing. I know at least one
successful AirBnb host who has had AirBnb approach them and encourage them to
rent up nearby apartments and turn them into new units!
San Francisco housing activists have asked time and time again for AirBnb to
open up data about how many hosts have multiple units, and who is renting when
not in their unit, so that data may be used over time to guide regulation, but
as is becoming a trend with YC companies, private executives and investors
feel that they can do better city planning than people who have decades or
perhaps their entire lives invested in doing so.
Put up or shut the fuck up, sir.
~~~
mildbow
"It is difficult to get a man to understand something, when his salary depends
on his not understanding it."
-- Upton Sinclair, iirc
~~~
cwilkes
Not sure who is downvoting you -- this quote is entirely appropriate.
------
pbreit
I have trouble feeling sorry for AirBnB with this. It has had numerous
opportunities to put forth a reasonable view on how this should all work and
AFAICT, has not.
First, most (all?) HOAs and landlords forbid short term rentals, and for good
reason (short term renting is generally disliked by neighbors).
Second, city zoning policies are implemented for a reason, again, a good one.
Residential neighborhoods generally prefer little or no commercial activity as
well as inhabitants who care about the neighborhood.
I have not seen AirBnB weigh in reasonably on these important issues. For
that, it's possible it deserves Prop F.
~~~
seiji
Yeah, saying Airbnb is good for housing prices is like the people who think a
universal income will help people live more easily. Hint: if you give everyone an
extra $20k/year, housing will magically go up by $20k/year to match.
With Airbnb, if you can't afford your $3,000/month rent, you take in a share
tenant, and maybe now you can afford $5,000/month in rent. Now the housing market knows
everybody can make $5k/month appear through tenant-izing, so all prices go up.
Now nobody can afford a place unless they take in co-habitating renters or are
DINKs.
~~~
acgourley
It's not magic, it's economics. There are frameworks to think and model this,
and they do not support your conclusions.
~~~
seiji
but does your model take into account ė, the derivative of evil in the hearts
of humans with respect to time?
~~~
acgourley
Yes.
------
jdp23
> In the past year, only about 340 units in SF were rented on Airbnb more than
> 211 nights ...
For the purposes of Prop F, statistics that seem more relevant are
- how many units are rented more than 90 nights (the current law) [1]
- how many units are rented more than 75 nights (as proposed by Prop F)
When I see AirBnB supporters focusing on a number that doesn't seem as
relevant, it feels like spin to me.
> The median number of trips per unit was 5, and mean was 13.3.
Interesting shift here to talking about trips per unit, rather than nights per
unit. Back in 2012, the average stay was 5.5 days [2]. So does that mean that
the average number of days per unit is 71.5 (5.5 * 13)?
Also, according to the Chronicle [3], out of the 5,459 listings in 2015, "205
hosts have three or more listings. These super hosts account for 4.8 percent
of all hosts, but control 993 properties — 18.2 percent of Airbnb’s local
listings." I didn't see anything in Sam's post or the other anti-Prop F posts
that discusses this.
[1] [http://www.cnet.com/news/san-francisco-board-of-
supervisors-...](http://www.cnet.com/news/san-francisco-board-of-supervisors-
vote-on-airbnb/)
[2] [http://blog.airbnb.com/economic-impact-
airbnb/](http://blog.airbnb.com/economic-impact-airbnb/)
[3] [http://www.sfchronicle.com/airbnb-impact-san-
francisco-2015/...](http://www.sfchronicle.com/airbnb-impact-san-
francisco-2015/#1)
~~~
chralieboy
The 211 number was given as what Airbnb has found to be what it takes to break
even on the cost of a unit. Prop F, and others like it, are not trying to
destroy Airbnb but stop people from purchasing housing and using it
exclusively for short term rentals.
That statistic is meant to say that only 340 units were rented out enough to
match what could have been made via a lease. So if people are snatching up
property to use just for Airbnb, they either aren't doing it a lot or aren't
actually making a sound economic decision in all but 340 cases.
~~~
jdp23
I understand what the statistic is meant to say and it does that well.
However, the city's already limited units to 90 days, so the 211ers are
already handled -- if the city (with AirBnB's cooperation) enforces the law,
that is. So I don't think this point is particularly relevant to Prop F. Your
mileage may vary, of course!
------
applecore
_> Airbnb has recently been attacked by San Francisco politicians for driving
up the price of housing in the city._
The high price of housing in San Francisco is caused by one (and only one)
thing: NIMBYism.
If supply is constrained and new high-rise developments are held back, there
will always be higher prices and a distorted market.
In reality, Airbnb doesn't have any measurable effect on the price of housing.
(Still, it may help alleviate the situation slightly for some people living in
the city.)
~~~
pbreit
Except that lack of high density is one of the City's more appealing
attributes. I don't like the simple call for more housing without any
acknowledgement of the consequences.
~~~
harryh
A CAP theorem for bay area housing policy: Charming, Affordable, Popular. Pick
two. And, like Partition Tolerance, you can't drop Popular.
~~~
Futurebot
That's a great way to think about it. You can drop Popular (eventually it gets
so expensive that you reach a new equilibrium and people stop coming!), but
it's hard to say at what point in the future that actually happens.
~~~
harryh
Nobody goes there anymore. It's too crowded.
------
flyinglizard
_> Unfortunately, a lot of other people have problems paying their rent or
mortgage. 75% of Airbnb hosts in San Francisco say that their income from
Airbnb helps them stay in their homes, and 60% of the Airbnb income goes to
rent/mortgage and other housing expenses. Making it harder to share your home
in San Francisco may make it impossible for some of these hosts to afford to
stay in their homes and in this city._
This is why Airbnb helps maintain untenable pricing levels. There's more money
to go around and rates can keep going up with no bearing on vacancies.
A rental management company with 20% vacancies wouldn't be so quick to raise
prices; but, as long as the tenants magically come up with ways of catching up
with increased prices, the prices will continue going up.
~~~
sama
Do you have evidence of Airbnb driving up pricing levels? I'd honestly love to
see it if so.
~~~
ChicagoBoy11
Is it necessary?
Isn't it impossible for AirBnB not to either drive up price levels or drive
down availability?
If you have a property, AirBnB immediately makes it more valuable, as it is
now possible to generate a (reasonably) passive income from it - you can use
it more efficiently than before. If you own a home or rent.
What economics are you using in which it doesn't immediately follow that this
would eventually reflect in the rental/purchase price of housing units?
Ahh "rent control and regs" you say! Ok fine, then the market tries to reach
its equilibrium on quantity and fewer units are available. AirBnB makes
housing more valuable, and if there is any vestige of market forces in SF
housing, its effect on price/supply is unambiguous.
This DOES NOT, however, mean that AirBnB is a bad thing. I firmly believe it
is an incredibly beneficial thing for everyone and it boggles my mind how
people can be so against a service whose only function is to allow us to use
our resources more efficiently. If this energy were instead spent on analyzing
all the distortions that our wonderful political system has introduced in the
system, we'd be much better off.
------
rubicon33
Am I the only one who is furious about the cost of housing in SF, and the
apparent lack of action by city government? It was my dream to live in SF
since I was a young kid. Unfortunately, the average working professional
cannot possibly afford to buy a home. Finding affordable rentals is not an
option either.
How much longer will the professional, middle-class populace put up with
their savings being drained by over-inflated housing costs? Sadly, it seems
far too many people see SF as a professional vacation place, and not a home.
When I moved there for work, I also moved there to live. I wanted to do so in
a sustainable way, which means saving money every month for retirement, and
possibly buying a home. That's a pipe dream in SF, even with a nationally
competitive salary.
I cannot figure out whether there is blatant city corruption, or a complete
lack-of-caring about the middle class. Or is it that there aren't enough
developers trying to develop?
~~~
MBlume
Two organizations working to solve the problem:
San Francisco Bay Area Renters Federation (yes, the acronym is unfortunate):
[http://www.sfbarf.org/](http://www.sfbarf.org/)
San Francisco Housing Action Coalition:
[http://www.sfhac.org/](http://www.sfhac.org/)
Both will notify you when there are city meetings coming up where you could
show up and inject some sanity into the proceedings.
~~~
rubicon33
I really wish I'd known about these organizations before I moved. Shame on me,
for not getting involved.
Thanks for the links.
------
7Figures2Commas
> In fact, Airbnb worked with economist Tom Davidoff of the University of
> British Columbia and found that Airbnb has affected the price of housing in
> SF by less than 1% either up or down.
Airbnb _commissioned_ this economist[1]. That doesn't necessarily mean his
conclusions aren't credible, but commissioned research that supports the
agenda of the company that commissioned it should be subject to a higher level
of scrutiny. Is there any independent research Altman can cite?
> Unfortunately, a lot of other people have problems paying their rent or
> mortgage. 75% of Airbnb hosts in San Francisco say that their income from
> Airbnb helps them stay in their homes, and 60% of the Airbnb income goes to
> rent/mortgage and other housing expenses. Making it harder to share your
> home in San Francisco may make it impossible for some of these hosts to
> afford to stay in their homes and in this city.
What about their neighbors? If I'm paying good money to rent an apartment or I
shell out big bucks for a new condo, why should I be forced to live in a
hotel-like environment because a neighbor decides to violate the lease or
association CC&Rs/bylaws?
It doesn't matter how well-intentioned a host is. It's callous to have
sympathy for hosts who are violating leases and condo association
CC&Rs/bylaws and no sympathy for the neighbors their selfish behavior
negatively affects.
[1] [http://blogs.wsj.com/developments/2015/03/30/airbnb-
pushes-u...](http://blogs.wsj.com/developments/2015/03/30/airbnb-pushes-up-
apartment-rents-slightly-study-says/?mod=WSJBlog)
------
hiou
_> The mean revenue per host was about $13,000 per year_
Otherwise known as the income that one would have previously obtained by
renting a room to a permanent resident. The big difference with Airbnb is that
the service is removing potential rooms and roommate situations from the
market. I'm not going to say whether Airbnb is a good thing as honestly I'm
leaning toward it being a net benefit. But to say it has not made finding a
place to live permanently in places like SF and NYC more difficult for 1st
time and early in life renters is difficult for me to agree with.
All progress has a price. And often that price is worth the benefit. But let's
not pretend there are not people out there that will be worse off in the short
term.
~~~
kelnos
_Otherwise known as the income that one would have previously obtained by
renting a room to a permanent resident._
Not true. If I want to rent my place out for a week while I'm out of town,
that would be income that could never be provided by a permanent resident. If
I rent out a spare room to someone in town for the weekend, that's not a
permanent resident. Maybe I don't _want_ a permanent roommate, but just want
some supplementary income here and there.
You're certainly welcome to argue that I shouldn't be allowed to do that, but
I'd disagree with that point of view, and that has nothing to do with whether
or not a permanent resident could be served by the space.
------
webmasterraj
This atrocious bill is yet another example of how the biggest unicorns are
facing a kind of challenge they aren't built to solve: the political one.
Until now, the rise and fall of tech companies has been determined by the double-
edged blade of innovation. Someone makes something new that works better, gets
big, and then someone else makes something newer and displaces them. Repeat
cycle over and over. It's why in tech, we specialize in the art and business
of innovation.
What we don't know how to do is navigate murky political waters. We're really,
really bad at it. Can you imagine another $10BN company even letting this kind
of bill happen, that would kill their largest market if it passed?
Airbnb isn't alone. Uber hired Obama's campaign manager because they realized
they're biggest existential threat is a political one too. But see their
ongoing lawsuits and outright bans in other countries – they haven't figured
how to solve the political question either.
Meanwhile, car companies with a lower market cap, like GM, could figure out
how to get a bailout from the government – right after it bailed out another
huge industry, banks.
Those guys are just better at it. They have been for a long time. They get
things like "don't optimize to something that solves problems. Optimize
optics." Or that real deals get done behind closed doors, because you can
control what happens there. That by the time it becomes a public debate,
you've already lost the game.
We in tech have a disgust for politics. Rightfully so. It's useless at best
and harmful more often. It doesn't follow the clear, hard and fast rules that
the rest of tech does. But if we don't hold our nose and figure out how to
play the game, or better yet, reinvent it, we'll get outplayed on the biggest
board of them all.
~~~
JonFish85
"a kind of challenge they aren't built to solve: the political one."
And yet these are the battles that "technology" companies like Uber and Airbnb
had to know were coming. At best, they live in a legal gray area. You can't
start a company skirting existing laws and expect politicians to look the
other way.
And it's not just politicians that are responsible for this. As a condo owner,
I specifically don't want Airbnb to be available in my association. There are
reasons that there are laws against leasing and subleasing apartments, and
it's not just to screw over startups.
Uber and Airbnb grew to huge valuations on the back of pushing externalities
onto others. Airbnb is taking their cut and looking the other way on things
like taxes, zoning regulations and such until they are forced to deal with it.
Uber pushes similar things off onto their "contractors".
Now the political environment is catching up to them, and it's time to deal
with the same legal environment that every other company has to deal with.
~~~
meatysnapper
Strongly agreed. If you are running an 1) illegal cab company or 2) illegal
hotel company, you have to expect this. At a certain scale you are tolerated,
but when you are a major player you will get some scrutiny that cannot just be
"disrupted" away.
------
pyrophane
New Yorker here. Short-term rentals and Airbnb in particular have had a
noticeable negative impact on my downtown Manhattan neighborhood, although I'm
not talking about rent. Everyone I know now has stories about "guests" who let
anyone and everyone into the building, damage common areas, and make noise all
night long, because really, what do they care? They are on vacation. They are
here to party, and then they are gone forever.
Apartments are designed for long-term residents. Why should we even consider
allowing them to become budget hotels?
~~~
Futurebot
Same here. Life-long native New Yorker, and I've never seen anything like it.
My building has many AirBnB guests (my floor alone has 2 apartments that have
different guests all the time.) Overall it doesn't impact my personal
experience, since the neighborhood I live in is fairly noisy already (LES) and
I don't really care unless they blast music.
That all said, I think there are things AirBnB can do to mitigate all
this, such as a standards enforcement division: a 24-hour service where you call them
up, make your complaint, and they send over some big scary people to knock on
the offender's door and ask them to "turn it down" or "pick up their garbage."
Local government offices and the landlords themselves can't respond quickly
enough (the former can't/won't send someone there at 2AM and it'd be pretty
tough to get the latter to run over to your apartment for this sort of thing.)
Basically AirBnB police. There are steps that can be taken that don't involve
the local government or housing authority; I think AirBnB would be wise to
take them.
------
physcab
SF absolutely needs to build more housing. SF also needs to build taller (more
skyscrapers).
I have trouble believing AirBnB helps SF. Anecdotally, I know a few people who
rent their places on AirBnB. All live in rent controlled units and effectively
re-rent at market rates. One actually reduced hours at his job because income
from AirBnB was so lucrative.
------
beatpanda
Sam, your post doesn't at all address the problem policymakers are trying to
solve, which is landlords evicting tenants and then converting those units to
short-term rentals. I agree that Prop F is a bad way to fix that bad behavior,
but it's also disingenuous not to talk about the problem. I don't think anybody
can make a coherent argument that renting out a spare room is driving up
prices. What does do that is landlords deciding they would rather be in the
hotel business.
~~~
geebee
Airbnb could very well drive up "spare room" prices. For instance, think about
a room in a house, or may be a small in-law, that used to be rented out to a
student or other longer term tenant. With airbnb, it may be possible to make
up that income on fewer days, or to greatly exceed it as a full time rental.
That would result in a unit being taken off the market as a permanent rental,
which could certainly reduce supply and drive up prices.
------
adrianmacneil
> About 33,000 of these were vacant, generally as a side effect of rent
> control laws. (I don’t honestly know if rent control is a net good or bad
> thing—I assume more good than bad—but it certainly keeps units off the
> market.)
I will never understand why most Americans generally favor a tough-luck, fire-
at-will attitude for employment, but are in favor of rent control and making
eviction extremely difficult.
Coming from New Zealand, it's the other way around (it's extremely difficult
to fire people, but there is no rent control and you can evict anyone with 90
days notice).
Not saying one or the other is necessarily better (I personally think
somewhere in the middle for both approaches would be best), but strict
eviction laws and rent control always seemed very un-American to me.
~~~
gohrt
Americans don't favor a tough-luck, fire-at-will attitude. Employers do
(obviously) and employees don't. Same as with rentals.
~~~
kelnos
Eh, I wouldn't say that's universally true. As an employee, I've appreciated
it when it's been (fairly) easy for the company to e.g. get rid of a peer that
was dragging the team down.
Several of the (smaller) companies I've worked for would not have survived if
not for at-will employment. That's certainly helped me as an employee.
Fortunately I haven't yet fallen on the "wrong" side of that equation; I
imagine I might feel differently if I had... but then that's kinda irrelevant.
------
balls187
> Unfortunately, a lot of other people have problems paying their rent or
> mortgage. 75% of Airbnb hosts in San Francisco say that their income from
> Airbnb helps them stay in their homes, and 60% of the Airbnb income goes to
> rent/mortgage and other housing expenses.
Sources of this data?
How many Airbnb people are putting out property they own vs those who are
renting?
It sounds like people are abusing it to stay in homes they could otherwise not
afford, which is in itself adding to the housing problems in San Fran.
Facilitating someone easily renting out their home: great. Allowing renters
(and to a lesser extent homeowners) to subsidize their overextended living:
bad.
------
abalone
_" only about 340 units in SF were rented on Airbnb more than 211 nights,
which is what Airbnb has calculated as the break-even point compared to long-
term rental"_
This is a crazy figure. That's saying hosts only charge about 1.7X per night
what they would get from a roommate or tenant. That's ridiculous.
A quick search on AirBnB shows rooms in my neighborhood going for $130-230.
That's $4k-7K/month for a room fully booked. A quick search on Craigslist
shows roommates wanted for $1200-2200. That's about a 3X markup, nearly double
what Airbnb claims.
That matches anecdotally what I hear a lot. People are increasingly preferring
to AirBnB rooms instead of seeking roommates. They get more money and/or have
more control over their space. No getting stuck with a crazy roommate, no
overnight guests, you can have the place to yourself when you want, etc. This
takes housing stock off the residential market and moves it to the more
attractive tourist market. It's similar when you look at entire apartments too
and the incentives to hold onto them and Airbnb them after you've really moved
out, instead of letting new residents move in.
So the 75 day limit that Prop F proposes is much better targeted at changing
those economics than the current 120 day limit (which is not very enforceable
anyway). That means you'd have to charge 5X to break even vs. a long term
rental, which is too much. So it only makes sense to AirBnB rooms/units that
really would never go onto the long term market anyway.
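To spell out the arithmetic behind those multiples: a long-term tenant pays for all 365 nights a year, so matching that income in 211 bookable nights means charging roughly 365 / 211 ≈ 1.7X per night, while matching it in only 75 nights means 365 / 75 ≈ 4.9X, call it 5X. And from the listing prices above, the midpoint of $130-230 a night is about $180, or roughly $5,400 for a fully booked month, against a ~$1,700 midpoint for a roommate, which is where the roughly 3X markup comes from.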
------
1024core
The problem with Prop F is that it has some very dangerous side effects. Read
this detailed analysis if you want to know more:
[https://medium.com/@emeyerson/prop-f-is-worse-than-you-
think...](https://medium.com/@emeyerson/prop-f-is-worse-than-you-
think-17e395ca8761)
------
billiam
My friend Matt just summed up your cynical formulation: "let's let Airbnb
capture tax revenue so that people now in their homes can stay there a little
longer" as a hand sandwich: I will sell you two pieces of bread and convince
you it will taste good if you shove your hand and start eating.
------
seiji
Sam means well, but he does live 200% inside the internet hype machine bubble
vortex:
_The whole magic of the sharing economy is better asset utilization and thus
lower prices for everyone. Home sharing makes better utilization out of a
fixed asset, and by more optimally filling space it means the same number of
people can use less supply._
"better utilization out of a fixed asset" is how we talk about factory
machinery, not so much living space.
Housing has physical implications and psychological cost. If we wanted
_optimal_ space filling, we'd put 10,000 bunk beds in a warehouse and tell
people to deal with it. The proles can have their bunk bed warehouse while the
billionaires can have estates in San Francisco. Et voilà: optimal filling of
space, allocated by level of monetary expenditure.
~~~
megaman22
> Housing has physical implications and psychological cost. If we wanted
> optimal space filling, we'd put 10,000 bunk beds in a warehouse and tell
> people to deal with it. The proles can have their bunk bed warehouse while
> the billionaires can have estates in San Francisco. et voilà, optimal
> filling of space allocated by level of monetary expenduture.
You know, if there were such arrangements, I'm sure that there would be people
who would jump on them. I'm a little surprised Google hasn't built company
dormitories, since they've got people living in vans in the parking lot rather
than paying $3000 a month for a studio apartment.
~~~
gohrt
Mountain View City Council has consistently blocked Google's attempts to
expand housing.
------
chermanowicz
Your arguments (and many others') about AirBnB revolve around housing, economics, etc.
There are other arguments to be made about quality of service and safety.
Read some of the comments from another recent HN story:
[https://news.ycombinator.com/item?id=10291070](https://news.ycombinator.com/item?id=10291070)
My own individual and anecdotal story: as someone whose neighbor was
AirBnB-ing his apartment next door to mine, seeing dozens of unfamiliar faces (and,
without going into detail, the behavior and antics of some of these occupants)
did not make me feel safe. If anything, these rentals should be severely restricted
on those grounds, not defended as "affordable housing". (Though I do agree that more housing
would generally improve the situation for all.)
------
caminante
Wow! ~2/3 of housing units in SF are rentals...
"In 2014 (the most recent year with available data) there were about 387,000
housing units in SF. About 38% were owner-occupied, and the remaining 62%
or 240,000 were rental units."
~~~
cheepin
I wonder why this is... My first guess is that hardly anyone that works in SF
can afford to own housing, my second is that real estate speculators are
buying up and renting a lot of San Francisco property.
~~~
mrkurt
Prop 13. Very low property taxes create tremendous incentives to hold on to
real estate you own. People paying 2000 level property taxes on real estate
with 2015 level values are making a killing on rent.
~~~
dragonwriter
> Prop 13. Very low property taxes create tremendous incentives to hold on to
> real estate you own.
Very low property taxes should be relatively neutral between holding and
trading compared to higher property taxes.
Prop. 13 encourages holding over trading because, while it does control
property tax rate, it also constrains tax basis value increases to small
annual increases while you hold property, but reassesses at full market value
when you buy a new property. Which means, it increases the incentives to hold
on to property once you've purchased it and decreases the incentives to
purchase property, because the property is (net) higher value to the current
owner than a new purchaser with otherwise similar profile, since the new
purchaser would have to pay higher annual property taxes.
------
jsprogrammer
> _In the past year, only about 340 units in SF were rented on Airbnb more
> than 211 nights_ , which is what Airbnb has calculated as the break-even
> point compared to long-term rental.
Ok, there are a small number of units that are let or sub-let for 2/3 of the
nights per year. That doesn't tell us much. I'd guess it would be near a full
time job to keep your house let out 66% of the time. You wouldn't even be
living there most of the time...how can you even really claim it to be yours?
Any observation would show that the primary use is for AirBnB and their
customers.
------
smacktoward
_> I recently reached out to Brian Chesky, the CEO of Airbnb, to learn more
about this._
I didn't reach out to any of the sponsors or advocates for Prop F, such as the
political figures and organizations listed at
[http://www.sharebettersf.com/endorsements-propf-prop-f-
airbn...](http://www.sharebettersf.com/endorsements-propf-prop-f-airbnb-sf/),
of course. And while this post is full of stats that sound a lot like the kind
of thing you'd get from Airbnb PR, nowhere in it am I going to inquire further
and link to an opposing view, or really engage with opposing views in any
material way. I'll just dismiss them by saying that the solution is for SF to
allow more building, as if making that happen hasn't been the most contentious
and complicated issue in the city literally for generations.
One could also argue that I myself have helped drive up the high cost of
housing in SF, both by running a program that requires the people it admits to
move to SF in order to participate, and more generally by being part of a hype
ecosystem that aims to convince impressionable young people that the only way
to be successful in tech is to somehow jam yourself into this already
bursting-at-the-seams city. I'm not really going to engage with that line of
thought either, though.
------
Mz
The data cited here makes the bill sound ridiculous, though it also leaves me
wondering how many more units are being rented out on AirBnB with less
frequency than these 340 units. Still, SF was pricey before AirBnB. It is
ridiculous to try to blame local housing prices on this one company.
It looks like AirBnB needs to do some serious PR work. I think their rapid
rise is helping create the illusion that they impact the local housing market
more than they actually do.
~~~
mildbow
They are on it: Sam's post is part of the PR work.
Why else would he say the fix to the housing problem is to put more housing on
airbnb?
A huge part of the problem is that people are renting places just to sub-lease
them. Guess what that does? Yup. It increases pricing where you are paying the
zero value-add middleman more.
But hey, that can't possibly be part of the problem. /s
I'll leave you with this:
"It is difficult to get a man to understand something, when his salary depends
on his not understanding it."
-- Upton Sinclair, iirc
~~~
Mz
A) I did say up front that I wondered what the other numbers are that Sam is
not putting in the article -- the other half of the picture. I have, in fact,
read "How to lie with statistics" and I am well aware we are being
intentionally given a certain framing from a party with a vested interest.
B) However, I also lived in the bay area at one time, in Solano County, and
was pursuing education with an eye towards going into some kind of urban
planning related career. In fact, I founded and moderated a subforum for a
time on the most successful urban planning forum around at that time. So I
have some familiarity with how crazy prices were back then, before AirBnB was
a gleam in anyone's eye. And also I have some familiarity with the various
factors that go into forcing housing prices up. Saying AirBnB contributes to
the problem is not crazy talk. But acting like they are the single most
important factor meriting the passage of a bill intended to kill them off -- I
want a tad more data than "But look at the crazy high local housing prices,
man!" Because that falls far short of proving they are having that big of an
effect.
C) Yeah, I am very familiar with the saying. I am well aware of how hard it is
to be both profitable and ethical. So far, I have managed to be pretty
ethical. I am also dirt poor. So I am a little tired of hearing that anyone
making money is clearly The Devil. The fact that this is part of a pro AirBnB
PR campaign does not ipso facto make it inherently evil. The other side is
also engaging in a PR campaign, and they also have vested interest that you
can put a dollar amount on. Sometimes, people are actually doing work they
actually fucking believe in. Those people still need to EAT and put a roof
over their head. I am so goddamn sick of the idea that all the good people are
dead martyrs and, if you still draw breath, you need to feel guilty about
every single fucking thing you do to try to keep body and soul together.
~~~
shostack
Given your interest in urban planning, what are your thoughts on what could
realistically cause housing prices to decline in the Bay Area? Particularly
interested in the Peninsula.
The main thing I've kept my eye on is interest rates, but there are obviously
other factors. I'm not convinced rising interest rates would even have that
much impact--there will always be people with more money who want to live here
for the weather/culture/food/location.
~~~
Mz
I haven't studied it (the specifics of what is going on in SF) well enough to
make specific recommendations for San Francisco. If I were on a task force
looking for answers, I would start by reading everything I could get my hands
on concerning a) California real estate taxes and b) rent control. I would
look for studies, I would look for what we can quantifiably show has a
measurable impact.
Then I would look at trying to find ways to incentivize making small spaces
with housing basics more available.
I would also look at economic factors like the fact that you can live in SF
without a car, so some people can afford the nosebleed rental prices because
they are paying only for rent rather than rent plus a car. And I would
consider creating a PR program around that angle. Walkable communities
typically are more expensive, because humans value the high quality of life
they afford, and they are mostly zoned out of existence. A lot of things that
historically created walkable communities cannot be recreated under modern
car-centric zoning laws.
Edit: To be clear, those are things I would start with, not _everything_ I
would do.
~~~
shostack
Thanks for sharing. Since the Peninsula doesn't have rent control, but DOES
have Prop 13, it has separate circumstances, but still many of the same
symptoms.
------
hoprocker
> About 33,000 of these were vacant, generally as a side effect of rent
> control laws. (I don’t honestly know if rent control is a net good or bad
> thing—I assume more good than bad—but it certainly keeps units off the
> market.)
If I understand it correctly, one of the most common ways of evicting rent-
controlled tenants is through owner move in. A side effect of this is that the
owner has to "live there" for 3 years[0]. Given this, it seems like this
statistic -- which, taken out of context, could be used to demonize rent-
controlled units as wasting valuable housing stock -- is actually forced on
the short-term rental market by profit-seeking landlords.
[0]
[http://www.sfrb.org/index.aspx?page=965](http://www.sfrb.org/index.aspx?page=965)
------
samstave
> _His flat is still on Airbnb and guess what, you can still "Instant Book" it! And I'd lay odds if you do, you'll be met at the door with some shabby excuse about why it isn't ready, but don't worry, he has another place for you not far away..._
WHY doesn't AirBnB have a fraud-checking department where apartments like
this are booked by agents of AirBnB to check in on just such things?
If ALL AirBnB hosts KNEW that their next tenant _COULD_ be an actual
AirBnB rep -- then they wouldn't pull shit like this as often.
And they should be able to get a "Verified good by AirBnB stays".
------
MaysonL
The price of housing in SF has about doubled in the past 5 years, according to
the post. What has happened to the price of hotel accommodations over the same
period?
------
pcmaffey
We have the same affordable housing problem here in Boulder, but on a much
smaller scale than SF. It's been this way for over a decade... Unfortunately,
fixing this housing dynamic is not so simple as increasing supply. Incremental
increases in supply can never keep up with exponential demand.
IMO the highest impact solution is to focus on transportation. But that's a
topic for another discussion.
~~~
shostack
How has it impacted Denver and the housing market there? Also, are there still
"affordable" and safe parts of Boulder? My understanding is there is plenty of
land that can be developed there.
Would love to understand more about the housing market over there since you
don't see it in the press nearly as much as the Bay Area.
~~~
pcmaffey
The greater effect has been on the suburban sprawl towns in between Boulder
and Denver. Places like Louisville, Lafeyette, and Longmont have seen dramatic
increases in both prices and quality of living, just in the past 5 years.
These towns don't have the restrictions on development that Boulder does. So
that's where the growth is going.
Boulder continues to develop, but its pace can't keep up with demand at all
(which is a good thing). It's a university town, so there's lots of rentals.
But as for purchasing homes, there's really nothing "affordable" in Boulder
proper (except perhaps compared to SF). Nor are there any "unsafe" parts of
Boulder...
~~~
shostack
Interesting, thanks for the insight. Would you consider any of these sprawl
areas as desirable at all? I'm trying to form a comparison to the Peninsula
here in the Bay Area.
How is safety in those other areas or Denver proper?
~~~
pcmaffey
Yeah, certainly, each has its own vibe. Sort of depends on what you're looking
for. Colorado in general IMO is the coolest state in the nation (and I've
lived in a few). The different areas represent magnitudes of that. They are
all relatively safe, compared to East coast (where I'm from). Though I can't
speak much for Denver as I don't need to go there much...
Compared to the bay area, I used to live in the mountains of Santa Cruz, and
now live in the mountains outside Boulder. Other than that, I don't have much
experience with the differing areas of the SF peninsula.
I'd recommend maybe starting your research with Louisville. It's blown up
quite a bit recently, but is not quite at Boulder prices.
If you have some specific questions about places from there, I'm happy to
help. :)
------
dynofuz
The real solution here is to change the laws protecting gigantic swaths of
ugly "historic" districts like the mission. Unfortunately no politician or
home owner wants to vote for this because it would dramatically devalue their
homes if SF is finally allowed to build vertically. Then the tiny increase in
housing costs due to Airbnb is no big deal.
------
Xyik
Why doesn't AirB&B work to reduce prices? Once it becomes less profitable for
people to sublet their places for the sake of making money, maybe people
will see it as less of an evil. There are far too many hacker hotels in SF on
AirB&B charging ridiculous rates, jamming up to 20 people into a single condo
stacked with bunk beds.
------
ilaksh
[http://runvnc.github.io/tinyvillage/](http://runvnc.github.io/tinyvillage/)
------
rootedbox
A guy with a bias is trying to tell me that lessening supply in a super-high-
demand region is only nominally affecting prices.
My head is spinning...
~~~
sama
Actually what I'm saying is that much more supply is the thing that will drive
prices down.
(And also that the number of units that are effectively full-time rented on
Airbnb is just about 1% of the entire off-rental-market supply--let's get that
99% back!)
~~~
rootedbox
It's just that your argument requires me to believe...
"In the past year, only about 340 units in SF were rented on Airbnb more than
211 nights, which is what Airbnb has calculated as the break-even point
compared to long-term rental."
This just didn't sound right, and doing the quick math with the average one
bedroom going for $3,500 a month, reaching that 211-night number would mean
that rentals on Airbnb are going for about $200 a night.
But doing a search in SF on Airbnb, the average rental is $422. Now some of
these are multi-room, and some are just a couch, but I can't seem to find a
single one-bedroom non-share for under $260.
This makes that 211 figure feel like it's off, which makes me feel that the
340-unit figure is off too.
Can we see the calculations used? Also, is there a big jump in those units at
210 days? 200 or 182?
I mean, as a landlord, if you only have to work half the year and make only a
little less revenue, plus the upside of fewer liabilities, I could see myself
wanting to Airbnb my unit rather than rent it out.
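(For reference, that arithmetic in a few lines of Python -- the inputs are the
rough figures quoted in this thread, not Airbnb's data, and it ignores cleaning
fees, vacancy and rent control:)

    # Break-even: how many Airbnb nights equal a year of long-term rent?
    monthly_rent = 3500                   # assumed average SF one-bedroom, $/month
    annual_rent = monthly_rent * 12       # $42,000

    for nightly_rate in (200, 260, 422):  # nightly rates mentioned above
        nights = annual_rent / nightly_rate
        print(f"${nightly_rate}/night -> break-even at ~{nights:.0f} nights/year")

    # $200/night -> ~210 nights (roughly Airbnb's quoted 211-night figure)
    # $422/night -> ~100 nights (why the 211-night threshold looks generous)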
~~~
rootedbox
Also, when you throw rent control into the math, the break-even calculation
becomes non-linear and drops way below 211 nights very quickly, because a
rent-controlled unit's revenue stays flat month after month while revenue from
an upward short-term market keeps rising. I would suggest you revisit the
math, or ask to see the data from whoever did the math for you.
------
ksherlock
What some people call "the sharing economy" is not new. After all, what is the
world's oldest profession if not "sharing" genitals. Sometimes with a
middleman (or "pimp") taking his cut.
------
geebee
Unfortunately, some of the problem may be the language we use to advance our
points. Here's a phrase that I think really does illustrate this:
"Making it harder to share your home in San Francisco may make it impossible
for some of these hosts to afford to stay in their homes and in this city."
I really do want to discuss this reasonably, but to me, this is clearly a
misuse of the word "share". There is a powerful emotion around "sharing", and
to say that San Francisco is making it harder to "share" your home does have a
different ring than saying it is making it harder to "rent out your home short
term".
I will certainly agree that there can be some ambiguity around the word
"share". For instance, if two people both pay equally for a large sandwich,
they might say they "shared" it rather than "split it". But when you list your
room on a website for a certain price, and someone pays you for it, I don't
think we're anywhere close to that ambiguous grey area. This is clearly a
quid-pro-quo commercial transaction. They can be friendly transactions, people
can get to know each other through these transactions. I'm not even saying
it's an undesirable transaction (more or less everyone I know thinks that
airbnb has its place, though there is great disagreement over how these
rentals should be regulated).
But I really don't think it's "sharing" by any reasonable definition of the
term.
~~~
sama
That's a good point; now that everyone calls it the sharing economy that's the
word that came to mind. But I'll change it.
~~~
chralieboy
"Sharing economy" is just the marketing term for it. If I share a bench with
you, I'm not charging you for the privilege. I agree that it is the word we
use, but it doesn't accurately communicate what we're talking about.
It's a difficult line to walk. On the one hand, sharing sounds nice. Even as
capitalists we are suspicious of efforts to make a profit. And many of the
"sharing economy" services are about using things that you personally own and
selling use of them to the public.
On the other hand, when AirBnB/Uber/etc try to make an economic argument for
their services, it is clearly not around sharing. We're exchanging value (my
empty home, parked car, etc.) for value (your dollars, as a proxy for work you
have done.)
~~~
bduerst
I get that you're trying to break down "sharing economy" by the semantics of
the word "share", but even in your hypothetical the bench is [presumably]
owned by the city, which has already granted access to it for everyone.
By sharing access to economic goods and services that were previously
unavailable, waste from market inefficiencies is being eliminated. Just
because companies are profiting from this waste elimination doesn't negate the
fact that shared economies can still be beneficial.
Even so, considering the size of these markets now, I don't think people are
going to confuse "Sharing economy" with altruism.
~~~
geebee
People have chanted "sharing is caring" at demonstrations against greater
regulations and restrictions on short term rentals. I think we're stepping
close to a deliberate ambiguity.
------
swagv
Looking forward to the day where nobody can afford to live in SF unless they
are also running a private hotel. That will be the new standard.
How money corrupts Congress. Lessig speaks at Google - zlotty
http://www.youtube.com/watch?feature=player_embedded&v=Ik1AK56FtVc
======
sp332
If you can't watch videos, or if you like being able to skim, Lessig makes his
case in text: <https://news.ycombinator.com/item?id=3353324>
~~~
zlotty
thx for the link
------
teresko
A really good lecture. My recommendation.
+ favorite
Cinderella - CLI app to manage open source dev on OSX - tzm
http://www.atmos.org/cinderella/
======
teilo
Needs virtualenv and virtualenvwrapper. These days, almost no one develops in
Python without them.
------
hackermom
What extra does the user get from an additional installation of Python and
Ruby as contained in this package? (OS X already has Python, Ruby, Perl, PHP
and a lot more, since forever.)
~~~
teilo
Python 2.7, and a canonical install of Ruby (the native install has problems,
or at least it used to).
Samsung Develops Battery Material with 5x Faster Charging Speed - nielsbjerg
https://news.samsung.com/global/samsung-develops-battery-material-with-5x-faster-charging-speed
======
philipkglass
This looks closer to an industrial product than many new battery technology
announcements. The cathode chemistry isn't exotic. The efficiency is high and
stable (supplementary table 3). The rate capability is good. The specific
energy is quite good. The cycling stability is pretty good.
The trickiest part looks like the chemical vapor deposition of graphene onto
SiO2 nanoparticles. CVD is a slow growth process that I normally see applied
to creating precise, thin layers on flat substrates. I think it would be hard
to scale this up to industrial (tonne per day) quantities of coated particles.
Is it possible to replace that process with something like a fluidized bed
reactor? I'm out of my depth here regarding paths to scale-up -- I have a
chemistry background, but I'm not qualified to comment on most chemical
engineering.
~~~
voldemort1968
"I'm out of my depth here"
Could have fooled me.
~~~
throwawayjava
Chem != Chem Eng. I take your point though ;)
------
djrogers
If even 10% of the battery 'breakthroughs' we've seen on these pages in the
past 5 years had come to fruition, we'd have 20 kWh batteries that charge in 10
minutes on our phones. Oh, and they'd be 100% recyclable but that wouldn't
matter because they'd last for 100k cycles.
~~~
jacquesm
It's the exact same thing with solar panel technologies. But then if you look
at the long term, 10 or 20 years, you see that there really is an underlying
current (...) of steady improvements that eventually make it to the market or
that reduce cost. But the vast majority are hype.
~~~
mark-r
"We always overestimate the change that will occur in the next two years and
underestimate the change that will occur in the next ten." - Bill Gates
~~~
njarboe
Most humans seem to understand linear growth pretty well. It is hard to get an
intuitive feel for exponential growth.
~~~
wickawic
"The greatest shortcoming of the human race is our inability to understand the
exponential function."
- Al Bartlett
~~~
SomeStupidPoint
It's because we think in 3D, so we only really see three steps of exponential
growth.
If we thought in 100D, we might have a better sense for it, because we'd be
able to see a hundred of them.
Hypervolume grows exponentially.
One way to get a _really_ rough idea is to try and control each and every
joint individually.
Close your eyes and try to imagine that each joint, each muscle is a dimension
along which you can move (by moving it), and your posture at any given moment
is a point in that space. When you move, you make a line through it. Don't
_picture_ it, just _feel_ it.
What is the shape of that space?
You can get an idea of what exponential growth is like by exploring how the
shape of that space changes as you add more and more things you're
controlling.
~~~
flatfilefan
An interesting way to look at human thinking patterns. Is there any book on
it?
I never completely figured out Aikido with its joint locks and levers. Maybe
talented aikidokas have a greater capacity to visualize/feel this type of
activity?
~~~
Pamar
(Aikido SanDan, ~28 years of practice, still going to the dojo 3 times a
week).
Interesting point, but I don’t think Aikidoka have any special talent for
that: we use a small number of techniques and what changes is the way you use
them in response to different attacks/holds.
Also, you tend to work on your specific Ryu (school) technical curriculum and
nobody goes around “inventing” new locks.
(Some argue that Aikido is not really adapting to the modern world nor cross-
pollinating with other martial arts due to - arguably excessive - reverence
for tradition.)
------
ficklepickle
Is it possible that this announcement explains the crazy recharge rates
announced during the Tesla truck unveil?
Experts were skeptical[1] that their recharge rates and capacity were possible
with current gen tech...unless Elon knew something they didn't.
[1] [https://www.bloomberg.com/news/articles/2017-11-24/tesla-
s-n...](https://www.bloomberg.com/news/articles/2017-11-24/tesla-s-newest-
promises-break-the-laws-of-batteries)
~~~
krolley
Experts are right to be skeptical that it's possible with current gen tech. As
usual, Musk is probably extrapolating charge speeds to when the truck will be
finally delivered, which is in what, 2020?
------
saagarjha
> Additionally, the battery can maintain a highly stable 60 degree Celsius
> temperature, with stable battery temperatures particularly key for electric
> vehicles.
Isn't this only necessary because Lithium-Ion batteries need it to maintain
efficiency and longevity? Is this also an issue with graphene?
~~~
mrguyorama
meanwhile if my phone maintained 60 degrees in my pocket, I'd be rather
unhappy
------
Skunkleton
Equally interesting is the claim of increased capacity. I wonder how
impractical this is to manufacture?
Edit: better source here
[https://www.nature.com/articles/s41467-017-01823-7](https://www.nature.com/articles/s41467-017-01823-7)
~~~
_grep_
So far there is no way to reliably mass produce graphene. There have been
claims in the last year or so that we're getting closer, but nothing real yet.
~~~
agumonkey
instead of mass production let's have tiny production pods patent free so we
can all make the graphene
~~~
pat2man
So scotch tape and pencils?
~~~
agumonkey
A very low jab. There are other ways to generate graphene since then, thank you.
------
bufferoverflow
Graphene coatings for anodes/cathodes is something Robert Murray-Smith has
been talking about on YouTube for years, many people called him a scam artist,
even though he never tried to sell anything.
[https://youtube.com/user/RobertMurraySmith](https://youtube.com/user/RobertMurraySmith)
------
userbinator
...and 5x shorter cycle life?
Observe the noticeable lack of any mention of how many cycles a cell will last
at this charge rate. It is well known that an ordinary li-ion cell can be charged
extremely fast too, as long as you don't charge so fast it heats up rapidly
and goes into explosive thermal runaway, but it shortens the lifetime
considerably.
~~~
Defenestresque
You probably missed it, but the nature.com article that some commenters have
referenced [1] has more details, including information on the charge rate.
> A full-cell incorporating graphene balls increases the volumetric energy
> density by 27.6% compared to a control cell without graphene balls, showing
> the possibility of achieving 800 Wh L−1 in a commercial cell setting, along
> with a high cyclability of 78.6% capacity retention after 500 cycles at 5C
> and 60 °C.
In your other comment you write:
>the standard is 80% capacity after 500 cycles at the normally specified (1C)
charge rate
So I'd say that's pretty good.
[1]
[https://www.nature.com/articles/s41467-017-01823-7](https://www.nature.com/articles/s41467-017-01823-7)
~~~
dtx1
> A full-cell incorporating graphene balls increases the volumetric energy
> density by 27.6% compared to a control cell without graphene balls, showing
> the possibility of achieving 800 Wh L−1 in a commercial cell setting, along
> with a high cyclability of 78.6% capacity retention after 500 cycles at 5C
> and 60 °C.
Does that mean 5C charge rate and > 5C discharge? Because in the EV market, 5C
discharge would be borderline enough (I think Tesla's 18650s discharge at a
peak of 20 A per ~3.5 Ah cell, so 5C discharge would be cutting it very close.)
If it's 5C charge and getting to 500 cycles with higher discharge, then... woah.
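(For anyone not fluent in C-rates, the arithmetic behind that comparison in a
few lines of Python -- the cell figures are the rough numbers quoted above,
not official specs:)

    # C-rate: current expressed as a multiple of the cell's capacity in Ah.
    # 1C (dis)charges the full capacity in one hour; 5C in about 12 minutes.
    capacity_ah = 3.5        # assumed Tesla-style 18650 cell capacity
    peak_current_a = 20.0    # quoted peak discharge current

    c_rate = peak_current_a / capacity_ah
    print(f"20 A from a 3.5 Ah cell is about {c_rate:.1f}C")        # ~5.7C
    print(f"A full charge at 5C takes about {60 / 5:.0f} minutes")  # ideal case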
------
csours
So it seems that this cannot be immediately scaled. I wonder if Samsung/Apple
will incorporate this in a super-luxe phone, which could perhaps bring it to
scale.
~~~
mark-r
I think the better application would be car batteries - you have huge
incentive for a really fast charge. Imagine charging the battery in the same
time it takes now to fill your gas tank!
------
rurban
It's not new battery material. It's just a better anode coating with a
graphene layer, only a normal lithium-ion battery. Same strategy as most
improvements there. Means time to market could be much faster.
The problem is that this graphene layer is extremely thin, one atom. Mass
production, which is what they claim to do, would be a killer app for much more
than just batteries, but for batteries it's the easiest win.
------
m3kw9
What about the other rather important attributes like discharge rate, losses,
temperature stability?
~~~
hwillis
Discharge rate appears similarly improved (IIRC), losses aren't really
different, temperature stability is increased, cycle life is increased vs. no
additives, but they didn't test with additives.
~~~
arnoooooo
Indeed, but you'll need to be able to supply the current. Tesla superchargers
are the exception; other than them, 50kW is the max you'll get.
For cars, having twice the capacity with the same charge speed would be
enough, since you can charge slowly when you sleep, what matters is that the
car can handle the distance you can travel in a day.
------
matco11
...And suddenly, Tesla’s battery technology progress implied by the semi’s
announcement looks conservative
------
AJRF
Painting the bike-shed here. We need capacity, not recharge speed.
~~~
hwillis
Most people disagree. It's rare to drive >500 miles between charges, but most
people will want to spend less than 20 minutes charging when they want to go
long distances.
------
executive
Is this the Galaxy Note 7?
~~~
zeep
it used to be
------
georgespencer
What could possibly go wrong
------
marknadal
This is some explosive news! ;)
------
Mrtierne
Whenever I see an announcement for battery technology I'm always just waiting
for Musk's reply.
------
orliesaurus
Anyone else clicked this hoping Samsung would announce "a battery that won't
blow up your phone and will last more than your current battery" and then
reading the comments felt every single one of those dreams and hopes being
shattered, one by one?...
~~~
orliesaurus
Welp I guess I was the only one
Ceylon 1.2.0 is now available - egorst
http://ceylon-lang.org/blog/2015/10/29/ceylon-1-2-0/
======
gavinking
Yay!
Folks, please feel welcome to Ask Me Anything.
~~~
marvelous
Yay!
The language module is a 1.5MB JavaScript beast and people routinely complain
about 150KB frameworks on HN. Are there plans to have a whole program
optimisation pass to prune dead parts of that module (a ceylon webpack or
ceylon closure command) ?
~~~
gavinking
It seems to me that the best solution here is to split the single file into
one js file for each package. A big part of that file is the implementation of
the metamodel, and it's very likely that a lot of people won't even want to
use that on the client side.
We've even discussed the pros and cons of actually splitting the metamodel
into its own separate module, ceylon.metamodel, or whatever.
P.S. My tests with uglify-js suggest that minification probably isn't the most
fruitful path here.
Official: Anonymous May Be Able to Disable Power Grids by Next Year - maudlinmau5
http://mashable.com/2012/02/21/anonymous-threat/
======
duncan_bayne
Here comes the FUD in advance of a Govt. crackdown on hacktivism. Cute.
Music Theory for Musicians and Normal People - dmmalam
http://academic.udayton.edu/tobyrush/theorypages/
======
jtheory
Teaching music theory is _damned hard_.
You very quickly find yourself making statements like this (taken from the
second PDF in this series): "A tuplet is any non-standard division of a note.
These are usually written as a group of notes delineated with a bracket and a
number showing the division being made." It's correct in grammar and sense,
and about as exciting as a lawn-mower repair manual.
This is probably the best series of music theory cheatsheets I've ever seen,
though... just about any other music theory resource you can find, online or
off, gets bogged down _immediately_ in sleep-inducing language. I had to poke
around a bit to find the example above.
The real problem is the "building blocks" approach to music theory pedagogy;
that is, making students learn all of the basic concepts before they can do
anything remotely interesting or useful.
It's really, really logical. It's also a sort of mental torture, in the realm
of music theory, because a lot of the building blocks are arbitrarily weird
for historical reasons, and it takes too much meaningless memorization before
you can do something as trivial as sight-reading a piece of music you could
_already pick out by ear 10x faster_. What about doing basic analysis of a
piece of music? So, so many building blocks required first....
I think it's possible to make learning theory enjoyable, but it'd be damned
hard (and not possible in a static form).
That said, if you have the external motivation already to make the slog
through the basics, these are solid references to help get the details
straight in your head.
~~~
dizzystar
The way this teaches it is very difficult. They make the same mistake as all
music theory lessons, which is to dive right into the Circle of Fifths without
ever mentioning _how_ the Circle of Fifths is derived.
I've been thinking about writing a music theory lesson for programmers and
"normal" people. I swear it is a conspiracy theory of music teachers to make
music theory seem hard. Once you see the logic of how it all comes together,
it is head-slapping easy. Music theory is all created from a few easy-to-
remember patterns.
I already wrote a bit of music theory code in Python. Maybe this will be my
Thanksgiving project.
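(Illustration: a minimal Python sketch of the derivation being described --
stack perfect fifths, i.e. 7 semitones, modulo 12 -- not the commenter's
actual code:)

    # Derive the circle of fifths by repeatedly going up a perfect fifth
    # (7 semitones) from C, wrapping around the 12-note chromatic scale.
    NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    circle = [NOTES[(i * 7) % 12] for i in range(12)]
    print(" -> ".join(circle))
    # C -> G -> D -> A -> E -> B -> F# -> C# -> G# -> D# -> A# -> F
    # (spelled with sharps here; the back half is conventionally written with
    # flats: Db, Ab, Eb, Bb)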
~~~
mietek
I'm hoping one day to find an explanation of what music really is; why do
certain patterns of sounds appeal to us; why do we share a sense of melody,
harmony, rhythm.
Ideally, this explanation would ignore centuries of historical cruft, starting
instead from the physical and physiological basics, and making full use of the
infinitely malleable sound generators we all own.
~~~
zandomatter
A little something like this? <https://www.youtube.com/watch?v=i_0DXxNeaQ0>
------
commontone
I'm the author of the pages, and wow... I was wondering where all the sudden
traffic was coming from. Thanks, dmmalam, for getting my stuff on the front
page, and for those who emailed and let me know about it.
First, sorry about the Issuu thing. These pages are actually several years
old, and at the time Issuu was actually the easiest way I knew to make them
available without burning out my personal hosting bandwidth. I created the
index page later on, but used the Issuu links since they were there. (You have
to understand, there has never been more than a trickle of a demand for them
outside of my own students.)
The other reason I was a little hesitant to bundle them all together is
because I'm still working on them, and I didn't want to "publish" something
that had the air of being complete.
But the internet has spoken... I've added a link at the top of the page which
takes you to a single PDF. (Thanks to jamie_ca and pyroMax for doing this
before I stumbled into the party.) Oh, and I fixed the <title> tag, too.
Also, thanks very much for the other feedback that has been sent my way; I do
genuinely appreciate it. While I'd like to retain sole authorship (at least
for now) rather than make them open-source, I most definitely welcome comments
on how they can be improved.
~~~
steamer25
One thing I noticed so far is that you make mention of half vs. whole steps
while discussing accidentals on page one but they're not defined until the
major scale is introduced on the fifth page. That could throw beginners off a
bit.
------
Cogito
This looks like a great resource, that is severely suffering from lack of
accessibility (as pointed out by many others here). I emailed the author,
hopefully they will be able to improve the usability. Following is the guts of
message I sent, for reference. The documents look over a year old in most
cases, so I doubt we will see much, but you never know!
----
First of all, thanks! These are some excellent notes. That said, it is
_extremely_ irritating trying to read them. If you could provide the ability
to do one or all of the following it would be most excellent:
1\. Download of the entire pdf as one document
2\. View the documents as a web page/series of web pages
3\. Open-source the documentation so others can contribute/provide fixes
------
mertd
This is truly a great effort. At the same time I am frustrated by the choice
of the medium. We are well past the age of disseminating information through
print. I would love to "hear" the concepts described. Why not make an
interactive web page? Maybe sprinkle some audio samples here and there? It
seems convoluted to not use sense of hearing to describe music.
~~~
msluyter
Indeed. The ability to click on an interval and immediately hear it -- and/or
simultaneously see it played on the piano -- would be quite nice. (Or,
conversely, the ability to see piano notes instantly rendered on a staff and
have the intervals identified.) Combining two learning pathways -- visual and
aural = win.
~~~
jtheory
If Java applets don't make you wince too badly, I have some interactive music
theory concepts and drills freely available here:
<http://www.emusictheory.com/interact.html>
and here <http://www.emusictheory.com/practice.html>
I largely ran out of time to extend/improve it several years back, but it
still gets quite a lot of use; students of subscribing teachers can use MIDI
keyboards as well, which makes the instrument/theory link quite tangible.
------
cllns
FYI, the name seems to be playing off 'music for geeks and nerds':
<http://news.ycombinator.com/item?id=4295714>
<http://musicforgeeksandnerds.com/>
~~~
pav3l
I remember that book being posted on HN, but was hesitant to order it. Has
anyone here read it? What are your thoughts?
------
R_Edward
OK, I can understand never including a leap of an augmented fourth in a single
voice. That's just cruel to your singers. But an augmented second? As in a
minor third? As in the first two notes of Greensleeves? or Misty? Whyever not?
~~~
mysterywhiteboy
An augmented fourth sounds great as long as it is then resolved e.g. to the
fifth. See "Maria" from West Side Story[1] for the probably the most well
known use of an augmented fourth. The first interval when he sings "Maria" is
an augmented fourth.
[1]
[http://www.youtube.com/watch?feature=player_detailpage&v...](http://www.youtube.com/watch?feature=player_detailpage&v=VpdB6CN7jww#t=38s)
~~~
R_Edward
You're right, it sounds great--but it's darned hard for the average singer to
nail it. In any case, I'd still consider your example to be an exception to
the general rule, while the augmented second is used so often that I have to
believe the author meant something other than what he actually said. (To be
excruciatingly precise though, augmented seconds are not used nearly as often
as minor thirds, which are sonically identical, even though they're
musicographically different.)
------
wallflower
OT: I am not a musician, still kick myself for giving up piano lessons after
only a couple years. I believe that anyone who writes software can learn from
how musicians practice and get better and don't or do get in a
creative/skills/motivation/passion/Groundhog-Day rut...
One of the most interesting books I have in my library is "Effortless
Mastery". Recommended by a musician and artist.
[http://www.amazon.com/Effortless-Mastery-Liberating-
Master-M...](http://www.amazon.com/Effortless-Mastery-Liberating-Master-
Musician/dp/156224003X)
~~~
gtani
Werner's book has value for software devs or mathematicians that read it, if
you're the kind that falls into a trance, given a suitable problem to think
about.
These also
<http://sivers.org/berklee>
<http://sivers.org/kimo>
<http://sivers.org/session-musician>
<http://sivers.org/sakamoto>
------
pav3l
Thanks for posting this, I have had some very vague ideas about some cool
music-related side projects that I could work on, but never knew how to go
about learning the theory (enough to at least formulate some well-defined
projects). This looks like a good start. Hoping the discussion here will pick
up to see more suggestions for math/cs oriented crowd.
------
akandiah
It's good, but I dislike the way that it's presented. If you want something
that's presented a little better, you may want to try:
<http://www.musictheory.net/lessons>
------
weewooweewoo
Anyone who spent time to download every single page want to upload the set?
~~~
brunorsini
...please? it just makes zero sense downloading these files one by one, such
is the state of the internet. whoever is benefiting from this, please allow me
to just send you a few bucks for not going through this awfulness...
------
justinator
This looks great; I hope the author puts a title on the HTML page!
------
ronyeh
Thanks, this is a nice summary of music theory. I wish the font were more
readable... though I like how it conveys a casual feel.
------
RossDM
This is pretty sweet. I wish there was a better way of browsing through all
the cheat sheets in some kind of full-screen view.
~~~
raylu
I am also rather annoyed at issuu. If these were all in one PDF file, this
would be much easier to consume.
------
scurvyscott
This is awesome, nice work, thanks for sharing.
------
jws
Issuu wins for most annoying way to break my web experience. I have a
perfectly serviceable PDF renderer, but instead I have to let Flash have a
shot at my security to get a slowly loading page that has navigation obscuring
the content and ignores my scrolling input, requiring me to use their invented
elements and watch their slow, jerky, scroll animation. Going to the next page
requires closing a tab, searching for which page I was on last in a grid of
similar thumbnails, clicking the next one, clicking again to _really_ go to
the page, and one more click to approve Flash (ok, that one is self
inflicted).
That was a lot of effort on their part to make an interface annoying enough
for me to ignore this work.
~~~
sigsergv
Luckily you can (after stupid registration) download these pages and read them
offline.
~~~
jtheory
Yes, one page at a time.
After providing your age (I just turned 99 today!) among other required
fields.
~~~
jamie_ca
At least he licensed them CC visibly - Here's the first section merged:
[http://dl.dropbox.com/u/1002031/Music%20Theory%20Fundamental...](http://dl.dropbox.com/u/1002031/Music%20Theory%20Fundamentals%20-%20Toby%20Rush.pdf)
~~~
brian_cloutier
Thank you so much. I would love the rest too, if you feel like making another
pdf.
~~~
pyroMax
Here you go, all pages merged, plus a little bonus:
<https://www.dropbox.com/s/ln23462k6gu2ay8/Music_Theory.zip>
~~~
keithpeter
Tip of the hat (repeated in 6/8) for using some of your time to save the rest
of us a bit of time.
Can we please stop saying ORDER BY RAND/RANDOM is slow? - compay
http://njclarke.com/posts/can-we-please-stop-saying-order-by-random-is-slow.html
======
andfarm
Sadly, while just selecting an id is faster than selecting the whole row, it's
still a very slow operation overall. Here's the results on an 8-million-row
production table:
mysql> select * from large_table order by rand() limit 1;
<...>
1 row in set (36.69 sec)
mysql> select primary_key_column from large_table order by rand() limit 1;
<...>
1 row in set (7.33 sec)
Basically, ORDER BY RAND() forces a temporary table / filesort no matter what;
selecting fewer columns decreases the size of the temporary table, but doesn't
actually eliminate the problem.
The best way to select a random row from a MySQL table is using a trick I got
from Mediawiki: create an indexed float column, set it to RAND() for each row,
and select random rows using:
SELECT * FROM table WHERE randnum > RAND() ORDER BY randnum LIMIT 1
This runs as an index range scan, making it basically instantaneous.
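(For completeness, a sketch of the one-time setup that trick needs, here via
pymysql -- the table/column names and connection details are placeholders, and
the random threshold is computed client-side so the index comparison is against
a constant, which is roughly how MediaWiki's Special:Random works:)

    import random
    import pymysql

    conn = pymysql.connect(host="localhost", user="app",
                           password="secret", database="mydb")
    with conn.cursor() as cur:
        # One-time setup: add the random column, backfill it, and index it.
        # New rows also need randnum populated (e.g. set it on insert).
        cur.execute("ALTER TABLE large_table ADD COLUMN randnum FLOAT")
        cur.execute("UPDATE large_table SET randnum = RAND()")
        cur.execute("CREATE INDEX idx_randnum ON large_table (randnum)")
        conn.commit()

        # Pick a random row via an index range scan on randnum.
        threshold = random.random()
        cur.execute("SELECT * FROM large_table WHERE randnum > %s "
                    "ORDER BY randnum LIMIT 1", (threshold,))
        row = cur.fetchone()
        if row is None:  # threshold was above the largest randnum: wrap around
            cur.execute("SELECT * FROM large_table ORDER BY randnum LIMIT 1")
            row = cur.fetchone()

    # Caveat: rows that follow large gaps in randnum are slightly more likely to
    # be picked, which is usually fine for a "show me a random row" feature.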
------
pinksoda
He's using a small table with only 100,000 rows. Let's see him claim rand() is
fast on 1m, 5m, or 10m rows.
Ask HN: How to start solving binary challenges in a CTF - raven_stark
I'm quite used to web challenges in a CTF. Also I'm familiar with assembly programming, gdb, but find it difficult to solve binary challenges. What all tools should I start using?
======
kiloreux
Since you already know how to use GDB and you're familiar with assembly (though
I don't know how much assembly you know), the next thing you need is a clear
picture of how programs execute in memory (stack, heap, syscalls, ...). Once
you have that clear in your mind, try to draw a map of the logic of execution
of the binary -- it will be pretty simple, since these are generally small
programs -- and look for the points and weaknesses in the logic that you might
attack. One tool I use frequently is [0]peda; using some visual GDB extension
would be really helpful for you, instead of checking every time how the
registers change values. A little knowledge about how compilers work and about
the OS you're working on will also be really helpful; the binaries are mostly
independent of those details, but extra knowledge is always useful.
[0][https://github.com/longld/peda](https://github.com/longld/peda)
------
hatsunearu
[https://microcorruption.com/](https://microcorruption.com/)
nuff said. Hands on exercises through MSP430 hackmes. Only problem is that it
won't hold your hand through it other than the first one, and you may need to
read online solutions to kinda get the hang of it.
That's how I learned ASM RE, and I tried x86 hackme's in a hackathon and I
came out first. I have completed less than 10 microcorruption challenges. So
there's that.
~~~
phaus
After the first problem, the one that's a tutorial, did you feel like you
could actually solve the second challenge? I still feel completely lost. For
the most part I understand the problem it walks you through, but I feel like I
don't even know where to start with the second one.
I know how to program at a basic level with higher-level languages like
python, but I'm finding this low-level stuff rather difficult.
~~~
hatsunearu
Late reply but yes, I kinda got stuck at times but eventually I got around to
getting it working. Doing a bit of research will get you far.
If that doesn't help, read the walkthroughs online, but don't read the entire
walkthrough because that won't help on your education. Read one line, ponder,
etc.
I'm a hardware guy and I really really love low level stuff; it's my
homeground. I do admit that it's not for everybody.
------
ismailamca
If you are comfortable with gdb, go with it; otherwise I generally prefer
hexdump, objdump and radare2 over gdb (for Linux pwnables). I really like
radare2, and CTFs generally come with radare nowadays.
However, I think the most important thing about cracking challenges is your
knowledge: you need to learn the platforms, the architecture, the possible
vulnerabilities and the exploitation of all of them. So you may benefit from
reading some vuln zines like Phrack and Valhalla, some vx forums, or papers
from exploit-db. There are also very nice books where you can learn basic
exploitation techniques (Shellcoder's Handbook, Hacking: The Art of
Exploitation, etc.). These may be useful once you really have the basics down.
If you aren't comfortable with a shell (bash, sh, zsh, etc.), you should get
comfortable with one at the beginning.
You also need to learn some C and another scripting language (like Python,
Perl, Ruby, Lua, etc.) for effective cracking (on *nixes).
And don't use Windows; it makes you lazy.
You can also take these courses, which would be a marvelous start:
[http://www.opensecuritytraining.info/](http://www.opensecuritytraining.info/)
<IMPORTANT!> Before starting these, please ask yourself: why do you do this to
yourself? Go and get a (girl|boy)friend instead. The security field is such a
§H!™ hole, and endless.
TL;DR: go with radare, and crack these challenges first >> [https://exploit-
exercises.com/](https://exploit-exercises.com/)
------
ryan-c
I like IO from [http://smashthestack.org/](http://smashthestack.org/) \- IOARM
is also a lot of fun.
GNU C Library 2.30 - jrepinc
https://sourceware.org/ml/libc-announce/2019/msg00001.html
======
pascal_cuoq
> * Memory allocation functions malloc, calloc, realloc, reallocarray, valloc,
> pvalloc, memalign, and posix_memalign fail now with total object size larger
> than PTRDIFF_MAX. This is to avoid potential undefined behavior with pointer
> subtraction within the allocated object, where results might overflow the
> ptrdiff_t type.
I did not think they would take this decision so soon, but it is, in my
opinion, the right decision to take. There will be complaints from users of
memory-heavy programs running on 32-bit platforms though.
For context, this blog post shows how things break when allocation functions
are allowed to create blocks of more than PTRDIFF_MAX: [https://trust-in-
soft.com/objects-larger-than-ptrdiff_max-by...](https://trust-in-
soft.com/objects-larger-than-ptrdiff_max-bytes/)
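(A quick illustration of the overflow being avoided, with widths assumed for a
typical 32-bit platform where ptrdiff_t is a 32-bit signed integer:)

    PTRDIFF_MAX = 2**31 - 1      # 2,147,483,647 on such a platform
    obj_size = 0x80000001        # a 2,147,483,649-byte allocation
    diff = obj_size - 1          # e.g. &p[obj_size - 1] - &p[0] for 1-byte elements
    print(diff > PTRDIFF_MAX)    # True: the difference doesn't fit in ptrdiff_t,
                                 # so the pointer subtraction would be undefined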
~~~
ajross
> There will be complaints from users of memory-heavy programs running on
> 32-bit platforms though
In all of recorded history, has a malloc() call for more than 2GB ever
actually succeeded anywhere? Most OSes on such platforms never supported any
more than that amount of addressible memory in a user process at all.
This is fine. Honestly it's seems like mostly pedantry on modern systems, but
it's clearly correct.
~~~
pascal_cuoq
> In all of recorded history, has a malloc() call for more than 2GB ever
> actually succeeded anywhere?
Yes, on OS X 10.5, and on 32-bit Linux with Glibc until two days ago.
The article I linked, written before Glibc 2.30 was released, is from a period
when every Unix had been allowing “malloc(0x80000001);” in 32-bit processes
until recently; only OS X had had the courage to make that allocation fail.
Sorry if the article doesn't make it clear enough that this is the context it
is written in, but in its defense, you only needed to try it (and still need
today to try it if you didn't upgrade Glibc) to see that it succeeds. Or do
you think that the Glibc developers wrote a Changelog entry to explain that
they changed something that didn't actually change?
Linux's default limit on 32-bit has been 3GiB for a while, I think:
[https://stackoverflow.com/a/5080778/139746](https://stackoverflow.com/a/5080778/139746)
Windows's limit is 2GiB by default, but this is only a default and 32-bit
processes can be allowed access to more memory, up to IIRC nearly all of the
theoretical maximum 4GiB for 32-bit processes running on 64-bit Windows.
~~~
ajross
The (sarcastic) point was about the fact that no real world code actually
_relied_ on a malloc() of half the address space.
I'm sure it "worked" in some sense, though I'd be really surprised if you
could make that happen with a default-linked C program on any distro that ever
shipped. The holes just aren't big enough. You'd need to link the app with
special care, and potentially write your own dynamic loader to keep the region
you wanted free. And if you do that... you might as well just mmap() the
thing.
The point was that doing this with the system heap on a 32 bit system was
never a serious thing. There are apps that would do management of memory
spaces that large, but they didn't do it with malloc.
------
fluffything
> * The twalk_r function has been added. It is similar to the existing twalk
> function, but it passes an additional caller-supplied argument to the
> callback function.
I thought this was standard practice for designing C APIs taking callbacks.
> * The Linux-specific <sys/sysctl.h> header and the sysctl function have been
> deprecated and will be removed from a future version of glibc. Application
> should directly access /proc instead. For obtaining random bits, the
> getentropy function can be used.
That's gonna break the world, a lot of code includes that header and uses the
sysctl function.
~~~
WillDaSilva
Well on the bright side it's perfectly reasonable for the function to exist in
a perpetual state of deprecation. Let's hope they don't do anything rash.
~~~
ronsor
If they do, Linus will probably scream at them for breaking compatibility.
(Even if this isn't the kernel)
~~~
jabl
This is a glibc wrapper for the sysctl system call, which has been deprecated
since forever in the kernel, is compiled in only if an option is specified
(major distros don't enable it), and is likely to be removed completely at
some point. Currently trying to use it, even if enabled, generates a warning
in the kernel log.
[http://man7.org/linux/man-
pages/man2/sysctl.2.html](http://man7.org/linux/man-pages/man2/sysctl.2.html)
------
yrro
Whoa, a gettid wrapper? What changed the maintainers' minds on making that
available?
------
zoobab
Static linking works now? Or I have to use Musl to have this feature working?
~~~
pragmaticlurker
static linking works also with glibc, AFAIK (using it)
~~~
jabl
IIRC NSS (/etc/nsswitch.conf etc.) needs dynamic linking for anything beyond
the basic files backend. But, again IIRC, musl has never supported NSS anyway
so that's kind of a moot point.
~~~
jcelerier
> IIRC NSS (/etc/nsswitch.conf etc.) needs dynamic linking for anything beyond
> the basic files backend. But, again IIRC, musl has never supported NSS
> anyway so that's kind of a moot point.
I frankly have never ever ever seen anyone actually configure NSS outside of
the defaults.
~~~
georgyo
You have never been in an organization that has used ldap or other user
backends then.
However even the defaults on Debian and Centos are affected here, as it means
that the dynamic user/host stuff in systemd also won't get picked up when
something doesn't read nsswitch
------
e12e
Anyone able to expand on:
"The dynamic linker accepts the --preload argument to preload shared objects,
in addition to the LD_PRELOAD environment variable."?
Does one ever invoke the dynamic linker directly? Why? How?
~~~
iso-8859-1
$ /lib64/ld-linux-x86-64.so.2 /bin/true --version
true (GNU coreutils) 8.28
[...]
If you run it without arguments it will tell you usage.
~~~
e12e
Ah, yes of course. I actually do this often to look for missing runtime
dependencies. I hadn't thought about preload in that context - or ldd as a way
to run executables "by hand".
------
mort96
A lot of this sounds like great work, and the GNU project is doing great work.
However, I assume I'll have to prepare for more software breaking? When 2.28
rolled around, Electron and a bunch of GNU software (which relied on glibc
specific stuff which changed) broke.
~~~
vortico
Software compiled against glibc links to versioned symbols, which are
backwards compatible in ABI and behavior. I'm unsure of the reason you
experienced breaking software when upgrading your glibc version.
~~~
mort96
The GNU software which broke just didn't compile (with no upstream fix
available for a long time, which I found incredible; I had to go find arch
linux' repos' patch and apply that whenever I wanted to compile GNU build
tools).
The electron thing was apparently an LLVM linker thing according to your
sibling comment.
EDIT: the m4 patch in question:
[https://git.archlinux.org/svntogit/packages.git/tree/trunk/m...](https://git.archlinux.org/svntogit/packages.git/tree/trunk/m4-1.4.18-glibc-
change-work-around.patch?h=packages/m4) \- apparently they still use it.
~~~
shakna
> FIXME: Do not rely on glibc internals.
Seems that's less of a glibc breaking compatibility, and more developers
relying on something outside of the guaranteed API.
~~~
mort96
I mean, it's GNU M4. It's at the core of GNU's build system. It's GNU
developers depending on glibc internals. I'd be with you if it was just some
random project, but it's pretty bad of an update to glibc to break the GNU
toolchain.
------
metalforever
How is the support for the 68k in this release?
Ask HN: What does it really mean when companies want you to have AWS experience? - martin-ting
I see a lot of job listings that mention "AWS experience" as part of what they'd like to see in someone's skill set. Does this mean that they'd like to see familiarity with using AWS services such as spinning up an EC2 instance and configuring a server or does it mean that you should know how to interface with the AWS APIs provided by Amazon to interface or automate AWS processes?
======
PaulHoule
Ask them.
Magic in Panama, 1681 (2011) - benbreen
https://resobscura.blogspot.com/2011/04/for-they-are-very-expert-and-skillful.html
======
lowdose
Talking about magic, I came across this one last week: Robert-Houdin, hired by
Napoleon III to perform magic tricks in Africa to convince the local tribes
that France's magic was stronger than theirs.
[https://en.wikipedia.org/wiki/Jean-Eug%C3%A8ne_Robert-
Houdin](https://en.wikipedia.org/wiki/Jean-Eug%C3%A8ne_Robert-Houdin)
[https://www.amazon.com/Hiding-Elephant-Magicians-
Impossible-...](https://www.amazon.com/Hiding-Elephant-Magicians-Impossible-
Disappear/dp/0786714018)
Crafty Killer Whales Are Harassing Alaskan Fishing Boats - Mz
http://www.smithsonianmag.com/smart-news/crafty-killer-whales-are-harassing-alaskan-fishing-boats-180963788/?no-ist
======
AnimalMuppet
I wonder if this isn't "harassment", exactly. It sounds like they've learned
that they can collect a whole lot of fish for not much effort. The fact that
doing so ruins the fishing is a side effect.
I heard about a similar thing with bears in Yosemite. They had a back-country
steak fry. They had 700 steaks there. A couple of bears (mother and adolescent
cub) showed up. The people scattered. The bears ignored the people, because
they had found 700 steaks. They put the mother down, and transplanted the cub
several hundred miles away. The next summer, the cub - now an adult - showed
up for the steak fry.
Those bears weren't trying to "harass" the steak fry. They didn't have
anything against such a gathering. They just learned, "Hey, free food".
Zed Shaw is teaching two four-week Python classes (online) - thesethings
http://codelesson.com/python
======
netmau5
I'd -love- to take a Python course from Zed but this site makes me uneasy
about following through. Not enough information and the site simply doesn't
look professional enough to be sending in payments of $200+. The 404 on the
follow up course is a big red flag too.
\-- "Find courses you're interested in from our course list. After you've
selected a course, we'll send you more information about our Web-based
learning system. "
I kinda want to know before signing up, selecting, and/or paying what your
Web-based learning system is about. From what I can see on the site, there
will be some directed readings, evaluated assignments, and a place to do Q&A.
Those are nice benefits but I'm looking for a quality teacher to pay that
premium. I'd like to know if there is audio/video lectures, what the required
texts are, etc. Unfortunately there is no FAQ and the only obvious way to ask
is the "Contact Us" link which takes you to a generic feedback page.
~~~
jeffreymcmanus
There is a FAQ, actually:
<http://codelesson.com/faq>
~~~
nivertech
No audio/video lectures or screencasts?
It's unclear from FAQ what's "instructor-led" actually means.
~~~
jeffreymcmanus
Sorry, I responded to your question in the wrong point in the thread. See:
<http://news.ycombinator.com/item?id=1724883>
------
thesethings
Full disclosure: I have no professional/ financial affiliation with
Codelesson, though I am friends with one of its founders.
Posted this since I've seen so much praise for Zed's Python book here on HN.
~~~
zaatar
Zed's python book: <http://sheddingbikes.com/LearnPythonTheHardWay.pdf>
~~~
thesethings
Doh! Thanks. I should have linked to that :D (Everybody, check out this
amazing, constantly updated (!) book that Zed wrote.)
~~~
mcn
The grandparent comment links version 0.1 of the book, get the current version
(0.5 as of now) from the book's website at <http://learnpythonthehardway.org/>
(I found that out in the process of attempting to report an error in version
0.1 - it's already fixed in 0.5, and 0.5 is 4x longer.)
~~~
zedshaw
Hmmm, I should probably put up a pdf that says "there's a new version" at that
URL.
------
praeclarum
What's up with this being an introductory course to Python? Shouldn't the
experts be giving expert level instruction?
I understand the desire of programmers to start from ground zero, but come on.
The internet is full of easy beginner tutorials. Bookstores are full of
intermediate materials.
But there is a shortage of expert advice from experienced professionals. Let's
see some of that!
~~~
jeffreymcmanus
You assume every experienced professional knows everything. :) There's no shame
in taking an introductory course, even if you're a super genius in some other
area.
One thing that surprised us about our first CodeLesson course is that it was
populated by a few startup CEOs, some of whom had coded in college, others who
were learning to code for the first time.
~~~
praeclarum
While I agree, in general, with your statements/sentiments, such logic will
keep us perpetually in a loop of beginner's mechanics. You advance knowledge
and skill by challenging yourself - you don't do it by repeating beginner's
materials.
~~~
babeKnuth
i disagree. i've been coding for quite a while now, and am now just picking up
emacs. the only way for me to begin is to start with basic/simple tutorials
(e.g. peepcode, emacs starter kit, etc.). for an experienced programmer trying
to learn python, i'd imagine zed would be an amazing fit since he could
possibly customize/direct his vast knowledge specific to the user.
though i agree with the idea that there are very few resources geared toward
expert/advanced users. i'm not sure how flexible the course material would be
in this case tho.
------
jnoller
That's pretty cool; and I didn't know about codelesson - if I ever had the
time to put together a decent class, I'd try this out for sure.
Grats to Zed.
------
sublemonic
I'd love to take a Mongrel2 class from Zed. Codelesson is new to me - I must
explore...
~~~
zedshaw
Well damn, maybe I'll do one. Hell I'd get together with people in SF or
wherever for free and show them how to do stuff. It'd be an awesome bad ass
way for me to get feedback on what needs to happen to make Mongrel2 awesome.
~~~
Psyonic
I'd definitely be interested in attending that.
------
jlmendezbonini
Does anyone know of a similar site offering a good software engineering course
(or anyone willing to offer one through codelesson.com)? I'll be up for that.
~~~
jeffreymcmanus
You can propose a course here:
<http://codelesson.com/courses/suggest>
What topics would you like to see covered in a software engineering course?
What kind of person would you like to see teach it?
------
kmfrk
Little can be inferred from the link, but I am 100% sure that Zed makes an
awesome teacher for anyone who's considering taking the class. Just hit him up
on Twitter or e-mail, and I'm sure he'll oblige.
------
jbarham
FYI, I'm getting a 404 for the link to the Part B follow-up lesson.
~~~
jeffreymcmanus
I think I fixed the bad link; thanks for letting us know.
~~~
patrickaljord
Still there here [http://codelesson.com/view/introduction-to-programming-in-
py...](http://codelesson.com/view/introduction-to-programming-in-python-
part-b)
~~~
brianmwang
That's not the right URL.
Go here instead: [http://codelesson.com/courses/view/introduction-to-
programmi...](http://codelesson.com/courses/view/introduction-to-programming-
in-python-part-b)
------
mkramlich
I'm really not into taking "courses" online when there's already lots of free
non-interactive textual/reference/tutorial content already online, and offline
in the form of books. And for a really great interactive resource, there's
this thing called the Python REPL.
That said, I do think people should do what they love, and try to monetize the
doing of what they love, so more power to him in this endeavor.
~~~
zedshaw
Same here, but then people like me and you are rare in the real world. Other
folks, for lack of confidence or direction, need someone to point them in in
the right way so they get started.
~~~
mkramlich
fair point
------
mhb
Making the first lesson available for free would answer a lot of questions and
address a lot of anxieties as well as probably lure in more students.
~~~
zedshaw
Well, the entire course is technically already online:
<http://learnpythonthehardway.org/>
Basically, I'll be setting up the first 26 lessons for class A, then the
remaining 26 for class B. The purpose of the course is that you get my time to
help you through the book and grade you on your progress.
It's actually pretty simple and should be a ton of fun.
~~~
Kaizyn
This course is a great idea. Will you do a ruby class next? In saying that, I
am mostly joking. However a C class would be pretty sweet, especially since
it's hard to learn how to write C correctly, efficiently and securely.
~~~
jeffreymcmanus
We have a Ruby course listed on the site already:
[http://codelesson.com/courses/view/the-ruby-programming-
lang...](http://codelesson.com/courses/view/the-ruby-programming-language)
~~~
babeKnuth
i think he was asking if zed would be teaching a course on ruby.
if so, what about mongrel too? :)
------
babeKnuth
i'm curious as to what sort of pedagogical approach zed will be taking with
this course. i know zed's personal preference is to pick up a book and just go
thru all the exercises in it (e.g. mickey baker's jazz guitar).
will he be doing anything different from traditional student/instructor
methods? curious as to what zed's personal take on this is as well.
------
c00p3r
I think there is a much better way to invest your time:
[http://ocw.mit.edu/courses/electrical-engineering-and-
comput...](http://ocw.mit.edu/courses/electrical-engineering-and-computer-
science/6-00-introduction-to-computer-science-and-programming-
fall-2008/lecture-videos/)
Seem like everyone on HN is either a teacher or a prophet nowadays. ^_^
The Self-Appointed Twitter Scolds - telemachos
http://www.nytimes.com/2010/04/29/fashion/29twitter.html
======
TwitterFail
Some people have asked me why I created the www.Twitter-Fail.com blog. It's
not, as John Metcalfe said, to "mock tweets I consider stupid." Instead, it is
my way of sharing what I find on Twitter that makes me laugh, in the hopes
that others will laugh along with me. Granted, you won't get most of the humor
if you're not a Twitter user, and I'm okay with that. All the people who
laugh, comment, nudge their friends when they've been mentioned, or are
thrilled to find their username in a post, are the real reason the blog
continues to exist. That's what the Times story doesn't tell you. It also
doesn't tell you that the other blogs mentioned are also humorous, and not
judgmental, severe or mean-spirited. I hope, when you read the article, you
click through to each blog and judge them on their merits.
------
mikecane
Everyone on Twitter should do a #TypoTuesday and not correct typos before
hitting Send.
------
telemachos
And there's a brief discussion in a Language Log entry already - Twetiquette:
<http://languagelog.ldc.upenn.edu/nll/?p=2287>
Aztec app brings historic Mexico codex into the digital age - Thevet
http://phys.org/news/2015-01-aztec-app-historic-mexico-codex.html
======
Trombone12
Wow, nice way for the Brits to avoid the "we-should-give-this-back-it-was-
essentially-stolen" issue that infects basically all old anthropological
collections in the west.
What Killed Michael Porter's Monitor Group? The Force That Really Matters (2012) - tortilla
http://www.forbes.com/sites/stevedenning/2012/11/20/what-killed-michael-porters-monitor-group-the-one-force-that-really-matters
======
dworin
This is a rant against Michael Porter, his theories, and the practice of
business strategy generally, but has nothing to do with why Monitor actually
failed. Most of the rest of the industry is doing fairly well, especially
since the end of the recession, and Monitor moved away from the five forces
model decades ago to tackle the same types of business strategy projects other
top-tier consulting firms help with.
Monitor failed for A LOT of reasons. It had a strange debt/equity structure
that paid large sums to former partners who were no longer involved in the
business, an issue that has caused other notable firms to struggle as well.
They were a mid-sized player in a market increasingly split between boutiques
and large global firms. They were a pure play strategy firm in a market where
clients were looking for help with implementation. They did brand-damaging
work with former dictators.
The list could go on, but executives wising up that they didn't want to buy
strategy consulting wasn't why Monitor failed. Executives are going to buy the
same projects, probably from the same consultants, they're just going to buy
them from Deloitte now.
------
javajosh
This article has more than a little shadenfruede, methinks. But there's an
interesting thesis lurking in there, that Porter's theories have lead directly
to the enshrinement of C-level executives as a kind of upper-class:
first, that strategy is a decision-making sport involving
the selection of markets and products; second, that the
decisions are responsible for all of the value creation
of a firm (or at least the “excess profits,” in Porter’s
model); and, third, that the decider is the CEO. Strategy,
says Porter, speaking for all the strategists, is thus
‘the ultimate act of choice.’
The article doesn't go on to talk about what a C-level exec actually _is_, but
it makes an emphatic case that, since Porter is responsible for the crowning
of these kings, and Porter has now been proven wrong, perhaps it's time to
take the crowns away.
I'm terribly biased, but I like this line of argument very much.
~~~
paulsutter
Right so Steve Jobs, Elon Musk, Mark Zuckerberg, Larry Page ... all empty
figureheads, deserving little credit for the success of their companies?
~~~
javajosh
Come to think of it, I think all of those guys get WAY too much individual
credit for the success of their companies. Damn straight.
------
robomartin
I don't have an MBA. At one point I felt sorry for myself for not having taken
that route. This was during a time that I was building a business that was
gaining traction. As an engineer I felt my management skills and understanding
of business had serious holes all over the place. I didn't have time to go
back to school. I had to run a business. So, I resorted to reading business
books. At one point My wife made the comment that things had changed because
there were stacks of business books everywhere now instead of engineering
books. It was a really frustrating phase for me. I learned a lot but, at the
same time, a lot of it sounded like highly refined bullshit to me. I was far
more comfortable looking at business from the perspective of mathematical
equations and a series of hypotheses to be tested via a scientific process.
Eventually I got tired of the bullshit. Read three authors and get ten
different opinions. Not one of them ever built anything or risked anything at
all. These people were not my people.
I started to devote more time to getting together with other entrepreneurs and
swapping notes. It was amazing to me how solutions to problems presented
themselves almost without effort in this context. Sure, talk to people who are
really holding a cat by the tail and they might just know a thing or two that
escapes the consultant's artificial construct. I had some of the most
amazing and revealing conversations with successful entrepreneurs who had
barely finished high-school and hated to read books --any books. Who would
have known?
Look at the history of innovation. Look at some of the most important and
influential companies and products of the last hundred years. How many of
these came out of the minds and work of consultants and MBA's? How many came
out of the efforts of entrepreneurs of all walks of life and levels of
education hell-bent to make it happen?
So, again please, why do we follow these false prophets?
~~~
Retric
Replace the 'business books' category with 'self help v2' and I think you will
see how and why most of that crap is written. It's not about actually helping
people; it's about selling books, and because companies' and people's actual
problems are way too complex to deal with in book form, you end up with a lot
of empty platitudes and a lot of random ideas.
After all, what useful advice applies to both Bank of America and a local gas
station franchise? Or, put another way, what book would you hand both a
14-year-old genius trying to decide whether to skip a grade and a death row
inmate?
~~~
beerglass
Understand your intolerance for the self-help category in books... generally,
I detest them too. But once in a while, when the world around looks too
complex, reading simple books like "Jonathan Livingston Seagull", "The
Prophet", even "Who Moved My Cheese?" and "The Magic of Thinking Big" has
helped me...
------
paulsutter
The author is silly to conclude that strategy is useless because Monitor
failed. Sustainable advantages are absolutely real (think any network effects
business: Dropbox, Airbnb, Craigslist, eBay). But they're also rare, and
baked-in from the beginning.
Monitor's actual flaws:
(a) a consulting business can't capture the value of a successful strategy,
they collect only fees, and
(b) the sorts of customers who are willing to pay for big ticket consulting
projects are big sprawling businesses for whom it's far too late to identify
an effective strategy.
~~~
mtgx
The Porter model also sounds like it would help you develop myopia regarding
disruptive innovations, because it would make you too focused on how to fight
against direct competitors and gain incremental advantages against them.
~~~
SiVal
Not true. One of his five forces was the threat of substitution. In other
words, while many businesses are focused on their direct rivals, they don't
notice the looming alternatives that could make them and their direct
competitors irrelevant. A Porter-style 5-forces analysis would REQUIRE the
business to broaden its view of competitive threats.
~~~
mtgx
Substitutes may not even be half-way there in understanding disruptions,
though. Water or soda are substitutes for beer, but that doesn't mean they are
disruptive. In some cases, they can be, like blogs vs newspapers. But in most
cases, substitutes don't refer to disruptive innovations; that's why I think
it's not even "half way there". Which means it's putting too little focus on
disruptions, when disruptions can literally kill your company, no matter how
big it is, within 10 years of appearing.
------
damoncali
As someone who sat through Bschool strategy class (which, for those who have
not, is typically a near religious semester-long praise-fest of Porter and his
5 forces), I find strange pleasure in reading this. Can't put my finger on
why.
~~~
ojbyrne
Having sat through a similar course, and despite being a proponent of Porter,
I too was enjoying the article. Until the point where they mentioned Peter
Drucker's "foundational insight," at which point I realized it was just
another "my business guru is better than your business guru" pissing match.
Forbes is crap.
------
socalnate1
What utter crap. The version of Michael Porter that this "article" argues
against is a straw man that bears little resemblance to the actual man or his
ideas.
Between Forbes's hosted blogger section (where this article came from) and the
Atlantic's paid content, the quality of mainstream business reporting is
dropping like a stone.
~~~
mindcrime
You basically just said what I came here to say. This guy seems to have a bone
to pick with Porter and this article is flawed on so many levels that it's
ridiculous. Strawmen, faulty logic, unsubstantiated assumptions, this article
has a laundry list of reasons to not take it seriously.
I love the part where he talks about "Porter's Strategy" as though there was
one specific strategy that - in and of itself - encapsulated everything of
Porter's thought. Denning seems clueless here. Yeah, Porter talked about a
handful of "generic strategies" but to claim that Porter's thinking can be
reduced to one simple thing that you adopt or don't adopt, is ludicrous.
------
brc
What a terrible article. It starts out OK, but quickly goes wrong.
Yes, the 5 Forces model is the most overused piece of analysis since the 'pro
and con' list. But to wholesale throw it out because the company went
bankrupt, well, this just looks like dancing on the grave of another.
I don't see anything wrong at all with finding a business that has natural
protections from competition. Any decent startup strategy should have
conversations about switching costs, customer lock-in and looking for niches
where competition is less fierce, or at least favourable to early-movers. If
you read Warren Buffett's strategies, he appears to spend a lot of time
concentrating on companies with natural resistance to competition, which gives
them pricing power and longevity.
The whole article reeks of academic paybacks and _profit is evil_ thinking,
and I didn't bother finishing it.
------
neutronicus
I'm amused that the author chooses _Amazon_ and _Apple_, of all companies, as
examples of businesses that don't rely on structural barriers.
------
rayiner
Lots of services firms failed in the recession...
| {
"pile_set_name": "HackerNews"
} |
Show HN: A standards-compliant 3d compass implementation for *the web* - richtr
https://github.com/richtr/Marine-Compass
======
altsa
I'm trying to load the demo on my MacBook Pro with Chrome, but it's not
working for some reason. Is this broken for anyone else?
~~~
richtr
Works for me on my Macbook Pro with Chrome (v19.0.1084.56).
<http://caniuse.com/#feat=deviceorientation>
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Give me recommendations to improve jsonip.com - geuis
Hey folks. I wrote http://jsonip.com a while back and lots of folks seem to find it useful. So I'd like to ask for your tips on how it might be made better to be more useful for you.<p>As a quick primer, jsonip.com returns your ip as either a json object, or wrapped in a jsonp callback.<p>Usage:<p>http://jsonip.com => {"ip":"your ip"}<p>http://jsonip.com/cb/ => cb({"ip":"your ip"});<p>http://jsonip.com/randomgurgltyfurt/ => randomgurgltyfurt({"ip":"your ip"});
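A minimal Python client for the response shapes documented above; it assumes
only the plain `{"ip": ...}` JSON and the JSONP wrapper described in the post,
nothing else about the service:

    import json
    import requests

    def my_ip():
        """Fetch the caller's public IP from jsonip.com as plain JSON."""
        resp = requests.get("http://jsonip.com")
        resp.raise_for_status()
        return resp.json()["ip"]

    def my_ip_jsonp(callback="cb"):
        """Fetch the JSONP variant and unwrap the callback by hand."""
        resp = requests.get("http://jsonip.com/%s/" % callback)
        resp.raise_for_status()
        body = resp.text.strip()          # e.g. cb({"ip":"1.2.3.4"});
        start = body.index("(") + 1
        end = body.rindex(")")
        return json.loads(body[start:end])["ip"]

    if __name__ == "__main__":
        print(my_ip())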
======
jolan
How about
<http://jsonip.com/myurl/write/>
and
<http://jsonip.com/myurl/read/>
So I can monitor IP changes of my machines while I'm away?
------
templaedhel
Return geoip information. For example:
    {
      "ip": "204.172.40.10",
      "geo": {
        "latitude": 86.783273,
        "longitude": 92.106578,
        "accuracy": 24000
      }
    }
------
sylvinus
Use a standard JSONP parameter format so that we can use it with jQuery.ajax:
jsonip.com/jsonp/?callback=xxx
------
rawsyntax
Yes, hook it into geoip (something like MaxMind).
| {
"pile_set_name": "HackerNews"
} |
Former NSA contractor designs 'surveillance-proof' font - antimora
http://edition.cnn.com/2013/09/30/tech/web/nsa-contractor-surveillance-proof-font/index.html?hpt=hp_c3
======
Nanzikambe
As these are fonts and don't implement any randomness like a captcha would, it
will be trivially easy to just implement recognition matching of the fonts
themselves
| {
"pile_set_name": "HackerNews"
} |
Facebook won't release Russia-linked ads despite call to do by US investigators - SirLJ
http://www.businessinsider.com/facebook-wont-release-russia-linked-ads-publicly-2017-10
======
subie
I believe the issue here is "publicly" releasing the ads. FB just had a
trending story yesterday about turning the ads over to investigators.
Bad title perhaps? Title from article: "Facebook refuses government request to
publicly release Russia-linked election ads"
| {
"pile_set_name": "HackerNews"
} |
Data is the new oil - edward
http://nation.lk/online/2015/09/19/data-is-the-new-oil/
======
Chefkoochooloo
If people and businesses become able to streamline and tap into the
possibilities of data query and analysis, it could prove more lucrative than
oil. The filtering and restructuring of data may take some time to figure out,
however, since data collected for the sake of information can come in forms
foreign to the analyst.
------
swohns
[http://blogs.gartner.com/peter-sondergaard/the-internet-
of-t...](http://blogs.gartner.com/peter-sondergaard/the-internet-of-things-
will-give-rise-to-the-algorithm-economy/)
| {
"pile_set_name": "HackerNews"
} |
Some NASA contractors appear to be trying to kill the Lunar Gateway - rbanffy
https://arstechnica.com/science/2019/09/some-nasa-contractors-appear-to-be-trying-to-kill-the-lunar-gateway/
======
superkuh
Good. The lunar gateway has absolutely no purpose now that the asteroid
retrieval mission (ARM) funding picks by NASA ignored all the de-spin
proposals. Without the ability to de-spin, the number of near-earth asteroids
(NEAs) accessible to tow back to the lunar gateway is, literally, 3. It would
be a huge waste of time and money to implement a lunar gateway that doesn't
have the ability to gather resources.
Going to the moon's surface and back _for fuel resources_ is currently
infeasible due to the large delta-V required and lack of atmosphere to
aerobrake against.
~~~
Sir_Cmpwn
It only takes 1.87 km/s to get from the moon's surface to LLO, and there's
fuel and science and real estate at the bottom. The delta-V budget of, say,
LLO to the nearest Earth-Moon Lagrangian point is about 1 km/s. So yeah, it's
more expensive to go to the lunar surface, but it's not outrageous and there
might be other benefits to round-trips to the surface.
~~~
avmich
In addition to orbital velocity (~1.7 km/s) and gravity losses (~0.2 km/s?)
you should take into account all safety reserves - Eagle could hang a minute
or more above the surface selecting the place to land. Given that we can't
ignore safety, it's better to budget 2200-2500 m/s delta-V to get from the
Moon orbit to the Moon surface.
We need both Lagrange point stations and surface bases. It's arguable what
should go first; an argument for Lagrange station is that it's easier to
build.
~~~
manicdee
By the time we are collecting fuel from the lunar surface there will be well
marked solid landing pads and no need for thirty seconds of reserve fuel to
scoot over to a new landing site.
------
sizzzzlerz
This has all the makings of a political clusterfuck and/or disaster instead of
a unified approach to the solution of a difficult problem. Parties are sniping
at one another through their congresscritters to maximize their piece of the
pie regardless of what makes scientific and engineering sense. At the end,
this program will either be canceled due to the exorbitant costs incurred or
we're going to experience a massive failure on par with the Challenger
disaster where the pre-mature launch was caused by corporate and political
pressure.
~~~
tomatotomato37
There is a reason the SLS was and still is known as the "Senate Launch System"
------
avmich
NASA does many things wrong, but relying on private contractors for launch
services isn't one of those things and deemphasizing the Senate Launch System
is a step into right direction.
------
navaati
Wooooaaaaw, I wasn't aware of this "Lunar Gateway" stuff. That's a great idea
for Kerbal Space Program, I'm going to do that _right now_ !
A Kerbal living there permanently, being able to regularly go up and down the
Mun. A huge fuel tank, to refuel the lander and ships en-route to the deep
solar system. Resupply and refueling missions from Kerbin. Long-running swap-
able science modules. The whole deal. Gonna be great.
~~~
avmich
> Resupply and refueling missions from Kerbin.
Remember, for LOX-kerosene propellant, LOX mass is about 2/3 of the total
propellant mass. LOX can be extracted from Moon rocks, many of which contain
metal oxides, so refueling from Mun in KSP could be practical.
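As a quick sanity check on that 2/3 figure, assuming a typical LOX/kerosene
mixture ratio of about 2.3:1 by mass (my assumption, not a number from the
thread):

    \frac{m_{\mathrm{LOX}}}{m_{\mathrm{prop}}} = \frac{2.3}{1 + 2.3} \approx 0.70

so roughly two thirds or a bit more of the propellant load is oxidizer, which
is the part you could plausibly source from lunar oxides.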
~~~
ncmncm
Does Mun have ice at its poles?
~~~
avmich
You don't need ice to get oxygen; don't know if Mun has ice.
~~~
ncmncm
Prob'ly need it for H2, though.
------
ncmncm
Sounds like they could have all the pieces of a moon mission lofted to LEO
commercially years before they will be ready to start testing their own upper
stage, and for less money than they will spend on the first test.
Somebody should be indicted. What is the FBI busy with, lately?
------
perlpimp
Efficiencies of private businesses will lay bare the backwards, politicized
revenue streams feeding bygone-era 'derelicts' like the SLS.
~~~
ptah
The article seems to state that they are making things inefficient through
lobbying to get a bigger piece of the pie, i.e. greater cost.
~~~
TeMPOraL
I don't get where people get this "efficiencies of private businesses" from.
Private businesses are only efficient at making money, and making their (or
everyone else's) _work_ inefficient is a tried and true way of achieving that.
All the evil cost-plus contracts that were popular in aerospace before SpaceX
came? Well, there's a private business at the other end of each such contract,
and that private business made the whole thing inefficient because that's how
they could get more money.
------
davidhyde
Looks like Boeing is headed for another PR disaster with their NASA lobbying
efforts.
~~~
jrockway
They will just make the entire structure out of angle of attack sensors. It
will be fine.
| {
"pile_set_name": "HackerNews"
} |
FOXIT READER Now AVAILABLE ON MAC AND LINUX - byaruhaf
https://www.foxitsoftware.com/company/press.php?id=408
======
detaro
No reason to scream... (please don't use all-caps headlines)
| {
"pile_set_name": "HackerNews"
} |
Show HN: New Music Release Tracker - hmhrex
http://therthm.com/releases
======
hmhrex
I was getting frustrated with AllMusic, Metacritic and Spotify excluding new
music releases, or giving wrong dates for new music. So I built something for
my friend and me to use for finding new releases.
It uses MusicBrainz for the data and then cross-references Spotify and
Bandcamp for links, album artwork and genre data. Built on Django. It was a
fun little side project that I use every week now.
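For anyone curious what the MusicBrainz side of something like this looks
like, here is a rough sketch of the kind of release query involved, using the
public web-service JSON endpoint. The query string and response field names
are my best recollection of that API rather than anything taken from this
project, so treat them as assumptions and check the MusicBrainz docs before
relying on them:

    import requests

    MB_URL = "https://musicbrainz.org/ws/2/release"

    def releases_for_week(start, end, limit=25):
        """Search MusicBrainz for releases dated within [start, end]."""
        params = {
            # Lucene-style search query; syntax assumed from the MB docs.
            "query": "date:[%s TO %s] AND status:official" % (start, end),
            "fmt": "json",
            "limit": limit,
        }
        # MusicBrainz asks clients to identify themselves via User-Agent.
        headers = {"User-Agent": "example-tracker/0.1 (you@example.com)"}
        resp = requests.get(MB_URL, params=params, headers=headers)
        resp.raise_for_status()
        data = resp.json()
        return [
            (r.get("date"), r.get("title"),
             r.get("artist-credit", [{}])[0].get("name"))
            for r in data.get("releases", [])
        ]

    if __name__ == "__main__":
        for date, title, artist in releases_for_week("2015-01-01",
                                                     "2015-01-07"):
            print(date, "-", artist, "-", title)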
| {
"pile_set_name": "HackerNews"
} |
More notes on OS X Mavericks (+ Yosemite) on QEMU with KVM - kvmosx
http://blog.definedcode.com/qemu-osx-update
======
st3fan
I installed Mavericks straight to an LVM volume without the need for any magic.
| {
"pile_set_name": "HackerNews"
} |
Find the hidden cameras in your BnB and elsewhere - ZguideZ
https://medium.com/fast-company/how-to-find-hidden-cameras-in-your-airbnb-and-anywhere-else-d1de793f7ddc
======
matt-attack
Anyone have a recommendation for an RF scanner? Always hesitant to click on
the amazon referral links in an article like this. Also an Amazon search for
“rf scanner” appears to be mostly scammy advertisements for the same cheap
Chinese garbage.
------
growingconcern
requires sign in? nice.
~~~
growingconcern
[https://www.fastcompany.com/90331449/how-to-find-hidden-
came...](https://www.fastcompany.com/90331449/how-to-find-hidden-cameras-in-
your-airbnb-and-anywhere-else)
| {
"pile_set_name": "HackerNews"
} |
Show HN: Tail -f Your Cloudflare Logs - chasers
https://logflare.app
======
chasers
Use a Cloudflare worker to POST logs to Logflare and they'll be streamed to
your browser. We also have rules you define with regex so you can route log
entries to different sources. Good for saving important events like signups,
bots, etc.
Open source and on Github:
[https://github.com/Logflare/](https://github.com/Logflare/)
I built this primarily to learn Elixir and Phoenix, but I wanted to build
something useful. A lot of the code is probably terrible, but it seems to work
well.
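The regex-routing rules are easy to picture. Here is a tiny, generic sketch of
that pattern in Python; the patterns and source names are invented for
illustration and have nothing to do with Logflare's actual configuration
format:

    import re

    # Hypothetical rules: the first matching pattern decides which "source"
    # a log entry is routed to; unmatched entries fall through to a default.
    RULES = [
        (re.compile(r"POST /signup"), "signups"),
        (re.compile(r"(bot|crawler|spider)", re.I), "bots"),
        (re.compile(r"\s5\d\d\s"), "server-errors"),
    ]

    def route(log_line, default="everything-else"):
        """Return the name of the source this log line should go to."""
        for pattern, source in RULES:
            if pattern.search(log_line):
                return source
        return default

    if __name__ == "__main__":
        for line in [
            'POST /signup 200 "Mozilla/5.0"',
            'GET / 200 "Googlebot/2.1"',
            'GET /checkout 502 "Mozilla/5.0"',
        ]:
            print(route(line), "<-", line)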
------
aogl
Roadmap link doesn't work:
[https://trello.com/b/wrZusInO/logflare](https://trello.com/b/wrZusInO/logflare)
error: Board not found. This board may be private. You may be able to view it
by logging in.
------
ctrlaltdev
Nice project! Just wanted to mention: Logflare is free, but CloudFlare workers
are not. Thanks for sharing!
~~~
chasers
Good point! I should make that obvious on the homepage probably.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: What to consider when buying a business - warent
I'm currently looking at buying a business via FE International, which I believe will synergize very well when combined with my current side business (of which I am the sole owner/employee) and potentially multiply the value of each other. The deal would be around 50-100k.<p>This would be a big deal to me. I've never acquired a business before and am definitely a rookie. What are some things that I should know or ask about? How much can I know about a business to inform my buying decision? Any pitfalls or traps to watch out for? Is there a specialized person I am legally obligated to consult to help with this?<p>Thank you for spending your time on this, it is greatly appreciated.
======
meremortals
I'm in a similar boat and would love if you'd keep us updated
------
gus_massa
There is a story by patio11 from the seller point of view. It's not exactly
what you're asking for, but it may have some interesting parts. " _What I
Learned Selling A Software Business_ "
[https://training.kalzumeus.com/newsletters/archive/selling_s...](https://training.kalzumeus.com/newsletters/archive/selling_software_business)
HN discussion
[https://news.ycombinator.com/item?id=11347006](https://news.ycombinator.com/item?id=11347006)
(439 points | Mar 23, 2016 | 84 comments)
------
fsajkdnjk
the best thing you can do is hire a good accountant and go through their
books.
| {
"pile_set_name": "HackerNews"
} |
Introducing Vector: Netflix's On-Host Performance Monitoring Tool - r4um
http://techblog.netflix.com/2015/04/introducing-vector-netflixs-on-host.html
======
philsnow
I got down to
you should be able to install PCP from binary packages made available by the PCP development team on:
ftp.pcp.io
and that threw up a red flag in my brain. Then I noticed that
techblog.netflix.com doesn't redirect to https (and indeed can't serve https)
(I use [https://www.eff.org/HTTPS-EVERYWHERE](https://www.eff.org/HTTPS-
EVERYWHERE) and did not get content over https).
The directions _I_ saw for building from source looked pretty innocuous, but
you might see a different set of directions if you're being MITMed. Observe an
appropriate amount of caution.
~~~
mspier
Happy to discuss your concerns. PCP 3.10 should be available on Ubuntu's
official repo pretty soon too.
~~~
harshreality
His concerns seem plain to me. Unauthenticated channels for software
distribution or software installation instructions are bad.
The techblog isn't using SSL, and the git pull url for PCP is using the git
protocol which is also unauthenticated, rather than the authenticated https
transport (ssh is only an option when user accounts make sense).
Someone's at a conference and follows the link over public wifi. They get the
same page but with "here's how to get PCP: ftp evil.io or git clone
git://git.evil.io/pcp" Even if the webpage were ssl-enabled so that an
attacker can't rewrite the pcp.io links, an attacker or evil network operator
could MITM git.pcp.io or ftp.pcp.io. (FTP?!)
Being in Ubuntu's repo doesn't make it safe if Ubuntu's maintainers have no
(semi-)trustworthy way of getting the code.
~~~
justizin
Ubuntu's maintainers can check the MD5SUM file on ftp.pcp.io:
ftp://ftp.pcp.io/projects/pcp/download/MD5SUM
The project seems to be hosted by Red Hat these days.
~~~
willglynn
FTP is just as unauthenticated as everything else above, so having MD5SUMs
available over FTP doesn't really change the situation.
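To make the distinction concrete: the checksum arithmetic itself is trivial;
the part that matters is where the expected digest comes from. A minimal
Python sketch, with a placeholder digest standing in for one obtained over an
authenticated channel (HTTPS to a site you trust, a signed release, etc.):

    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        """Stream a file through SHA-256 and return its hex digest."""
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify(path, expected_hex):
        """Compare the file's digest against one obtained out-of-band.

        If expected_hex came over the same unauthenticated FTP/HTTP channel
        as the tarball, this proves nothing: an attacker who can rewrite
        the download can rewrite the checksum file too.
        """
        return sha256_of(path) == expected_hex.strip().lower()

    if __name__ == "__main__":
        # Placeholder digest for illustration, not a real PCP checksum.
        ok = verify("pcp-src.tar.gz", "0" * 64)
        print("checksum matches" if ok
              else "checksum MISMATCH -- do not install")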
------
CWuestefeld
It's enormously frustrating to me that the Windows platform is so far behind.
I've been researching a rebuild of our ecommerce site with the idea of a
modern microservice architecture. Team skillset and some other considerations
dictate a .Net environment.
Netflix and other companies have created a really rich platform, between
logging and monitoring technologies like this; message queueing; deployment;
and so on. In the Windows environment there's precious little to compare to.
One bright spot is the news story also on HN today [1] about MS announcing
Docker-related Hyper-V technology - but in the next version of Windows server.
It might be that to satisfy those .Net compatibility wishes, we just go to
Mono and do everything else in Linux.
[1]
[https://news.ycombinator.com/item?id=9342369](https://news.ycombinator.com/item?id=9342369)
~~~
wantab
The web was built, and runs on, *nix/BSD. Windows is an outlier. Windows can't
even get the slashes going the right way. There's a reason 80% of internet
traffic does not use Windows.
~~~
CWuestefeld
I'm not sure how that was supposed to be helpful.
I also don't think it's strictly true. For sure the underlying networking came
from Unix. But as for the web itself, once we got to real dynamic content, it
seems to me that Microsoft were the ones that got things moving.
While the Unix world was mired in the awful world of CGI, Microsoft gave us
high-performance ISAPI, and then Cold Fusion (also on the Windows platform)
and Microsoft's ASP made programming a little more sane. While the Unix world
tried to deal with JSP (which IMHO wasn't a very good solution), the Microsoft
platform seems to have been the innovator for several years, until Ruby on
Rails and then node.js and stuff started coming out.
Today, the Apache server powers far more sites than any other, it's true. But
IIS shares the 2nd-place spot [1]. When you say "There's a reason 80% of
internet traffic does not use Windows", that's pretty much true for the
servers, but far off the mark when counting clients. And the reason for that
is that Microsoft's strength historically hasn't been in radical advances, but in
figuring out ways to take the bleeding edge tech that doesn't really work
quite right yet, and packaging it into commodity software that may not be as
sexy as envisioned by those with the original ideas, but actually useful to
the average guy.
[1] [http://www.zdnet.com/article/web-servers-microsoft-iis-
and-n...](http://www.zdnet.com/article/web-servers-microsoft-iis-and-nginx-
battle-for-second-place/)
EDIT - missing word "world" in 3rd para
~~~
wantab
Your examples presume there was something better than CGI at the time and the
other products are better than something else. For one, I wouldn't be caught
dead using any of the products you mentioned.
You claim IIS by using a ZDNet article from two years ago but the reality is
IIS is number three behind Nginx and Apache. Still, being a distant #2 is
nothing to brag about.
Now you're trying to claim clients are what powers the web but that's not the
topic. What an amateur uses does not define what the professional uses. And to
claim not wanting to be on the bleeding edge of things is no excuse for
falling behind. Firefox and Chrome knocked IE off its perch years ago by being
on the bleeding edge.
------
fapjacks
I just tried installing this on a test VPS. PCP went well, but whenever I
tried to input a "hostname" in Vector's UI to get stats, it just kept telling
me it couldn't connect and to check hostname, regardless of what I put in the
box. PCP was available and of course ports open and whatnot. I'm not sure what
the problem was, but that was a bit frustrating. It looks like a slick product
otherwise, especially for a first version! Thanks for releasing projects like
this! I'll be trying again in the future, for sure.
~~~
mspier
Can you open an issue on GitHub and post more details? Any errors on the
JavaScript console?
------
samstave
After using Stackdriver for the last 18 months, I would never go back to
rolling my own monitoring infra if I could avoid it. I had nagios, cacti,
munin, and graphite all running, and had two ops guys spending pretty much 80%
of their time managing it.
Stackdriver with pagerduty and I have 250,000 custom metrics being published
and hundreds and hundreds of graphs on dozens of dashboards.
I am also looking at SignalFX to get an even better version of this, but I
manage nearly 1,000 machines with only a staff of four.
~~~
makeitsuckless
Never heard of it, so checked out Stackdriver. Seemed very interesting,
especially since it's geared at AWS (which we use).
Until I noticed the big red flag on the front page: _"Stackdriver is now part
of Google."_
Surely Google is going to kill this product and use the knowledge to offer
something similar for Google's cloud services.
~~~
samstave
We were concerned when Google bought them and we had many meetings with them
about this. I do not believe that will happen, but if that is still an issue
for you, use SignalFX.
------
rubiquity
Am I wrong in saying this is a web interface that wraps PCP? So you can't
really compare it to inspeqtor or collectd since those actually do the metrics
collection.
~~~
mspier
True. But that's the first release of Vector. We expect to make our custom PCP
agents public soon.
~~~
rubiquity
Sorry I should clarify I was making that statement for my understanding, not
to degrade the project!
------
debaserab2
What does this have that CloudWatch enhanced metrics doesn't? From the
screenshots, the metrics look pretty similar. Not a slight at all against this
project (it looks awesome), I'm just curious if your infrastructure is already
AWS based what would cause you to choose a non-CloudWatch option.
~~~
toomuchtodo
CloudWatch charges you per instance if you want 1 minute metrics instead of
the standard (free) 5 minute metrics. Any tool that collects the data for you
gets you out of that $3.50/instance/month detailed monitoring charge [1].
[1]
[http://aws.amazon.com/cloudwatch/pricing/](http://aws.amazon.com/cloudwatch/pricing/)
------
Thaxll
Not sure what the difference is between that and collectd + Graphite /
InfluxDB.
~~~
nathan_scott
There are many, many differences - there's some books about PCP that might
help clarify PCP design points, see:
[http://pcp.io/documentation.html](http://pcp.io/documentation.html)
------
capkutay
The charts look like they were built with nvd3. Can anyone confirm/deny?
~~~
mspier
That's right. Any suggestion of better reusable charts?
~~~
forrestthewoods
Honestly? Just write your own SVG code. If you inspect the elements the output
data is super simple and easy to understand. NVD3 just wraps D3.js which just
wraps utilities that output relatively basic data. Well, d3js is a data-
binding system that's way too god damned complicated if all you want to do is
make some simple charts which is how it's used 99% of the time.
I've spent more time trying to manipulate chart libraries into doing almost
the same thing but just different enough to cause pain and suffering. Output
your own path data and it's a million times easier.
For reference here's what I made: [http://forrestthewoods.com/unbalanced-
design-of-super-smash-...](http://forrestthewoods.com/unbalanced-design-of-
super-smash-brothers-part-3/)
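For what it's worth, the "emit your own path data" approach really is only a
few lines. A rough Python sketch of the idea (the scaling and markup here are
my own, not anything from that post):

    def line_chart_svg(points, width=400, height=150, pad=10):
        """Render (x, y) pairs as a bare-bones SVG polyline string."""
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        span_x = (max(xs) - min(xs)) or 1
        span_y = (max(ys) - min(ys)) or 1

        def sx(x):  # scale data x into pixel space
            return pad + (x - min(xs)) * (width - 2 * pad) / span_x

        def sy(y):  # scale data y, flipping so larger values sit higher
            return height - pad - (y - min(ys)) * (height - 2 * pad) / span_y

        path = " ".join("%.1f,%.1f" % (sx(x), sy(y)) for x, y in points)
        return (
            '<svg xmlns="http://www.w3.org/2000/svg" '
            'width="%d" height="%d">'
            '<polyline fill="none" stroke="black" points="%s"/></svg>'
            % (width, height, path)
        )

    if __name__ == "__main__":
        print(line_chart_svg([(0, 3), (1, 7), (2, 2), (3, 9), (4, 4)]))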
------
tdicola
Wow, this looks great, thanks for releasing it! Nice to see that even 'simple'
stuff like this, which can help people who aren't running at Netflix scale,
still gets released.
------
vizzah
Could anyone who tried this monitoring tool compare it to Munin?
------
victorhooi
This is a slight tangent...but does anybody know what UI toolkit Netflix is
using for this?
Or if it's in-house, any info on whether they have, or might release it?
I see bootstrap-submenu.css mentioned, but not Bootstrap itself:
[https://github.com/Netflix/vector/tree/master/app/css](https://github.com/Netflix/vector/tree/master/app/css)
~~~
mspier
It's bootstrap (see bower.json dependencies), but with our own layer on top of
that.
------
xbryanx
I'm curious how this compares to Zabbix's agent and server. Does PCP give you
finer grained details, or is it possibly more lightweight?
~~~
nathan_scott
PCP is much finer-grained than Zabbix in terms of the metrics it makes
available (esp. from the Linux kernel); not sure on Zabbix costs but PCP is
quite light on all resources (mem, cpu, net) and very robust.
I've worked on production systems where everything else was failing (hardware,
kernel, applications) but PCP kept chugging along, recording and telling the
sad story to anyone that would listen.
------
strunz
Anyone care to compare this to something like collectd?
[https://collectd.org/](https://collectd.org/)
~~~
mspier
Goal is a bit different. Vector doesn't collect and persist metrics. We needed
something that had as little overhead as possible so it could be deployed to
all our hosts and simplify the process of analyzing those metrics.
~~~
toomuchtodo
If it's not collecting and persisting metrics, is it more of a glorified htop?
~~~
davidu
Wait, really?
Why not? Storage is cheap. Do you use something else to get historical
visibility into metrics?
~~~
brendangregg
Yes, Atlas, which is also open source:
[http://techblog.netflix.com/2014/12/introducing-atlas-
netfli...](http://techblog.netflix.com/2014/12/introducing-atlas-netflixs-
primary.html) . Atlas monitors cloud-wide, and stores historical metrics at a
one minute granularity.
Vector is for per-instance custom drilldowns. I gave a talk last year where I
showed how they both fit together:
[http://www.brendangregg.com/blog/2014-09-27/from-clouds-
to-r...](http://www.brendangregg.com/blog/2014-09-27/from-clouds-to-
roots.html)
~~~
davidu
Got it... and thank you Brendan!
------
oimaz
Where can I find deb packages for pcp version 3.10 or higher?
~~~
jmedefind
The git repo has a MakePkg script that will generate deb packages for you.
I found it really easy to use.
| {
"pile_set_name": "HackerNews"
} |
Dynamically Typed Languages (2009) - dkarapetyan
http://tratt.net/laurie/research/pubs/html/tratt__dynamically_typed_languages/
======
orting
I don't like this paper. It makes statements based on references to
opinion-style articles. For example:
"in practice, run-time type errors in deployed programs are exceedingly rare
[TW07]."
If we look at [TW07] they state that
"even very simple approaches to testing capture virtually all, if not all, the
errors that a static type system would capture."
But it provides no data or reference for that statement.
Another issue is that some references with data are based on small samples and
possibly outdated:
"they [dynamic languages] lower development costs [Ous98]"
[Ous98] compares time-to-implement and code-size for 8 different programs
implemented in static and dynamic languages and shows that the dynamic
languages are superior. It is, however, not clear how much actual
implementation is involved, so it may be the case that the difference is
caused by differences in available libraries at the time. In any case, the
sample size is small and the article is old (1998), so it is not reasonable to
make generalisations for programming in 2009 (or 2014).
[TW07] Laurence Tratt and Roel Wuyts. Dynamically typed languages. IEEE
Software, 24(5):28–30, 2007.
[Ous98] John K. Ousterhout. Scripting: Higher-level programming for the 21st
century. Computer, 31(3):23–30, 1998.
~~~
judk
Informally, dynamically typed languages are chosen when the programmer prefers
to get something running before getting something running correctly. Why would
we assume then that once a program is running, the programmer would "find
religion" and do the extra work to write a comprehensive test suite? If the
programmer is willing to do work in an effort to prove correctness, the
programmer would choose the far more efficient technique of rewriting the
program in a statically typed language.
A dynamically typed program is by its very nature a prototype, a program that
is expected to fail when exposed to non-trivial input. In many cases, that is
fine, just not when correctness over many invocations actually matters.
~~~
dragonwriter
> Informally, dynamically types languages are chosen when the programmer
> prefers to get something running before getting something running correctly.
I don't think that's at all true. I think that a major motive for choosing
dynamic programming languages is that programmers want to get things running
correctly and spend more time on the logic and less on making ritual
invocations to the type system that are redundant with other elements of the
code. (Haskell and other similar languages with very strong type inference are
making this a _less_ compelling reason to choose a dynamic language, but I
think it remains an important one for many real decisions, as Haskell hasn't
yet acheived the ecosystem and mind-share where its always likely to be
considered as an alternative, and not rejected for reasons other than its type
system.)
I think people who choose static languages do so because of concerns for
correctness, but I think it is a mistake to reverse that to conclude that
those who choose dynamic languages do so _because_ they aren't concerned with
correctness; many do so because the hoops you need to jump through in
mainstream static languages are perceived as being a too-expensive way to
_get_ the (often very limited, given the lack of expressiveness in the type
systems in many popular static languages) help in correctness that the static
nature of the language provides.
~~~
orting
I think that programmers reasons for choosing a specific language are as
varied as the languages.
Personally I like programming in C++ because the typesystem and abstraction
mechanisms allows me to write reasonably correct and concise code and at the
same time performance is predictable. I like programming in Python because of
the emphasis on readability, the "batteries included" standard library and the
scripting capabilities.
Both languages have failings, as do all the others I have tried, but what
matters most (for me) is availability (platform support, libraries etc), which
is the reason I occasionally write php code.
------
juliangamble
Robert Harper argues that dynamically typed languages are languages of a
single Type:
_And this is precisely what is wrong with dynamically typed languages: rather
than affording the freedom to ignore types, they instead impose the bondage of
restricting attention to a single type! Every single value has to be a value
of that type, you have no choice!_
[http://existentialtype.wordpress.com/2011/03/19/dynamic-
lang...](http://existentialtype.wordpress.com/2011/03/19/dynamic-languages-
are-static-languages/)
Bob in this same article also wrote the now oft quoted:
_To borrow an apt description from Dana Scott, the so-called untyped (that is
“dynamically typed”) languages are, in fact, unityped._
Now the author of the linked article wrote another piece in 2012 that rebuts
this:
_It therefore makes no sense to say that a language is unityped without
qualifying whether that relates to its static or dynamic type system. Python,
for example, is statically unityped and dynamically multityped; C is
statically multityped and dynamically unityped; and Haskell is statically and
dynamically multityped. Although it's a somewhat minor point, one can argue
(with a little hand waving) that many assembly languages are both statically
and dynamically unityped._
[http://tratt.net/laurie/blog/entries/another_non_argument_in...](http://tratt.net/laurie/blog/entries/another_non_argument_in_type_systems)
It is worth noting:
_Sam Tobin-Hochstadt argues the uni-typed classification is not very
informative in practice._
_The uni-typed theory reveals little about the nature of programming in a
dynamically typed language; it's mainly useful for justifying the existence
of "dynamic" types in type theory._
[https://medium.com/@samth/on-typed-untyped-and-uni-typed-
lan...](https://medium.com/@samth/on-typed-untyped-and-uni-typed-
languages-8a3b4bedf68c)
[http://stackoverflow.com/a/23286279/15441](http://stackoverflow.com/a/23286279/15441)
~~~
gw
Another way to look at it is that the "uni-typed" pejorative assumes that
types are intrinsic to the semantics of a language. Optional type systems
reflect the reverse -- that types are an extrinsic feature that should be
taken care of by an external library, potentially allowing the choice between
multiple competing type checkers or no type checker at all.
[http://www.lispcast.com/church-vs-curry-
types](http://www.lispcast.com/church-vs-curry-types)
~~~
groovy2shoes
The "unityped" descriptor is not meant to be pejorative. It's an observation
that, from a type-theoretic point of view every expression in an untyped
language can be given the same type. In a dynamically checked language, the
type is the sum of all possible dynamic types (in Lua, for example: [1]). In
an unchecked language, the type can be something else (in BCPL, for example,
it's simply _word_ ; in the untyped lambda calculus, it's _function_ ). It's
important to note that, according to type theory, types are a _syntactic_
feature, not a semantic one.
It's just an observation rather than a judgment, and -- again from a type-
theoretic perspective -- it's true. It's nothing to be offended about!
Note that if you were to write a type checker for a unityped language, then
every program would trivially type check. So, while technically accurate, the
notion of a language being "unityped" is not very useful. It's more of an
intellectual curiosity than anything.
[1]:
[https://github.com/LuaDist/lua/blob/lua-5.1/src/lobject.h#L5...](https://github.com/LuaDist/lua/blob/lua-5.1/src/lobject.h#L57)
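If it helps to see the "one big sum" view spelled out, here is a toy Python
sketch of the kind of tagged representation an interpreter for a unityped
language carries around; it is a deliberately simplified analogue of the Lua
TValue union linked above, not its actual layout:

    from dataclasses import dataclass
    from typing import Callable, Union

    # The single "type" of a dynamically checked language, written as a sum:
    # every runtime value is exactly one of these variants.

    @dataclass
    class VNum:
        value: float

    @dataclass
    class VStr:
        value: str

    @dataclass
    class VFun:
        value: Callable[["Value"], "Value"]

    Value = Union[VNum, VStr, VFun]

    def add(a: Value, b: Value) -> Value:
        """'+' in the object language: the tag check is the dynamic check."""
        if isinstance(a, VNum) and isinstance(b, VNum):
            return VNum(a.value + b.value)
        raise TypeError("attempt to add non-numbers")  # run-time type error

    if __name__ == "__main__":
        print(add(VNum(1), VNum(2)))       # VNum(value=3)
        try:
            add(VNum(1), VStr("two"))
        except TypeError as exc:
            print("caught:", exc)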
~~~
gw
You may not mean it as a pejorative, but it's clear that Harper did, and so do
those who adopt the term as a rhetorical device. I also have to question your
use of the phrase "according to type theory". You are implying that there is a
single type theory with a single view on the nature of types. I recommend the
link I put in my previous comment.
~~~
tel
Harper is frequently pejorative. He's also accurate. He has an agenda and it
is difficult to interpret him unbiasedly without accounting for it. But if you
achieve it then you realize you can no longer argue with the factual points he
makes.
That said, it's a completely _true_ point in the notion of "static-types-as-
type-theory-defines-them" that dynamic typing is a mode of use of static
typing which is completely captured in large (potentially infinite/anonymous)
sums. Doing this gives dynamic languages a rich calculus.
Refusing to doesn't strip them of that calculus, it just declares that you're
either not willing to use it or not willing to admit you're using it. Both of
which are fine by me, but occasionally treated pejoratively because... well,
it's not quite clear _why_ you would do such a thing. There's benefit with no
cost.
Then sometimes people extend this to a lack of clarity about why you don't
adopt richer types. Here, at least, there's a clear line to cross as you begin
to have to pay.
\---
The "Church view" and "Curry view" are psychological—the Wikipedia article
even stresses this! So, sure, you can take whatever view you like.
But at the end of the day _type systems satisfy theories_. Or they don't. That
satisfaction/proof is an essential detail extrinsic to your Churchiness or
Curritude.
~~~
groovy2shoes
Can you elaborate a bit on what the benefits of embracing that calculus are?
Or maybe provide some pointers? I'm having trouble imagining what utility
there is in treating untyped languages as (uni)typed. I said in another
comment that it's pretty much a useless notion, but I'm genuinely curious if
I'm overlooking something.
~~~
tel
Probably the best one I can think of is that it gives you a natural way of
looking at gradual typing. Doing gradual typing well is still tough, but you
can see it as an obvious consequence of the unityping argument.
~~~
groovy2shoes
I see. Looks like I have some reading to do :) Thanks.
------
tosh
Interestingly the article also mentions optional type systems that recently
became popular again like in Dart and Hack among other languages.
Optional typing provides a great middle ground between not being restricted by
mandatory types when doing rapid iterations, yet getting the tooling benefits
for core parts of your code base by using types to annotate signatures.
~~~
howardlet03
@tosh True, this is such an interesting article; it also mentions all the type
systems that became popular. But the most interesting part is using type
annotations on signatures.
------
bch
> At the extreme end of the spectrum, the TCL language implicitly converts
> every type into a string
This isn't the full story. Since 1997, Tcl has used Tcl_Obj internally. These
are called "dual-ported objects", where there are two values contained
therein: a string value, to maintain Tcl's logical model of "Everything Is A
String" (EIAS), and a native value of the last-used type. For example, if the
object was last used as an integer, a native int value will be stored in the
"internalRep". If the next use of this value is in an int context, this native
internalRep value will be used. If one uses a value in an int fashion one
call, then a float the next, then an int, then a float, then an int... you
will incur what is called "shimmering", where the internalRep is recomputed
back and forth. Shimmering is something to avoid where possible.
------
the_af
I don't like the article very much, especially the "Disadvantages of dynamic
typing" section.
\- The first point is about performance, which I doubt is the most fundamental
difference between static and dynamic typing. The author even comments on
whether performance truly matters, except in very low-level tasks, so why list
it first?
\- The real difference between the two systems, program correctness and early
detection of errors, is hidden under the misleading title of "Debugging". It
repeats the classic claim from dynamic typing proponents that "runtime type
errors are rarely an issue", which is wrong in my experience (I've seen my
share of ClassCastExceptions and NullPointerExceptions on production). Also,
it advocates doing Unit Testing as a way to catch type errors, which I also
find wrong. I'd rather focus on writing unit tests for other types of errors
the compiler cannot help me catch.
Other parts also irk me:
\- It lists "interactivity" as a strength of dynamic typing, when lots of
languages with static typing have useful REPLs.
\- The section on Built-in datatypes is baffling.
\- Ditto for refactoring. Static typing for refactoring is a _strength_ , not
a weakness. About the only valid point here is that sometimes we want to
temporarily break static checking of the whole program when testing a small
section, but that probably can be handled by testing in isolation.
------
tegeek
Steve Yegge famously mapped software engineering onto a political axis.
Interestingly, he labeled people who prefer dynamic languages over static ones
as "liberal".
[https://plus.google.com/110981030061712822816/posts/KaSKeg4v...](https://plus.google.com/110981030061712822816/posts/KaSKeg4vQtz)
~~~
allegory
As a pragmatist, I take from both sides always and I think a lot of us do. To
apply it to politics:
[http://www.allmystery.de/i/t5be775_subgenius_big.jpg](http://www.allmystery.de/i/t5be775_subgenius_big.jpg)
Python with static type hinting is where I want to be.
~~~
mercurial
OCaml works for me: it's readable and about as succinct as Python. Though of
course the ecosystem is much smaller.
~~~
allegory
Good choice; I tried it but couldn't get on with it, to be honest. That's just
down to me, not the language though.
~~~
mercurial
I'm not saying it's not clunky, but all in all it's fairly pragmatic. That
said, I'm a big Python fan as well. Of all the dynamic languages I work/worked
with, it's the one that tries hardest to be less of a footgun, and it has a
strong tradition of decent documentation (hear, hear, OCaml library
developers...).
------
graycat
His arguments for the advantages of statically typed languages agree with mine.
So, for my programming, I continue to prefer static typing. Sorry 'bout that!
------
eru
Nice and in-depth.
------
swah
Sorry for going meta: did everyone else also really enjoy the formatting
(spacing) and font choice of this article? It's so, so good to read here
(Win7/Chrome) that I'm thinking of copying it for my websites.
~~~
illumen
Sarcasm?
Justified text, with different spacing between words, seems really hard to
read to me. The thin stems also make it hard to read the letters. Finally, the
column width
not adjustable by the browser (double tap on mobile, or resize window on
desktop), which kills readability even more.
~~~
swah
Absolutely not. Fixed width is a problem but on Windows I usually have my
browser maximized. Are you on Windows (OSX rendering is much different)?
------
t1m
_Smalltalk is a small, uniform object orientated language,..._
"orientated"??
I hate to be a stickler, but you cannot say "object orientated" any more than
you can say "the object was instantiatated", or "Xerox inventated Smalltalk".
~~~
maxerickson
I dislike orientated, but it is clear enough that it has achieved broad usage
and is no longer a mistake.
~~~
dragonwriter
"orientated" may be a perfectly cromulent word -- and may generally be a
synonym for "oriented" \-- but the phrase describing the programming paradigm
is "object oriented" not "object orientated". Just as "functioning" is
indisputably a good word and may be a rough synonym for "functional", but if
you talk about about Haskell as a "functioning programming language", you may
be correct in a sense, but it won't be as a statement about the programming
paradigm it supports.
------
CmonDev
It seems the consensus is to try and avoid them where possible:
[http://programmers.stackexchange.com/questions/53878/dynamic...](http://programmers.stackexchange.com/questions/53878/dynamic-
vs-statically-typed-languages-for-websites)
[http://programmers.stackexchange.com/questions/10032/dynamic...](http://programmers.stackexchange.com/questions/10032/dynamically-
vs-statically-typed-languages-studies)
[http://programmers.stackexchange.com/questions/246762/is-
the...](http://programmers.stackexchange.com/questions/246762/is-there-a-real-
advantage-to-dynamic-languages?lq=1)
~~~
djur
None of those three links suggest a consensus against dynamically-typed
languages. All of them feature highly-rated arguments in favor of such
languages. How can you even begin to suggest a "consensus" from such a basis?
| {
"pile_set_name": "HackerNews"
} |
Great April Fool's Day Hoaxes - tokenadult
http://www.museumofhoaxes.com/hoax/aprilfool/
======
icey
Ugh. I am really not looking forward to tomorrow.
~~~
tokenadult
You have my sympathies. I figured the submitted article would be the best
combination of humorous relief from much less funny April Fools jokes and a
warning not to believe everything we read online for a day or so that I could
share with the HN community. Have a safe, happy, and credible April 1st.
~~~
icey
Yeah, I really do like a good April Fool's joke... It's just that they're all
so predictable now.
It's kind of like... "You mean site X has always done X and now it will be
doing the inverse of X? That's preposterous!! O WATE - APRIL FOOLS U GUISE!!!"
| {
"pile_set_name": "HackerNews"
} |
How iPhone apps are like McDonalds hamburgers. - technologizer
http://technologizer.com/2010/10/18/after-a-while-you-stop-counting/
======
devmonk
"When do Apple and Apple watchers stop caring so much about how exactly how
many iPhone apps there are?"
By your example, somewhere beyond "billions and billions".
The U.S. seems to do the same thing with our deficit. It's over $13 Trillion
USD, and yet they don't put that under every government office sign.
At some point, numbers seem to get just too big to matter to people. Kind of
like Richie Rich and all of the jewels and jewelry all over his estate. It was
just there. It didn't matter.
BTW- Richie Rich is back. "The first new Richie Rich comic should hit retail
in early 2011.": <http://www.icv2.com/articles/news/18545.html>
------
ryandvm
The reason the McDonalds signs lost the "billions served" isn't because
corporate just stopped caring. McDonalds, like most successful mega
corporations, doesn't do _anything_ without numbers and studies to back it up.
What happened is the marketing message changed. No longer does McDonalds need
to prove that they are a legitimate fast food vendor by telling everyone "hey -
we've sold a lot of hamburgers!". If anything, they're now trying to gloss
over the notion that they stamp out 2 million of these uninspired little blobs
every hour.
The consumer climate has changed and now "billions served" doesn't sound
nearly as impressive as "we handmade this one for you".
| {
"pile_set_name": "HackerNews"
} |
FreeBSD Kernel Internals Evening Course Taught by a Core Commiter - jedberg
http://www.mckusick.com/courses/introeveclass.html
======
nickynix
This seems like a great opportunity to learn more about FreeBSD, but for an
individual, the price is steep. I even looked at the videos for purchase and
they surprisingly cost the same amount as the in-person course. Can anybody
attest to the value the course provides?
~~~
azinman2
Yah no kidding. I was like oh this might be a fun and interesting side thing
to do.... and then it's like oh its 1495.00!!!
Hardly appropriate as a hobby at that price for normal people.
| {
"pile_set_name": "HackerNews"
} |
Yumbunny (launched at HN): Crowd-Sourced Matchmaking With Hilarious Results - terpua
http://www.techcrunch.com/2009/02/10/yumbunny-crowd-sourced-matchmaking-with-hilarious-results/
======
thorax
Thanks for posting this. Was travelling and was gratefully surprised when they
covered us. Thanks guys for all your great feedback.
~~~
vaksel
shouldn't be surprised, Arrington is known for reading HN looking for early
startups to cover
------
ieatpaste
Congrats on the coverage.
| {
"pile_set_name": "HackerNews"
} |
Tesla: the roadmap to domination - andrewtbham
https://medium.com/@andrewt3000/the-2-reasons-tesla-will-be-number-1-bab788ef215e#.yy1id7i5h
======
11thEarlOfMar
“we may be witnessing an interplay of technology, industrial strategy, and
capital not unlike Cornelius Vanderbilt and the railroads, or Thomas Edison
and electrical distribution.”
Hey, how could he leave out John D. Rockefeller?
| {
"pile_set_name": "HackerNews"
} |
Light Trade Centre - c4ddownload
http://lighttradecentre.com/
======
c4ddownload
Light Trade Centre
| {
"pile_set_name": "HackerNews"
} |
Show HN: Ahoy – Twitter for Your Neighbourhood - serious-sam
https://itsahoy.com/
======
verdverm
Add a geo input; I'm not giving location access to any apps these days. Far
too much abuse by the general app market to let anything have access at this
point.
~~~
vasanthv
That would defeat the whole purpose of hyperlocal apps.
------
verdverm
How will you prevent the YikYak outcome (devolving into horrible, anonymous
comments)?
~~~
vasanthv
I don't think we can prevent horrible anonymous comments. That's the truth.
Even Twitter & Facebook are fighting this even though they are not anonymous
apps.
------
smartis2812
Amazing Idea!
Love it, but how can i increase my radius?
~~~
serious-sam
You can click on the radius (the number) in the top text. It is an editable field.
| {
"pile_set_name": "HackerNews"
} |
UI For Drunks - edent
http://shkspr.mobi/blog/2014/01/ui-for-drunks/
======
acconrad
In essence, simple wins. If you can appeal to the lowest common denominator in
your demographics, people who are at the biggest disadvantage with technology
(e.g. drunks, the elderly), then you are clear enough for your primary target
audience.
| {
"pile_set_name": "HackerNews"
} |
Show HN: Keypirinha, a new semantic launcher for keyboard ninjas - polyvertex
http://keypirinha.com
======
Nadya
What advantages does this have over the well-known Launchy [0] which
accomplishes the same tasks but multi-platform? You may want to touch upon any
improvements if you want to win anyone (namely: me) over.
I tried to read through the Configuration section and found nothing about
limiting the scope of the search. For example, if I put .shortcut files into a
certain directory, let's say `K:\Programs` and only want to use Keypirinha to
quickly run those shortcuts. I find Launchy is finicky with updating its
library for this purpose and often requires a bit of
adding/removing/restarting/telling it to rescan before it finally picks up on
additions/removals. I see nothing about limiting Keypirinha's searching scope.
I appreciate that you anticipate users may move the .exe outside of the
install directory. Too many people expect APP_EXE to be within APP_DIR, thank
you for not making that assumption. :)
Also the most important question: can the keybind be left alt + space? Or are
there any keybind limitations that so many programs have that don't allow key
modifiers to be bound with space? (E: Answer is `yes`, mentioned in Config
file)
[0] [http://www.launchy.net/](http://www.launchy.net/)
~~~
polyvertex
I've been a long time and happy user of Launchy. It's a great tool and you
might have noticed the small tribute to it in Keypirinha by using the term
"Catalog" to name its internal database :)
Keypirinha is more modern than Launchy in many ways and integrates better in
recent Windows platforms (for example I've had troubles due to 64-bit
platforms with Launchy; items not found or not launchable, ...). The search
accuracy is very much improved as well. You can have machine and/or user-
specific configuration in a __portable__ way. In addition to that, Keypirinha
is more easily extensible: it embeds a Python interpreter to load its plugins
(whereas you had to compile a C++ plugin to do the same job with Launchy; I
wasn't happy with the unofficial Launchy-Python plugin). That gives the user
the power to modify the existing official plugins (they are open-source), or
to create new plugins that fit her needs (that slowly leads us towards your
"searching scope" question).
Keypirinha is also very young compared to venerable Launchy so its plugins
catalog has yet to be "flourished".
Hope that answers your question regarding Keypirinha vs. Launchy.
Regarding the "searching scope" question, please follow the discussion on
GitHub at:
[https://github.com/Keypirinha/Keypirinha/issues/3](https://github.com/Keypirinha/Keypirinha/issues/3)
------
ishu3101
Check out Wox - an open source launcher like Alfred for Windows with plugin
support. [http://www.getwox.com](http://www.getwox.com)
~~~
polyvertex
Wox offers more exotic features, but to be fair is also way less efficient
when it comes to search accuracy, speed, memory footprint and battery
friendliness.
------
svenfaw
Website looks clean, loads fast and is very informative, so props for that.
Are you using a static generator (Pelican perhaps)?
~~~
captaindiego
Looking at the source I think he's using Sphinx (Python static generator) with
the RTD theme, or some variation.
~~~
polyvertex
That's right, with the vanilla ReadTheDocs theme (no variation).
------
drvortex
Kind of pointless since these days, Win + "typing" does exactly the same
thing.
~~~
michaelmior
You can write plugins for the start menu as well?
~~~
polyvertex
Thanks for pinning one of the points of Keypirinha :)
------
baal80spam
Nothing beats Everything.
~~~
polyvertex
Different purpose. Everything rocks at finding files and is complementary to
Keypirinha, which rocks at adapting to your needs. As explained in another
comment, Keypirinha is way more than about launching apps and documents.
------
AaronLasseigne
...on Windows.
~~~
polyvertex
Sorry I forgot to mention that. More precisely: Windows 64-bit (Vista, 7, 8,
8.1 or 10)
| {
"pile_set_name": "HackerNews"
} |
What Is the World to Do About Gene-Editing? - amanuensis
https://www.nybooks.com/daily/2019/03/21/what-is-the-world-to-do-about-gene-editing/
======
technotony
The main thing* holding this back is that we don't understand the human genome
well enough to make really useful edits. It's very rare that there's a change that
would be better done with germline engineering than somatic cell engineering
or gene therapy. One day that won't be true however, and that's when the
problems will really begin.
To me the strongest non-religious argument against doing this is that it will
further accelerate wealth-based advantages, e.g. if you could make your kids smarter.
One of the problems with that however is that this technique is so cheap and
easy that banning it will just create a black market, or a single country
decides to allow it for economic or political reasons and people travel there
to get pregnant.
That impossibility of controlling this is why I think we have no choice but to
proceed slowly, cautiously, but under full transparency.
* there are also uncertainties about off-target effects and overall safety of the technique but I'm confident those issues will eventually be fixed too.
~~~
chrischen
I think the strongest argument against gene editing is the tendency for
societal biases to be taken to the extreme. Gene editing is drastic and has
the potential to greatly reduce genetic diversity in a population since people
tend to want to become “normal”, or meme certain popular traits. If you gave
minorities the ability to become white males in America, would they?
Certainly many would and it would mean losing the benefits of such diversity
we currently have, even if it’s tougher for many people to be non-white male.
See “Chinese footbinding”, “injecting cement into posteriors for beauty”,
“plastic surgery” for examples of the absurd lengths people will go to in order to fit in.
~~~
ramblerman
I can't help but find this incredibly condescending.
You would take away a technological advantage from majorities and minorities
alike because their choices might not lead to your ideal of diversity utopia.
~~~
iguy
Here's a different take, on a different sort of diversity:
Suppose it turns out that extremely high-achievers are gambles on the part of
nature, which have some chance of going wrong. For example, suppose that every
birth of a potential future Nobelist comes with a 10% chance of serious
disabling autism. (I stress, I'm not claiming that this is true, just setting
up a thought experiment.) Many parents would decline this gamble: 10% is quite
a high chance of devoting the rest of your life to care, for a tiny chance of
having a world-class star in the family, who will anyway have very little
personal gain from his contributions. But if the whole society decides never
to gamble, we will (in my scenario!) miss out on major advances.
~~~
chr1
If we get to the point where we can distinguish genes that cause autism to be
able to remove them, we'll also be able to distinguish the 10% Nobelist case.
But even if that was not the case, with gene editing some parents who did not
have a chance to make that gamble, will be able to make it intentionally.
~~~
iguy
The point of my thought experiment is to imagine that these are the _same_
genes. Remember that about half of population variation is "unshared
environment" meaning noise in the translation of DNA into an adult: genes will
never allow perfect prediction.
I guess I believe that very few parents would make this gamble, because the
risks are very personal and the benefits are mostly to others. But I could be
wrong. One real live example is that there's been very little push-back
against eliminating Tay-Sachs via testing.
~~~
chr1
In your thought experiment, even if very few parents made the gamble, it would
be enough to keep the gene around until we figured out what in the environment
caused autism, after which many more would want to have it.
For the Tay-Sachs example what would be the reason to push-back against
eliminating it? And who should have pushed back? people who had that gene, or
people who didn't?
IMO the solution is simply to have more people. If we have 1,000 billion on
Earth (which is possible with seasteading and terraforming deserts), 10
billion on the Moon and 100 billion on Mars, we'll have enough room to try out
all kinds of solutions, for people to decide how much to edit or not edit
their own genes, and for many new random mutations to arise simply because
there are more new people than in all of history combined.
~~~
iguy
Other thought experiments are certainly possible, but mine is about the stated
gamble, in which you cannot predict. That's what's meant by "unshared
environment", it's poorly named, the environment you talk of is "shared
environment". As I said, I don't know that this is true in the real world, but
I don't think it wildly implausible. (Maybe tortured mad artists are a better
example than nerds.)
Tay-Sachs is the first example I know about, of some genetic strain of humans
being deliberately bred out of existence. Phrased like that I think the idea
would alarm many people. But it has not. (I should remind myself more of the
details of this story.)
Thought experiments aside, note that it's not obvious we need a single new
mutation to make much smarter people. We just need the shuffling of the deck
to place more of the existing + variants into one body. This will obviously be
the goal of embryo selection, and I think would also be the goal of explicit
editing. I'd wager that many more of the Nobel-sized brains alive at the end
of this century (and the next) will owe their existence to one of these kinds
of engineering, than to population growth.
------
mirimir
Let's say that we accept the "my body, my choice" argument for legal abortion
on demand. It applies to eggs, sperm, unimplanted zygotes and preterm fetuses.
So why doesn't it also apply to CRISPR editing of our somatic and germline
cells? I suppose that we could carve an exception, in the interest of the
public good. As we have for the War on Drugs. But that's a dangerous path, if
one cares about personal freedom.
~~~
vore
Germline editing has consequences far beyond "my body, my choice". If your
offspring have genetic defects from gene editing, and their offspring also
have the same genetic defects, then "your choice" has consequences far beyond
just you that "personal freedom" can't cover.
If you choose to abort, only your offspring is affected: far less
consequential than possibly ruining generations to come.
I don't know what the War on Drugs has to do with anything here.
~~~
sho
> If your offspring have genetic defects [..], and their offspring [..]
> If you choose to abort, only your offspring is affected
By aborting, you're not merely potentially saddling your offspring with
genetic defects - you're denying them the most basic right to exist in the
first place! And their offspring, too. You're shutting down that whole line
before it can even start.
I'm pro choice, but this line of argument makes no sense to me. Hard to see
how anything could have consequences "far beyond" depriving a germline of
existence itself.
~~~
Thiez
You may be pro-choice, but you would probably oppose the right of mothers to
end the lives of their children (and descendants of those children) _after_
pregnancy. So I don't see what is strange about the argument; you are allowed
to terminate a pregnancy, but you do not necessarily have the right to inflict
arbitrary other life changing modifications on your children. Like, no doctor
would agree to amputate the legs of an unborn child without medical necessity
(when asked by the mother), and I think 'my body, my choice' does not apply
there.
I will admit there is a bit of a grey zone; few (if any?) places outright
prohibit pregnant women from smoking and drinking, despite the possible
negative consequences, but such behavior is still generally frowned upon.
~~~
chr1
> but you would probably oppose the right of mothers to end the lives of their
> children after pregnancy
Unfortunately the distinction is not that clear cut. When we have technology
to raise several weeks old fetus to maturity in an artificial womb (which
we'll have soon), what will be the difference between 'after pregnancy' and
'during pregnancy'? In both cases it will be something that can become a human
if someone wants to spend resources to keep it alive.
> Like, no doctor would agree to amputate the legs of an unborn child
The situation with gene editing is the opposite. Is it moral to allow your
child to be born with one leg, if there is a way to give her both legs?
And the point about 'affecting all descendants' is wrong, because if we have a
technology to make a change we can easily reverse it too.
~~~
iguy
> the point about 'affecting all descendants' is wrong because if we have a
> technology to make a change we can easily reverse it too
If we had perfect editing technology (even without perfect knowledge of
effects) then as you say we could easily Ctrl-Z the next generation.
But we don't have this yet, so I think it's not unreasonable to worry a bit
about inflicting changes (including unintended off-target edits) on grandkids
too.
~~~
chr1
Sure worrying a bit is reasonable, and that's the reason most people would not
use gene editing now. (Unless they are trying to correct some well-understood,
life-threatening condition).
If we talk about so many people using it that it can change the genetic
composition of humanity, it means we already have a reliable way of making
changes.
| {
"pile_set_name": "HackerNews"
} |
Clojure’s Approach to Identity and State (2008) - tosh
https://clojure.org/about/state
======
stuhood
Immutability is very helpful, and the connection between values and identity
is illuminating.
But there have been very important developments in programming languages since
this post/page was written: notably, the introduction of "borrow checking"
(exemplified by Rust's implementation). Borrow checking has a very significant
positive effect on the sustainability of imperative code, which makes the
claim that "imperative programming is founded on an unsustainable premise"
feel dated.
It is worth taking the time to understand what borrow checking enables. For
example: borrow checking allows even mutable datastructures to be treated as
values with structural equality. It does this by guaranteeing that unless you
have exclusive access to something, it may not be mutated.
A good explanation of the benefits of ownership and borrow checking:
[http://squidarth.com/rc/rust/2018/05/31/rust-borrowing-
and-o...](http://squidarth.com/rc/rust/2018/05/31/rust-borrowing-and-
ownership.html)
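To make that concrete, here is a minimal Rust sketch (illustrative, not taken
from the linked post): mutation requires an exclusive borrow, and while shared
borrows are alive the compiler rejects mutation, so the structure can be
compared as a plain value.
    // Mutation needs an exclusive (&mut) borrow; while shared borrows exist,
    // the compiler rejects mutation, so `xs` can safely be treated as a value.
    fn main() {
        let mut xs = vec![1, 2, 3];
        {
            let m = &mut xs; // exclusive access: no other reader observes xs here
            m.push(4);
        }
        let a = &xs;
        let b = &xs;
        // xs.push(5); // rejected: `xs` is still shared via `a`/`b` used below
        assert_eq!(a, b);                 // structural equality, not identity
        assert_eq!(*a, vec![1, 2, 3, 4]); // compares contents
    }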
~~~
stingraycharles
Borrow checking and immutability solve two different problems. Immutability is
about the absence of ownership and state, while borrow checking is a way to
manage ownership.
One does not replace the other, they coexist solving different problems.
~~~
stuhood
Borrow checking allows even structures that _support_ mutation to be safely
(checked by the compiler) treated as immutable, and thus as values.
Clojure also recognizes the connection between ownership and mutability in its
"transients":
[https://clojure.org/reference/transients](https://clojure.org/reference/transients)
... compile time borrow checking extends that idea to an entire language.
~~~
casion
What do transients have to do with ownership? They are simply a way to gain
new performance characteristics from an existing data structure.
~~~
stuhood
The thing that makes it safe to use mutation in the context of a transient is
that you can know with certainty that you have exclusive access to the value
(because no other viewer has observed it yet). This is also what borrow
checking can guarantee: except in significantly more positions in the code,
and at compile time rather than runtime.
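A rough Rust analogue of the transient-then-persistent pattern (a sketch with
made-up numbers): build in place while ownership is exclusive, then freeze the
result behind shared ownership so ordinary code can no longer mutate it.
    use std::sync::Arc;
    // Build phase: the Vec is exclusively owned, so in-place mutation is fine.
    fn build() -> Arc<Vec<u64>> {
        let mut buf = Vec::with_capacity(1_000);
        for i in 0..1_000u64 {
            buf.push(i * i);
        }
        // "Freeze": hand the finished value to shared ownership. Arc only gives
        // out shared references, so plain mutation stops being possible.
        Arc::new(buf)
    }
    fn main() {
        let v = build();
        let also_v = Arc::clone(&v); // cheap "copy": just a reference-count bump
        assert_eq!(v, also_v);           // compares contents
        assert_eq!(v.as_ref()[10], 100); // read through the shared handle
    }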
------
etbebl
This is interesting. I've tried Clojure, and heard about the idea of avoiding
mutable data and using pure functions plenty of times, but imperative/OOP have
still always made the most sense to me. When reading this though, something
clicked because I've encountered the problem of getting a stable state to
read/write without blocking other operations, and dealt with it in C++ in a
similar way to Clojure without realizing it at the time.
I have this little lightly-tested library: [https://github.com/tne-lab/rw-
synchronizer](https://github.com/tne-lab/rw-synchronizer). I'm not using it
much currently but have played with it a lot while building extensions to Open
Ephys. The idea being that as a reader, you get a "snapshot" of the last thing
that was written, but it's really just one of several copies, and subsequent
writes can happen on the other copies. So you never really modify the current
data, just push newer versions of it. The cool thing is, if you know how many
simultaneous readers you'll need ahead of time, all the allocation can be done
upfront, so then if you have a real-time loop or something, all it needs to do
is exchange pointers.
If I ever get around to it, the next thing I would do is allow any writer to
also read the latest value, so it can use a transformation to create a new
one. Maybe even do it automatically with copy-on-write semantics? On the other
hand, I'm probably reinventing the wheel here...
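For what it's worth, the pointer-exchange idea can be sketched in a few lines
of Rust (simplified and hypothetical: one lock-protected slot, readers copy the
pointer, a single writer publishes a fresh immutable snapshot):
    use std::sync::{Arc, Mutex};
    // One writer publishes new immutable snapshots; readers grab the current
    // pointer and keep using their snapshot without blocking the writer.
    struct Slot<T> {
        current: Mutex<Arc<T>>,
    }
    impl<T> Slot<T> {
        fn new(value: T) -> Self {
            Slot { current: Mutex::new(Arc::new(value)) }
        }
        // Reader: copy the pointer (cheap), then release the lock immediately.
        fn read(&self) -> Arc<T> {
            Arc::clone(&self.current.lock().unwrap())
        }
        // Writer: build a new snapshot from the old one, then swap the pointer.
        fn update(&self, f: impl FnOnce(&T) -> T) {
            let mut slot = self.current.lock().unwrap();
            let next = Arc::new(f(&**slot)); // &**: guard -> Arc<T> -> T
            *slot = next;
        }
    }
    fn main() {
        let prices = Slot::new(vec![1.0, 2.0, 3.0]);
        let snapshot = prices.read(); // stable view for this reader
        prices.update(|old| old.iter().map(|p| p * 1.1).collect());
        assert_eq!(*snapshot, vec![1.0, 2.0, 3.0]); // old snapshot is unchanged
    }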
~~~
fazzone
This is pretty much how clojure atoms [0] work. It's basically a Clojure
wrapper around a Java AtomicReference, but Clojure's immutable data structures
make an atomic reference type really useful because it is very cheap to read a
"snapshot". It doesn't do upfront allocation, because like you mentioned, that
requires you to have some knowledge about how the accessing code works.
Additionally, whatever you are doing in Clojure is pretty likely to allocate
memory anyway, so it probably wouldn't be that beneficial.
[0] [https://clojure.org/reference/atoms](https://clojure.org/reference/atoms)
~~~
etbebl
Oh neat, thanks! Yup, that sounds like a more general/flexible version of what
I was trying to do.
I was focused on situations with just one writer (and originally also one
reader), with the main thing being avoiding allocations. The situation where
future values actually depend on past values, and specifically the _current_
past value with other writers in the mix, is definitely trickier.
------
feniv
Rich Hickey has a talk on this (The Value of Values) here:
[https://youtu.be/-6BsiVyC1kM](https://youtu.be/-6BsiVyC1kM)
~~~
thomk
Thank you, this talk was paradigm shifting AND familiar for me at the same
time.
------
microcolonel
This has been extremely useful to me while writing a (somewhat optimizing)
compiler for spreadsheets. I can do subtree deduplication just by `assoc`ing
into a map.
~~~
neonate
I want to hear more about your somewhat optimizing compiler for spreadsheets!
~~~
microcolonel
It's proprietary for the time being; but in short, it is more straightforward
than I thought it would be.
We are working with LibreOffice Calc ODS sheets, which are pretty terrible as
a format (since the references are not normalized in the formulas, they can't
repeat them even when they behave identically, and they duplicate most of the
XML namespaces in the attributes).
We parse and normalize the references from A1 to R1C1 form, and then
deduplicate the formulas (by text) and extract all of the immediates (and mark
some of them as input, so that they can be varied at runtime).
Then we pass the deduplicated formulas through instaparse (which is
spectacular) with a relatively simple grammar, and propagate some of the
constants.
I then extract the references from the AST, while at the same time replacing
SUMIF/MINIFS/MAXIFS/AVERAGEIF and similar with simple addition/min/max of
known cells, where the tests are known at compile time. Then those ASTs are
compiled to functions (ignoring our cross-function optimizations).
Then it's just down to generating a complete DAG of dependencies, and using
that to sort the assignments (cells) topologically. The sheet can be evaluated
naively at that point by injecting the references into each subsequent
assignment/cell and storing the result in a map (ranges injected as a seq over
a range).
There's a lot more to it, and it's getting better all the time, but that's the
gist of it. Many real spreadsheets are not well-behaved, and they have
dependency patterns which are more difficult to handle (i.e. ranges that refer
to the current cell, or future cells, dynamically). The compiled output is
getting more and more static, and will probably be reduced to some form of
SSA, possibly even well-formed enough to be popped casually into LLVM.
It would be some help if the ODS format were improved, it takes several
seconds just to parse the hundreds of megabytes of XML in our amazing
spreadsheet, and a lot of it is redundant.
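For the dependency-ordering step, here is a rough Rust sketch of the usual
approach (Kahn's algorithm); the cell names and the dependency map are
illustrative only, not the compiler's actual representation:
    use std::collections::{HashMap, VecDeque};
    // Edges go from a referenced cell to the formulas that read it. Cells that
    // never reach indegree zero are part of a cycle, so no order exists.
    fn topo_order(deps: &HashMap<&str, Vec<&str>>) -> Option<Vec<String>> {
        let mut indegree: HashMap<&str, usize> = HashMap::new();
        let mut dependents: HashMap<&str, Vec<&str>> = HashMap::new();
        for (&cell, uses) in deps {
            indegree.entry(cell).or_insert(0);
            for &used in uses {
                indegree.entry(used).or_insert(0);
                *indegree.entry(cell).or_insert(0) += 1;
                dependents.entry(used).or_default().push(cell);
            }
        }
        let mut ready: VecDeque<&str> =
            indegree.iter().filter(|(_, &d)| d == 0).map(|(&c, _)| c).collect();
        let mut order = Vec::new();
        while let Some(cell) = ready.pop_front() {
            order.push(cell.to_string());
            for &next in dependents.get(cell).into_iter().flatten() {
                let d = indegree.get_mut(next).unwrap();
                *d -= 1;
                if *d == 0 { ready.push_back(next); }
            }
        }
        if order.len() == indegree.len() { Some(order) } else { None }
    }
    fn main() {
        let mut deps = HashMap::new();
        deps.insert("C2", vec!["A2", "B2", "C1", "D1"]); // C2 = A2*B2+C1*$D$1
        deps.insert("C3", vec!["A3", "B3", "C2", "D1"]); // C3 = A3*B3+C2*$D$1
        println!("{:?}", topo_order(&deps));
    }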
~~~
networked
Interesting project! Could you explain what you mean by "since the references
are not normalized in the formulas, they can't repeat them even when they
behave identically"? Do you mean the normalization from _A1_ to _R1C1_ that
you mention later in the post or something else?
~~~
microcolonel
Yes, I mean exactly that. :- )
~~~
networked
How does this normalization affect being able to repeat formulas (or
references in formulas)?
~~~
microcolonel
Spreadsheets usually display references as though they refer to a
specific cell (i.e. A3, B2, etc.), but underneath, the references are relative
(unless specifically made absolute, with $ in the case of A1).
The common pattern in spreadsheets is to have a set of columns of repeated
formulas. i.e.
| A | B | C | D |
|-----|-----|----------------|-----|
1 | | | | 0.12|
2 | 42| 42| =A2*B2+C1*$D$1 | |
3 | 69| 69| =A3*B3+C2*$D$1 | |
Where, you'll note, although the function and reference shape in C2 and C3 is
identical, the text is not.
Whereas, with R1C1-type references.
| 1 | 2 | 3 | 4 |
|-----|-----|----------------------------|-----|
1 | | | | 0.12|
2 | 42| 42| =RC[-2]*RC[-1]+R[-1]C*R1C4 | |
3 | 69| 69| =RC[-2]*RC[-1]+R[-1]C*R1C4 | |
The text of the formula is exactly the same in both copies.
This makes it a lot cheaper to deduplicate them, because we don't need to run
the whole parser on the 400k+ formula invocations in our sheet, and then
compare the ASTs rather than text; since in this form, there are only a few
thousand unique expressions rather than a few hundred thousand.
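A minimal sketch of that normalization in Rust, handling only plain relative
references (no `$` absolutes and no ranges), just to show the arithmetic:
    // Convert one A1-style reference to R1C1 form, relative to the cell that
    // contains the formula.
    fn a1_to_r1c1(reference: &str, formula_row: i64, formula_col: i64) -> String {
        let letters: String =
            reference.chars().take_while(|c| c.is_ascii_alphabetic()).collect();
        let digits: String =
            reference.chars().skip_while(|c| c.is_ascii_alphabetic()).collect();
        // Column letters are base-26 digits: A=1, B=2, ..., Z=26, AA=27, ...
        let col = letters.chars().fold(0i64, |acc, c| {
            acc * 26 + (c.to_ascii_uppercase() as u8 - b'A' + 1) as i64
        });
        let row: i64 = digits.parse().unwrap();
        let fmt = |axis: &str, delta: i64| match delta {
            0 => axis.to_string(),
            d => format!("{}[{}]", axis, d),
        };
        format!("{}{}", fmt("R", row - formula_row), fmt("C", col - formula_col))
    }
    fn main() {
        // The formula lives in C2 (row 2, column 3) and references A2 and C1.
        assert_eq!(a1_to_r1c1("A2", 2, 3), "RC[-2]");
        assert_eq!(a1_to_r1c1("C1", 2, 3), "R[-1]C");
    }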
~~~
networked
Thanks for the explanation. I was confused about the meaning of "repeat". It's
a missed opportunity that ODS doesn't store formulas as ASTs in the first
place.
~~~
microcolonel
> _It 's a missed opportunity that ODS doesn't store formulas as ASTs in the
> first place._
It's really for the best that they don't. ODS is XML, so they'd probably make
the AST XML as well, which would be _outrageously oversized_.
------
pdub1
I've tried Clojure.
I prefer a programming language that allows me to pick and choose which
paradigms I want to follow-- whether OOP or FP, mutable or immutable, etc. I
don't need Clojure to do that for me.
Personally, I am trying to figure out why a closed source language is
producing such activism -- trying to increase the popularity and importance of
the language... despite the fact that it's a privately owned language -- not
really "open source" -- everything flows through one man & his company, which
comes first and foremost in the language's development.
Rich Hickey: [Paraphrasing] "Open source isn't about you. I created this, it's
mine, and I'll change it when and how I choose."
Clojure Community: "Hey, let's try to get more people into Clojure! Let's
increase this community!"
~~~
dpkp
I can understand your frustration about Rich's development process. But
clojure is most definitely not closed source. The source is right here:
[https://github.com/clojure/clojure](https://github.com/clojure/clojure) and
the license that allows you to copy, modify, and redistribute that source is
here:
[https://opensource.org/licenses/eclipse-1.0.php](https://opensource.org/licenses/eclipse-1.0.php)
Rich has a fairly strict development approach and wants to personally review
and approve all changes to the core. There are complaints about that process,
and that's fair. But as far as I have seen, most large, successful projects
have similar personalities leading them (Stallman, Linus, Larry Wall,
Guido...).
Finally, I should add -- if what you are looking for is software freedom...
then you should absolutely consider using a Lisp like clojure. Lisp's give
_you_ the power to control your language through macros and non-core
libraries. Unlike other languages, you do not need a core development team to
make language changes for you. Perhaps this is why clojure is so powerful...
because the core process issues you have heard about are not actually that
important, and in fact the language itself enables substantially more software
freedom than perhaps you are giving it credit for.
------
z3t4
It would be helpful to have practical examples in code. As a self-taught
programmer I don't know what all the concepts are called, but when I see code I can usually
recognise them. The actor model as described in the article becomes less
painful when you have an abstraction layer. The question might be are you
going for horizontal scaling or vertical scaling, although you are best off
implementing the simplest solution in order to avoid premature optimization
(and overengineering).
~~~
microcolonel
Regarding values: you can construct the same datastructure in any place, and
compare it meaningfully with a datastructure from a completely different
source (and you can do so efficiently). This is accomplished, as far as I
know, by representing almost everything as persistent hash trees (with some
implementation voodoo and shortcuts).
Beyond that, you can actually just read the Clojure runtime code. It's a bit
messy but there's not really that much there.
~~~
louthy
Persistent hash _tries_ [1]
I have an efficient C# implementation here [2]
[1] [https://michael.steindorfer.name/publications/phd-thesis-
eff...](https://michael.steindorfer.name/publications/phd-thesis-efficient-
immutable-collections.pdf)
[2] [https://github.com/louthy/language-
ext/blob/master/LanguageE...](https://github.com/louthy/language-
ext/blob/master/LanguageExt.Core/DataTypes/TrieMap/TrieMap.cs)
~~~
microcolonel
I love seeing these datastructures show up in more languages. They completely
change the set of programs you could feasibly find time to write. Thanks for
sharing your C# one, I'll remember that if I ever need to use Unity again. The
claims about CHAMP are very impressive, in my experience, Clojure's
datastructures perform great for what they do, and they claim CHAMP tends to
be _many times faster_. :- )
> _Compressed Hash Array Map Trie_
Q: “What datastructure would you like?”
A: “Yes.”
------
keymone
Just learning about the immutable/functional approach makes you a better
developer even in imperative languages. “Share nothing” (in a mutable way) is
a beautifully simple solution to so many concurrency problems.
| {
"pile_set_name": "HackerNews"
} |
OpenBSD 5.6 - fcambus
http://www.openbsd.org/56.html?hn
======
brynet
Related to OpenBSD, and BSD in general. Peter N. M. Hansteen is auctioning off
the first signed copy of The Book of PF, 3rd edition. He will be supporting the
OpenBSD Foundation by donating the amount raised.
[http://bsdly.blogspot.ca/2014/10/the-book-of-pf-3rd-
edition-...](http://bsdly.blogspot.ca/2014/10/the-book-of-pf-3rd-edition-is-
here.html)
------
eksith
It seems this is the last version to have Nginx in the base install. 5.7 will
only ship their in-house httpd(8) as the default web server, although Nginx
will still be available in ports. Reyk explained the reasoning :
[http://undeadly.org/cgi?action=article&sid=20140827065755&pi...](http://undeadly.org/cgi?action=article&sid=20140827065755&pid=24)
And the httpd(8) manual shows there are some similarities in the configuration
to Nginx. [http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-
current/man8/...](http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-
current/man8/httpd.8) It seems to be a much more simplified and streamlined
setup.
~~~
gioele
> [nginx] uses custom memory allocators (for performance reasons) and it is
> wrapping or replacing standard C library functions all over the place. This
> could eliminate some of our built-in security mechanisms.
Isn't this what made most of OpenSSL bugs possible and made them go unnoticed
for a long time?
~~~
haberman
And yet techniques like custom memory allocators can have compelling
performance benefits.
To me the right tradeoff is: if you want to do fancy allocation, at least make
the allocator injectable so that people can use standard malloc (or even a
specialized security-hardened malloc) if they prefer. Robust low-level
libraries like Lua, zlib, etc. usually take this approach: you can pass a
pointer to a custom "malloc" function.
Unfortunately this isn't always possible: LuaJIT doesn't support the "custom
allocator" part of the Lua API for a couple of reasons:
[http://www.freelists.org/post/luajit/Why-does-LuaJIT-have-
it...](http://www.freelists.org/post/luajit/Why-does-LuaJIT-have-its-own-
allocator)
[https://gist.github.com/nddrylliog/8722197](https://gist.github.com/nddrylliog/8722197)
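As an aside, the same injectable-allocator idea shows up in Rust, where the
program-wide allocator is a pluggable trait implementation; the sketch below
just counts allocations and forwards to the system allocator (an analogy only,
not how Lua or nginx do it):
    use std::alloc::{GlobalAlloc, Layout, System};
    use std::sync::atomic::{AtomicUsize, Ordering};
    // A trivially instrumented allocator: count every allocation, then defer
    // to the default system allocator. A hardened allocator could be swapped
    // in the same way without touching the rest of the program.
    struct CountingAlloc;
    static ALLOCATIONS: AtomicUsize = AtomicUsize::new(0);
    unsafe impl GlobalAlloc for CountingAlloc {
        unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
            ALLOCATIONS.fetch_add(1, Ordering::Relaxed);
            System.alloc(layout)
        }
        unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
            System.dealloc(ptr, layout)
        }
    }
    #[global_allocator]
    static GLOBAL: CountingAlloc = CountingAlloc;
    fn main() {
        let v = vec![1u8, 2, 3];
        println!("{} allocations, v = {:?}", ALLOCATIONS.load(Ordering::Relaxed), v);
    }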
~~~
aortega
For critical system daemons, using a custom memory allocator that disables the
in-built system security is insane. Injectable allocators that fill the heap
with function pointers in predictable positions are what make exploiting them
easy.
~~~
masklinn
On the other hand, as far as I know most systems don't bundle security
features in their standard allocators. I'm not sure any non-bsd system does.
~~~
throwaway2048
Both OSX and especially Windows have extensive memory security features baked
into their system allocators (ASLR, SSP, NX bit support, Guard pages,
background page zeroing, etc) and even glibc has some basic security features.
It's kind of funny that Windows is way ahead of your average Linux distro on
this front, considering the typical attitude with regards to how secure one is
compared to the other.
Also it's not really the BSDs that have these types of security features, it's
pretty much exclusively OpenBSD. FreeBSD has code support for several of these
features but it is not enabled by default, and tends to break things badly
when it is enabled, although work is (finally) being done to fix this
situation.
~~~
Hello71
ASLR: enabled by default on virtually all Linux systems, can be improved with
the installation of PaX. disabled by default on Windows systems.
KASLR: Windows has it, Linux does too, but only since 3.14.
SSP: libc specific, but MSVS since 2003 has basic stack-smashing support
similar to gcc's -fstack-protector which has been enabled in Debian-based
distros for many years, and is now being improved over MSVC /GS with -fstack-
protector-strong.
NX bit support: Support added around 2004 or XP. Again, PaX is far stronger
than Windows here.
"Guard pages": This is SSP with a different name.
So at best, we can say that Windows added some security features slightly
earlier, but has lagged in updating them to new standards.
~~~
robryk
I've thought that guard pages referred to the practice of having a page or a
couple allocated before and after an accessible region and set to PROT_NONE
(any access causes a trap) to prevent any new allocation from making those
pages accessible. This way reads/writes at most a page past the beginning/end
of our allocation will always fault. I don't see how is this "SSP with a
different name" (the goals are similar but the method is very different). Did
you mean some other kind of guard pages or am I missing something?
------
lmedinas
And brings LibreSSL: - This release forks OpenSSL into LibreSSL, a version of
the TLS/crypto stack with goals of modernizing the codebase, improving
security, and applying best practice development processes.
~~~
w8rbt
Modernizing the codebase? I think 'simplifying' is the right word. If OpenBSD
is _anything_ it is simple and easy to understand, but it is not modern.
That's why it is secure.
~~~
tedunangst
Well, modernizing as in using modern functions like 'memmove', even though
they may not exist on SunOS 4.1.
------
sauere
Semi-related: is anyone here using OpenBSD in their daily DevOps setup? If so,
why did you choose it (say, over Linux or FreeBSD)?
~~~
eksith
OpenBSD is what I use to host a bunch of private sites for myself and a few
people I know. This is only due to some custom configurations and applications
that my shared webhost didn't provide and for things I can't be bothered with,
like my own domain which is static html. I put them on the shared host.
I wish it was for a lofty goal like security and "code correctness" etc... but
the honest answer is that it's extremely simple (once you get used to it) and
I tend to be extremely lazy at times. Configuration is very straightforward
for a lot of things and there have been very few surprises along the way.
Actually no surprises that I can recall in most of what I do since 5.0.
I wouldn't recommend it as a desktop system although plenty of people
(including my boss) use it as such. There is some fiddling required for this
that I'd rather not do, but for very simple, stable and surprise-free servers,
it works very well for me. I also wouldn't recommend it for first-time admins
either, although their man pages are some of the most thorough and helpful
I've ever read.
~~~
clarry
> I wouldn't recommend it as a desktop system although plenty of people
> (including my boss) use it as such. There is some fiddling required for this
I've been running OpenBSD on a laptop (which works as my desktop) for years
now, and I can say there's been very little fiddling. In fact it's proved to
be the best out-of-the-box experience I've had with any OS (including Windows
XP and a whole bunch of Linux distros).
> I also wouldn't recommend it for first-time admins either
I have to admit I wasn't administrating things for the first time when I did
it on OpenBSD.. but OpenBSD was so simple and straightforward that I
eventually lost the will to fiddle with other systems.
They really have gone out of the way to make sure the system is Dead Simple to
configure (the best configuration is no need for any configuration at all!),
and when you really need to change something, the documentation is
unparalleled.
Of course, different people have different needs so what works for me might
not work for everyone. I know that what seems to work for most people doesn't
really work for me...
~~~
Touche
> I've been running OpenBSD on a laptop (which works as my desktop) for years
> now, and I can say there's been very little fiddling. In fact it's proved to
> be the best out-of-the-box experience I've had with any OS (including
> Windows XP and a whole bunch of Linux distros).
I think with any BSD, trying to run it on modern hardware will be a
frustrating experience as it lags behind Linux in hardware support (which
itself lags behind Windows/OSX). Of course, BSDs are more coherent OSes and if
it were not for hardware support I would use it exclusively.
------
brynet
You can order the 5.6 CD set from the new OpenBSD Store, there's also older
sets and other swag.
[https://www.openbsdstore.com/cgi-
bin/live/ecommerce.pl?site=...](https://www.openbsdstore.com/cgi-
bin/live/ecommerce.pl?site=shop_openbsdeurope_dollar&state=department)
I want a Wireframe Puffy Coffee Mug.
------
brynet
OpenBSD 5.6 isn't quite released yet, it's still not on the master site. The
announcement will undoubtedly be going out soon though, and it's on a few
mirrors if you wish to jump the gun.
~~~
brynet
..and now here it is released! 5.6 is official.
[http://marc.info/?l=openbsd-
announce&m=141486254309079&w=2](http://marc.info/?l=openbsd-
announce&m=141486254309079&w=2)
------
e12e
Hm, ripping kerberos from libssl I can understand -- but from base? Does that
mean that openssh certificates are what people are using for federated
authentication? While kerberos _is_ complex and complected -- are there any
better solutions, if you need to administer a non-trivial number of users,
along with a good way to immediately revoke access as users leave the
organization?
~~~
Mordak
It was just moved from base to ports, so you can still get it from
ports/security/heimdal if you want it.
~~~
e12e
Sure. But authentication and authorization of users isn't like showing a
static (or dynamic) web page (ie: simpler httpd vs more full-featured nginx).
I'd say moving it out of base is a pretty strong signal to OpenBSD users.
(Now, granted, kerberos have and have had perhaps more than its share of
issues ... so the signal could just be: you probably shouldn't have been
depending on this to be secure enough to grant access to the server in the
first place).
I'm more curious if this means OpenBSD (base) doesn't have any form of
secure(ish) federated auth(z) story (other than ssh certs) any more.
~~~
mrweasel
You could use LDAP, that's in base. There's even a small OpenBSD LDAP daemon
actively maintained by the OpenBSD project.
~~~
erglkjahlkh
That is not Single-Sign-On, if you have to sign on several times. LDAP is
_NOT_ a replacement for Kerberos.
OAuth and alike might be, but when you work with internal users Kerberos is
much much better. Also for web services because GSSAPI/spnego stuff just
works.
By removing kerberos from the base setup, OpenBSD people have once more moved
towards irrelevance outside their own closet setups. Kerberos is the only
really working method you can use in corporate setups for large numbers of
users while achieving SSO. When it's not in the base setup, it's just easier
to install any decent Linux distro and skip OpenBSD. That market-placement
decision was yet another major blunder from the OpenBSD folks, and still they
wonder why their donations have dried up over the last few years...
~~~
mrweasel
Sorry, I didn't really get that single-sign-on and federated authentication is
the same thing.
Couldn't you just install Kerberos from ports/packages? Most Linux
distributions don't come with Kerberos in their base system. I honestly don't
believe that a winning argument for picking OpenBSD over Linux would be:
"Kerberos is in the base install".
I would agree that it's a bit odd that login_krb5 seems to have just gone
away, deleted from CVS, but not moved to the ports tree. It might be that so
few people actually used it that there's no one to maintain it.
~~~
e12e
> Sorry, I didn't really get that single-sign-on and federated authentication
> is the same thing.
I don't think it is. SSO is a good feature, and kerberos provides it -- but I
was more curious about the more general case. A solid ldap daemon with easy-
to-use ssl/tls covers most of the use-cases I was thinking of.
I don't really care much about "winning arguments" \-- I'm just curious about
the state of secure, working federated authentication. And the fact that it
needs to grip rather deeply into the system, therefore it would be nice to
have it in base. Linux generally has PAM in base -- for some that is
considered bad, for some it is considered convenient. I'm not really
interested in judging one way or the other.
> It might be that so few people actually used it that there's no one to
> maintain it.
Yeah, that's the feeling I got. And it would be worse to keep something that
isn't properly maintained. I guess I'd hoped some openbsd'er would hammer out
a robust token/ticket based scheme without many of the flaws of kerberos (ie:
hardened implementation, proper/modern cryptography primitives combined in a
proper modern way, no premature optimization). That'd probably be hopeless to
get to work with windows AD though, so maybe there's only a very small set of
people that would care.
I'd certainly like (from a technological standpoint, anyway) something like
that, that took lessons from Windows' kerberos/ldap/dns story, but made
something free and robust (possibly with a patch for GINA for windows) -- that
allowed stuff like secure encrypted network filesystems etc.
(Come to think of it, I think the fact that I'm sort of enamoured by the
_concept_ of NFSv4 with authentication and encryption delegated to kerberos
is one of the reasons why I was so surprised/disappointed. Why have ZFS and
NFS without v4 and auth/enc support? So beautiful on paper. I guess that
basically leaves sshfs (as OpenAFS also requires(?) kerberos).
I'd really like to see a working distributed single-sign on, single sign-off
system that support (optional) caching/offline use coupled with a filesystem
that is mutually authenticated (client to server, server to client) also with
caching and off-line use. But that is simpler than efforts that have gone
before...).
------
farawayea
Is it possible to install OpenBSD securely, using hash checks and signing?
~~~
brynet
OpenBSD 5.5 and 5.6 include signify(1) for both signing and verifying signed
files.
[http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-
current/man1/...](http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-
current/man1/signify.1?query=signify&sec=1)
~~~
farawayea
What is the secure install process? Are there signatures for packages and for
the iso file?
The system wants to be secure, but it's not teaching me how to install it
securely.
Buying the CDs is risky. How can I know they're not backdoored?
~~~
tedunangst
Read the install docs and search for "sign".
[http://ftp.openbsd.org/pub/OpenBSD/5.6/amd64/INSTALL.amd64](http://ftp.openbsd.org/pub/OpenBSD/5.6/amd64/INSTALL.amd64)
~~~
cesarb
Forgive me if I'm missing something, but I don't see how to "bootstrap" the
trust from an existing, non-OpenBSD system.
For instance, for Fedora, the download page I use
([https://fedoraproject.org/pt_BR/get-
fedora](https://fedoraproject.org/pt_BR/get-fedora)) has a link on the sidebar
to
[https://fedoraproject.org/pt_BR/verify](https://fedoraproject.org/pt_BR/verify),
which has a link to
[https://fedoraproject.org/pt_BR/keys](https://fedoraproject.org/pt_BR/keys),
which has the full fingerprint for the GPG keys. That page is authenticated
via TLS.
So, for me the trust chain for the Fedora installation DVD is:
\- The trust chain root is my current browser (a recent enough version of
Firefox);
\- The browser trusts the CAs in its certificate store (the built-in CA
certificates from Mozilla, plus the ICP-Brasil CA certificates);
\- One of these CAs verifies the certificates for the fedoraproject.org pages;
\- From these pages, I download a set of public GPG keys, and if I want I can
verify their fingerprints;
\- The torrent for the installation DVD has the DVD image and a checksum file.
I use GnuPG to verify the signature on the checksum file, and check the page
to confirm that it was signed with the correct key;
\- Finally, I verify the SHA256 of the DVD image and confirm that it matches
the value found in the checksum file.
I don't know how I would do it for OpenBSD. The www.openbsd.org page doesn't
seem to be available over TLS, so I can't use the CAs trusted by my browser to
bootstrap the trust chain. If I had OpenBSD 5.5 installed, I could use it as
the root of the trust chain (as explained at the link you posted), but
unfortunately I don't have it installed anywhere, so that trust path doesn't
work for me.
If I had an OpenBSD 5.6 ISO in hand right now, what could I do to authenticate
it? (Assume I have a recent Linux or BSD system to start with.)
~~~
cowabunga
The official way of doing this is to buy the CD set in which the code and keys
are sent via different channels. You buy the CD set and it is mailed to you.
You then verify that against the key on the web site.
If the verification fails, either the CD set or the key is compromised.
I really wouldn't trust a CA or shared PKI to do this to be honest as that
means you have to trust three or more parties rather than just two.
~~~
farawayea
This mailing of CDs seems silly. An attacker could alter the CDs and serve you
a matching forged signature on the site.
This is easy for me to do. It must be the same for others.
~~~
tedunangst
It's easy for you to intercept somebody's mail and internet connection? Who do
you work for?
~~~
farawayea
Networks are easy to attack if you have control over the ISP. Mail can be
easily replaced by one single person monitoring someone's mail.
A company where employees get their mail at work and only access the net from
work could do both easily.
I don't have resources for something like this, but doing this isn't as
difficult as it might seem. A big enough adversary with enough resources could
compromise everything used in security sensitive environments.
I wanted to know if anything changed in how OpenBSD can be installed securely.
It is easier to obtain other operating systems securely. They are less secure,
but the authenticity of the iso files can be verified via signatures.
This uncertainty has stopped me from using OpenBSD in the past. I have the
same questions now.
This is a question about obtaining an iso file to install OpenBSD knowing it's
what the developers sent out, just like checking a sha256 signature for other
operating systems when downloading. It's not a question about using it in a
government agency.
Thanks for the replies. You probably have more useful things to do than
discuss this.
~~~
clarry
If it is so easy to attack, then you already lost the game unless you've
pinned the fedoraproject certs. The CA model has been demonstrated broken long
ago.
So would you rather trust that model, or just obtain the OpenBSD key for
yourself via multiple different channels, from multiple sources? The key, by
the way, is all over the place. You start with the official site, but you can
cross-check against all the CVS mirrors, and you can check all mailing list
archives which contain the key in the release announcement.
I would dare say that is a heck of a lot better than simply trusting your CAs,
if you are indeed so easily attacked.
~~~
cesarb
Without TLS and having control of the network, it doesn't matter how many
channels over the network you use; it's simple to MITM everything and search-
and-replace all text matching the key with your forged key (in fact, many
networks already MITM all non-TLS HTTP traffic through a "transparent proxy").
With TLS, even with the imperfect CA model, it's much harder. It might have
been "demonstrated broken", but can _you_ get a certificate for
"fedoraproject.org"? It's not that easy. Add to that the Certificate Patrol
extension, which warns the user quite noisily when a certificate is signed by
a different CA (and shows the user the old and new CA).
With mailing the CDs, as suggested several posts upthread, it also gets
harder; now the attacker has to MITM _two_ things (the network and intercept
the physical disks). If you add TLS, it gets even harder (three things: MITM
the network, intercept the physical disks, and obtain a valid forged
certificate).
So, trusting the CAs is better than getting the key via multiple unencrypted
channels through the same network. Trusting the CAs _plus_ getting the key via
multiple channels is even better. The methods are not exclusive, and "multiple
channels" is already common in practice (in my Fedora example, the DVD image
is obtained via bittorrent, while the key is obtained via TLS, and they have
to match).
~~~
clarry
By multiple channels I mean not just channels over a single network. You can
access all these key sources from different networks.
| {
"pile_set_name": "HackerNews"
} |
SWF Machine: generating SWF binary from Erlang - mrinalwadhwa
http://weblog.mrinalwadhwa.com/2010/03/17/swfmachine/
======
pan69
swfmill.org in Erlang.
| {
"pile_set_name": "HackerNews"
} |
A first step towards freeing London’s data - robin_reala
http://data.london.gov.uk/
======
mjs
Hey, that's some quality XML they're pumping out there. From their population
data:
<ROWSET>
<ROW>
<Area_Code>00AA</Area_Code>
<Area_Name>City of London</Area_Name>
<Persons-1801>129000</Persons-1801>
<Persons-1811>121000</Persons-1811>
<Persons-1821>125000</Persons-1821>
<!-- ... -->
</ROW>
</ROWSET>
[http://data.london.gov.uk/datastore/package/historic-
census-...](http://data.london.gov.uk/datastore/package/historic-census-
population)
Embedding the year into the element name, useful that.
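To see why that shape is painful, here is a small Rust sketch that turns the
year-suffixed fields back into (area, year, population) rows. A real consumer
would use an XML parser; the fields are pre-extracted here only to keep the
sketch self-contained:
    fn main() {
        let area = "City of London";
        let fields = [
            ("Persons-1801", 129_000u32),
            ("Persons-1811", 121_000),
            ("Persons-1821", 125_000),
        ];
        for (tag, population) in fields {
            // "Persons-1801" -> ("Persons", 1801): every consumer has to split
            // the element name apart again to recover the year.
            let (measure, year) = tag.split_once('-').expect("year suffix");
            let year: u16 = year.parse().expect("numeric year");
            println!("{area}, {measure}, {year}, {population}");
        }
    }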
~~~
charlesmarshall
you could send them a tweet <http://twitter.com/londondatastore> and ask them
to fix it
I've sent them a couple of bugs / broken links and they've fixed & replied
within a few minutes.
edit: they've just tweeted about the xml structure so hopefully will sort it
out soon.
~~~
charlesmarshall
o, they also have a google group for things that need more than 140 chars -
<http://groups.google.com/group/londondatastore>
------
spuz
I'd love it if they released some real-time data. Imagine an iPhone app that
gave you the position of every tube train and bus in the city.
~~~
wallflower
It is good that mobiles do not work in the tube, as real-time position
information is a security concern.
~~~
simonw
I never really understood why that would be the case. What can people do with
that information that they couldn't do otherwise?
~~~
wallflower
I was implying more GPS-synchronized bomb triggers.
~~~
simonw
Sounds like more of a movie plot threat than anything worth worrying about. If
you really want to do that attaching your own GPS device to the bottom of a
bus (or just having an observer with a mobile phone) is easy enough as it is.
------
DrJokepu
I didn't expect that the current Mayor of London, Boris Johnson would do a
decent job when he got elected. Surprisingly enough, he actually does.
~~~
gaius
It's interesting to see the difference in terms of corporate culture. Ken
Livingstone saw himself as hugely important, he would fly his entourage
(always first class) to Latin America and sign "treaties" with foreign
governments when he should have been, I dunno, _running London_ like he was
elected to do. He was completely out of touch, like the CEO of a huge company
that's lost it's way. Like the Detroit automakers flying a private jet to DC
to ask for a bailout from the taxpayer.
Boris doesn't make a fuss, he flies in cattle class, he rides his bike around
the city (and around city hall!) and always seems to be in a good mood, and he
_gets stuff done_ at an incredible rate, precisely because he's not spending
all his time making sure everyone knows how important he is. He's the startup
mayor.
~~~
samstokes
Upvoted for interesting perspective. Do you have sources?
(That's not meant to be combative - I've also been pleasantly surprised by
Boris so far.)
~~~
gaius
Well, Ken's entourage of 85 people:
[http://www.thisislondon.co.uk/standard/article-23421491-kens...](http://www.thisislondon.co.uk/standard/article-23421491-kens-
grand-tour-of-india.do)
Boris flies economy to Beijing:
[http://www.thisislondon.co.uk/standard-
mayor/article-2354228...](http://www.thisislondon.co.uk/standard-
mayor/article-23542289-comment-upgrade-politics.do)
There are plenty like this. Ken's extravagance and the favours he dished out
to his cronies were legendary.
------
crad
Too bad they're not so open at the Royal Mail. Regardless of the wikileaks.org
publishing of the data, their charging money for a canonical table of postal
code data is shameful at best.
------
charlesmarshall
as the pagination on the a-z seems a bit broken, they do have a full listing
page of what's planned for launch - <http://data.london.gov.uk/datastore/data-
packages-launch>
edit: sorry, that page is a list of what's planned for launch, not what's
there now
------
zeynel1
Does anyone know if this type of data is released by New York City?
| {
"pile_set_name": "HackerNews"
} |
Ask HN: What technical skill should I learn to prepare for the next 10 Years? - alexjray
======
chatmasta
It's a safe bet that if something was relevant twenty years ago, and still is
today, then it will also be relevant in ten years from now. Examples: OS
fundamentals, networking fundamentals, low level languages, embedded
development, shell/bash scripting, vim, emacs.
It's nearly impossible to look at a new technology and determine if it will be
around in 10 years. But you know for sure that these timeless fundamentals
will still be relevant, so the first step should be mastering all of those.
Example: Unix system administration fundamentals are not going anywhere and
are more important than ever in the age of containers and developers owning
more parts of the stack. It's funny to read blogposts like "check out this
problem we ran into with docker" that is really just a rediscovery of a long-
known problem in system administration. Example: the recent post from codeship
about running thousands of containers on one network. Surprise, they ran into
issues with an overflowing arp cache.
------
baccredited
Learn about the magic of compounding interest and investing in index funds.
Oh yeah - if you save 68% of your earnings, you can retire in under 10 years.
The Shockingly Simple Math Behind Early Retirement
[http://www.mrmoneymustache.com/2012/01/13/the-shockingly-
sim...](http://www.mrmoneymustache.com/2012/01/13/the-shockingly-simple-math-
behind-early-retirement/)
------
dkarapetyan
Study the fundamentals. Learn some timeless science like physics or math.
------
david90
Learn about fundamentals and underlying principles; equip yourself with fast
learning skills.
You may also push your own technology forward and contribute to changing the next
10 years.
------
dbrunton
Learn to be resilient.
Pick up an artistic or handcraft technique.
Make friends.
Comedy, music, drama, or something other performing art.
Know your means, live within them.
~~~
nylonstrung
None of those are technical skills
~~~
dbrunton
"Making friends" is more technical than "learning math." But more importantly,
having friends helps more with technical problems than, say, knowing some
technology that doesn't apply here. Same for resiliency.
This was an off-the-cuff response, but it's a genuine one, particularly with
skills "for the next ten years." I've hired a lot of developers over two
decades, and some of them have done good work for me for a long, long time.
------
andrei_says_
I'd say survival-related technical and medical skills. Climate change will
likely cause mass migrations in the next 10-20 years.
First aid and basic understanding of common emergency medicine needs.
Building of shelters, basic carpentry etc.
Gardening, food preparation, water purification.
Community building.
------
DannyB2
Practice decomposing problems in a way they can be solved on multiple
processing elements in parallel. Identify problems that cannot be decomposed
in this way and why. Good if you can make a certain problem run on eight
processors. But will it also run on a thousand processors?
Processor clock speeds will not rise much or any. Everyone has enough memory
for most everyday problems now, so memory will only gradually increase. The
next bragging rights will be how many processing elements my box has vs. your
box.
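A toy illustration in Rust of that kind of decomposition (chunk the input,
hand each chunk to a worker, combine the partial results); whether the same
split still makes sense at a thousand workers is exactly the question above:
    use std::thread;
    // Assumes workers > 0 and non-empty data; a sketch, not a benchmark.
    fn parallel_sum(data: &[u64], workers: usize) -> u64 {
        let chunk_size = (data.len() + workers - 1) / workers;
        thread::scope(|s| {
            let handles: Vec<_> = data
                .chunks(chunk_size)
                .map(|chunk| s.spawn(move || chunk.iter().sum::<u64>()))
                .collect();
            handles.into_iter().map(|h| h.join().unwrap()).sum()
        })
    }
    fn main() {
        let data: Vec<u64> = (1..=1_000).collect();
        assert_eq!(parallel_sum(&data, 8), 500_500);
    }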
------
D1tt0
The amount of cores in CPU's is on the rise; get into functional programming.
------
Bumerang
Not sure if exactly technical, more like a meta-skill.
Learn how to analyse and decompose problems. There always will be some.
------
iLemming
If you're talking about programming - learn Lisp. Pick any. Clojure, Racket,
LFE, Chicken, Guile or emacs-lisp, etc.. Understanding Lisp will make you a
better programmer. I'm sure, even 50 years from now there will be a Lisp
dialect among 20 most popular languages in use.
~~~
spcelzrd
20 is pretty far down the list for programming language popularity.
[https://www.tiobe.com/tiobe-index/](https://www.tiobe.com/tiobe-index/) lists
Scratch in the 20th position. Cobol is 25. I'm not sure there's a Lisp dialect
in the top 20 now.
Of course, any ranking of programming languages is problematic. Learning Lisp
is always a good idea.
~~~
juliangoldsmith
I'm not sure I'd agree with the TIOBE Index as a measure of popularity. It
ranks languages based on search queries, which more than likely does not
correlate that closely with use.
For instance, according to the Index, Java (#1) is twice as popular as C (#2).
While Java is certainly popular, it seems a stretch, given the amount of code
written in C, to say that Java is twice as popular as C.
~~~
spcelzrd
You might like githut's rankings better.
[http://githut.info](http://githut.info)
For the purpose of what to learn for the next ten years, Java is probably more
relevant to the job market than C.
------
Blackstone4
__* Tangent warning __* I used to be very focused on what I could do.... learn
how to code, how to write a report, stats, CFA, how to present etc.....
I've come to realise that it doesn't matter as much as I thought. You only
need to be good enough and what is really important is your emotional
intelligence and network. Being able to process and manage your own emotions
and interact with others in a positive, constructive manner is the most
important thing. Reading the book How to win friends and influence people
opened my eyes
~~~
Blackstone4
I should finish by saying that if you want to future proof yourself.... you
should focus on what I wrote above...
------
fegu
Functional programming, especially a language focusing on purity such as
Haskell.
------
true_tuna
Tensorflow. We're going to be offloading a lot of pattern matching to machine
learning. Knowing when and where (and of course how) to apply machine learning
will become increasingly important.
------
tboyd47
There is no technical skill that will prepare you for 10 years.
------
londondev45
C#, JavaScript, Python.
Honestly, I can't see them going anywhere, especially Python.
Might as well learn Clojure, you have ten years..
C seems to not be going anywhere ever. Will there be that many new
technologies??
------
rusht
Machine learning, especially deep learning.
------
Dowwie
decision making through empirical research
------
id122015
Political science. It's like a technical skill. It changes faster than technology.
~~~
nylonstrung
As someone with a degree in political science, it absolutely does not change
faster than technology.
What massive political science paradigm shifts have happened in the last 7
years, the same timeframe that has seen the advent of cloud computing,
containerization, hyperconvergence, agile, etc.?
------
behnamoh
The world is going WWW, so I'd recommend sticking to it.
------
AlexAMEEE
Algorithms and SQL.
~~~
wirddin
Why specifically SQL?
~~~
popey456963
Even if you don't use the language specifically, the ideas it provides on
creating queries apply to pretty much every database I've used, with the one
exception being Redis.
It also doesn't hurt that it's the most popular language at the moment and,
judging by the number of applications using it, it won't be disappearing anytime soon
(especially not in just ten years).
------
nicomfe
VR and self driving cars should do ;)
~~~
matthewleehess
I built a self-driving car, and am launching a VR-based startup. Figured I
should chime in here.
I personally don't see much opportunity for people to get involved directly
with autonomous vehicles, from a tech/development standpoint. The vast
majority of work is focused on highly specialized subsets of development.
Computer vision, embedded systems, network infrastructure, cellular networks,
robotics, artificial intelligence, yada yada yada. The only meaningful work
being done right now, is mostly by engineers with Masters, Doctorate, or post-
Doctorate level education in niche fields.
Personally, I don't have that kind of educational background. I still managed
to piece together everything to make a working prototype, but there is not a
snowball's chance in hell that I would be able to contribute anything into
this field.
Not to say it's useless, though. This is about to unlock a need for UI
designers/developers on a level that is hard to fathom. Sit down and think for
a while about what the hell people are going to be doing, while being
chauffeured around. Entertainment options (Netflix, Youtube, News, etc.) will
be in massive demand. Gaming of all different kinds (especially multiplayer
experiences, that involve the vehicle's environment.) Advertising as a whole.
(And goddamn do I wish I could be an investor in PornHub right now.)
While Oculus, HTC, Microsoft, etc. all have (or will soon have) consumer-ready
products available for the VR market, I still feel like this technology is
barely entering it's infancy stage. The financial barrier to entry is holding
back 98% of the world from getting into it (for now). There isn't enough
meaningful content beyond some decent games. (Once again, goddamn do I wish I
could be an investor in PornHub right now.) For the most part, still tethered
to a computer (or using a watered-down phone-based experience). It's cool tech
for sure, but it doesn't feel (to me, at least) like anyone has really figured
out what the hell to do with it yet.
3D development isn't terribly different than any other kind of development. A
few more thoughts and considerations, but still the same principles of
traditional console/pc game development. WebVR (and ReactVR) are still just a
novelty. Because of the sheer scale and intricacy of most 3D environments, and
my own predictions of an explosive growth pattern in this industry, I'm
thinking that some form of automation (A.I., etc.) will have to be taking over
most of the grunt work for development. Thinking that most dev roles are going
to evolve into mostly architect roles, and that the real need is going to be
for UI/UX (particularly thought leaders, as opposed to designers).
~~~
throwaway29292
Thanks for this insightful comment. The current VR status irritates me as
well. It has only affected gaming till now, despite the remote working/AR
implications. What do you mean by an 'explosive growth pattern'?
------
edimaudo
Learning how to learn is the key.
------
v01d4lph4
Javascript?
------
ParameterOne
Sales.
------
AnimalMuppet
Android.
------
yulaow
Design Patterns.
------
kwoff
digital electronics, hardware programming
------
Const-me
OOP, OOD
~~~
cholantesh
I found it amusing that the comment directly below this was recommending
functional programming.
Tell HN: The Grocery Store is Watching You (and it's brilliant) - lionhearted
I just saw this quote in an article here and wanted to highlight it. If I had a blog, I'd have written it up there but I thought it was too interesting not to comment.
> It's one thing if I trade my personal sales habits to a grocery store chain in exchange for a percentage off the final sale. That's a choice I'm making, consciously and knowingly. (By this point, if you haven't figured that out, you're just deliberately hiding from the fact.)
I just did some thinking on the discount cards that are common at grocery stores, at least in America and England (not sure about elsewhere). It doesn't just track your personal shopping history, it also tracks when you buy things on sale - so they can see who exactly changes goods to a more premium version at what discounts, etc. Really brilliant stuff - when you offer "premium cookies" at $2 off, do people add them that wouldn't otherwise? Do they drop the cheap, normal cookies in favor of premium? Do premium buyers stock up, then buy less next time?
With some good analytics/datamining/statistically minded people, a grocery could make some intelligent guesses on how gross and net profit would change by offering sales. Maybe they'd even see that offering a certain discount on only one day of the week would do well! Wow...
Quote really got me thinking, wanted to share my thoughts with the rest of us here. I know the Tell HN isn't standard operating procedure, but I was totally intrigued and felt compelled to share this.
======
ABrandt
I think posting that here is perfect--I actually like these more "forum" type
posts. I wonder what size grocery stores use these analytical techniques
though. What you describe here is, in my opinion, a pretty sophisticated
analysis, so I can only really see the large chains being capable of pulling
this off.
Regardless, I think this sort of practice is one that easily transfers from
traditional retail to the web sphere. I've seen some pretty impressive
analytic apps released recently, and that's exactly the kind of information
you'd need. Maybe companies such as Clixpy could take their product to the
next level with this type of statistics?
------
rwolf
Forget intelligent guesses--what about the profit to be made from handing some
control over to machines. Put a machine in charge of making tiny price changes
(and reacting correctly to the resulting changes in demand) to blindly suck
every last util of consumer surplus. We're trading some savings now for a
permanent information advantage.
edit: I'm sounding a bit like Art Bell here. to bed!
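A toy C sketch of that price-nudging machine (the demand() curve and every number below are invented stand-ins for real sales data): each period it perturbs the price a little and keeps whatever direction raised revenue, shrinking the step when it overshoots.

    /* build with: cc price_nudge.c -lm */
    #include <stdio.h>
    #include <math.h>

    /* stand-in for the real signal: observed units sold at a given price */
    static double demand(double price) {
        return 1000.0 * exp(-price / 4.0);
    }

    int main(void) {
        double price = 2.00, step = 0.05;
        double revenue = price * demand(price);

        for (int week = 1; week <= 30; week++) {
            double trial = price + step;
            double trial_revenue = trial * demand(trial);
            if (trial_revenue > revenue) {
                price = trial;            /* nudge worked: keep going this way */
                revenue = trial_revenue;
            } else {
                step = -step / 2.0;       /* overshot: reverse with a smaller nudge */
            }
            printf("week %2d: price %.2f, revenue %.2f\n", week, price, revenue);
        }
        return 0;
    }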
------
skwiddor
My evil grocery store (Tesco) already sells packets of 5 and 3 so you have to
buy too many or not enough.
They offer price comparisons (a legal requirement) in 100g or 1kg (to make the
expensive stuff look cheaper).
I don't think for 1 second they would use it to the direct benefit of a
customer.
So, er, fuck them and btw., you :>
I wrote an LLVM-powered trace-based JIT for Brainfuck - Halienja
http://github.com/resistor/BrainFTracing
======
resistor
Hey folks, I'm the actual author of this.
I actually work on LLVM-proper during my day job. This was just a fun exercise
to demonstrate that it was possible. I also have plans to write a tutorial
based on it.
~~~
resistor
Also an example of how to implement a direct-threaded interpreter.
Some performance data from a Brainfuck mandelbrot benchmark.
Interpreter: 37.787s
Tracing JIT: 11.716s
Static Compiler: 2.402s
The tracing JIT loses out to the static compiler largely because there's no
dynamic dispatch in Brainfuck for the tracer to optimize out. There's probably
some performance to be recovered by tuning the tracer thresholds and minor
optimizations, but I would be shocked if it ever beat the static compiler at
least for Brainfuck.
~~~
samps
Thanks for writing this -- it's awesome to see JIT principles boiled down to
the point where you can easily understand the whole system. Please let us know
if you publish the tutorial; I'd love to see more detail on the JIT. In
particular, it would be the perfect template to demonstrate feedback-directed
optimization opportunities and to measure the overhead of tracing; it would be
incredibly interesting to see what has to be done to make the JIT outperform
the AOT compiler.
~~~
resistor
The tracing overheads are pretty huge. Running with tracing but without
compilation takes 107s.
~~~
mikemike
The performance problems originate in the design of your trace compiler, not
in static vs. dynamic dispatch. Some suggestions:
* The interpreter should have a fast profiling mode (hashed counting of loop backedges) and a slower recording mode (for every instruction call the recorder first, then execute the instruction). Either implement it twice (it's small enough), use a modifiable dispatch table and intercept the dispatch in recording mode (indirect threading), or compute branch offsets relative to a base (change the base to switch modes).
* Don't record traces for a long time and then compile everything together. Do it incrementally:
\- Detect a hot loop, switch into recording mode, record a trace, compile it,
attach it to the bytecode, switch to profiling mode (which may call your
compiled trace right away).
\- Make the side exits branch to external stubs which do more profiling (one
counter per exit). Start recording hot traces and continue until it hits an
existing trace or abort if it hits an uncompiled loop.
\- If you completely control the machine code generation (i.e. not with LLVM),
you can attach the side traces to their branch points by patching the machine
code. Otherwise you may need to use indirections or recompile clusters of the
graph after a certain threshold is reached.
\- Region selection has a major impact on performance, so be prepared to
carefully tune the heuristics.
* Sink all stores, especially updates of the virtual PC, data pointers etc. Don't count on the optimizer to do this for you.
* Due to the nature of the source language you may need to preprocess the IR or you need to teach the optimizer some additional tricks.
\- E.g. the loop [-] should really be turned into 'data[0] = 0'.
\- Or the loop [->>>+<<<] should be turned into 'data[3] += data[0]; data[0] =
0'.
\- It's unlikely any optimizer handles all of these cases, since no sane
programmer would write such code ... oh, wait. :-)
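A minimal C sketch of the simplest of those rewrites, folding the clear-loop idiom into one synthetic "set cell to zero" opcode before the code ever reaches the interpreter or compiler (the '0' marker is just an arbitrary choice for this sketch, not anything from the actual project):

    #include <stdio.h>

    /* Fold the clear-loop idiom "[-]" (or "[+]") into a single synthetic
       opcode, written here as '0', meaning "data[ptr] = 0".  A real pass
       would also recognize copy/add loops like [->>>+<<<], but the shape
       of the transformation is the same. */
    static void fold_clear_loops(const char *src, char *dst) {
        size_t o = 0;
        for (size_t i = 0; src[i]; ) {
            if (src[i] == '[' && (src[i+1] == '-' || src[i+1] == '+') && src[i+2] == ']') {
                dst[o++] = '0';
                i += 3;
            } else {
                dst[o++] = src[i++];
            }
        }
        dst[o] = '\0';
    }

    int main(void) {
        char out[64];
        fold_clear_loops("+++[-]>++++[-]<", out);
        puts(out);            /* prints "+++0>++++0<" */
        return 0;
    }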
~~~
resistor
> * The interpreter should have a fast profiling mode (hashed counting of loop
> backedges) and a slower recording mode (for every instruction call the
> recorder first, then execute the instruction).
It already does this. The recording method is specialized for '[' (since loop
headers can only be '['). All other opcodes go through a fast path that simply
checks if we're in recording mode and stores to the trace buffer.
> * Don't record traces for a long time and then compile everything together.
The tricky part with this is knowing how to start up the profiler when we hit
a side-exit. PC 123 may occur at multiple places in the trace tree. If we want
to extend the tree on side-exit, we need to be able to recreate the path
through the trace tree that led to that point. In essence, we need the
compiled trace to continue updating the trace buffer. Certainly possible, but
doesn't seem like a great idea offhand.
> * Sink all stores, especially updates of the virtual PC, data pointers etc.
> Don't count on the optimizer to do this for you.
Because I'm using tail-call based direct threading, there are no stores to the
virtual PC or the data pointer. They're passed in registers to the tail-
callee.
> * Due to the nature of the source language you may need to preprocess the IR
> or you need to teach the optimizer some additional tricks.
Yes, there's a whole range of pre-processing tricks that could be used to
accelerate both the interpreter and the traces. I haven't even scratched the
surface of that.
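For readers unfamiliar with the technique, here is a stripped-down C sketch of tail-call based direct threading (loop-free opcodes only; handler names and the demo program are invented for illustration, and this is the general shape of the idea rather than the actual BrainFTracing code): each handler ends by calling the next opcode's handler, and at -O2 gcc/clang typically turn that trailing call into a jump, so pc and the data pointer just ride along in registers.

    #include <stdio.h>

    static unsigned char tape[256];

    typedef void handler(const char *pc, unsigned char *dp);
    static handler *dispatch[256];

    #define NEXT(pc, dp) dispatch[(unsigned char)*(pc)]((pc) + 1, (dp))

    static void op_halt(const char *pc, unsigned char *dp) { (void)pc; (void)dp; }
    static void op_inc (const char *pc, unsigned char *dp) { ++*dp;        NEXT(pc, dp); }
    static void op_dec (const char *pc, unsigned char *dp) { --*dp;        NEXT(pc, dp); }
    static void op_fwd (const char *pc, unsigned char *dp) {               NEXT(pc, dp + 1); }
    static void op_back(const char *pc, unsigned char *dp) {               NEXT(pc, dp - 1); }
    static void op_out (const char *pc, unsigned char *dp) { putchar(*dp); NEXT(pc, dp); }

    int main(void) {
        for (int i = 0; i < 256; i++) dispatch[i] = op_halt;  /* '\0' ends the chain */
        dispatch['+'] = op_inc;  dispatch['-'] = op_dec;
        dispatch['>'] = op_fwd;  dispatch['<'] = op_back;
        dispatch['.'] = op_out;

        char prog[80];
        int n = 0;
        while (n < 65) prog[n++] = '+';   /* 65 == 'A' */
        prog[n++] = '.';
        prog[n] = '\0';

        dispatch[(unsigned char)prog[0]](prog + 1, tape);  /* kick off the chain */
        putchar('\n');
        return 0;
    }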
------
danieldk
Nice work!
Let me make a tiny plug for a short Sunday project as well... Brainf*ck in
Prolog:
<http://github.com/danieldk/brainfuck-pl/>
One nice thing is that unit testing is really simple:
[http://github.com/danieldk/brainfuck-
pl/blob/master/unittest...](http://github.com/danieldk/brainfuck-
pl/blob/master/unittests.pl)
And for some very trivial outputs, it can generate the program to create that
output.
?- brainfuck:interpret([A,B],[],[0],[0],[1,0]).
A = <,
B = + ?
Ps. Yes, it's easy to improve generation...
------
mathgladiator
Is anyone else oddly inspired to make an OCaml to Brainf*ck translator just
to build a staggeringly awesome Rube Goldberg machine?
~~~
koenigdavidmj
I can not find it, but I have seen a C to brainfuck compiler. Don't ask.
~~~
RodgerTheGreat
Here's the best reference page for the project:
<http://esolangs.org/wiki/C2BF>
------
VMG
Nice work - can you give us some data on how it is?
~~~
VMG
(how _fast_ it is of course)
------
udzinari
I wish I had free time too! Brainfuck is boring though... why not some stack-
based language with Lisp-like syntax or something like that?
------
davidw
I don't know... "neat hack", but it seems there is so much out there that
could actually have some kind of practical application that it's a bit of a
waste to work on "silly" projects. I love to hack on things that don't have
any immediately evident business model or real world application, but I think
purposefully working on something that never will is perhaps a bit
unfortunate. Yeah, he learned something for sure, but that's pretty much all
it can be.
To expand on that: if he'd written his own toy language, say, odds are it
would never go anywhere, but, who knows... maybe it will find a niche. Using
"brainfuck" pretty much guarantees that the code will never find a practical
use.
~~~
vox
I'm assuming you've been downvoted because slightly more than 50% of HNers
think of this project as an artistic/fun project.
But the fact is, even a purely artistic/fun project will have some creativity
or originality in it. I would consider a toy language or Brainf*ck written for
the first time as artistic.
But this project is just a JIT for Brainf*ck, there's no creativity in it, and
all it did was give the author some experience writing JITs. In that sense
this is an exercise project, and IMO exercise projects do not belong to HN.
~~~
StavrosK
Does HN censor the "fuck" in "Brainfuck", or was it just you?
EDIT: Ah, it doesn't.
~~~
steveklabnik
Generally, as long as it's part of something constructive and adds rather than
detracts from the message, the community won't downvote profanity.
<http://news.ycombinator.com/item?id=1636262>
American truck drivers could lose their jobs to robots - dankohn1
http://www.vox.com/2016/8/3/12342764/autonomous-trucks-employment?utm_campaign=drvox&utm_content=chorus&utm_medium=social&utm_source=twitter
======
tuna-piano
The other question you need to ask in order to understand this future
scenario: shipping costs will significantly decrease, so what will consumers and
shareholders spend the extra money on?
Who knows what they will spend the extra money on, could be healthcare, boats,
TVs, whatever. But new jobs will be created in these expanded industries.
The constant "x technology will drive x people out of work, therefore we need
Universal Basic Income, so they won't starve!" really, really bothers me.
Imagine telling a farmer in 1850s America, when 64% of America farmed, that in
2016 only 2% of people would be farming! Imagine the dystopian horror (1)! If
we had established UBI then, and people could get paid to sit around, imagine
the state we'd be in today.
Instead of UBI, people were forced to leave farming and went into other
endeavors, leading to the enormous improvement in production, income and
standard of living since then. If 64% of people still farmed, or 2% farmed and
62% were on UBI - who would have had the time or incentive to create
computers, software, advances in healthcare, etc?
I'm not claiming the transition is easy for someone who's laid off - it can be
an extremely tough process, but it's absolutely necessary for the improvement
of humanity.
(1) [http://www.nytimes.com/1988/07/20/us/farm-population-
lowest-...](http://www.nytimes.com/1988/07/20/us/farm-population-lowest-
since-1850-s.html)
~~~
themagician
Don't kid yourself. New jobs get created, but fewer. And many of those who
lose their jobs don't get retrained—for a myriad of reasons—and end up leaving
the workforce entirely. Many just end up on long term disability because it's
the only path they have.
Jobs are disappearing. They aren't "coming back" in greater numbers. And the
new jobs, by and large, aren't for the previous workforce.
We already live in a world where many jobs are unnecessary. A large portion of
government jobs are, essentially, welfare jobs. One of the systematic reasons
of for the expansion of government is "job creation". Politicians will create
jobs, but those jobs add nothing to society. We've got people pretending to
work a job to collect a paycheck. It's welfare.
The industrial revolution replaced muscle with machine, but brainpower was
still a required input. There was a clear shift: from doing the work to using
a machine to do the same work, but faster. There was enough demand that things
didn't collapse.
The current shift is replacing both muscle and brainpower. Outside of creative
jobs, what is left for the human to do? Make the brain smarter? We are already
entering into a time where the machine makes itself smarter without the advent
of the human.
New jobs will be created, sure. But those jobs will not be for the 1.8 million
truck drivers. Instead the government will likely end up soaking it up through
one program or another. Tens of thousands will end up in welfare jobs.
Hundreds of thousands will end up on long term disability. The numbers aren't
small.
UBI is an inevitability. You've got some 40 million people on food stamps,
about 9 million on disability. Millions more working pointless government jobs
like directing people from one TSA employee to another. At what point to we
recognize that the future does not look like the past?
~~~
atemerev
UBI is inevitably doomed, because the equation doesn't hold.
You are right that ongoing elimination of jobs is a problem leading to huge
social unrest, and UBI fits the bill to be a good solution for this.
The only problem with UBI is that it is unsustainable and therefore
impossible. The balance just doesn't check out. The mere retirement schemes
are in grave danger of collapse; the money there is long spent. UBI is huge,
and there is no way to get this kind of money from anywhere (even if you strip
away all capital from the top 1% and send them to labor camps, as it was done
elsewhere, the money from this will fuel $1000/month UBI for about 2 years
tops).
Antigravity would be an excellent solution to our space travel challenges, but
we don't know any way to make it work. It is exactly the same with UBI.
~~~
themagician
People will have to adapt to a new way of life. We are going to move into a
world where full employment means 50% of people don't work, because there is
nothing for them to do. Like the computer, the cost of everything will begin
to decline rapidly when general purpose robots and automation take over human
tasks. The UBI equation may well balance. When there are no human costs the
only cost is energy. In theory, you will need far less income.
Honestly, I'm excited to see what happens as it will happen in my lifetime. 30
years ago we had the first general purpose computers. Look at where we are
now. Today we have the first general purpose robots. Imagine where we will be
in 30 years. It's hard to imagine.
Drivers, construction workers, doctors, lawyers—it's all going to be
dramatically different 30 years from now. Forget the US. All those jobs making
everything from iPhones to t-shirts will also be at risk. When you don't need
specialized equipment and you can buy a robot that can make t-shirts for a few
thousand dollars you don't need the human anymore.
If not UBI, then what? What do all these people do?
------
ufmace
I'm skeptical about the timelines of these reports. Certainly we'll never see
all 1.8 million drivers lose their jobs overnight. These guys are predicting
auto-truck apocalypse in 5-10 years, and we still don't have a single
commercial system on the road that's capable of even lightening the load on a
driver, much less replacing him. I think truck automation will go in 2 ways at
the same time, and we can watch the progress of each:
Systems to ease the strain on independent drivers. Ones that can cruise on the
highway without supervision indefinitely, but need help with city streets,
parking, maintenance, loading and unloading, keeping manifests, dealing with
whatever company is loading and unloading the cargo, etc. They may need
somewhat fewer of this class of driver, since the trucks will be able to run
more continuously and there will be less need for second drivers and probably
fewer trucks. Motels and truck stops will hurt some when the truckers can
sleep while the truck drives instead of stopping. I think we're at least a
decade away from this existing at all, much less being common.
Full automation for tightly integrated logistics chains. Maybe the Wal-Marts,
Amazons, Fedexes, and other huge companies that own the entire logistics chain
will be able to figure out how to use fully automated trucks, that can drive
from one company facility to another, complete with parking and maneuvering,
driving local streets, and letting other company systems handle the logistics
of loading and unloading and keeping track of what items are where. I bet at
least one of them will start experimenting with something like this in the
next 5-10 years, but probably at least 20 years before it works well enough
for them to cut down on the number of drivers they employ.
There will be job losses, but it will be slow and gradual. There should
hopefully be plenty of time for the economy to adapt, and hopefully either
create new jobs for all of these people to do, or move towards something like
UBI. I think we'll have to have a massive cultural shift before anything like
UBI would be considered or even possibly make sense.
~~~
galdosdi
I dunno. Even if "all" the first generation can do is freeway driving (and
let's say, not even urban freeway, just "easy" rural portions of freeway) that
seems like it still would make the vast majority of long-haul trucking jobs
vanish -- just by definition of "long-haul."
If you ship something from New York to Chicago by truck and it takes 14 hours,
maybe the first and last hour or so are getting in and out of the
source/destination cities. That leaves 12 out of 14 hours where the truck can
drive unattended, eliminating the need for about 12/14 of the truckers (or
their work hours) that drive that route.
It depends on what proportion of truck drivers today mostly do long-haul as
opposed to short haul. I have no idea.
~~~
ufmace
I'm not deep into the freight industry, but I have a few acquaintances who
are, and reportedly a lot of the actual shipments that take place are between
unrelated companies who only moderately trust each other. The freight company
is contracted by one or the other for the route, and usually the truck itself
is owned independently by the driver, with the freight company providing the
routes and organization of loads.
The truck owner isn't going to trust some random other driver halfway across
the country to drive his truck, even if he trusts the auto-driver to navigate
the freeway safely. The freight company isn't going to just trust the shipper
and receiver to load and unload the right stuff properly and not disturb
anything else on the truck. The shipper and receiver aren't going to just
trust each other and the freight company to ship the right stuff to the right
place - they all want somebody in the truck there who knows what's going on.
Basically, the drivers don't just drive, they're the glue that holds together
a whole complex system. Even if we had a perfect auto-drive truck today, it
would probably take decades to figure out systems that work well enough that
shipments work right without somebody who knows what's going on in the truck.
Like I said, I think the best we can hope for short-term is to take some of
the load of actually driving off the drivers, leading to longer trips and
fewer stops.
That's also why I said that the only companies in a position to really use
auto-trucks are those who already have integrated logistics chains, where
drivers who work for the company drive company-owned trucks between company
warehouses to move goods that the company either already owns or has taken
responsibility for tracking.
------
beyondcompute
Yeah, I've been thinking lately as well, why are we doing that? I mean as a
society? (I recall a phrase said by someone, that we are consciously building
a future nobody wants to live in.) What exactly are we getting from that?
Ultra-rich, namely car companies owners and shareholders, will become even
more rich. (Why are they doing it, by the way, don't they have enough super-
cars, mansions, yachts already?)
What am I getting from it? Basic goods will become 7% cheaper? Who needs that?
I am happy with current prices.
And then dozens if not hundreds of millions people worldwide will lose their
jobs and even more the very means for their existence. What will be the impact
on their families, communities?
I may be terribly wrong but it seems like yet another round of value
extraction by a small cohort of ultra-rich from general society.
There are technologies that are genuinely useful, like space exploration,
scientific projects, disease fighting, urban development, planetary computer
network, and so on.
And there are "comfort gimmicks" like refined sugar (and sugary drinks),
tobacco, toasters, etc. that produce effects from which people living
consciously and healthy would want to get rid of. And that are propelled only
by "economic factors". Which are a paperclip maximizer.
~~~
ardit33
I am pretty sure Luddites had very similar arguments back in the 19th century.
[https://en.wikipedia.org/wiki/Luddite](https://en.wikipedia.org/wiki/Luddite)
If you look back at it, they were on the wrong side of history/argument.
Automation is going to keep going, no matter what, and it is mostly a good
thing. Yes, we might have few years of uneasy adjustment times, but things
will sort themselves out on the long run.
Think of it, one modern excavator is much better than 50 people digging
ditches. One operator with one machine replaced 50 manual workers, yet the
world didn't end, but I think it got better over time*
*Whoever defends manual labour over machines hasn't lived through deep poverty under communism. Most of eastern Europe didn't have the capital or means to put productive machines in the workplaces, so they replaced them with human labour. Over time the stark differences between west and east became very apparent.
~~~
mavhc
Viewed from another perspective the Luddites were self employed, worked from
home, and set their own hours. They didn't want to work 8 to 8 in a dangerous
factory for a rich guy.
~~~
DigitalJack
Also, they were lucky to live past 40.
------
jondubois
The only people who really benefit from innovation are entrepreneurs,
investors and shareholders. The majority of the population are actually worse
off because of innovation (at least this is the case right now - The value is
just not trickling down).
The worst part of this will come when even highly educated people start losing
their jobs to machines... We will have a situation where entrepreneurs,
investors and company shareholders will earn massive incomes while many of the
world's smartest people (who fell through the cracks of the system) will
struggle to make ends meet - I think many engineers already feel that this
starting to happen now.
Money used to go mostly to employees, but as employees become less valuable in
the workplace, it will go mostly to shareholders (owners of capital). This is
why tax on income is making increasingly less sense - We need a tax on capital
holdings instead.
When you consider the massive role that luck plays in becoming a successful
entrepreneur, it does bring into question the fairness of the entire system.
The balance is shifting; we are moving from an economic system which not so
long ago seemed 'mostly fair' to one which is becoming 'mostly unfair'. Maybe
something like Universal Basic Income would be a good first step.
What we have now is no longer capitalism, it's increasingly an Oligopoly.
~~~
mavhc
The majority of the population now don't starve when there's bad weather, have
indoor plumbing, healthcare, don't die from playing tennis without socks.
What do we need money for now, and why? Houses mostly, because a) hand built,
and b) scarcity.
~~~
mmcconnell1618
In case anyone else was wondering about the tennis without socks death:
[http://www.snopes.com/horrors/poison/coolidge.asp](http://www.snopes.com/horrors/poison/coolidge.asp)
------
MrFoof
> _1.8 million American truck drivers ... well-paying working-class jobs_
Those two items right there are exactly why they are being automated (in
addition to additional efficiencies and cost reductions). If companies can
eliminate those costs, they will if there's a way to do so. That is the
unfortunate reality.
What to do about the aftermath that affects actual people and families as pay
is reduced or eventually eliminated over the next 10-25 years? Well, that's
the new problem. Not one that the companies that employ truck drivers will be
looking to solve, but the one everyone else has to cope with in some capacity
-- whether directly affected by the reduced jobs, or indirectly affected by
those now looking for work in their community.
~~~
Osiris
Our education system needs to adjust to prepare our youth for highly technical
jobs. Instead of truck drivers we'll need engineers to build the automation
systems and programmers to write the complex software behind it.
~~~
PeterisP
There is no economic reason to automate things in order to replace 1.8 million
drivers with 1.8 million engineers.
Automation happens when you can replace 1.8 million drivers with 0.18 million
engineers.
~~~
lkbm
Except the extra 1.62 million engineers can be working on other things.
There's no shortage of useful things to build. It might be hard to think of
things for 1.62 million people to build, but the good news is you and I don't
have to think of those things ourselves right now. Those 1.62 million people
will be thinking about it too.
------
mwsherman
This is a perfectly reasonable intuition – that there will be large net loss
of jobs as trucks are automated – but we should not mistake the intuition for
evidence. There is a long history of believing that massive job losses are
imminent due to technological advance.
The problem with mistaking this fear for a fact is that it often leads to an
incorrect intervention. (I call this a WMD argument.)
We’d be much better served with much greater caution about what is actually,
observably, measurably true. In this case, we’d have to discover the yet-
unfound correlation between technical advance and employment rate.
------
sandworm101
It isn't that simple. The robots will never be a drop-in replacement for all
the various tasks that a "driver" actually does. Driving, negotiating the
vehicle down the road, isn't the entire job.
(1) People will still be needed for inspections and maintenance, however that
will be done. Much of that is now covered by drivers (the little things) and
cannot be automated.
(2) Insurance companies may demand that a human, a certified driver, at least
ride in the truck as backup/security and to deal with awkward situations.
(3) Border crossings will still need humans.
(4) Hazmat loads will still need humans on board for safety reasons.
(5) Winter driving. I have yet to see any autodrive system capable of
attaching chains or deicing a clogged brake line.
(6) Automation will open up new areas for drivers. By driving shipping costs
down, more trucks may hit the road, requiring more people for the jobs listed
above.
It may be a wash. The concept that every driver can/will be replaced by an
autodrive bot is naive.
~~~
drcross
Each are edge cases and nothing which can't be fixed by further work.
>The concept that every driver can/will be replaced by an autodrive bot is
naive.
The premise is that the driver aspect will be removed, the things you
mentioned are not strictly what a driver does.
~~~
sandworm101
> the things you mentioned are not strictly what a driver does.
They are according to the drivers I know. Long haul truckers are not like
pilots. They are much more involved with their rigs. They are actually
responsible for maintenance. They are the ones talking to the cops when/if
they hit an inspection station.
~~~
NotSammyHagar
It's not that every single driver will be gone - just a lot of them will be.
There used to be a lot of jobs at stables, horse shoe replacement, etc. Today
there are a lot of jobs changing oil, mufflers, tune-ups etc, that will go
away with the switch to electric cars, but one big truck can carry a lot more
freight than a big wagon train, and it needs less human labor. EVs will still
need work, they are designed and made by humans (at least for a while, ha ha)
so they can break like anything else. But you will need fewer humans probably.
I heard a story on NPR last night where an insurance company was planning for
new businesses, because they expect there to be less need for them with fewer
human drivers and a lot fewer accidents. Not next year, but in 10 years.
Just like one truck can do a lot more transportation than a wagon train with
10 horses. You have some jobs taking care of trucks, but fewer than took care
of horse trains. Horses didn't travel as far, so you needed a lot more places
for them to stop and feed and water. I think it will hollow out the middle of
the US even more. I'm not crazy about that, I'm from that middle of the
country that already has continued to hollow out even without robot cars.
~~~
sandworm101
> Today there are a lot of jobs changing oil, mufflers, tune-ups etc, that
> will go away with the switch to electric cars.
But that isn't much of an issue with big rigs, nor with cars generally. The
engine/powertrain isn't a big maintenance item on new cars. Engine internals
are a very evolved and reliable bit of kit. It's the other things like brakes,
wheels, control systems and electrics are the source of most maintenance
costs. Those aren't going away with a shift to electrics. Some things will
transfer over (air filtration) and new things will appear (battery systems
maintenance) and, looking at teslas, powertrain maintenance will still be a
thing.
With autodrive, there will be a host of systems that now need new maintenance.
And, given that the driver isn't there to fix the little things on the spot,
there may be an increase in demand for mechanics that can travel to locations.
Lastly, any switch to autodrive may radically increase trips to the mechanic.
Plenty of cars drive around with engine warning lights on permanently. If you
know what the problem is, sometimes you just live with it. Just look at the
number of cars with blown headlights. An autodrive system might not be so
willing to tolerate faults. We may have to keep the vehicles to a higher
standard, increasing maintenance needs. (Also a great day for parts
manufacturers.)
~~~
drcross
Those points are all very weak. You seem to be in denial that the vast number
of automotive jobs are going to be eradicated even when they are reskilled to
cater for the newer systems that are coming.
~~~
sandworm101
And you seem to have converted to a tech that has yet to see the road. Come
back when we have some actual labour data, not speculation. I've seen many
automotive techs come (FI, antilocks, engine management, collision avoidance,
onstar) each with warnings about diminished labor costs, warnings mostly from
people who couldn't change their own brake pads. Yet little has changed.
------
blfr
Vox writers should be more worried than drivers. Driving a car, dealing with
other users on the road, making regulators happy are all difficult problems.
Meanwhile, I have already seen Reddit bots which summarise submissions. How
far off are from one that will rehash 2-3 articles and toss in an infographic
(created elsewhere)?
~~~
FooHentai
Kinda already happening, see: [http://motherboard.vice.com/read/i-used-to-
write-apocalypse-...](http://motherboard.vice.com/read/i-used-to-write-
apocalypse-survival-guides) "the practice of article spinning, in which the
same human-written article is quickly reorganized and reworded to create one
or more additional “new” articles. (This is often done by software that has a
built-in spintax that replaces keywords in the text with synonyms.)"
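The spintax mechanic itself is tiny. A rough C sketch of the usual {option1|option2|option3} expansion might look like this (non-nested groups only; the syntax is the common convention and the example sentence is made up):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Expand non-nested {a|b|c} groups by picking one alternative at random;
       everything outside the braces is copied as-is. */
    static void spin(const char *in, char *out, size_t cap) {
        size_t o = 0;
        while (*in && o + 1 < cap) {
            if (*in == '{') {
                const char *end = strchr(in, '}');
                if (!end) { out[o++] = *in++; continue; }  /* no closing brace: copy literally */
                int n = 1;
                for (const char *p = in + 1; p < end; p++) if (*p == '|') n++;
                int pick = rand() % n;
                const char *start = in + 1;
                for (const char *p = in + 1; p <= end; p++) {
                    if (*p == '|' || *p == '}') {
                        if (pick == 0) {
                            size_t len = (size_t)(p - start);
                            if (len > cap - o - 1) len = cap - o - 1;
                            memcpy(out + o, start, len);
                            o += len;
                            break;
                        }
                        pick--;
                        start = p + 1;
                    }
                }
                in = end + 1;
            } else {
                out[o++] = *in++;
            }
        }
        out[o] = '\0';
    }

    int main(void) {
        srand((unsigned)time(NULL));
        char buf[256];
        spin("{Writing|Producing} {survival guides|how-to articles} is "
             "{easy|simple} to {automate|mass-produce}.", buf, sizeof buf);
        puts(buf);
        return 0;
    }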
------
FooHentai
What I don't get is, if we're so close to this becoming a reality, why isn't
the lower hanging fruit of train/locomotive automation already here?
That seems to be an order of magnitude simpler issue to solve, and yet we
don't seem to be there yet. Granted, forms of automation have penetrated that
industry - deadmans switches, automated signalling and such. But there's still
a human at the helm of every freight loco.
Smaller scale urban light rail deployments seem to have got close to full
automation, presumably due to being able to embed all the necessary elements
into the end-to-end installation of the system (signalling, stock,
cameras/sensors etc).
How can the kind of full automation that would put truck drivers out of work,
arrive before the kind that would put train drivers out of work?
~~~
maxerickson
Trains may be easier to automate, but 100 train cars already only have 1
driver. There is a lot less cost that can be removed there than for trucking.
Even just deskilling trucking offers the opportunity for bigger cost savings.
------
vacri
I don't know about the US, but here in Australia, truck driving is an "old
man's job". The _average_ age of a truck driver here is 47, apparently.
Automated trucks probably won't get here in time to dovetail with the natural
retirement of these drivers...
------
awjr
Once all the robots take over we'll probably need Universal Basic Income
[https://en.wikipedia.org/wiki/Basic_income](https://en.wikipedia.org/wiki/Basic_income)
------
pmarreck
What then? Progress, perhaps. ::eyeroll::
Disclaimer: In 1997 I was cut off by a semi (in the middle of trying to pass
him), who did NOT signal, on a 2 lane road (I-5 in California, speed limit was
85 or so), and ended up entering the ditch and rolling over 8 times and
shattered my hand (it's fine now, but it took a while). The driver never
stopped. I was lucky to walk away from that one.
Sorry, truckers, but your job can eat a bag of dicks.
Lest we forget, the only reason trucking is so huge is because train cargo
wasn't maintained (conspiracists say the oil industry lobbied for trucking).
~~~
merpnderp
Trucking is so big because it is far more efficient than trains. Lol trying to
just in time your inventory using trains.
~~~
gwright
This comment, together with its parent, is a nice example of a false-choice.
They both are built upon an assumption that there is some sort of total
ordering between cargo transportation systems. Trains are better. No, trucks
are better. No, trains are better.
In reality trains and trucks are part of an incredibly complex cargo
transportation system in which the most appropriate transportation mode is
dependent on the nature of the cargo, the geography of the source and
destination, the infrastructure available, the current price of fuel, weather
conditions, capacity constraints and on and on and on.
There is no total ordering of efficiency of transportation modes.
------
andersthue
It's scary and understandable at the same time, especially when you multiply
the number of drivers by their salary: you get a yearly cost of
$72,000,000,000.
That's more than Uber's latest valuation.
------
femto
Is the answer an "Uber for robots"? Rather than an organisation owning all the
robots, individuals could own a robot and rent it out though an online
marketplace. It would be a continuation of the "owner/driver" model, without
the need to actually be a driver.
Maybe the problem is that if it's a lucrative opportunity the group running
the marketplace will want to keep all the profits for themselves, by owning
the robots and locking out small players?
~~~
nxzero
At scale, the value truckers provide is being a driver, not offering their
trucks. Yes, there will be companies that provide logistics as a service, but
it's hard to imagine a business based on a single robot.
------
xf00ba7
At some point everyone gets phased out. The question we should be asking, is
how do we prepare to transition people from one job to the next more quickly.
Coal miners are a perfect example. They're largely stuck. They haven't the
money to send their kids to school to do something else, nor do they have the
$$ to do it themselves (and likely not the time either). We need to rethink
(as a planet), how we deal with churn.
------
TrevorJ
Don't see it happening anytime soon. Self driving cars directly create
convenience for the end user, and the public may be willing to accept a few
crashes here and there in exchange for that convenience.
The first time an unmanned Fedex truck kills a family of 6 in a minivan people
will decide they would rather pay a few cents more to have trucks driven by
humans.
------
palakz
Robots might destroy jobs, but they also create new jobs for us.
It's like innovative products - they might make some products obsolete, but
they also make space for more innovation and other products that would not
have existed otherwise. :)
------
paulryanrogers
Entropy and diminishing returns from readily accessible energy sources will
kick in at some point. My guess is it'll happen before the robots are more
adaptable to changing road conditions than humans.
------
tfnw
[https://www.jacobinmag.com/2011/12/four-
futures/](https://www.jacobinmag.com/2011/12/four-futures/)
Take your pick, or interpolate between them.
------
sevenless
I bet developers will start to lose jobs to AI. Maybe before truck drivers do.
The reaction here on HN will be something to behold.
~~~
WalterBright
It's already happened. Those AIs are called compilers. Compilers get more
powerful every year. If you took them away, there wouldn't be enough people on
the planet to do the same job with assemblers.
~~~
NotSammyHagar
Brilliant comment! I guess devs are lucky to have enough jobs left to work.
~~~
WalterBright
What happens is pretty straightforward - the more powerful the AI devtools
get, the more is demanded of them. I look at programs I wrote 20, 30 years
ago, and am bemused by how trivial they look today.
I'm calling a compiler an AI tool, because what is it other than you type in
what you want the computer to do, and the compiler figures out how to do it?
------
andrewstuart
Civil war against robots is what will happen:
[http://fourlightyears.blogspot.com/2016/03/get-ready-for-
our...](http://fourlightyears.blogspot.com/2016/03/get-ready-for-our-first-
civil-war.html)
------
galacticpony
Simple solution: Ban self-driving trucks.
I'm surprised that in the land of (apparently) limitless legal liabilities,
so many people are bullish on self-driving cars, let alone self-driving
trucks.
You should be highly concerned that the push towards this by big players is
going to lead to laws where ultimately nobody will have to take up personal
responsibility for accidents anymore.
~~~
brianwawok
Grandparents killed by a truck driver on speed. Robot drivers can't get here
soon enough.
~~~
convolvatron
It's surprising this isn't mentioned more often. Truck drivers do meth. It's the
only way for them to stay focussed and awake driving across the country.
So, from both sides, we have drivers with long-term substance abuse problems
controlling really heavy metal whacked out of their skulls. And we have an
economy which can only function by effectively ruining people's lives by
turning them into machines.
If we were a remotely moral society we would be horrified.
~~~
brianwawok
I THINK that speed use among truck drivers is down since log book tampering has
gotten harder over the years, but I am sure it is not perfect. If you can only
work 11 hours a day and need a day off every 3 days of work, there's less
incentive to take speed.
Does Amazon serve broken JS to anyone else? - pdknsk
Uncaught SyntaxError: Unexpected token ILLEGAL
http://z-ecx.images-amazon.com/images/G/01/browser-scripts/site-wide-js-1.2.6-beacon/site-wide-10533302446._V1_.js:1998
It has been like this for a few days now. I wonder why Amazon didn't and doesn't notice it. The page works, but any JS enhanced functionality doesn't.
======
masch
Got the exact same problem. Can't use Amazon from my laptop but it works on my
desktop. Really strange problem. Clearing the cache doesn't fix it.
Software Allows Hackers to Activate MacBook Webcams Without Green Warning Light - patrickg
http://www.macrumors.com/2013/12/18/software-allows-hackers-to-activate-macbook-webcams-without-green-warning-light/
======
patrickg
I wonder: is this still up to date? Has Apple fixed this?
How many photos/videos a day do mobile users capture? - fezz
Or also capture and share on instagram/facebook/snap?
======
coralreef
You can probably google it to find some stats, but it's got to be several
billion if you're including anything uploaded to FB, Instagram and Snapchat.
------
jrowley
You may also want to distinguish between screenshots and regular images. On
weekdays I probably screenshot 2 or 3 times more than I take photos with
my camera. On the weekend it's probably the opposite (because I'm playing
outside or socializing).
------
kleer001
Whoa, what a can of worms there.
on average? by location? by age? by phone model? by day of the week? by
holiday or festival? by time of day? are they in school or
working or on vacation or sick? by data plan size? by income?
So many demographics.
What are you looking for?
~~~
fezz
More specifically, what I'm after is the storage/bandwidth needed for the
average mobile user to move their photos/videos to the cloud.
One statistic I found was 150 photos/month. For videos, I've not found
anything yet but 10 mins/mo is a rough guestimate. Might be totally wrong...
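Plugging in some assumed per-item sizes (the 3 MB per photo and 100 MB per minute of video figures below are guesses, not measurements), the back-of-envelope works out to roughly 1.4 GB per user per month:

    #include <stdio.h>

    int main(void) {
        const double photos_per_month    = 150.0;  /* the stat quoted above      */
        const double mb_per_photo        = 3.0;    /* assumed average phone JPEG */
        const double video_min_per_month = 10.0;   /* the rough guess above      */
        const double mb_per_video_minute = 100.0;  /* assumed ~1080p phone video */

        double mb = photos_per_month * mb_per_photo
                  + video_min_per_month * mb_per_video_minute;
        printf("~%.0f MB/month to sync, about %.1f GB\n", mb, mb / 1024.0);
        return 0;
    }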
Serious OS X and iOS flaws let hackers steal keychain, 1Password contents - dakull
http://arstechnica.com/security/2015/06/serious-os-x-and-ios-flaws-let-hackers-steal-keychain-1password-contents/
======
OrwellianChild
Bad week for password managers...
Here is Agile Bits' response by Jeff Goldberg:
[https://blog.agilebits.com/2015/06/17/1password-inter-
proces...](https://blog.agilebits.com/2015/06/17/1password-inter-process-
communication-discussion/)
A Cheap, Thin Film Gives Portable Night Vision to Cell Phones and Eyeglasses - elblanco
http://www.popsci.com/technology/article/2010-04/tapping-oled-tech-cheap-thin-film-gives-night-vision-cell-phones-eyeglasses
======
ilkhd2
'sucks a lot electricity, thousand of volts'... And that magazine is called
*Science*??? Electricity, like any energy, is measured in Watts. Volts are used
to measure voltage, not energy.
Bad News: Google Is Doing The Corporate Future-Vision Video Thing - Robelius
http://techland.time.com/2012/04/04/bad-news-google-is-doing-the-corporate-future-vision-video-thing/
======
cromwellian
Except that, like the Google Car, and unlike the Microsoft future video with
transparent, flexible, credit-card computer displays, and unlike Apple's
Knowledge Navigator, Google has functioning devices.
Self-driving cars have been around for years. What's different about the
Google Prius versions is that they look almost practical if you squint just a
little bit; minimize the sensors a little bit more and they could even fit
within a car's stylish exterior. In contrast, some of the past versions drove
very slowly and had a truck's worth of equipment in them.
I really think this is a false comparison (to other corporate future videos).
Google may release a product that fails to gain adoption, but they will
release something.
BTW, go search youtube for AT&T's "You Will" campaign with Tom Selleck. They
got a lot of stuff right.
~~~
jedc
Apple can test new phones and new phone prototypes pretty easily. Hardware can
be hidden in new cases, software can just not be shown to other people.
Google tested self-driving cars for quite a while; it was only when the
NYTimes was about to write a story about it anyway that they publicly released
information.
I would assume these glasses are going to be pretty difficult to hide from the
general public when the team is out testing them. If I were them I'd rather
release information they want to the public instead of a random blogger
getting a photo and kicking up a firestorm of interest.
~~~
martinkallstrom
Yeah you never want a firestorm of interest around new products.
~~~
jedc
But it doesn't look like it's a new product at all; it's just a prototype they
want to test in public.
------
pinaceae
interesting that a lot of people don't get why this is bad news.
it is all about expectations, this is still about a product that google wants
to make money off. this video now tickles the fancy of nerds all around. read
the comments and you see things like 'all it needs is a direct to brain
interface' or 'i will buy all iterations of it'. the problem being that google
will not be able to deliver on this concept video, not for the first
iterations at least. just look at how long it took for the iphone to reach its
current state, with apps, etc.
the iphone launch is the perfect counter-example to concept videos. do not
build expectations by releasing concept videos, do not build up _false_
expectations full stop. release and proclaim this. is. it. then see how you
build upon that. even apple stumbles, siri being a case in point of reality
not matching the aspirations.
nothing worse than releasing a great product that just sucks in comparison to
the grand concept videos that didn't have to take into account things like
battery life, difficult background/lighting situations, safety rules, etc.
concept videos take up energy you should spend on the actual product, the one
people will actually need to pay money for.
~~~
Tichy
On the other hand, the latest mantra of the startup world is to build MVPs and
iterate quickly.
What is wrong with getting the conversation started? Is it wrong to write
science fiction novels, for example? I for one enjoy reading science fiction.
The one thing that bothers me about the video is that it seems rather
unoriginal. The thing you'll do with these glasses is check in at your coffee
dealer?
~~~
irollboozers
Exactly. The startup method of MVPs is not universally true for all creators
and visionaries. It's just more true for readers on HN because we tend to have
limited resources. Compared to say, Google.
~~~
Tichy
If I had a huge company, I would probably try to operate the departments like
little startups. It is probably really hard to avoid the drudgery of
bureaucracy and whatnot in big companies. Having more resources could be a
curse.
Therefore I think MVP might have merit even for large corporations.
Also, studies seem to show again and again that nobody can predict the future
success of a product, which would speak in favor of MVPs even for big corps.
~~~
yummyfajitas
This is very tricky to do. Once you are part of a big company you are risking
more than just your small startup.
There is brand risk - if a startup makes a bad product, then petfud.ly looks
bad. If MS Startup Division does, MS looks bad.
If a startup makes a bad product that kills people (or they take on risky
contracts with a big downside), they get sued out of existence. If some small
division of MS does, MS gets sued out of existence.
Big companies are less agile for a good reason. They have a lot more to lose.
~~~
ollerac
I think there's a way around this. For example, YouTube has remained a
relatively separate brand from Google so I think things they do wouldn't
reflect back on Google as much.
One company I think is _really_ good at this is Amazon. IMDB, dpreview,
Audible, Zappos, Woot, and Endless all have maintained their brand fidelity
while also benefitting tremendously from Amazon's technology and assets.
Even negative press for Amazon's Web Services doesn't really impact the Amazon
brand in most consumers eyes, which is pretty amazing in my opinion.
------
swang
From a pure Speech Recognition POV, I will believe Google Goggles is possible
when they have somehow solved the major problems in Speech Recognition. SR tech has
been around forever, but only in the last few years have companies been able
to process it on the server side and then zap it over a wireless connection to
get somewhat OK processing times.
But no one has yet solved the two problems plaguing SR: First, putting a
whole SR engine on a machine let alone on a pair of embedded chips on a piece
of eyewear for instant SR. Even with miniaturization and speed/power
improvements I just don't see it happening mainly because chip makers have
decided to just add more cores. I may be wrong but it seems difficult to
parallelize this with a large corpus.
Obviously the solution is to go the other way and have super fast guaranteed
wireless connection at all times. This will still produce lag but probably
acceptable levels for most people.
The second problem is one that most people encounter with SR which is when the
bloody thing doesn't translate what you said. Either because of an accent or
because you're saying something unique or difficult to parse. I tried asking
directions to a Mexican place in OC and the results were: Q Cortes, cute
protcullis, cute Portos, Q Portales. Can you figure out what I was trying to
say? That is the inherent difficulty of the "last mile" problems that SR has
and I haven't read or heard anything about anyone solving this anytime soon.
SR tech will always be just "good enough" where they can show it off at a
meeting/conference but never good enough to be like Star Trek (which
unfortunately has set a incredibly huge bar for people's expectations).
Even though the Nokia video is less ambitious they seem to have decided to not
include any talking in their future tech videos which seems like a smart idea
to me.
~~~
stcredzero
_From a pure Speech Recognition POV, I will believe Google Goggles is possible
when they somehow solved the major problems in Speech Recognition._
I think one could take the form factor and have an awesome product without
speech recognition. The potential for AR games alone is compelling. Real time
translation of foreign language signs would be useful. Tourist guidebooks
would be great. HUD for maps/gps alone could be a viable market.
------
irollboozers
Man, what a debbie downer. This article coming from a guy who tweeted, "It's
April 1st. THE SINGLE LEAST FUNNY DAY ON THE INTERNET."
Maybe it's just because I've seen the actual effort and the basic research
being done by those tinkerers at Google, that it seems real to me. I know a
phd student who has spent the past 1.5 years without his professor because he
was away at Google working on this (Babak).
This writer just seems like he is stirring the pot though. Unless he has
supreme insight into Google's go-to-market plans, or even basic research for
that matter, I don't get why he feels entitled to shoot down another person's
vision. While others daydream, semi-journo's abound whose only existence is
pointless contrariety.
------
sek
Should I tell you how these videos get made at Microsoft? The MS Office
division has a million left in its marketing budget: "Why not make a cool
futuristic video?" When they are hyper-futuristic they are just marketing:
"Look at us, we are visionary and innovative."
Google has a prototype for a concrete product and tries to show us what the
idea behind this is. Like Microsoft they also have the money for an expensive
video.
Look at the difference between Google Cars, with a prototype and a clear value
proposition and this Toyota video <http://www.youtube.com/watch?v=Q4k0i0c2LWw>
where nothing makes sense at all.
~~~
swalsh
The boldness of future videos in terms of style seems to have increased quite
a bit since the invention of CGI.
------
tjic
What irks me is the painfully studied hipness of the video.
Indy bookstore? Check.
Ukulele reference? Check.
Food truck? Check.
Brooklyn location? Check.
Late 20-something SWPL protagonists in man-boy phase of life? Check.
Ugh.
~~~
Kylekramer
It was pretty obviously Manhattan. Hip Lower Manhattan, but Manhattan.
~~~
tjic
Sure, the Strand is in lower Manhattan, but you're telling me a video that
features a food truck and a uke book doesn't START in Brooklyn?
10:1 the apartment was in Park Slope.
~~~
rory096
That's not really reconcilable with the guy using the 6 train stop at 23rd and
Park.
------
Steko
The other side of this argument is that what Google is doing here is closer to
what MS did with Kinect -- get visibly attached to a groundbreaking original
new product it has confidence it will bring to market.
The Project Natal concept/intro video similarly overpromised when you look at
the first wave of Kinect but the product itself is still crazy revolutionary
and the intro did build the hype and attached the tech unmistakably to MS.
~~~
ralfd
This is actually a good point.
Hm. The concept video was released a year and a half before release. Microsoft
obviously had to announce it, so third party devs could support it. But if you
watch the video so much stuff is bullshit! ^^ Also it gave the competition a
chance to react to it (Playstation Move? I don't own either and don't know
which wave-your-hand-stuff is working better or has better games.)
Natal Project: <http://www.youtube.com/watch?v=IAFbWE_5GvA>
Comparison by CNet: <http://www.youtube.com/watch?v=DeJkPN2smB8>
It certainly built hype (but also derision; I remember many wouldn't take it
seriously and saw it as hyperbole). I think Kinect would have been successful in a more
truthful presentation. Like if Microsoft had done an Apple-like reveal and
only shown the real product two months before release.
------
kombine
This is not the future I want. I already got into the habit of not researching
things myself but rather finding a quick answer on Google. Now they take this
approach into the real life. You need to go somewhere? Google will carefully
plan a route for you. Their eventual goal is to route us throughout our entire
lives. No, thanks.
~~~
almost_usual
I agree with this completely. I'm beginning to believe that these 'cool' and
'innovative' gadgets are stripping away the individual's creativity and
existence.
I'm not saying this technology wouldn't be cool or awesome to use, I just
don't see it benefitting the majority of humanity in the future.
~~~
DanBC
> _I just don't see it benefitting the majority of humanity in the future._
What do you mean by "benefit"?
There's a lot of stuff which has very little benefit if you define benefit in
certain ways.
With luck the trickle down (people becoming rich from creating these gadgets;
concentrations of very smart people in California; etc) could be used not only
to explore deep oceans but also to create smart innovative tech for developing
world problems.
It'd be great if Google (for example) had a developing world think-tank.
------
forkrulassail
Odd that this received a 'bad news' in the title. The Glass video got me
excited about not needing to carry around GPS, a phone and other unnecessary
gadgets, wearing only the glasses I already require. Also, the writer doesn't
know what is currently happening inside Google X.
~~~
ht_th
But all the things shown in the video already are available in a top-of-the-
line smartphone. The only difference is that you don't need to pull it out of
your pocket. That's just a small improvement if you're into that kind of
thing. What I did miss was reality actually being augmented, enabling me to do
_new_ things, to perceive the world around me in ways I cannot now.
For example, given my preference in clothing and my body measurements, when
shopping in the city I would like to somehow "block out" all those shops that
probably don't have anything for me.
Or while cycling, I would like to see an indication of how fast to ride the
next section so I don't have to stop for the next traffic light (a rough
sketch of that little calculation is at the end of this comment). I often have
to stop for a red light only to immediately start up again because the light
turns green. I hate that.
Or when I am working in my garden I would like to see some guide lines and
measures to dig straight lines, or to cut my trees at the best spots for them
to grow out nicely next summer. Or to be able to sow seeds at the optimal
distances from each other.
And so on. What kind of _augmented_ reality do you want?
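(Riffing on my traffic-light wish: the calculation such glasses would need is
tiny. This is purely a hypothetical sketch -- the function name, the 35 km/h
cap, and the distance/signal-timing inputs are all made up, and real hardware
would need live signal-phase data from the traffic system, which is the hard
part.)

    #include <algorithm>
    #include <cstdio>

    // Hypothetical helper: how fast to ride so you arrive just as the light
    // turns green. Nothing here is a real traffic API; it's only the arithmetic.
    double advisory_speed_kmh(double distance_m, double seconds_until_green) {
        if (seconds_until_green <= 0.0)
            return 0.0;                        // already green: no advisory needed
        double mps = distance_m / seconds_until_green;
        return std::min(mps * 3.6, 35.0);      // to km/h, capped at a plausible cycling pace
    }

    int main() {
        // 200 m to the light, green in 40 s -> ride about 18 km/h.
        std::printf("%.1f km/h\n", advisory_speed_kmh(200.0, 40.0));
    }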
~~~
eric-hu
But all the things available in a top-of-the-line smartphone were already
available in a laptop. The only difference is that you don't need to pull it
out of your backpack. It's just a small improvement if you're into that kind
of thing.
I could go on, but my point here is that improvements are incremental. Having
a desktop has been more convenient than a terminal remoting into a mainframe.
A laptop is that much more convenient over a desktop, and likewise with a
smartphone.
~~~
ralfd
But always wearing glasses is certainly not more convenient than just using a
device when you need it. It is like strapping your smartphone to your wrist so
you don't need to pull it out of your pocket.
Also, I doubt the vision will work as well as in the video. Just one detail:
imagine the battery constraint of an always-on and always-connected device!
The glasses would be heavy!
And second, I find the Google goggles vision more scary than awesome. As
someone quipped in an earlier thread, "why would an advertising company want
to put a filter between your eyeballs and reality?" And while I understand
that many would love a more cyberpunk-like future, it frightens me. Imagine
the amount of pointless distraction of always having a newsfeed in your visual
field! Imagine the procrastination of people always lurking on Reddit or
Facebook or Hacker News! You know how incredibly rude it is when you are
talking to people but they are checking their phone? And last but not least:
imagine getting used to the glasses and not functioning without them anymore!!
~~~
nknight
> _But always wearing glasses is certainly not more convenient than just using
> a device when you need it._
I take it you don't wear glasses. Many millions of us do. All day, every day,
our entire lives. It's in no way inconvenient.
------
snowwrestler
Visual interface won't work for this sort of thing.
Glasses are too close to the eye and multitasking will require constant focus
racking, which will tire the eyes and produce headaches. Elements displayed in
stereo will require very precise calibration to the user's face geometry to
prevent ghosting or eye strain. Elements displayed in one lens only will
appear transparent.
Even if the tech issues are solved perfectly, humans do not multitask visually
when moving; this is why heads-up displays are not common in cars despite the
technology being easy. It distracts more than it helps. HUD works in planes
because there is no immediate danger of collision when flying, so pilots can
focus just on the HUD for extended periods of time.
Humans in motion multitask across senses, not within senses. Most people can
walk and talk no problem--but walking and reading is a lot harder. So, the
future of wearable computing is probably a wearable computer that listens and
speaks.
And it's here now: a smartphone with an earbud is a wearable computer, and
it's proven successful in the marketplace. To be truly useful, it will need to
volunteer interactions the way the Google glasses do--but to do that well, it
will need to be listening at all times, and understand when information is
welcome and/or needed. Right now that is not possible with current levels of
battery and speech recognition technology.
In the far, far future I could see visual communication multi-tasked into
everyday life. It will take a lot of societal changes for that to happen
though. I like the portrayal of "picting" in Greg Bear's EON series.
------
staunch
If Apple started releasing videos like this it might be revealing. For Google
it just seems like the kind of out-of-character "mistake" they frequently make
due to their less strict and more decentralized nature.
~~~
Tyrannosaurs
If you do something frequently doesn't that make it part of your character?
~~~
ollerac
I think he's arguing that Google has less of an established character because
they've put less effort into establishing one than Apple -- they're more a
bunch of teams working for the same company than one company.
I agree with this point, but I think Google is moving away from this, which I
think will be healthy in the short term and unhealthy in the long term.
Focused companies need focused leaders. I think Larry Page is filling the
Steve Jobs role nicely right now, but if he falters the whole company will
falter with him.
~~~
Tyrannosaurs
Maybe it's just me but I think that Google has a pretty established culture -
tech led, try lots of stuff, beta early, don't be afraid to kill projects that
aren't working.
Page has certainly done a good job focusing the company more, but I think it's
premature to suggest he's in the Steve Jobs mould. He's been in position for
12 months; Google simply isn't as built in his image as Apple was in Jobs's.
I'm not talking about quality of thinking, leadership, intelligence or
anything like that, just the sheer extent to which Jobs had asserted his will
over Apple and its culture.
------
Tichy
TL;DR: guy is dissatisfied that Google is not Apple.
------
krollew
Why bad news? Really nice video. I don't need it to enjoy any moment of my
life, but it may help other people to do so. Why not?
------
cinquemb
If I had a new product, I would MVP it and iterate until it reaches success,
not spend millions on it upfront. Sure, it's a nice product, but great
inventions will sell themselves if people find them useful (and can afford
them), as long as you inform people that they exist.
------
abrimo
This reminds me of the article gruber wrote late last year about the types of
companies that make future concept videos.
[http://daringfireball.net/2011/11/companies_that_publish_con...](http://daringfireball.net/2011/11/companies_that_publish_concept_videos)
------
fatjokes
I think this is rather unfair as the main difference here is that Google has
demonstrated an active commitment to technologically groundbreaking projects,
namely the robot car. While it has not been brought to market either, it is
clearly an advanced prototype.
------
ollerac
Bad news?
I think it'd be cool if contributors were encouraged to remove blatant
opinions from flame bait headlines. I think this would show more respect to
our intelligence and openness as a community. I value the discussions on
hacker news more than the articles themselves and I think it's too bad that
this discussion started out with a biased tilt.
I think something more like "Google made a future vision video" would have
been a more appropriate headline. Then, if the community actually thinks it's
bad news, they can comment on the story and say so. I'd rather discuss plain
facts than the opinion of a time magazine article.
------
erikb
Google has been making these videos for a long time now, and usually they
actually make it happen. Just because you know some situations where it didn't
work, it is not correct to assume that it is always a bad idea. Very likely we
have a lot of high-tech stuff today because of these kinds of videos. It's
just a way of presenting your idea in an understandable and exciting way.
There are of course other options like PowerPoint slides, blog posts and so
on, but in the end what you do just depends on your budget and your available
skillset.
~~~
falling
_> Google has been making these videos for a long time now, and usually they
actually make it happen._
Do you have links to any of them handy? It would be interesting to compare
the concept video to the real product.
------
ImprovedSilence
There seems to be a lot of hate for this. I like to applaud companies for
making videos like this. Regardless of whether it'll make it to market in 2-5
years, it inspires the imagination. It gets you thinking, "If that is
possible, I wonder what else is possible?" And then the ideas just flow. The
same even goes for that MS video with all those flat transparent screens. It
really gets the
creative juices flowing with what can be done. Even if it just sparks you to
think of something completely different from what they show you. I like it.
------
ChrisAnn
Can I get them with a prescription? :/
------
nextstep
Exactly. If Google wasn't on a path to disappoint everyone, they would make
claims about their new product, and then release the damn thing a month or so
later so consumers could actually experience it (like any Apple release).
Instead, Google pre-hypes this thing with no announced plans to release it. I
think we can all assume that the technology isn't quite as good (yet) as it
seemed in the video.
~~~
mladenkovacevic
Poor disappointed consumers. You mean I can't use this yet? But I want it
naaaooowww!! SIRI! Remind me to make a whiny blog post about this after my
afternoon mochaccino!
First world makes me sick sometimes
------
DanBC
About speech recognition:
Google has very many computers. Google has many smart people. Google knows how
to wrangle large data sets.
Google also has a bajiliion customers, and those customers are used to jumping
through various hoops. Google could get many people to speak words from a
dictionary; and then wrangle that data to improve speech recognition. (I'd be
interested in differences between languages and quality of recognition.)
~~~
waterlesscloud
Google's speech recognition on their voicemail-to-email interface is comically
bad.
~~~
moocow01
I'd have to second this - it used to really frustrate me but now I keep using
Google Voice because their transcriptions make for some great comedy. They are
always painfully off but constantly entertaining.
------
uptown
I tend to think of videos like these as creations that are maybe technically
possible today, but not in a package that's affordable, and compact enough for
general sale. It's like Pixar making some of their movies: they'd save certain
scenes until the end of production because they presumed the technology they'd
need to pull off what they wanted to do would be available by then.
------
dbattaglia
I think the real bad news here is imagining a world where everyone is walking
around the city talking to their glasses.
~~~
ImprovedSilence
Still better than talking to their phones? siri...? bluetooth earpiece...?
~~~
glanch
Nobody talks to Siri.
------
skimmas
For a guy that spends most of his day working on a laptop, I really don't see
the need for yet another screen. This Google thing just seems like an excuse
to become roadkill. Multitasking all the time just seems incredibly tiring.
(From someone who recently ditched his smartphone for his old regular phone.)
------
ArekDymalski
Google responded to the buzz - Sergey appeared wearing 'em
[http://www.engadget.com/2012/04/06/google-project-glass-
serg...](http://www.engadget.com/2012/04/06/google-project-glass-sergey-brin/)
------
esolyt
Except they do have the product in their hands right now. They are testing it.
~~~
ralfd
But it is nothing like in the video.
~~~
nextparadigms
Siri is nothing like in the video, either. But saying "is nothing like" is
exaggerated anyway. I'm sure it's not 10x worse in reality, but pretty close.
------
pipecork
Let's hope it works better than my Apple Knowledge Navigator
<http://www.youtube.com/watch?v=8mLqJNDWx-8>
~~~
falling
1987. Right when Apple was starting to fall apart under Sculley. I think you
just proved his point.
------
bobthedino
I feel Nokia's video, in the article, is much better than Google's, mainly
because the hero protagonist doesn't speak.
~~~
ralfd
Well … not only does our heroine not speak, she does practically nothing. She
wakes up only to crawl out of bed and lie down on the couch. And then she
relaxes in the garden, while being stalked by a guy and only replying with
smilies! (I guess she is just not that into him. Or Nokia didn't foresee
Siri-like stuff in 2009.)
------
gjmveloso
With one simple difference: Google executes its vision faster than Microsoft
and Nokia execute theirs.
------
zv
Imagine "Enlarge your member" and "Buy viagra now" popups on these devices :)
~~~
joezydeco
This is kind of a funny rework of the Google video along those lines:
<http://www.youtube.com/watch?v=t3TAOYXT840>
~~~
drKarl
I think it would be something more like this:
<http://www.youtube.com/watch?v=_mRF0rBXIeg>
------
chj
Usually this sort of thing doesn't go anywhere beyond the labs.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Alternatives to Bootstrap for non-front end devs? - sfkid222
I’m looking for alternatives to Bootstrap that are as easy to use. Any suggestions?
======
provlem
There are many:
1. [https://materializecss.com](https://materializecss.com)
2. [https://getuikit.com/](https://getuikit.com/)
3. [https://semantic-ui.com/](https://semantic-ui.com/)
4. [https://foundation.zurb.com/](https://foundation.zurb.com/)
5. [https://bulma.io/](https://bulma.io/)
6. [http://getskeleton.com/](http://getskeleton.com/)
7. [https://purecss.io/](https://purecss.io/)
8. [https://groundworkcss.github.io/](https://groundworkcss.github.io/)
9. [https://cardinalcss.com/](https://cardinalcss.com/)
10. [https://github.com/powertoweb/powertocss](https://github.com/powertoweb/powertocss)
and many others
------
simplecto
Try Tacit CSS [1]. No classes or special nesting to learn because it simply
overrides the default look.
I use it on some side projects, plus a little more when I need it.
[https://yegor256.github.io/tacit/](https://yegor256.github.io/tacit/)
| {
"pile_set_name": "HackerNews"
} |
Microsoft recommends switching to iPhone, Android as it kills off Windows phones - Varcht
https://www.cnbc.com/2019/01/18/microsoft-ending-windows-10-mobile-says-switch-to-iphone-or-android.html
======
ThrowawayR2
Unfortunate news; though most won't miss it, I will. It was the last
smartphone option that avoided both the Scylla of Google's data vacuuming and
the Charybdis of Apple's pricey walled garden and their "my way or the
highway" design philosophy. Plus, the live tiles are actually useful in a
smartphone context.
Microsoft (and by that I mean both Arbogast and Myerson) was just plain stupid
to break app compatibility multiple times between Windows Mobile 6.x and
Windows Phone 10. What exactly did they think was going to happen to their app
count after pissing off their developer base?
~~~
scarface74
It’s _good_ to not keep app compatibility forever. When you don’t you get the
hodgepodge mess of Windows.
Apple has broken compatibility plenty of times and has been able to move its
customer base to the new platform.
~~~
mikestew
_Apple has broken compatibility plenty of times_
When Apple breaks compatibility, I have to tweak the Objective-C code I've
been toting around for ten years. When Microsoft broke WinMo compatibility,
you burned it to the ground and started over.
~~~
scarface74
Were you around for either the 68K to PPC transition, the PPC to x86
transition or have you had to move from Carbon to Cocoa?
~~~
mikestew
I was not writing Objective-C code for any of those on mobile platforms, which
is the topic at hand.
EDIT: though in rereading your original comment, it's a fair question. Yes, I
was around. No, I still don't think it applies. Even considering desktop, IMO
Apple made great efforts to make such transitions at least somewhat seamless.
In more than one instance, Microsoft's mobile message was: "that app won't run
on the new OS without a _lot_ of work."
------
godzillabrennus
This didn't age well: [https://www.zdnet.com/article/microsoft-celebrates-
windows-p...](https://www.zdnet.com/article/microsoft-celebrates-windows-
phone-7-with-mock-iphone-funeral/)
------
rchaud
Curious: Would there be any benefit to open sourcing parts of the OS, at least
enough so that it could run on mobile hardware of some kind?
I believe HP/Palm open-sourced WebOS after they abandoned the device market,
but nobody tried to build a Hackintosh type phone out of it.
Back then, that made sense as WebOS suffered from a lack of apps, but in 2019,
mobile web apps aren't the afterthought they used to be. I can call an Uber
and track the entire journey using their web app manifest ("Add to home
screen"). I'm using Uber as an example of a complex app that you wouldn't
think would work well as a 'website'.
With everything we know about Google's data collection/tracking policies, I'd
think there'd be some interest in having the option of hardware that ran a
non-intrusive mobile OS.
~~~
squarefoot
I would be far more interested if they open-sourced the hardware of their
phones, or at least published enough information to allow developers to port
other operating systems to hardware otherwise doomed to end up in some
third-world landfill.
~~~
petecox
Yes, it's generally a Qualcomm SoC underneath. Already several Lumias were
reverse engineered to run Android. [0]
A month ago, MS announced Project Mu - opening up their UEFI implementation on
github. [1] Extending that project to their legacy ARM phones would be the way
forward, perhaps.
[0] [https://www.xda-developers.com/microsoft-lumia-525-hacked-
to...](https://www.xda-developers.com/microsoft-lumia-525-hacked-to-run-
android-6-0-1-with-cyanogenmod-13/) [1]
[https://blogs.windows.com/buildingapps/2018/12/19/%e2%80%afi...](https://blogs.windows.com/buildingapps/2018/12/19/%e2%80%afintroducing-
project-mu/)
------
MiddleEndian
Windows Phone has been dead for a while, really. I will miss it; WP8.1/8.2 was
truly the least bad mobile experience.
~~~
tluyben2
I heard more people saying that: I like the hardware but thought the OS was
awful.
~~~
MiddleEndian
I loved the OS compared to Android. It was very performant which made cheap
hardware seem fast. The OS was incredibly consistent, and it never badgered me
with useless notifications.
------
lostgame
R.I.P. I loved my Lumia.
| {
"pile_set_name": "HackerNews"
} |
AT&T Customer Service Rep Tells Us How She Really Feels: "This is Bullsh*t" - gatsby
http://techcrunch.com/2011/01/13/att-rep-verizon-iphone/
======
varjag
So, TC journalists putting some petty out-of-context bickering with someone
into the publication. How media 2.0.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: What privacy-conserving measures should be taken post-snooper's charter? - libeclipse
Since the Investigatory Powers Bill has been passed into law in the UK (https://news.ycombinator.com/item?id=12978300), what steps should one take to preserve privacy and security?
======
alistproducer2
Developers that care about privacy should really start to work on
decentralized, backwards compatible applications to replace the services we
depend on like email and social media.
I know there are already service out there, but most of them lack the
simplicity needed to reach non-technical users. Getting people away from these
services is as much a marketing and user adoption problem as it is a technical
one.
------
joefarish
I'd start with a VPN. I'm personally a big fan of Tunnel Bear as it is
incredibly easy to set up and has a good mobile app as well. TorrentFreak has
some good VPN reviews. Lots more at
[https://www.reddit.com/r/vpnreviews/](https://www.reddit.com/r/vpnreviews/)
Some posters in the original thread had reservations about VPNs
[https://news.ycombinator.com/item?id=12980878](https://news.ycombinator.com/item?id=12980878)
.
For me the benefit of a VPN would be protection against my ISP being hacked
rather than stopping someone sufficiently motivated at GCHQ from accessing my
data.
~~~
linux-modder
As a tag-on to this: set up a solid, hardened firewall (to your level of tin
foil), replace your router with an OpenWrt/DD-WRT one, and set up a VPN over
SSH from within your own home network. This has several benefits: you know
the holes in the firewall, you use the bandwidth you are already paying for,
and it makes for a relatively cheap private cloud. It also lets smaller and
lighter-footprint devices stream remotely from your home storage, and you
know that your NFS / remote cloud is under the control of someone you trust
(yourself).
~~~
linux-modder
Updated email on profile, but it's also on my Keybase profile
[https://keybase.io/linuxmodder](https://keybase.io/linuxmodder) ...under my
GitHub profile: sheldon DOT corey AT openmailbox DOT org. Side note: if anyone
is interested in a Keybase invite, I have about a dozen and a half left; email
me at the address shown above. I openly and happily solicit PGP mail with the
keys shown on my Keybase profile.
| {
"pile_set_name": "HackerNews"
} |
All-In-One Messenger 2.0 released - Gmail, Instagram, FB, Whatsapp ... - ladino
https://allinone.im
======
ladino
It operates like Franz ([https://meetfranz.com](https://meetfranz.com)), but is
much simpler and more minimalistic, with lower CPU usage.
Besides all the common messengers, it supports newer ones like Instagram,
Tinder, etc. - Signal and "planned messages" are the next goals.
I personally like to have my Gmail account isolated from my default browser
and stay logged out of Google while browsing the web.
It’s free and feedback is greatly appreciated! :)
| {
"pile_set_name": "HackerNews"
} |
Toilet paper orientation - ZeljkoS
https://en.wikipedia.org/wiki/Toilet_paper_orientation
======
The_rationalist
This made me laugh :) but I wonder if humanity can find an even more useless
debate.
------
DarkWiiPlayer
What can I say... Over is the right way and under is heresy.
~~~
eesmith
> The question "Do you prefer that your toilet tissue unwinds over or under
> the spool?" is featured on the cover of Barry Sinrod and Mel Poretz's 1989
> book The First Really Important Survey of American Habits. The overall
> result: 68 percent chose over.[24] Sinrod explained, "To me, the essence of
> the book is the toilet paper question ... Either people don't care, or they
> care so much that they practically cause bodily injury to one another."
| {
"pile_set_name": "HackerNews"
} |
Rethinking Recurrent Neural Networks - jostmey
https://docs.google.com/document/d/1X9f-wst8QhrCCFTWiJIz6vq1qAOlpyYAUo_kaFf0J8M/edit?usp=sharing
======
eternalban
[https://arxiv.org/pdf/1703.01253.pdf](https://arxiv.org/pdf/1703.01253.pdf)
| {
"pile_set_name": "HackerNews"
} |
Microsoft Tafiti Is Beautiful, But Will Anyone Use it? - transburgh
http://www.techcrunch.com/2007/08/21/microsoft-tafity-is-beautiful-no-one-will-use-it/
======
nickb
Looks like a gimmick made to show off Silverlight. People judge search engines
based on speed and usability. This has neither. But it is pretty! :)
| {
"pile_set_name": "HackerNews"
} |
Partially evaluating a bytecode interpreter using C++ templates - mrry
http://www.cl.cam.ac.uk/~srk31/blog/2015/09/16/#c++-partial-evaluating-interpreter
======
cyrusand
I'm just throwing this here:
[http://blog.mattbierner.com/stupid-template-tricks-
template-...](http://blog.mattbierner.com/stupid-template-tricks-template-
assembler/)
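For anyone who just wants the flavor of the trick: below is a minimal sketch
of my own (not code from either post) that lets the compiler evaluate a tiny
stack-machine program at compile time. The linked articles encode the program
and interpreter in templates; this sketch cheats and uses a constexpr function
instead, which gives the same partial-evaluation effect whenever the bytecode
is a compile-time constant.

    #include <array>
    #include <cstdio>

    // Tiny stack-machine bytecode: PUSH <imm>, ADD, MUL, HALT.
    enum Op : int { PUSH, ADD, MUL, HALT };

    // A constexpr interpreter: when `code` is a compile-time constant, the
    // compiler folds the whole dispatch loop away and only the result survives
    // into the binary -- partial evaluation by constant propagation.
    template <std::size_t N>
    constexpr int run(const std::array<int, N>& code) {
        int stack[16] = {};
        int sp = 0;
        for (std::size_t pc = 0; pc < N;) {
            switch (code[pc]) {
                case PUSH: stack[sp++] = code[pc + 1]; pc += 2; break;
                case ADD:  stack[sp - 2] += stack[sp - 1]; --sp; ++pc; break;
                case MUL:  stack[sp - 2] *= stack[sp - 1]; --sp; ++pc; break;
                case HALT: return stack[sp - 1];
                default:   return -1;          // unknown opcode
            }
        }
        return -1;                             // fell off the end without HALT
    }

    int main() {
        // (2 + 3) * 4, evaluated entirely at compile time (C++14 or later).
        constexpr std::array<int, 9> prog{PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT};
        constexpr int result = run(prog);
        static_assert(result == 20, "the interpreter loop ran inside the compiler");
        std::printf("%d\n", result);
    }

The payoff is the same as in the linked posts: no interpreter loop is left at
runtime, only the final constant.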
| {
"pile_set_name": "HackerNews"
} |
Nearby Photos on Your Phone - sjs382
http://blog.flickr.net/en/2009/06/18/nearby-on-your-phone/
======
hopeless
Any ideas how they grab the location through a simple webpage?
~~~
hopeless
Nevermind, this is how you can do it with iPhone 3:
[http://blog.bemoko.com/2009/06/17/iphone-30-geolocation-
java...](http://blog.bemoko.com/2009/06/17/iphone-30-geolocation-javascript-
api/)
| {
"pile_set_name": "HackerNews"
} |