From Bifrost to Panfrost – deep dive into the first render - mfilion
https://www.collabora.com/news-and-blog/blog/2020/04/23/from-bifrost-to-panfrost-deep-dive-into-the-first-render/
======
st_goliath
For the uninitiated, a few bullet points I stitched together from Wikipedia
and freedesktop.org:
The Mali GPUs are GPU IP cores produced by Arm Holdings[0].
Midgard and Bifrost are architecture code names[1].
Panfrost is a FOSS driver for Mali Midgard and Bifrost GPUs[2].
An older effort at reverse engineering exists under the name Lima, targeting
older Mali GPUs[3].
[0] [https://en.wikipedia.org/wiki/Mali_(GPU)](https://en.wikipedia.org/wiki/Mali_\(GPU\))
[1] [https://en.wikipedia.org/wiki/Mali_(GPU)#Variants](https://en.wikipedia.org/wiki/Mali_\(GPU\)#Variants)
[2] [https://panfrost.freedesktop.org/](https://panfrost.freedesktop.org/)
[3] [https://en.wikipedia.org/wiki/Mali_(GPU)#The_Lima_and_Panfro...](https://en.wikipedia.org/wiki/Mali_\(GPU\)#The_Lima_and_Panfrost_FOSS_drivers)
~~~
dmos62
I own a device with a Mali Midgard or Bifrost GPU. I'm excited about getting
HW acceleration on it. Does anyone have insight into the usability of the
driver at the moment? Also, why are developers interested in it? Surely there
are loads of GPUs without open source drivers.
~~~
floatboth
Midgard is usable - GNOME Shell and KDE Plasma run well, and even SuperTuxKart
with GLES3 works now: [https://www.collabora.com/news-and-blog/blog/2020/02/27/expe...](https://www.collabora.com/news-and-blog/blog/2020/02/27/experimental-panfrost-gles-3-support-has-landed/)
Bifrost is quite early as you can see from the article this thread is for.
> Surely there are loads of GPUs without open source drivers
Mali is very popular (so many SoCs from Rockchip, Amlogic, Allwinner, etc.)
and actually was sort of the last holdout among them.
Etnaviv, VC4, Freedreno all predate Panfrost and (the resurrected and
upstreamed version of) Lima.
Now I guess the last holdout would be PowerVR, but it's not like there are
that many devices with their GPUs. Nokia N900 and early iPhones, I think?
~~~
monocasa
I spent a while playing with PowerVR, trying to reverse it on my BeagleBone.
It's a huge pain; they feel like an order of magnitude more effort than Mali
cores to get running with free drivers, as they run a full RTOS on their GPU
cores. They also have tons of bugs they hack around on a per-rev basis in the
caching and virtual memory systems, which means your reversed code will sort
of work for a bit, then mysteriously start failing while you pull your hair
out for a month.
Neat design; I just wish Imagination would get their head out of their ass and
approach the community. They might still have a viable business if they had
done that before now.
~~~
floatboth
RTOS on.. SIMD units?? o_0
~~~
monocasa
Most unified shader GPUs have the same semantics; they just run the priority
thread scheduler in hardware. PowerVR was being cute and saving gates and
their power cost by making the shader cores themselves handle all that rather
than using a dedicated hardware block.
------
chris_wot
If you want a good intro to Panfrost, try this blog article:
[https://blogs.gnome.org/engagement/2019/10/03/alyssa-rosenzw...](https://blogs.gnome.org/engagement/2019/10/03/alyssa-rosenzweig-panfrost/)
------
Keyframe
I thought initially it was about this: [https://area.autodesk.com/blogs/the-maya-blog/introducing-bi...](https://area.autodesk.com/blogs/the-maya-blog/introducing-bifrost-for-maya/#)
Keith Alexander Interview: Obama makes almost the same decisions as Bush - PythonicAlpha
http://www.afr.com/p/technology/interview_transcript_former_head_51yP0Cu1AQGUCs7WAC9ZVN
======
ezl
Brutal re-titling.
In one small part, former NSA head says they make almost the same decisions
regarding issues of how to defend our nation.
_Gen. Alexander: Obviously they come from different parties, they view things
differently, but when it comes to the security of the nation and making those
decisions about how to protect our nation, what we need to do to defend it,
they are, ironically, very close to the same point. You would get almost the
same decision from both of them on key questions about how to defend our
nation from terrorists and other threats._
Not the same as "Obama makes almost the same decisions as Bush".
And it only takes a few different decisions to be meaningfully different.
------
parasubvert
This title is rather misleading as it refers to one question in a lengthy NSA
interview.
I'm curious if you can find any presidential hopeful, save Rand Paul, that
would act much differently. National security politics in the US pretty much
accepts Bush-era policies as the new normal, as much as that may suck.
------
sp332
I always thought the "defund NSA" movement was bonkers. I realize this is from
the other extreme of the spectrum but it's nice to hear from both sides.
Learning Python while processing raw text: The NLTK book - ColinWright
http://nltk.org/book/ch03.html
======
donretag
Here is another freely available book, Text Processing in Python:
<http://gnosis.cx/TPiP/>
Plain text files and not tied to a library.
------
ColinWright
I know this is Chapter 3, and hence jumping into the middle of the book, but
lots of people here will have enough knowledge and experience to start from
here and check up on unfamiliar terms. You may find you need to backtrack a
little, but it seems to me this is a good place to start.
~~~
RRRA
That is going to be very useful to me, thanks. I'm doing 2 classes right now,
one where I have to present Python and the other where I explain NLTK! :P
------
waterside81
For anyone who's interested in text classification (along the lines of what
Chapter 6 in this book covers), check out our service:
[https://www.repustate.com/predictive-analytics-machine-learn...](https://www.repustate.com/predictive-analytics-machine-learning/)
It's machine learning as a service: simple API calls to train, cross-validate
& classify your data. We'll also be at PyCon this week in Santa Clara, so come
on by.
Currently in private beta, but we're ramping things up quickly.
~~~
bromang
Do you really see any sort of market for providing operations that can be
implemented very easily using python itself?
~~~
waterside81
The Python examples given in the book are very very rudimentary. The service
behind our API is "real" machine learning (e.g. SVMs, RBMs, deep neural
networks & the like). This is all transparent to the user - you just submit
your data via our API.
------
denzil_correa
In terms of Machine Learning for text data, Chapter 6 is highly recommended.
<http://nltk.org/book/ch06.html>
------
zissou
I've learned a lot from the NLTK library, but unfortunately NLTK is terribly
slow. Nevertheless, it is a fantastic place for beginners to start with text
processing and learn from, as the documentation is superb. However, eventually
one may want to start digging into the NLTK source to rewrite the necessary
functions using multiprocessing if they plan to process any "big" textual
datasets.
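For illustration, a minimal sketch of the multiprocessing approach described
above (a rough example, not code from the book; it assumes NLTK is installed
and the 'punkt' tokenizer data has been downloaded):

    # Sketch: fan tokenization out across CPU cores with the standard library.
    from multiprocessing import Pool

    import nltk  # assumes nltk.download('punkt') has already been run

    def tokenize(doc):
        # Plain word tokenization; swap in tagging or stemming as needed.
        return nltk.word_tokenize(doc)

    if __name__ == "__main__":
        # Stand-in corpus; in practice, read documents from disk.
        docs = ["NLTK is easy to learn but single-process by default."] * 10000
        with Pool() as pool:
            tokens = pool.map(tokenize, docs, chunksize=500)
        print(len(tokens), "documents tokenized")

The same pattern applies to most per-document NLTK work, since documents can
usually be processed independently of one another.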
------
danso
Processing text is a great way to learn any programming language, but I would
think there's more interesting and varied practice to be found through web
scraping, not to mention it's a whole lot easier.
~~~
ColinWright
Forgive me if I'm mistaken, but that comment feels like you've read the title
of this submission, but not actually read the chapter. This isn't just about
chopping and slicing strings, this is an entry point into a comprehensive book
about Natural Language Processing, and its associated techniques as
exemplified in Python.
~~~
danso
The chapter contains HTML processing but that's a small subset of what this
chapter covers. You don't need to learn word stems to do really interesting
things with structured HTML. Also, web scraping involves more than text
processing, but the programmatic navigation of websites, which does add some
complexity but is pretty manageable with the libraries out there.
Edit: I'm obviously not saying NLP isn't useful, just that web scraping is
more _immediately_ useful. With NLP, besides learning the concepts, you have
to find a source of raw text that's been unprocessed and yet contains
something of real world value. With web text, you just have to collect what
someone already thought was valuable to publish and find insights through the
aggregation. It seems to me that the latter scenario is easier to grasp, with
NLP being useful for going beyond what others have gathered and published.
~~~
ColinWright
Thing is, this is a book about NLP, not a book about web scraping, so while
what you say may be true (although personally I find more value learning about
NLP than WS) it seems a little misplaced.
But there is value in both, depending on your objectives. I find web scraping
trivial, and mining the text hard, hence my interest in NLP and machine
learning.
------
seanlinehan
I actually read this book a few weeks ago. I'm pretty new to Python as a whole
(4 months with the language), so I picked up a few small tricks that have saved
me quite a bit of time. It does not assume that you know either Python or
linguistics very well, so I was certainly pleased that I was able to have my
hand held through some of that. I recommend it!
------
nailer
This is an excellent resource which I own, but it seems to be focused on
language scientists who are unfamiliar with Python rather than developers who
need to process text - which I suspect is a large portion of the audience.
~~~
agibsonccc
What simpler use cases do you see? In my case, I'm the language guy this is
targeted at and have no clue what most web devs would want this stuff for.
Spam and text classification of some kind maybe? MAYBE certain kinds of named
entity recognition?
~~~
sdoering
Well, as a news distributor you could try to build a tagging machine:
something that takes texts from, say, a sports news agency and enriches them
with meaningful tags/keywords, data from your statistics section, and so on,
to later match other related content, or match images, or anything like this.
Something you could transmit with the original texts, to make life easier for
your customers when sorting and managing these texts in an automatic fashion
inside their content management systems.
[edit] This coming from a text guy, who recently started down the path of
Python and is hooked ;-)
------
forgotAgain
Cover and TOC
<http://nltk.org/book/>
------
tootie
I read it and their code examples are not great. Too many abbreviated and
meaningless variable names.
------
dunham
The "pattern" library is also worth checking out:
http://www.clips.ua.ac.be/pages/pattern
Routers and Ethics - IsaacSchlueter
http://foohack.com/2008/07/routers-and-ethics/
======
rit
The last statement, "As software and hardware engineers, if our defaults put
users in an unsafe situation, where their credit and savings are placed at
risk, then we’ve failed them, and we’ve acted unethically."
is excellent.
Unfortunately, it's not easy to get people out of the mindset described [plug
it in, download porn, rinse, repeat].
Part of the issue is making it easy to initially connect, and as a corollary -
easy to reconnect once you've changed settings. This seems to be the big
issue.
There is however a standard established for doing this:
<http://en.wikipedia.org/wiki/Wi-Fi_Protected_Setup>
My newish D-Link router supports it. There's a button on the side that you
push to put it in setup mode [I think you probably need Windows for the "easy"
parts]. By DEFAULT it is set up secure, and it requires you to change the
password.
The trick of course is getting this pushed across the board.
------
cmos
Sounds like an opportunity..
Why don't we design one that is foolproof? i.e. security is always enabled,
and it requires a complex password? Perhaps there is even a readout + little
keyboard built into it so the end user can go through a simple wizard to
ensure security. The readout would give them the keys to type into their PC.
And on top of it there would be large green and red indicators. Red would
glow if there was any security issue, with the error shown on the readout and
sent to the end user as a text message. Green would indicate all is secure.
The only real question is what to name port forwarding. While I'm a huge fan
of the obvious and clear "Applications and gaming", being a router
manufacturer I would be obligated to create yet another new name for it, like
"wormhole port" or "you fool, just call your geek nephew already".
Seriously, I have trouble going from a linksys to a netgear to a dlink, not to
mention the OEM routers sold by ISP's now that often lock down features.. How
can we expect these pieces of fail to be installed correctly?
~~~
briansmith
AT&T already does the most important parts of that. In my complex every
network is named and secured uniformly because AT&T sells cable modems with
built-in wireless routers. They have an installation process that is totally
automated and automatically configures them for WPA or WEP. The users don't
even (get to) choose the network name. As a result, I couldn't leech off
another network even if I wanted to.
Secure networks make perfect business sense for internet providers; otherwise,
they'd end up with customers with wide-open networks that neighbors could
share together (unwittingly or otherwise).
As for port forwarding, typical users never need to configure that, and when
they do, UPnP can usually do it for them.
~~~
IsaacSchlueter
Yes, AT&T's DSL setups are a triumph of proper defaults. They come in, plug
everything in, give you the password (or set up your computer), and go away
once everything is working and safe. Easy setup, safe installation.
Of course, I've had WPA and WEP hacked enough times to not trust either of
them. MAC address filtering is a bit safer if you want to lock down access to
the internet, but _at least_ the password has to be changed. Default router
passwords put users so badly at risk, there's really no excuse.
------
bprater
I think many vendors may now use encrypted Wi-Fi by default, but I wonder if
doing something as simple as creating a "random" password and printing it in
the user's manual would have done the job?
------
akd
His counterexamples are very bad. If you don't practice proper car
maintenance, you can get into a very unsafe situation with e.g. underinflated
tires. Prescription medication? My dad almost put "otic" (ear) drops in my eye
when I was a kid -- I pointed out that there was a typo on the box and he gave
it a second glance.
If users aren't going to put a minimal investment into using a product safely,
there's only so much you can do to protect them.
~~~
IsaacSchlueter
That's true, if you want to be as safe as possible, then it pays to be
attentive to all the instructions you receive.
But, my point was that we receive so many complicated instructions about
complicated things that we don't really have time to understand fully, and
there are only so many hours in a day. You simply must prioritize. The fact
is, I expect that my doctor and my mechanic are going to tell me what to do,
and that I can pretty much just trust them. In this case, Netgear violated
that trust.
Surveillance Cameras Made by China Are Hanging All Over the U.S - propman
https://www.wsj.com/articles/surveillance-cameras-made-by-china-are-hanging-all-over-the-u-s-1510513949
======
propman
"In May, the Department of Homeland Security issued a cybersecurity warning
saying some of Hikvision’s cameras contained a loophole making them easily
exploitable by hackers. The department assigned its worst security rating to
that vulnerability."
They are placed around military bases and are the #1 devices used to spy on
Chinese citizens.
Someone messed up when approving this.
Microsoft announces Windows Holographic - sz4kerto
http://www.theverge.com/2015/1/21/7867593/microsoft-announces-windows-holographic
======
sctb
Comments moved to
[https://news.ycombinator.com/item?id=8924755](https://news.ycombinator.com/item?id=8924755).
Qualified Immunity: Explained - dredmorbius
https://theappeal.org/qualified-immunity-explained/
======
solidsnack9000
The article offers a good summary of what qualified immunity is and why it is
a problem.
_Ordinary people ... are expected to follow the law. If they violate someone
else’s legal rights, they can be sued and required to pay for the injuries
they’ve caused._
_Under the doctrine of qualified immunity, public officials are held to a
much lower standard. They can be held accountable only insofar as they violate
rights that are “clearly established” in light of existing case law. This
standard shields law enforcement, in particular, from innumerable
constitutional violations each year. In the Supreme Court’s own words, it
protects “all but the plainly incompetent or those who knowingly violate the
law.”_
~~~
deathgrips
This actually prevents case law from being established. If it's plainly
evident that tasing pregnant women for refusing to sign a paper is bad, but no
high-profile case has established it, no lawyer will pursue such a case
because there will be no payout. It's a mechanism for cops to do what they
want and face no civil consequences.
~~~
tanderson92
It's actually worse than that, if you can believe that that is possible:
[https://scholar.valpo.edu/cgi/viewcontent.cgi?article=1198&c...](https://scholar.valpo.edu/cgi/viewcontent.cgi?article=1198&context=law_fac_pubs)
------
bradleyjg
The lede is buried:
“However, that justification ignores the reality that such costs are virtually
always paid by the officer’s municipality, insurance, or unions. A study of
more than 80 state and local law enforcement agencies across the country found
that in instances of misconduct—including those involving truly egregious,
clear-cut abuses of authority—individual officers almost never paid such
costs.”
By all means QI should be overturned by statute so victims can be compensated,
but 1983 is not a mechanism for holding officers accountable. For that we need
to pare way back the civil service and collective bargaining based barriers to
imposing consequences on mal- and misfeasant officers.
~~~
pstuart
Sorry to spam with this comment but...
Police should be self-insured, backed by their pension plan. Without real
incentive to change, there is no change.
~~~
CryptoBanker
Why not insured by actual insurance companies? Surely after any incident the
insurance company would raise premiums and after a certain point it would no
longer be affordable for bad cops to be cops.
This wouldn't be too different from a medical professional being required to
have malpractice insurance.
~~~
Gibbon1
That's what I came around to: cops should get a stipend to pay their bond. And
make sure the insurance companies have full access to their personnel files. I
think that would get rid of 90% of your problem cops right there, because
they'd be unable to find insurance.
------
tehwebguy
Qualified immunity is kind of a red herring. Yes, it's important for citizens
to be able to sue police, but unless those damages roll uphill they will often
just be suing judgement-proof cops.
The other issue, the issue that's on everyone's mind this week, is _criminal_
cases. Police don't avoid prison or criminal cases because of qualified
immunity, but because they are not charged in the first place.
~~~
deathgrips
Criminal cases are a red herring. We need to address the structures and
institutions that make tasing pregnant women possible.
Let's put it like this: if you came into your office one morning and the guys
in the cubicles next to you are shooting up black tar heroin (and no one acts
like they're doing anything wrong), does the problem lie with the individuals
or with the corporation?
~~~
oldsklgdfth
To address that, I feel like you have to ask what would move officers to
pursue the issue that way. I suspect part of the answer is quotas and a “never
back down” mindset.
QI seems like the bandaid. If you don’t push hostile policing, maybe you don’t
get hostile police.
~~~
coffeecat
I disagree with the conclusion that qualified immunity is unimportant. It's
one small facet of a system which encourages hostile policing. Others include
the practice of exclusively hiring veterans, access to military gear, friendly
relations between police and local officials, and the nature of their training
and culture. To some extent, these things reinforce one another. I think the
larger problem can only be addressed by tackling the smaller problems one-by-
one.
~~~
oldsklgdfth
I don’t think QI is insignificant.
I suspect that policing practices became hostile first and QI is the legal
loophole to get away with it. If you eliminate it, the “system” will find
another way to do it. “Active” policing is a lot of “cleaning up the
neighborhood” and “maintaining property value”. As long as that is part of
the goal, there will be a way to protect police.
------
clarkevans
[https://mappingpoliceviolence.org/](https://mappingpoliceviolence.org/)
claims that 99% of police killings from 2013-2019 have not resulted in
officers being charged with a crime. Their database lists 7663 killings, of
which 26 (or .33%) went to trial and ended in a conviction.
------
drtillberg
A competent lawyer can find a violation of Federal law in almost any
impromptu government action, mainly violations of due process rights. The key
substantive benefit of making such a claim, piling onto state law violations,
is recovery of attorneys' fees under Section 1983.
It's easy to see that each example in the article involved improper conduct
that hopefully violated state law. State law decisions in each case hopefully
would have been easy(ier). Constitutional law, on the other hand, sometimes is
_hard_ \-- especially when there isn't a decision directly on point.
So, we get qualified immunity, where judges decline to let the tail of
(potential) attorneys' fee recovery wag the dog of constitutional law
challenges that involve complicated issues of both fact _and_ law. No one
loves litigation about attorneys' fees except lawyers, and that's the key
insight in understanding the law of qualified immunity.
An alternate solution would be to adjust the incentives that litigants have to
bring Section 1983 challenges in the first place-- possibly by making
attorneys fees more generally available to litigants-- so that they reserve
Section 1983 claims for those cases where they are truly decisive.
Edit: The article does talk about attorneys fees briefly, I corrected my
comment. Thanks.
~~~
mcherm
> The key substantive benefit of making such a claim, piling onto state law
> violations, is recovery of attorneys fees under Section 1983 (the article
> does not discuss this).
Perhaps you should go back and read the article again -- it absolutely does
discuss this.
------
deathgrips
Cops can't be held liable for violating case law until another cop has been
held liable for that same act. The solution is left as an exercise for the
reader.
It just works :^)
~~~
sumedh
Aren't judges supposed to have better critical thinking abilities than us
normal folks? How do they not see the flaw in that logic?
------
dredmorbius
A closely related reform is _prosecutorial_ immunity:
[https://www.themarshallproject.org/2018/03/13/let-s-put-an-e...](https://www.themarshallproject.org/2018/03/13/let-s-put-an-end-to-prosecutorial-immunity)
------
1cvmask
The Cato Institute has a blog on abolishing qualified immunity:
[https://www.unlawfulshield.com/](https://www.unlawfulshield.com/)
------
maherbeg
What institutions can we support to help push changes forward here?
This feels incredibly important to have fixed.
~~~
ccvannorman
This. What actionable steps can _I_ take to make a change?
\- Call your legislators / senators / city officials -- sure, but can I do
more than a call?
\- What is the form of a bill/law that I can get behind? How does such a
process start and finish?
I would donate $$$ right now to any organization with a clear plan to overturn
this.
~~~
deathgrips
Get out in the streets until the problem is solved. Your representatives
distract you from how much power you have as an individual.
------
fallingfrog
This is blood curdling
~~~
ccvannorman
Good. May it move your hand to vote for a change, otherwise it will not
change.
ASK PG: What opportunities do you see in mobile? - rsandhu
======
jimmyjim
How about applications involving NFC?
[http://www.fastcompany.com/1795224/2012-the-year-nfc-goes-ma...](http://www.fastcompany.com/1795224/2012-the-year-nfc-goes-mainstreamoutside-the-us)
Not to toot my own horn here, but I've got a good record in being accurate
with these predictions so far ;)
--<http://news.ycombinator.com/item?id=1448851> (and in conversations with
friends I successfully predicted that SRI would be the flagship feature for
the next iPhone)
------
glimcat
Bluetooth Low Energy might end up being a big thing once there are phones out
which have a radio for it.
The spec is intended for devices which send small amounts of data with minimal
power consumption, so it's well suited to things like environmental sensors
and ambient displays which have historically been underutilized outside of
research settings.
Minesweeper in TypeScript and React - tdelev
https://medium.com/@tomce.delev/minesweeper-in-typescript-and-react-f5f8a5d57383
======
tdelev
The author here, if you have any questions happy to discuss.
Legal Lesson Learned: Copywriter Pays $4,000 for $10 Photo - domino
http://blog.webcopyplus.com/2011/02/14/legal-lesson-learned-copywriter-pays-4000-for-10-photo/
======
pbhjpbhj
> _Our web copywriters were under the impression that images on the Web
> without any copyright notices were “public domain” and therefore free to
> use. Naive? Yes. [...]_
I find it almost impossible to believe that copywriters dealing in the
creation of copyright works weren't aware of the very basics of the copyright
mechanism.
If they believed the above then they'd have to believe too that they couldn't
charge anyone to use any content that they wrote that had appeared online?
This sounds like an attempt to cover up mindful dishonesty.
> _While we maintained an active stock photo account for our blog with access
> to an array of suitable photos, one of our copywriters grabbed a photo from
> the Web._
Bam. There you have it, why would they pay for stock photos when they could
simply download those same stock images for free by doing an image search (if
the photos had been used elsewhere already). Indeed they could probably have
found the identical stock photos in the wild with a little legwork.
Does anyone find this convincing?
~~~
loupgarou21
I work with a large number of professional creatives, including copywriters.
They're all pretty well aware of copyright laws. This story might make sense
if his "copywriter" was his 16 year old cousin. Also, I'm going to say that
placing pictures isn't really the job of a copywriter.
------
pacifika
Interesting story. After reading it I still don't know the explanation of the
damages that make up the $4,000 figure. I'm just a developer, but what kind of
'damages arising from their past infringing use of the image' are they talking
about?
Bluntly: is the $4,000 number just high enough for the order to be taken
seriously and low enough to convince an organization to pay up?
Mozilla Firefox 4 will be our last big release - cfontes
http://www.pcpro.co.uk/news/365602/mozilla-firefox-4-will-be-our-last-big-release
======
yannickmahe
But Chrome also updates automatically, without asking for user permission.
Regardless of what you think of that policy, it avoids platform fragmentation.
Firefox seems to be headed toward big fragmentation. Can't say that's good news
for web designers and frontend developers.
~~~
sp332
Almost all users of Firefox and Chrome are using the most up-to-date versions
of those browsers. Firefox updates download in the background and update with
a single click. Firefox 5.0 (which will be out not long after 4.0) will have
completely transparent addon updating - not even a dialog box.
~~~
necolas
It's presumptuous to say that this is the case for Firefox users. The fact
that Firefox requires manual agreement to update means there is fragmentation.
Faster updates will only increase that unless it is accompanied by automatic,
invisible updates.
~~~
estel
From <http://www.w3counter.com/globalstats.php>, 87% of users are on Firefox
3.6 or newer, 95% 3.5 or newer and 99% on FF3 or newer. As fragmentation goes,
that's not half bad.
~~~
tnorthcutt
_From<http://www.w3counter.com/globalstats.php>, 87% of users are on Firefox
3.6 or newer, 95% 3.5 or newer and 99% on FF3 or newer. As fragmentation goes,
that's not half bad._
For reference (according to that page), here are the numbers for each browser
& version listed (as a percentage of that browser's share), along with the
stable release date (from Wikipedia) for that version:
IE8: 65.3% - 3/19/09
IE7: 24.9% - 10/18/06
IE6: 9.8% - 8/27/01
FF4: 2.3% - x/x/11
FF3.6: 85.3% - 1/21/10
FF3.5: 8.2% - 6/30/09
FF3: 4.2% - 6/17/08
Safari 5: 84.2% - 6/17/10
Safari 4: 15.8% - 6/8/09
Chrome 8: 100.0% - 12/2/10
~~~
estel
There are some numbers < 0.7% of the market that are not explicitly included
on that page. Particularly Firefox 2, which still has about 0.25% of the
overall market (0.8% of Firefox), and some older versions of Chrome.
------
nailer
Nice. Google proved you could do the streaming release model, Firefox has seen
the light.
This moves the web forward much faster than the 'big bang' style IE releases
(sadly they're the ones who need this the most).
~~~
nopassrecover
Firefox has been doing streaming releases for as long as I can recall. The
only exception was major releases (as in, moves to new rendering models). I'm
not even sure Chrome has been around long enough to run into this?
------
viraptor
Now the question of distros remains. What will Ubuntu do? Since they want a
distribution that is frozen in time, will they allow updates at all, or create
ubuntu-volatile repos, or...?
Also, what will happen with addons? How can developers track API changes and
provide updates in time?
~~~
seabee
Same thing ubuntu does with chromium. As far as I can tell it's a problem
already solved.
~~~
RyanMcGreal
If that means I won't have to muck about with hacks like ubuntuzilla any more,
I'll be happy.
------
ozziegooen
I don't like that title.
Had me thinking Firefox was preparing to shrivel down under Chrome. I became
really, really sad for a minute.
For the last few months I've used Chrome because of its speed and elegant
interface. But I really prefer Mozilla and its open source approach.
While Google isn't that bad, I feel like I'm doing the evil thing for personal
satisfaction. Like I'm slowly entering the dark side.
~~~
jp_sc
You can use Chromium if the license is what bothers you.
------
Derbasti
Let's see if Mozilla can keep up the pace, and let's see if they can push
those updates unintrusively enough to keep users from leaving.
------
kamme
I hope they do change their general approach and not just the update
mechanism. I have been using Chrome for a couple of months now and just
switched back to Firefox to see what the evolution was during my time 'away'.
It's quite a mess, from my point of view... As a web developer the browser is
my toolbox, and while I do find Firebug superior to the Chrome development
tools, the other things that caught my attention were not so positive... It
feels sluggish, was using 1.3 GB of memory with 6 tabs open, and crashes
frequently. But I do admit it feels like coming home a bit, so I'm sticking
with it and hoping things will be better in the future. I just hope this will
end well for Firefox.
If not, they can always start developing another lean and fast new browser
that will rock our worlds and gain popularity fast and change names a few
times... (anyone remember Phoenix and Firebird?)
------
timtadh
DUPE:
<http://news.ycombinator.com/item?id=2189121>
<http://news.ycombinator.com/item?id=2189183>
Why Bitcoin Should Be Trading Above $800 or Below $450 by This Time Next Week - elishagh1
http://dashpaymagazine.com/index.php/2016/07/10/btc-trading-800-450-time-next-week/
======
gwern
Oh look, some technical analysis.
Ask HN: Craigslist and crawlers - kinkdr
I wrote a small crawler for Craigslist, making absolutely sure that it obeys the robots.txt rules.
Sure enough, after a few hundred requests my crawler got banned.
Doing a simple search at Google, I can see from the results I get that CL happily allows Google's bots.
I strongly believe that this discrimination against small players is highly unethical, but my question is: is it breaking any laws, or maybe anti-monopoly rules?
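For reference, a minimal sketch of the robots.txt check described above, using
only the Python standard library (the user agent string and URLs here are
placeholders, not the actual crawler's):

    # Sketch: consult robots.txt before fetching a page (standard library only).
    from urllib import robotparser

    USER_AGENT = "my-small-crawler"  # placeholder user agent string

    rp = robotparser.RobotFileParser()
    rp.set_url("https://www.craigslist.org/robots.txt")
    rp.read()

    url = "https://sfbay.craigslist.org/search/apa"  # example URL to check
    if rp.can_fetch(USER_AGENT, url):
        print("robots.txt allows fetching", url)
    else:
        print("robots.txt disallows fetching", url)

Obeying robots.txt is only a client-side courtesy check; as this submission
shows, a site can still ban a crawler by IP or request rate regardless.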
======
jacquesm
Craigslist is a privately owned website, they can ban you at will.
That said, you've already given the solution to your problem yourself. Instead
of crawling craigslist, crawl google instead.
~~~
IanDrake
In the epic 3taps vs. CL battle, CL claimed copyright on all posts, which
wasn't supported by their TOS. So they updated their TOS to give them
worldwide exclusive rights to ads.
CL finally backed down when the EFF told them to, which says a lot because the
EFF is practically owned by CL.
~~~
jacquesm
> because the EFF is practically owned by CL.
Sorry?
I'm totally floored by that, did I miss something?
I know that CL donated some sum of money to the EFF but that was a settlement.
~~~
IanDrake
The EFF is an outspoken critic of the CFAA and has admonished multiple
entities for threatening criminal prosecution. When Craigslist threatened me
the first thing I did was contact the EFF. They didn't ignore me. They told me
there is nothing they were willing to to at the time, which in my eyes was
worse than just ignoring me.
Only until it became a high profile issue with 3taps did the EFF speak out
against CL.
That was about a year after I contacted them and they only did after 3taps
started getting press.
To me it was pretty clear that they didn't want to bite the hand that fed
them, until they had to or else lose all credibility.
------
IanDrake
Stay away from doing this. I built a small business doing real-time alerts on
CL and eventually got a C&D letter from their lawyers. I gave up.
I got around the IP restrictions by essentially having SETI-like software
running on my client's computers.
They will threaten you saying you're committing a felony by violating the
CFAA. They are insane, but have deep pockets.
After I shut down, others stood up against CL's ridiculous claims and lost.
Search for 3taps.com and padmapper.com to learn more.
~~~
kinkdr
Wow! Thanks for the info. So sad that they lost. How is this not illegal
thwarting of competition?
I wonder how they can claim that what you did was illegal or against their
TOS, but what search engines are doing is not.
I guess I am lucky I didn't spend too much time on this. Thanks again!
Are HN submissions dominated by a small elite of users? - jhh
I was wondering the same thing when I recently saw this blog post: https://news.ycombinator.com/item?id=6956690
I thought it would be easy to find out how karma is distributed among users using the HNsearch API.
Unfortunately, it's not that easy, because this API is not optimized for fetching massive amounts of data. What I could do was look at the 1000 users with the most karma. I separated those into ten chunks and looked at the total sum of karma per chunk of 100 users. This is the result:
http://www.chartgo.com/share.do?id=6cca3d09aa
(Data:
[ (0, 2941707),
(1, 1314760),
(2, 936226),
(3, 745944),
(4, 636533),
(5, 557441),
(6, 493996),
(7, 449999),
(8, 413963),
(9, 384491)]
)
So as you can see, the 1st chunk is pretty dominant, but then it kind of flattens out. So are HN submissions dominated by a small elite? It obviously depends on your definitions, but I guess not.
It would be interesting to have better data available. Did I miss a better way to query the API? Here's the Python script which I used to query the API: https://gist.github.com/johannes-gehrs/77a92284e3509f2ae960
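For reference, a minimal sketch (not the linked gist) of how the chunk sums
above translate into shares of the top-1000 karma:

    # Sketch: share of top-1000 karma held by each chunk of 100 users,
    # computed from the chunk sums listed above.
    chunk_sums = [2941707, 1314760, 936226, 745944, 636533,
                  557441, 493996, 449999, 413963, 384491]

    total = sum(chunk_sums)
    for i, s in enumerate(chunk_sums):
        users = f"{i * 100 + 1}-{(i + 1) * 100}"
        print(f"users {users}: {100 * s / total:.1f}% of top-1000 karma")
    # The top 100 users hold roughly a third of the top-1000 karma;
    # the remaining chunks flatten out quickly.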
======
tptacek
If you're talking about submission karma, clearly no. I follow the submissions
of a selection of HN users I've known for awhile, all of whom are karma
leaders, and their submissions are often as not buried on page 4 or 5 with 2-3
votes.
If you're talking about comment karma, clearly yes. I could write a comment
consisting of randomly chosen names from the Chicago phone book and someone
would upvote it. Name recognition has that effect, as does a small group of
people who seem to follow my comments deliberately (as I do for the list of
people in my profile).
~~~
bananacurve
grellas posts are particularly buried because they often don't subscribe to
the hivemind, but they are always enlightening.
~~~
tptacek
'grellas comments are (justifiably) routinely stapled to the top of the
threads on which they appear.
------
sheetjs
Keep in mind that karma includes both submissions and comments (unlike reddit,
where they are separate).
For comments, at least, there is a clear bias where comments from users with
higher karma tend to stay at the top of the comment page longer, regardless of
the votes for other comments. This is intentional and definitely contributes
to the effect.
~~~
interstitial
Sort of a "founder effect" built into a site for founders. How meta.
------
mathattack
I suspect that you'll find it to be some kind of power law distribution, which
implies that measures like Standard Deviation won't be so useful. A quick look
at the top 1000 that you provided suggests this might be the case. But the top
100 users don't dominate the top 1000, let alone the entire population. There
does seem to be some leveling off, though it's unclear how steady that will
be.
Net - domination is too strong a word. Dominated might imply 20% of the karma
in 20 users, and this clearly isn't the case. Even defining the Elite as 100
doesn't get us this.
------
mixmastamyk
Yes, take a look at the "new" page. On it you'll see 30 recent submissions,
most of which are just as appropriate as those on the home page. But, they
don't get any votes, why not?
I don't think it typically matters whether the submitter has name recognition
or karma, but rather how many friends and colleagues the person can rally to
their cause. All it takes is about 10 points in the first hour to get to the
front page and then it can "ride the wave" from there. This is an easy
threshold for people with large professional networks, companies, with friends
in SV et al. Many people here know each other. Yes, there are some technical
counter-measures, but there is only so much that can be done.
This is not to say that a user acting alone has no chance of reaching the home
page. However, after a few years of frequenting this site, I believe that most
of the posts hitting the front page have had a boost such as I described
above.
My personal anecdotal experience is that, over several years (also considering
prior accounts), working as a "loner", I've never been able to get more than a
few meager votes on a submission. Even if they were highly on topic,
even if I got up at 6am Pacific, supposedly the best time to do it. Maybe I'm
unlucky or my submissions suck, but I rarely bother any more.
~~~
tptacek
My impression is that suppression of voting rings, which is what you're
describing with your "group of 10 friends" example, is something that the HN
maintainers actually spend a lot of effort on; I get this both firsthand from
the maintainers that I actually know, and from the fact that I feel like I
regularly manage to do (innocuous) things that trip the detector.
~~~
judk
HN voting is a trivial example of selection bias. If frontpage was determined
by a random sampling of the user base (weighted by karma or age if you like),
instead of whoever may go looking for the post, it would be more fair. One of
the reddit devs suggested something like this recently, putting a New post on
every page.
------
nemothekid
[http://en.wikipedia.org/wiki/1%25_rule_(Internet_culture)](http://en.wikipedia.org/wiki/1%25_rule_\(Internet_culture\))
------
hnletter
A little late to this, but for my Hacker Newsletter project I keep track of
the number of times a user gets on the top three pages:
337 shawndumas
312 ColinWright
298 iProject
291 Libertatea
214 danso
197 llambda
165 Lightning
163 tokenadult
158 sethbannon
147 mtgx
144 evo_9
120 DanielRibeiro
115 denzil_correa, jonbaer
111 lelf
102 001sky
101 dmor
98 coloneltcb
97 Tsiolkovsky, craigkerstiens, dsr12, mh_
95 ssclafani
90 jseliger
89 jamesbritt
87 JumpCrisscross
85 Garbage
83 wallflower
82 luu
81 antr, protomyth
77 anigbrowl, codegeek
74 aaronbrethorst
73 jgrahamc, pg
71 uladzislau
70 sinak, steveklabnik
69 RougeFemme
68 cleverjake, derpenxyne, swohns
67 _pius
66 jamesjyu, rosser, wslh
64 Pr0, eplanit
63 Brajeshwar, sk2code, uptown
62 bane, stfu
61 teawithcarl
59 ghosh, joeyespo, ph0rque
58 gnosis
57 r0h1n, scholia
56 AndrewDucker, cpeterso, ohjeez
55 bpierre, cyphersanctus, gruseom
54 conductor
52 nikunjk
51 Suraj-Sun, kirillzubovsky
50 lispython
49 MikeCapone, WestCoastJustin, co_pl_te, duck, robin_reala
48 Anon84, ca98am79, taylorbuley
47 fraqed, morphics, nkurz, rpm4321
46 jmduke, kumarski, nreece
45 georgecmu
44 daegloe, rdl
43 donohoe, kunle, olivercameron
42 tanglesome
41 austengary, ck2, jnazario
40 interconnector, rnyman, sonabinu, wikiburner, zoowar
39 cdvonstinkpot, pwg, revorad
38 1337biz, hepha1979, jeffbarr, mhb, zdw
37 fogus, kjhughes, vectorbunny
36 Cbasedlifeform, DanBC, DiabloD3, Reltair, acremades, geetarista
35 AndreyKarpov, jessaustin, mxfh, prostoalex, raganwald, rbanffy
34 wglb
33 CrankyBear, cryptoz, hackhackhack, kposehn, nvk
32 MarlonPro, bcn, npguy, qubitsam, techinsidr
31 Ashuu, Baustin, caffeinewriter, drucken, edent, iamtechaddict, ilamont, mariuz, mikecane, rmah, rohshall, shill, whoishiring
30 eguizzo, eksith, gklein, lukashed, talhof8
29 Sami_Lehtinen, ari_elle, cwan, espeed, gwern, jacquesm, mitmads, sgdesign, superchink, velodrome, wyclif
28 Quekster, X4, clicks, esalazar, iamwil, jaf12duke, javinpaul, laurent123456, recoiledsnake, vanwilder77, zt
27 6thSigma, ValentineC, aespinoza, ananyob, anu_gupta, awwstn, bavidar, dave1010uk, ekm2, pajju, rkudeshi, sdoering, yapcguy
26 DanielBMarkham, bitsweet, brudgers, hornokplease, jcr, tptacek, whalesalad
25 EzGraphs, Hirvesh, adventured, albertzeyer, barredo, johns, jpadilla_, kevin_morrill, kmfrk, mmastrac, mooreds, mrb, niggler, ot, prateekj, saurabh, shrikant, tambourine_man, thibaut_barrere, tjaerv, usaphp
24 amerf1, coffeemug, dangoldin, neya, qiqing, signa11, twapi, w1ntermute, zrail
23 Dekku, Shivetya, duggieawesome, ingve, jalanco, mikeevans, ninthfrank07, npalli, olalonde, playhard, rachbelaid, rb2e, subsystem
22 adamnemecek, amarsahinovic, aritraghosh007, chmars, colinprince, ctoth, dutchbrit, eranation, jashkenas, jmacd, joshfraser, kine, lowglow, mike_esspe, muratmutlu, napolux, redDragon, rrhoover, tareqak, youngerdryas, yread, ytNumbers
21 T-A, akandiah, anon1385, breck, dpaluy, glazskunrukitis, gridscomputing, homakov, ianstormtaylor, jusben1369, kunai, mindcrime, msvan, onosendai, peterkchen, prajjwal, pron, schrofer, sp332, treskot, ttunguz, urlwolf, ximeng
20 BerislavLopac, BruceM, Jaigus, a3voices, abdophoto, angersock, boh, brandnewlow, chinmoy, chrisacky, cperciva, fekberg, ivoflipse, jasonshen, julien, llamataboot, mjn, mmahemoff, platz, shakes, smit, soundsop, speeder, tmoretti, tocomment, vellum
19 RobAley, SanderMak, WadeF, bencevans, bitcartel, clarkm, declan, experiment0, gasull, goronbjorn, hartleybrody, jejune06, jkuria, k-mcgrady, kevingibbon, mijustin, minimaxir, njoglekar, obilgic, ohadfrankfurt, pclark, rainmaker23, robdoherty2, rubikscube, sciwiz, stevewilhelm, tdrnd, timf, twakefield, wamatt
18 10char, ForFreedom, Pasanpr, aynlaplant, bsg75, cs702, deviceguru, dotmanish, ezl, jheitzeb, jstreebin, julien421, orrsella, pain_perdu, philfreo, polskibus, rblion, ridruejo, rmason, sew, swombat
17 MaysonL, ananddass, barmstrong, bconway, bergie, crabasa, dmmalam, dsego, dshankar, equilibrium, fejr, ferdo, gebe, jkopelman, josephby, mikeleeorg, mkrecny, nashequilibrium, petenixey, petercooper, petrel, profquail, ra, replicatorblog, songzme, spking, taytus, ujeezy, wrongc0ntinent
16 Adrock, Impossible, LiveTheDream, StavrosK, aashaykumar92, afshinmeh, ajaymehta, cdl, choult, damian2000, danboarder, daw___, electic, esolyt, followmylee, franze, giis, greenyoda, gregpurtell, hunvreus, jdorfman, jdp23, jellyksong, jorde, jordn, karamazov, linux_devil, lleims, mahmoudimus, maskofsanity, mjhea0, narad, nate, p4bl0, peter123, rfreytag, sciurus, sebg, sharkweek, spindritf, stevewillensky, tosh, tshtf, xmpir, zengr
15 Ataub24, FredericJ, MIT_Hacker, Mithrandir, adulau, apress, areski, austenallred, babawere, bhauer, bitops, bmmayer1, bradleybuda, ckelly, cooldeal, czr80, dan1234, danielpal, gits1225, grey-area, groundCode, hamidr, hawkharris, jeffreyfox, lucb1e, mayop100, mikeknoop, pdknsk, relation, robbiet480, rodriguezcommaj, rpsubhub, sasvari, sheri, smacktoward, thejteam, thinkcomp, vinhnx, xtraclass
14 0cool, Alex3917, ComputerGuru, Peroni, SparksZilla, amazedsaint, anons2011, apoorvamehta, bifrost, blacktulip, bradly, brandonb, bslatkin, c-oreills, carlosgg, charlieirish, chwolfe, codelion, davidroberts, dbaupp, dendory, dennybritz, draegtun, dsl, e1ven, edwintorok, enneff, glazemaster, grej, harryzhang, ishener, ivankirigin, japhyr, jarederondu, jkaljundi, johndcook, joshuacc, justincormack, kamaal, kanamekun, lisper, mathattack, networked, nickmain, pjvds, pragmatictester, pykello, robg, sabalaba, samspenc, sinnerswing, technologizer, ternaryoperator, thewarrior, timr, titlex, watermel0n, wlll
13 6ren, AlexMuir, CrunchyJams, DavidChouinard, Kopion, acav, alter8, asanwal, benjlang, bra-ket, brettcvz, bryanh, bussetta, chrismealy, cinquemb, cpleppert, creamyhorror, cwilson, davidw, detcader, dfc, digisth, ekianjo, endtwist, gatsby, glaugh, gulbrandr, ics, jfaucett, jlongster, kn0thing, larrys, lenkendall, libovness, lukaseder, mactitan, marban, mgunes, michaelrbock, neeee, noinput, palidanx, paulschlacter, plessthanpt05, porker, pytrin, rjsamson, rohin, rpledge, samsolomon, sebkomianos, selmnoo, sethev, smaili, srathi, t0dd, techaddict009, tellarin, tilt, tlrobinson, tomasien, trendspotter, turoczy, vyrotek, waffle_ss, waterlesscloud, weu
12 31reasons, ChuckMcM, CrazedGeek, DaNmarner, Ecio78, Fletch137, GuiA, ISL, IgorPartola, JacksonGariety, Kilo-byte, Rickasaurus, RockyMcNuts, Sealy, SteliE, aaronpk, abraham, achalkley, akos, alexholehouse, ankitoberoi, anujkk, bberson, bcl, bjansn, bjonathan, brianchu, btilly, bwertz, casca, cgi_man, changdizzle, chaz, chriscampbell, cocoflunchy, cramforce, d4vlx, daigoba66, dandrewsen, dctoedt, deusclovis, dmoney67, doh, dwynings, eladgil, enmaku, evolve2k, feelthepain, flippyhead, ghshephard, googletron, gz5, hansy, hkmurakami, hornbaker, irollboozers, jnoller, joxie, juliangamble, kareemm, kitcar, metajack, microwise, mindstab, mmariani, morisy, mpweiher, negrit, nekojima, nqureshi, paraschopra, pascal07, pauljonas, pbiggar, pearjuice, peterkelly, poissonpie, robheaton, seminatore, sherm8n, spooneybarger, stesch, taigeair, thegarside, theoutlander, tikhonj, tnorthcutt, uvdiv, vog, vu0tran, weisser, yeleti, zan2434
11 Amadou, BIackSwan, Cieplak, Swizec, TDL, adrianhoward, aelaguiz, afschwartz, alok-g, aram, benhowdle89, benwerd, cstross, darxius, davidsmith8900, dboles99, decklin, devx, diggan, dkasper, dohertyjf, eduardordm, edw519, eibrahim, ekpyrotic, elleferrer, epenn, fatiherikli, frankdenbow, friism, guiseppecalzone, haven, hudibras, iand, igrigorik, immad, infoman, jacoblyles, jamest, jbaudanza, jdmitch, jgv, jipumarino, johnjlocke, jordanmessina, jpmc, judegomila, kloncks, knes, kogir, ldayley, leephillips, lest, lettergram, luigi, marcieoum, mazsa, memoryfailure, moonboots, morefranco, nbashaw, ndesaulniers, nonrecursive, nsns, nsp, null_ptr, octopus, oleganza, patrickaljord, pavel_lishin, philip1209, rasengan, rdemmer, remi, rfnslyr, rvivek, salimmadjd, sgrove, shared4you, sillysaurus2, simonreed, spdy, speednoise, stollercyrus, swannodette, teamgb, thehodge, tippytop, tlongren, tomse, tortilla, trevin, twog, ukd1, whit537, willvarfar, wiradikusuma, xijuan, yitchelle, zachinglis, zhs
10 AirbnbNerds, Avalaxy, CaptainZapp, Danieru, JanLaussmann, MattRogish, Maven911, Mitt, SonicSoul, Xcelerate, andreiursan, andrewnez, antman, austinhallock, bdehaaff, bearwithclaws, bevenky, binarybits, blackhole, brown9-2, ceeK, chaostheory, cheeaun, chewxy, chrislloyd, clarky07, cobrausn, cyang08, dazbradbury, dcope, denysonique, dhotson, diminium, dirkk0, dmitri1981, dsowers, dylangs1030, dz0ny, edtechdev, fecak, felipebueno, fmavituna, giorgiofontana, gkuan, gregman, hboon, hippo33, hodgesmr, imkevinxu, jacobwg, jashmenn, jayadevan, jazzychad, jballanc, jf, jkarneges, josh2600, jpatokal, jvns, jwallaceparker, k33l0r, kafkaesque, kinlan, kintamanimatt, krat0sprakhar, kyledrake, lukedeering, maccman, malloc47, marojejian, matt1, maudlinmau5, mdturnerphys, mecredis, milesf, misiti3780, mjfern, molecule, mparramon, mschonfeld, nathanbarry, nileshd, ovechtrick, pavs, pearkes, petethomas, phenylene, philk10, pkrein, r4vik, radley, ramisms, robhawkes, rocky1138, rsobers, sidcool, siong1987, smanuel, stefan_kendall, sw007, swah, swampthing, swanson, sweis, techdog, thingsilearned, timjahn, tylermenezes, ukdm, will_brown, wsieroci, yesplorer, yonasb, zacharyvoase
~~~
judk
Can you post a windowed version over say the last 3 months? Some of those
accounts are very old.
~~~
duck
I'm away from my computer right now, but I'll post something like that in a
day or two.
------
peterwwillis
Look through the submission list of each user whose posts make it to the top,
or of all of the users submitting to new. You'll see whether the stories each
user posts gain lots of points; if they don't, it's all comment karma.
------
minimaxir
Any Karma measure analysis would be biased since Karma is accumulated through
both Links and Comments (in contrast with Reddit, which splits both
measurements).
"Dominated" is a flawed qualifier since Karma is not a zero-sum game.
------
maaku
There's a book you should read called "The 80/20 Principle". It might help you
understand better how this sort of thing naturally happens, and why it is to
be expected, with nothing wrong about it.
------
pearjuice
No, they are dominated by the common denominator, and that is a massive flaw
of vote-based user input. You will only see politically correct, left-wing and
in-tone liberal opinions and support everywhere. Mutual backpatting is
encouraged, and because of these traits, the circle continues.
~~~
AutoCorrect
An echo chamber depending on the first (maybe most numerous) adopters that
self-reinforces by driving away dissenting voices. I agree 100%.
Ask HN: Can I get Sir Tim Berners-Lee's vision of a writable web? - usermac
======
dreamery
He has never replied to my emails
~~~
usermac
This is not reddit. I simply want to know if I can get a writable Web page
without the hoops of login. I want an obfuscated write URL that I can expose
or send to them. That is all. - the askr
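For what it's worth, a minimal sketch of such an obfuscated write URL (a
capability URL); the domain and route here are placeholders:

    # Sketch: generate an unguessable "write" URL. Possession of the link is
    # the only access control, so it must be shared carefully.
    import secrets

    token = secrets.token_urlsafe(32)  # roughly 256 bits of randomness
    write_url = f"https://example.org/pages/{token}/edit"  # placeholder route
    print(write_url)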
Uber’s Ad-Toting Drones Are Heckling Drivers Stuck in Traffic - markhall
https://www.technologyreview.com/s/602662/ubers-ad-toting-drones-are-heckling-drivers-stuck-in-traffic/
======
loco5niner
This better not become a trend...
------
safeandsound
This sounds really annoying.
Google's Performance Review System to Work for Startups - vskarine
http://firstround.com/review/altschools-ceo-rebuilt-googles-performance-review-system-to-work-for-startups-here-it-is/
======
jevanish
I was actually really surprised by this post. While yes, he worked at Google,
and yes Google values their performance reviews, based on what I read in the
VP of People's book "Work Rules", this isn't exactly the Google system.
I think this is a pretty heavyweight system, but he's right that any
measurement is better than none. The time commitment, though, I bet will
become difficult as they scale further.
One thing that really surprised me was no mention of the Google manager
surveys they use to benchmark managers
([https://getlighthouse.com/blog/google-management/](https://getlighthouse.com/blog/google-management/)). It's a big
part of the book "Work Rules" and core to how they ensure their managers are
doing their job well...which then is key to retention.
They also didn't mention it, but hopefully they're doing one-on-ones between
those reviews, as it's the in-the-trenches work between reviews that ensures
things really get better and gives you good things to reflect on in a review.
Otherwise, it's hard to remember much more than a couple of weeks back.
Our Brains Are Not Multi-Threaded - adenadel
http://www.calnewport.com/blog/2019/09/10/our-brains-are-not-multi-threaded/
======
jenIsOnHN
Email, phone calls, and even IMs had me thinking I might have some multi-
threading capability, and then came Slack to show me reality! (Nothing against
it, I just have to limit my use of it since it really makes it hard for me to
focus on tasks requiring more cognition.)
Ask HN: How to verify customer change of address for a defunct email address? - mingabunga
Anyone have any good ideas for verifying a customer who asks to change their old email address to a new email address, where the old email address no longer works?
I'm worried that it would be an easy way for someone to gain access to someone else's customer account/records.
======
arkitaip
Start by sending an email (without any sensitive info) to the old account and
see what happens.
Compare the IP address from when the account was created / last accessed with
the one for the password reset request. IP addresses are easy to spoof though
so even if the IP addresses match you should be very cautious.
If you have the user's physical address, you could send a letter with an
authentication code.
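A minimal sketch of the mailed authentication code idea (the code length and
format here are arbitrary choices, not a recommendation):

    # Sketch: issue a short one-time code to print in the letter, then verify
    # the customer's submission in constant time.
    import hmac
    import secrets

    def issue_code(length=8):
        # Digits only, so the code is easy to copy from a printed letter.
        return "".join(secrets.choice("0123456789") for _ in range(length))

    def verify_code(expected, submitted):
        # hmac.compare_digest avoids leaking information through timing.
        return hmac.compare_digest(expected, submitted)

    code = issue_code()
    print("mail this code:", code)
    print("match:", verify_code(code, code))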
------
kogir
While I worked on HN I did this for a few users. I required that the account
they wanted to update positively identify them (via comments they'd made,
items in their profile, etc), and that they be able to prove they were the
same person.
Definitely something we handled on a case by case basis and declined to do if
we weren't comfortable.
------
pmontra
You need information about them that you can cross-check with what's available
to you. If you only have a name and email, sorry.
~~~
mingabunga
Thanks, seems about the only way.
------
brudgers
If it's a serious concern, then process the issue manually and figure out what
is going on rather than trying to automate it. Handle it on a case by case
basis.
Alternatively, there are many situations where a new email address could
simply mean the site requires creating a new account.
LIGO black hole echoes hint at general-relativity breakdown - privong
http://www.nature.com/news/ligo-black-hole-echoes-hint-at-general-relativity-breakdown-1.21135
======
vanderZwan
> _The echoes could be a statistical fluke, and if random noise is behind the
> patterns, says Afshordi, then the chance of seeing such echoes is about 1 in
> 270, or 2.9 sigma. To be sure that they are not noise, such echoes will have
> to be spotted in future black-hole mergers. “The good thing is that new LIGO
> data with improved sensitivity will be coming in, so we should be able to
> confirm this or rule it out within the next two years.”_
Can I take a moment to commend the level-headed non-hype of this paragraph? It
gives the impression that their first priority is finding the "truth" (I know,
I know; I'm using it as a shorthand), not whatever they _want_ to be
confirmed. Gives the research much more credibility.
I mean, I know it's Nature so we should expect it, but it's still nice to see.
~~~
wyager
It's actually a bit hyped for particle physics. It ignores the fact that
you're very likely to see a lot of "significant" effects in physics because
there are tens of thousands of data analyses performed every year, so we're
bound to find some chance occurrences at this significance level. See the
diphoton excess a few years ago.
This is why the particle physics community has a very strict unofficial
standard of 5 sigma for significance. People don't generally publish "serious"
papers at 3 sigma.
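For reference, a quick sketch of how those odds map onto sigma levels, using scipy's normal survival function (the 1-in-270 figure quoted in the article appears to be the two-sided tail at 2.9 sigma):

    from scipy.stats import norm

    p_two_sided = 2 * norm.sf(2.9)        # two-sided tail at 2.9 sigma
    print(p_two_sided, 1 / p_two_sided)   # ~0.0037, i.e. about 1 in 270

    p_discovery = norm.sf(5)              # one-sided tail at 5 sigma
    print(p_discovery, 1 / p_discovery)   # ~2.9e-7, i.e. about 1 in 3.5 million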
~~~
roywiggins
We've only observed a small handful of black hole collisions though so seeing
something at 2.9 sigma would be somewhat more surprising, right? Definitely
not rising to the level of a discovery yet of course.
~~~
maxander
Only a small handful of observations (only one, last I heard, but I may be out
of date), but lots and lots of _analyses._ And for every scientist that has
their own model and does their own data processing, there's a chance that the
model lines up with some arbitrary noise in the data.
~~~
lamontcg
You also have to add in that once these experimental results were released
that a hundred or so theoretical physicists immediately started working on
massaging the data into supporting their pet theories. After every anomalous
result there are immediately hundreds of published papers by people simply
trying to be the first to publish, in case their idea happens to pan out. It is a
kind of shotgun approach to winding up being the next Dirac discovering the
positron from math or whatever.
Since there are 99 other PhDs who probably looked at this data, found their
pet theories didn't match it, and haven't been able to publish, you have
to account for that filtering effect: this was the 1-in-100 paper that
managed to match the data. Adding that "Look Elsewhere Effect" to the 2.9
sigma would push the global significance (in the literal sense of global --
meaning all the research teams across the whole world) of this result down
into meaninglessness.
Of course it's likely that any discovery would start out looking like something
on the edge of significance exactly like this. The safe bet is that this
disappears, but all we can do is wait for more data to come in and see if the
significance improves or disappears.
And I do really hope that someone finds something like this via the LIGO data.
I'm convinced there's something very interesting out there to find, and sooner
or later it should pop up experimentally and shake up our model of the
universe.
~~~
jdmichal
Yes, this is basically a distributed version of the issues discussed in
psychology regarding researcher degrees of freedom with a single data set. If
you throw enough models at the data, one of them is bound to stick, whether it
is predictive or not.
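As a rough illustration of that multiple-comparisons effect (the numbers are assumptions, not from the paper): if 100 independent noise-only analyses each have a 1-in-270 chance of a fluke at this level, the chance that at least one of them shows it is already large:

    p_single = 1 / 270      # the per-analysis fluke probability quoted in the article
    n_analyses = 100        # assumed number of independent teams/templates

    p_at_least_one = 1 - (1 - p_single) ** n_analyses
    print(p_at_least_one)   # ~0.31, so roughly a 1-in-3 chance of such a fluke somewhere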
------
zeroer
This is super exciting stuff! We know that the two most accurate models of the
physical world, Quantum Mechanics and General Relativity, contradict each
other so at least one, and probably both, are approximations to the real laws
that govern our universe. Since QM and GR disagree about what happens for
small massive objects, and in particular black hole event horizons, this is a
place to look for divergence from existing theories. If these echoes hold up
under repeated measurements, it could be one of the most consequential
measurements of this century. This is another example of how taking
measurements to verify a theory you think you know can lead you in completely
unexpected directions.
Though, for now, the LIGO team is apparently saying that these results could
be the result of noise which would occur 1 out of 270 times. That's not strong
enough evidence (in my mind) to overcome the overwhelmingly likely prior that
General Relativity is correct. In time, we'll see.
Also, the article mentions that LIGO has witnessed 3 black hole mergers. Last
I heard LIGO had only witnessed 2.
~~~
nonbel
>"Quantum Mechanics and General Relativity, contradict each other so at least
one, and probably both, are approximations to the real laws that govern our
universe. [...] overwhelmingly likely prior that General Relativity is
correct"
If you think both QM and GR are likely incorrect, then why do you use "a
overwhelmingly likely prior" that GR is correct?
~~~
raattgift
Neither is "incorrect"; the Standard Model and General Relativity are two of
our best physical theories in that they both accord entirely with
observational and experimental evidence to date.
Either or both may be _incomplete_ , however. Correctness and completeness of
any theory in mathematical physics are essentially orthogonal. You can have a
complete theory that is just wrong, for example.
As I wrote a bit earlier in this thread, the most straightforward approach to
quantizing General Relativity fails in strong gravity. Additionally, the
classical field theory that is General Relativity is defined on a smooth
manifold and yet so far we have been unable to escape the conclusion that some
systems of mass-energy inevitably produce a non-smooth discontinuity. A
completion of classical General Relativity requires the smoothing of these
regions. Sharpening this, the problem with GR is the prediction of a
gravitational singularity; if singularities are physical at all (even if they
are in a region of spacetime that is inaccessible outside event horizons),
then General Relativity is incomplete in its own terms.
The Standard Model as a paradigm of quantum field theory, on the other hand,
is defined against a flat spacetime and thus relies on the result from General
Relativity that the flat spacetime metric is induced on the tangent space of
every point in a smooth spacetime. So if GR is incomplete, so is the Standard
Model, in its own terms. (This is not just an academic point; any theory of
gravity that does not reproduce the Poincaré invariance of flat spacetime in
the energy scales of the Standard Model has a terrible _correctness_ problem.)
Additionally, the Standard Model is not especially well-defined at GUT energy
scales. Additionally, the Standard Model does not describe the whole of the
non-gravitational content of the universe; for example, it is silent on dark
matter.
The Standard Model is highly correct, however, in the limits where it is
effectively complete. It's a pity it has so many free parameters that have to
be determined by experiment.
Likewise, General Relativity is both highly correct in the limits of present
observability, and it is complete in its own terms _if_ one admits the
possibility that gravitational singularities only arise in our idealized
models and that, for example, there are no exactly Schwarzschild black holes
anywhere in the past, present or future of our universe. (One has to show
that, and also that there are no other physically realizable systems of matter
that can generate non-smoothness in our spacetime. That's not an easy ask.
Although General Relativity has only one of the free parameters complained
about in the previous paragraph, it doesn't offer much guidance about how to
show that you can't actually generate a low-Q Kerr-Newman metric in reality,
and worse, some of that guidance must come from the high-energy behaviour of
matter fields -- we can only be as complete as the Standard Model right now.)
~~~
lambdadmitry
Posts like this is why I read HN. Thanks a lot! :)
------
ThePhysicist
Personally, I would not be surprised if we discover that our understanding of
general relativity is wrong for extreme values (i.e. very high mass
densities). The whole problem of dark energy and dark matter -which has failed
to show up in any conceivable form so far- also gives reason to doubt the
validity of our current theory of gravitation.
I think we're in a similar situation to the end of the 19th century, when
many physicists thought that everything that could be discovered already was,
apart from some "edge phenomena" that would need to be resolved somehow using
the current theories. In the end, these edge cases turned out to be the first
hints of some completely new theories that dramatically improved our
understanding of nature. I think that gravity and quantum mechanics are due
for a similar change, and in the coming decades we might just get the data
that we need to make this change happen.
I also have difficulties "buying" the current theory of black hole physics,
especially the concepts of an event horizon and the infinite mass density, as
well as the problems which arise from them (e.g. black hole energy evaporation
through virtual particle generation at the horizon). And as previous theories
of gravitation have broken down at points of extreme value (high energy, high
speed), I think black holes are a hot candidate for breaking general
relativity.
~~~
raattgift
There's plenty of hot dark matter coursing right through you right now !
Fermi -> Wang Ganchang -> Harrison, Kruse & McGuire took a while. I wouldn't
expect a quick detection of something with the properties of a WIMP, as I
would expect any such particle to be even harder to detect than a neutrino,
especially if it doesn't feel the weak nuclear force.
I don't know why you think that dark energy has failed to show up in any
conceivable form -- how do you explain the cosmological redshift without it?
Dark energy in its simplest form is just the cosmological constant, and can be
an inertial effect.
In any case, even if the concordance cosmology is simply _wrong_ , that does
not mean that GR is incorrect as much as we are wrong about the mechanisms
that generate the metric (Afshordi, one of the authors of the paper at the
top, has proposed non-universal coupling to the single metric of GR), or
alternatively we are wrong about the way we choose and stitch together metrics
(i.e., we're misusing GR in a way that introduces serious errors at long
length scales).
Do you really think there are working scientists who think that there's
nothing more to discover? Conversely, are there many who deny that the huge
preponderance of evidence we have so far favours the Standard Model and
General Relativity? Even if we "demote" SM and GR to effective field theories,
the effective limit of each is very nearly everywhere readily accessible,
isn't it?
Buying the BH singularity would be, I think, a pretty extreme position. Every
viable post-GR effort I know about is to some extent focused on abolishing
singularities somehow. (You could alternatively keep them always hidden and
resolve things like BH thermodynamics; if you always keep information locked
up in another region of spacetime -- behind a horizon -- there's no
information loss problem to consider, and you can "cut" singularities out of
the manifold recovering everywhere-smoothness. But black holes might evaporate
completely in the far de Sitter like future.)
Not buying an event horizon in a system of local physics with a maximum
propagation of local state from one point to another seems even more extreme.
The existence of a maximum local speed -- whatever it is, it could be much
faster than light -- sets the slope of a nonempty open convex cone of tangent
vectors (a causal cone at each point for the field-values at that point) which
in turn lets us fix a first order quasilinear system of PDEs admitting a
hyperbolization, and in that you can always find an observer that sees an
event horizon.
The formation of the BH creates a dynamical spacetime with an acceleration
between observers before and after the collapse, and that alone is sufficient
to produce an event horizon.
Abolishing 'c' (as a general free parameter defined at every point; the
definition can even vary by location in spacetime) seems a lot harder to
swallow than abolishing event horizons. If you accept 'c', then while abolishing
Schwarzschild event horizons is pretty easy (nonzero J or not-always-zero Q at
all physical compact dense objects, for example), abolishing _all_ event
horizons requires a lot of contortions to avoid immediate conflict with local
experiment, much less astrophysical observation.
~~~
ThePhysicist
> There's plenty of hot dark matter coursing right through you right now !
That statement is exactly the problem as I see it, as dark matter is an
attempt to save an existing paradigm using a trick that makes use of unknown
but conceptually understandable matter.
The same was and is true for Einstein's cosmological constant: It's a hack
that was necessary to make a theory match with the observations.
Introducing hypothetical/invisible matter to make a theory fit observations
does not mean that this matter really exists.
I did not say that scientists think that there is nothing more to discover,
just that there is a tendency to try to fix up existing theories instead of
accepting that they might be wrong. I'm no expert in particle physics or
relativity (my field is quantum mechanics), so I'm not able to judge the merit
of different theories involving dark matter, I'm just not convinced that dark
matter / dark energy is real. If anyone shows me compelling experimental
evidence I'll be happy to change my mind.
So far we haven't seen any convincing arguments for the existence of dark
energy or dark matter though, and I think there's a chance that they end up as
the 21st century equivalent of the "ether".
~~~
ggreer
I'm pretty sure raattgift was referring to neutrinos when he said hot dark
matter was "coursing right through you". He wasn't assuming the existence of
any speculative form of dark matter.
~~~
raattgift
Yes. "Hot" because neutrinos move quickly compared to the speed of light,
"dark" because they do not feel electromagnetism, and "matter" because they
couple to the metric.
They explain the anomalous momentum in beta decays, among other things, and
are still difficult to detect.
To explain the anomalous momentum we infer around large scale structures at z
<< 1, it's pretty reasonable to consider neutrinos or neutrino-like particles
that are "cold" \-- moving slowly compared to the speed of light, thus more
likely to "hang around" in a region of spacetime instead of quickly running
away to infinity. Although they interact very weakly with matter, they still
impart momentum, so hot dark matter would tend to smear apart gas clouds
rather than encouraging them to collapse into denser objects like stars.
Likewise, it is perfectly reasonable to search for them in ways analogous to
how the neutrino itself was searched for experimentally and observationally,
and like with the first detection of the neutrino, it is liable to take time
to detect or let various non-detections exclude all the regions of the
particle mass vs nucleon cross-section parameter space.
Moreover, the search for this sort of cold dark matter does not preclude
concurrent searches for other possibilities.
So I can't agree with ThePhysicist that there is a problem here, other than
that there is apparently a communications gap that affects even people with
backgrounds in quantum mechanics.
------
lamontcg
The breathlessness of this article is fairly annoying.
LIGO wasn't really setup to confirm GR around black holes. It was designed to
study highly energetic, high-curvature gravitational phenomena where it would
be expected that there might be deviations from GR. Measuring deviations from
GR predictions is exactly why you'd build the experiment and isn't "ironic" at
all (and not even in the Alanis Morissette sense, since finding something new
would be more like having a party on your wedding day rather than raining on
it).
GR is also fully expected to break down at the central singularity of a black
hole. The curvature of space-time becomes infinite there with infinite force.
At the very least it's expected that quantum gravity would smear this out.
The problem of black hole entropy at the event horizon of the black hole has
also been known for decades and is one of the drivers behind doing research
like LIGO. The "firewall" problem is recently all the rage in the west coast
theoretical community, but its been known for some time that we can't make
sense of black hole entropy entirely classically with GR, so that finding non-
GR effects near the event horizon is at least hoped-for, if not expected, and
LIGO is precisely the kind of experiment that could shed light on that.
It's legitimately very exciting, but it's the result of methodically grinding
away at a very hard problem for decades.
------
maverick_iceman
2.9 sigma is hardly evidence of anything in fundamental physics. There was a 4
sigma evidence of diphoton excess from ATLAS and CMS last year which went away
this year. 3-4 sigma discrepancies come and go. It's not for nothing that
physicists have the discovery criteria set at 5 sigma.
What's more, one should be extremely skeptical when observations seem to
violate long held physical theories. The superluminal neutrinos from OPERA
ostensibly had >5 sigma evidence but nobody (correctly) took it seriously as
it violated special relativity. Unsurprisingly, it was ultimately traced to a
loose GPS cable.
------
noobermin
Minor comment from another physicist here, not in this field but from what my
friends from this field say, most people expected hints of quantum gravity to
come specifically from black holes, so if there is anything new to be learned
about GR's limits, looking at black holes is the right place to look.
------
m_mueller
I've had this idea in my mind how Black Holes could be connected to universe
generation. It came about when I learned that the known universe would be a
black hole if its mass was concentrated in the center, i.e. its size is about
the same as the event horizon for such a mass.
Thinking backwards, obviously the universe at some point would have been
described as a black hole by GR. Then of course spacetime expansion comes into
play, that somehow makes it into not-a-black hole.
So here is the idea: What if a Big Bang is exactly what happens when matter
falls into itself until the original spacetime continuum breaks? I.e. the
energy of the original structure forms and gets linked into a new spacetime
continuum - part of it as dark energy that expands the new spacetime, part of
it as normal energy and matter.
Is there anything we know makes my idea impossible? If it were true, would
there be a chance that we could combine our empirical knowledge of the Big
Bang with this new empirical knowledge of gravitational waves to come up with
a testable unified theory (i.e. Quantum Gravity)?
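As a back-of-envelope check of the opening claim, here is the Schwarzschild radius for a rough literature estimate of the observable universe's mass (the 1.5e53 kg figure is an assumption, not from the article):

    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8     # speed of light, m/s
    M = 1.5e53      # kg, rough estimate of the observable universe's mass (assumption)

    r_s = 2 * G * M / c ** 2
    print(r_s)      # ~2.2e26 m, within an order of magnitude of the observable
                    # universe's ~4.4e26 m radius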
~~~
pdonis
_> the known universe would be a black hole if its mass was concentrated in
the center_
The universe doesn't have a "center". The universe did have a much higher
density right after the Big Bang, but it was expanding rapidly; that's why it
was (and is) not a black hole.
_> Is there anything we know makes my idea impossible?_
Yes, the fact that it's based on a misconception about the universe's
spacetime geometry. See above.
There are certainly "bounce" models being considered for what preceded the Big
Bang (although they're by no means the only models being considered). But they
don't work like what you are describing.
~~~
m_mueller
My point is that the way the universe works, i.e. spacetime expansion,
inflation and acceleration (dark energy) could all be governed by processes
inside a black hole's singularity - something we afaik don't have a good model
of yet. It's only a black hole from the reference point of the parent universe.
I don't mean the old bounce model, more like bubbles around a water hose that
get smaller the farther they are from the source - i.e. a stellar black hole
creates a mini universe through its own spacetime rip.
~~~
pdonis
_> My point is that the way the universe works, i.e. spacetime expansion,
inflation and acceleration (dark energy) could all be governed by processes
inside a black hole's singularity_
The singularity doesn't have an "inside". See below.
_> something we afaik don't have a good model yet_
The models that are being looked at get rid of the singularity altogether.
They don't try to model it as being made up of internal parts.
_> a stellar black hole creates a mini universe through its own spacetime
rip._
Some physicists have considered models in which black holes give birth to
"baby universes" (Hawking and Lee Smolin are two that come to mind). But these
models don't "rip" spacetime; they remove the singularity, which in the
standard classical GR model is just a spacelike surface--a moment of time--
that represents a boundary of spacetime, and instead just extend the spacetime
further on, into the spacetime of the new universe.
------
yk
Going quickly through the awesomely titled paper [1], they fit a template to
the data and obtain something like 2.9 sigma for their best value, without an
obvious way of dealing with the look-elsewhere effect. On the other hand,
this is probably the window on nature that is worst understood, we understand
gravity at solar system field strength and distances very well, we have great
data from particle physics, but until last year our only evidence for the high
field regime of gravity came from pointing telescopes toward astronomical
objects - and note that telescopes work electromagnetically; they are using
the wrong force.
I think this is exciting, but only the first step. With this, one can deal
with the look elsewhere effect by pointing to this paper and using their
analysis, but I wouldn't think of this by itself as a hint towards deviations
from general relativity.
[1] Abedi &al. "Echoes from the Abyss: Evidence for Planck-scale structure at
black hole horizons"
[https://arxiv.org/abs/1612.00266](https://arxiv.org/abs/1612.00266)
------
nonbel
>"if random noise is behind the patterns, says Afshordi, then the chance of
seeing such echoes is about 1 in 270, or 2.9 sigma. To be sure that they are
not noise, such echoes will have to be spotted in future black-hole mergers."
We are getting closer and closer... So finally we see a correct interpretation
of a p-value in the media, but the connection to the following sentence is not
clear so I am not sure the meaning was really understood.
How does spotting more such echoes allow us to "be sure they are not noise",
and how does this relate to that 1/270 number?
If the probability of such an observation was 1/1.7 million assuming a random
noise model (rather than 1/270), would that mean we could "be sure it was not
noise"? Shouldn't that depend on how well the observations could be fit by
alternative models?
~~~
Natanael_L
You throw a die 3 times. It always shows 6. How do you know the die is
loaded, and that it wasn't a fluke? Repeat the experiment, and see if it
starts looking random (pattern disappears) or if the pattern is strengthened
(always saying 6).
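Putting rough numbers on that analogy (simple arithmetic, nothing more):

    p_three_sixes = (1 / 6) ** 3      # three sixes in a row from a fair die
    print(1 / p_three_sixes)          # ~216: suspicious, but flukes happen

    p_nine_sixes = (1 / 6) ** 9       # after repeating the experiment twice more
    print(1 / p_nine_sixes)           # ~10 million: the fluke explanation collapses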
~~~
nonbel
Also, I couldn't get the original documents at the time, but you reminded me
of this:
>'The simplest assumption about dice as random-number generators is that each
face is equally likely, and therefore the event “five or six” will occur with
probability 1/3 and the number of successes out of 12 will be distributed
according to the binomial distribution. When the data are compared to this
“fair binomial” hypothesis using Pearson’s χ² test without any binning,
Pearson found a p-value of 0.000016, or “the odds are 62,499 to 1 against such
a system of deviations on a random selection.”'
[https://galton.uchicago.edu/about/docs/2009/2009_dice_zac_la...](https://galton.uchicago.edu/about/docs/2009/2009_dice_zac_labby.pdf)
The point is you will always find deviations (with extremely low p-values) if
you look hard enough. It is about collecting data as carefully as possible,
and determining which model fits best, not which fits perfectly.
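A sketch of that kind of test in Python with scipy; the observed counts below are invented for illustration and are not Weldon's actual data:

    import numpy as np
    from scipy.stats import binom, chisquare

    n_throws = 26306                 # commonly cited size of Weldon's experiment
    k = np.arange(13)                # possible counts of "five or six" out of 12 dice
    expected = binom.pmf(k, 12, 1 / 3) * n_throws   # the "fair binomial" hypothesis

    # Invented observed counts, skewed slightly toward more fives and sixes
    # (p = 0.3377 is just an illustrative value, not the historical result).
    observed = np.round(binom.pmf(k, 12, 0.3377) * n_throws)
    observed[-1] += n_throws - observed.sum()       # make the totals agree

    stat, p = chisquare(observed, f_exp=expected)
    print(stat, p)                   # a tiny p-value means "fair" fits the counts poorly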
~~~
GFK_of_xmaspast
> The point is you will always find deviations (with extremely low p-values)
> if you look hard enough
I don't know how you can get that point from the article instead of 'an
1894-era die is biased but you need a lot of statistical power in order to see
that'.
~~~
nonbel
I think the lesson is clear... the more messed up your methods, the easier it is
to see a deviation (i.e. the historical vs modern experiment). Also, where do you
get this:
>"you need a lot of statistical power in order to see that"
------
daxfohl
"The most exciting phrase to hear in science, the one that heralds new
discoveries, is not 'Eureka!' but 'That's funny...' " \-- Isaac Asimov
------
wfunction
Does any layman expect GR _NOT_ to break down? To me it's intuitively only
correctly describing emergent phenomena in the limiting case of large scales;
it's bound to be less accurate than something that is correct on small scales.
------
hanso
They're hopelessly out of their depth. This kind of talk makes zero sense
------
WhitneyLand
No aspersions here. I'm keeping the faith, but can anyone recall what these
allude to?
1) Utah has solved the energy crisis on a table top with deuterium
2) That bump in the collider data is looking pretty odd
3) Remind your child to chelate if the autism acts up
4) Wow those neutrinos are moving so fast
5) Bigotry can stop if we'd go door to door and talk about it
6) Arsenic can kill but it enables growth for at least one family
7) This theory will be perfect if we get rid of Λ
In all fairness, this topic is a little different because we know for sure
something big has to happen eventually to reconcile QM/Gravity.
------
guard-of-terra
It's weird that they only had the idea of the firewall in 2012. I had this exact
idea maybe ten years prior, while I was a school student. A fairly obvious one
if you think about it - some photons will orbit the black hole.
~~~
TheOtherHobbes
It's potentially more complicated than that.
Someone on Reddit asked a brilliant question a while back - what happens to
quantum fields at the event horizon?
In QFT, fields are everywhere. But to support a field, you need a mechanism
that allows causal propagation - which is exactly what isn't allowed across an
event horizon.
So at the very least you have a discontinuity where three and possibly all
four fundamental forces stop working, and which is separate to any
hypothetical relativistic singularity.
Whatever is left is going to be some kind of unimaginably weird sub-quantum
soup.
I don't know if that's the same firewall that was invented in 2012. But the
takeaway is that relativity isn't complete enough to model black holes. You
absolutely need to include quantum effects - and when you do, things get very
strange indeed.
~~~
danbruc
_In QFT, fields are everywhere._
The fields in quantum field theory are mathematical tools, they are not
physical entities.
_But to support a field, you need a mechanism that allows causal propagation
- which is exactly what isn 't allowed across an event horizon._
But that is only a one-way thing - the future light cones of events inside the
event horizon are contained inside the event horizon but the future light
cones of events outside the event horizon certainly overlap the inside of the
black hole.
~~~
vanderZwan
> The fields in quantum field theory are mathematical tools, they are not
> physical entities.
Still, they only maintain physical relevance as long as they are continuous,
no? Otherwise you literally have a break in reality.
~~~
guitarbill
> a break in reality.
Such as a singularity (e.g. gravitational)? I think in Physics (just as when
analysing functions), the interesting things happen when you approach such
limits.
| {
"pile_set_name": "HackerNews"
} |
Hong Kong protesters assaulted and set man on fire - bgee
https://www.scmp.com/news/hong-kong/law-and-crime/article/3037243/hong-kong-father-two-burned-alive-after-chasing
======
ksaj
Unless they catch the perpetrators, we'll probably never know what the MO was.
They don't get into how they identified these people as Hong Kong protesters,
since they could just as easily have been fanning the flames in classic black
bloc operations style, to make the protesters look worse in the media and to the
general public, or to pre-justify an upcoming use of force.
| {
"pile_set_name": "HackerNews"
} |
The Messy Mediterranean - BerislavLopac
https://sovereignlimits.com/blog/the-messy-mediterranean
======
NKosmatos
Can we please leave politics out of HN? Being Greek I don’t want to see a
conflict starting here. With that being said, it would be best if all nations
signed and followed UNCLOS [0] guidelines and adopted a strict equidistance-
based maritime boundary for all sea disputes.
[0]
[https://en.wikipedia.org/wiki/United_Nations_Convention_on_t...](https://en.wikipedia.org/wiki/United_Nations_Convention_on_the_Law_of_the_Sea)
~~~
gus_massa
It looks quite neutral for me, perhaps because I'm from Argentina and we don't
have any skin in this game. (We have our land and maritime border conflict
too. It's a sensitive topic in all the countries. [1])
The article helped me to understand the recent problem. My fast and possibly
inaccurate takeaway is:
_The land shore of Turkey has a lot of nearby Greek islands. If there is oil
nearby, that will surely cause a problem._
[1] Let's pick an easy one, the land frontier of Argentina and Chile. The
Andes mountain range is in between. One method to select the border is to pick
the highest mountains; another method is to look at the water basins. They are
very different, so after a lot of problems we selected a mixed method and then
added some arbitrary borders for a smaller section. Now it is quite settled,
probably because there is not too much oil under the area. Other borders are
too boring or too polemic.
~~~
ivanhoe
It's complicated to rely on natural barriers as they change too, for instance
some parts of the border between Croatia and Serbia were drawn to follow the
Danube river, but the river bed is ever-changing as water finds new ways -
creating meanders and river islands - so now we have some areas in-between the
two countries where ownership is disputed. Some guy even tried to declare a
new state in one of those areas
[https://en.wikipedia.org/wiki/Liberland](https://en.wikipedia.org/wiki/Liberland)
~~~
emteycz
The Gornja Siga area (where Liberland is) did not appear over time as the
river changed, it's that the Serbians are explicitly using the historical
river while the Croatians are explicitly using the contemporary river, which
results in Serbia not claiming the piece of land and Croatia declaring it
belongs to Serbia.
~~~
ivanhoe
Actually it's the opposite, and the difference between the two is because the
river bed has changed over the time naturally and by human engineering. To
quote Wikipedia: "Serbia holds the opinion that the thalweg of the Danube
valley and the centre line of the river represents the international border
between the two countries. Croatia disagrees and claims that the international
border lies along the boundaries of the cadastral municipalities located along
the river—departing from the course at several points—reflecting the course of
the Danube which existed in the 19th century before meandering and hydraulic
engineering works altered its course."
------
StavrosK
Google cache:
[http://webcache.googleusercontent.com/search?hl=en&q=cache%3...](http://webcache.googleusercontent.com/search?hl=en&q=cache%3Ahttps%3A%2F%2Fsovereignlimits.com%2Fblog%2Fthe%2Dmessy%2Dmediterranean)
I was hoping we'd gotten past the era where blog articles stopped loading
under a bit of traffic, alas.
~~~
rsecora
Yep, HN hug of death is still real.
------
anonu
This is a very real problem in the Eastern Med. as the article points out.
From a Lebanon perspective, people have been talking about tapping into these
offshore energy reserves for a decade+. But the inside conversation is that
the Lebanese people would never see a penny from any revenues. They would
simply get funneled into various politicians pockets.
It seems like many countries would be far better off without this.
------
082349872349872
TIL Cyprus still has two UK bases. That might help complicate things.
~~~
toxicFork
UK, complicating things since 19... wait... 18... 1700s?
~~~
082349872349872
Wikipedia says 1688-1914. I would argue that brit foreign policy makers
weren't ready to pass the baton until 1957 made it obvious.
[https://en.wikipedia.org/wiki/Hegemony#15th–19th_centuries](https://en.wikipedia.org/wiki/Hegemony#15th–19th_centuries)
[https://en.wikipedia.org/wiki/Suez_Crisis#Financial_pressure](https://en.wikipedia.org/wiki/Suez_Crisis#Financial_pressure)
| {
"pile_set_name": "HackerNews"
} |
U.S. Commercial Computing Device Sales Set to End 2013 with Double-Digit Growth - BvS
https://www.npd.com/wps/portal/npd/us/news/press-releases/u-s-commercial-channel-computing-device-sales-set-to-end-2013-with-double-digit-growth-according-to-npd/
======
mpweiher
...in "commercial markets". As far as I can tell, that does not include
consumer and would make the headline extremely misleading.
So a bunch of companies decided to buy ChromeBooks, presumably without asking
their employees.
~~~
tristanz
I have no idea if 21% is correct, but on Amazon Chromebooks are 4 of the 5 top
selling laptops. Something is going on.
[http://www.amazon.com/Best-Sellers-Electronics-Laptop-
Comput...](http://www.amazon.com/Best-Sellers-Electronics-Laptop-
Computers/zgbs/electronics/565108/ref=zg_bs_nav_e_2_541966)
------
gkoberger
Bought Chromebooks (and Chromecasts) for all my tech-illiterate family members
in order to cut down on having to provide tech support, and it's been amazing.
No issues so far, and everything just works.
I think that actually is just fine for most companies -- less tech support,
and does everything they need (collaborative documents, email, web searches,
etc).
~~~
gregwebs
I suggested that a 65-year-old I knew, who frequently had computer issues (and
asked me for support) but only surfed the web and used Gmail, consider getting
a Chromebook. Several months after the purchase I was thanked and told it was
working out great. E-mail is an important point: typing anything more than a
sentence on an iPad isn't very nice.
My suggestions to consider using OS X rather than Windows have never worked
because there are no low-cost OS X options.
------
DonGateley
The floodgates will open when you can use one as the human interface to your
Windows system in the cloud that you've uploaded all your apps and data to and
can throw out all its local manifestations once and forever.
There's no longer any excuse or need for OEM Windows machines. Same is true of
Mac systems but their walls will take longer to tear down. Nonetheless, fall
they will.
Some day historians will only scratch their heads about this long detour away
from thin clients that we've suffered for around 40 years and the phenomenon
will become a rich research area for behavioral psychologists.
~~~
YokoZar
Alternatively, computing power will get so cheap and easy to distribute that
myriad small devices will be fetching saved data from the cloud and running
the app locally with all the better latency that entails.
~~~
dredmorbius
Much as I appreciate a fat (Linux) client, the challenge is less device cost
than administration.
The Android + Cloud model gives you a highly uniform user client which access
all the fiddly bits in the Cloud. Until there's an absolutely bulletproof way
of providing those services at the individually-provisioned level, that's
going to win out for the vast majority of the public.
I'm not saying that the services have to remain as centralized as they are
presently -- with Google owning everything (though this provides certain
efficiencies). A more federated model in which there are multiple app and/or
service providers to choose from _could_ come into being, and the present
surveillance environment might help such an environment emerge, but the
efficiencies of size and scale (as well as the very thin margins of such
services) make this a stretch.
I've been watching a number of projects, most notably FreedomBox, for some
time. They're pretty much precisely what you've described: cheap, self-
contained, self-provisioning systems based around Linux (usually Debian and
its excellent provisioning system), but there's been little noise out of the
projects and progress seems slow at best.
If I could run my own servers (on an existing high-speed and highly reliable
connection) without much hassle, it really would be quite attractive.
------
roma1n
Great! Now, where are the mid-range chromebooks? With a decent display and
some additional processing power?
~~~
arianvanp
Check out the Pixel[0]. It has a great display but is horribly overpriced in
my opinion
[0] [https://www.google.com/intl/en/chrome/devices/chromebook-
pix...](https://www.google.com/intl/en/chrome/devices/chromebook-pixel/)
~~~
MAGZine
Why is the pixel "horribly overpriced"? The hardware is better than a MBA,
imo. If there were drivers for win 8, it would be my next notebook.
~~~
nerraga
I think you'd be hard pressed to run Win8 on a _32 GB_ drive with 4 GB of RAM
(although, admittedly, my Surface Pro runs Win8 adequately on 4 GB, so it's the
tiny drive that will get you).
If you compare the base $1,299 (Wi-Fi) Pixel to a $1,299 MBA (13") they look about
equal as far as I can see.
The Pixel gives you a higher resolution display (2560x1700 vs 1440x900) and a
slight bump in processor speed (i5/1.8Ghz vs i5/1.3GHz). The MBA also has an
intel HD Graphics 5000 vs the Pixel's HD 4000.
The drawback to the Pixel imo is that there is no 8GB ram option _and_ it's
stuck with a 32 GB drive (even if it is an SSD). Also, the MBA has a longer
battery life (8h vs 5h) and USB3 ports instead of USB2.
It's actually the battery life that pushes me towards the MBA. I picked up an
XPS13 "Sputnik" in an effort to move away from Apple hardware and was
immediately disappointed with the battery life. The Pixel doesn't appear to be
any stronger on this front.
------
Zigurd
The success of Chromebooks should put a question mark over some of the
newcomer mobile OSs. Chromebooks illustrate that the sweet spot for Web apps
is a big screen, a keyboard, and a pointing device. That's partly because the
Web was not designed for finger touch.
If your mobile OS relies on Web apps, you might want to think about adding
Android compatibility, as Jolla has done even though Sailfish also runs Qt
apps.
------
chippy
Depends on definition of "notebook".
I would assume it is different from laptop, netbook, ultabook and tablet.
~~~
azakai
No, it appears to be identical to "laptop" actually. Based on other news
reports that use the same NPD data. Definitely an easy to misunderstand term,
but maybe it's the norm in the commercial sector which is what they cover
here, not the entire market?
------
MBCook
And yet, as Gruber points out, they are just a rounding error in web browser
usage. [1] If they're that successful, why don't Chromebooks appear in usage
stats?
1\.
[http://daringfireball.net/linked/2013/12/28/chromebooks](http://daringfireball.net/linked/2013/12/28/chromebooks)
~~~
ffrryuu
Because the stat is unreliable.
~~~
gress
How is it unreliable, and how is the margin of error so bad that these are a
no-show?
~~~
olefoo
Because web usage statistics are hard to get right, and hard to interpret even
if you do a good job on data collection.
To get a reliable sample of web traffic as a whole you'll need to recruit most
of the larger websites (Yahoo!, Google, Wikipedia, etc.), and a sampling of
second and third tier sites and then process the data taking into account that
browsers lie and that all of your sites that you are sampling are going to be
biased in one direction or another. Google for instance probably sees a higher
share of chrome than similar sized sites. Add to which there is no standard
way of distinguishing browsers other than by looking at the User-Agent string
which is error prone and not guaranteed to be an accurate representation.
~~~
gress
That explains why there will be noise and some degree of inaccuracy, and that
the process is not trivial.
It doesn't at all explain why in a large sample there would be powerful
unexplained systematic error about ChromeOS usage in particular.
| {
"pile_set_name": "HackerNews"
} |
John Derbyshire on "The Rapture for Nerds" - byrneseyeview
http://www.takimag.com/site/article/the_rapture_for_nerds/
======
aswanson
The progression from the roundworm example is enlightening.
| {
"pile_set_name": "HackerNews"
} |
Careem has identified an incident involving unauthorised access to customer data - abdullahdiaa
https://blog.careem.com/en/stories/uae/ksa/security/
======
pcx
It is disingenuous of them not to report how detailed the trip data they
have is. Trip data could easily show Users' home/office locations, their daily
travel patterns, their kid's daycare and whatnot. This kind of knowledge can
be extremely dangerous if it falls into the wrong hands. Careem should be more
straightforward about this and explain the consequences, rather than slyly
gloss over the most dangerous part of the breach by mentioning only two effing
words about it.
~~~
netsharc
They have also only said when they figured out the breach, but not when it
happened. It could have happened a day before January 14th, or 3 months
before January 14th. The difference is how much trust I would give them.
Interestingly they said the breach was done by "online criminals". Do they
know, or do they automatically assume that people illegally accessing systems
are criminals?
~~~
i_cant_speel
> Do they know, or do they automatically assume that people illegally
> accessing systems are criminals?
I'm not sure what distinction you are trying to make here. The fact that they
are doing something illegal makes them criminals.
~~~
na85
>The fact that they are doing something illegal makes them criminals.
Not in all countries. In Canada at least, plenty of things are against the law
(illegal) but do not constitute a criminal offence.
I'm not committing a crime when I break the speed limit almost every day on my
way to work, but what I'm doing is still illegal.
~~~
thisacctforreal
Poor example, just a week ago a 19-year-old had his family's house raided for
him scraping documents from a public gov't website.
[https://evandentremont.com/some-information-on-the-
freedom-o...](https://evandentremont.com/some-information-on-the-freedom-of-
information-hack/)
------
__bee
Funny part! I wanted to delete my Careem account. I could not do that. I
cannot delete my account.
[https://help.careem.com/hc/en-
us/articles/115008681747-How-d...](https://help.careem.com/hc/en-
us/articles/115008681747-How-do-I-delete-an-account-)
~~~
reallymental
"A Careem account cannot be deleted as every account detail can only be used
once." \- uh what?
So they've hashed your account details. They won't delete this. Great
------
tzahola
Cmd+F "seriously"
"We take the protection of our customers and captains’ data very seriously."
~~~
nuclearcookie
Exactly what I did when I opened the page.
------
amingilani
Well now.
_What customer account data was stolen?
Customers’ name, email address, phone number and trip data._
------
stevekemp
The compromise was identified on January 14th, and the announcement took three
months? That's a pretty appalling timeline.
------
thawab
A friend, who had a job interview with Careem, told me I should use a
different mobile number and name if I'm using their service. Glad I followed
his advice.
------
thrillgore
>January 14
Thanks for not telling anyone sooner.
------
techwizrd
Why is Uber included in the title here? It makes it seem like Uber was
involved. I think the title should mention, at most, that Careem is a Middle
Eastern ridesharing company.
~~~
thesimon
Ridesharing could also include long-distance ride-sharing like BlaBlaCar.
Since Uber-for-X has become a thing, I don't think including Uber in the title
is a bad thing.
------
thisisit
I know it's difficult to find an appropriate title but wouldn't -
"Careem, ridesharing company/app in the Middle East"
work better than calling out Uber?
------
GrumpyNl
Nothing to see here, it's a minor breach.
~~~
ckastner
Trip data can contain _extremely_ sensitive information.
~~~
hyder_m29
They are also unsure whether passwords or credit card details were stolen.
------
ScalaForever
Wonderful. Hacking often means dumping one data store due to a security problem
with it (think 90s-era SQL injection).
I assume trip data was stored in the same system as emails - so both got
hacked. Minor security considerations would put those in different systems
rather than storing them together.
| {
"pile_set_name": "HackerNews"
} |
ITerm2-updated terminal for OS X - a2tech
http://www.iterm2.com/#/section/home
======
sunkencity
Nice! I like that they've included visor-like functionality and that it's
now possible to split windows like a tiling window manager.
I tend to switch back and forth between using iTerm full screen and using
xterm in x11 for raw performance.
| {
"pile_set_name": "HackerNews"
} |
Resurrecting the SuperH architecture - justin66
http://lwn.net/Articles/647636/
======
rwmj
RISC-V seems like a better bet ([http://riscv.org/](http://riscv.org/)). It is
a clean, patent-free modern architecture. It already has kernel support, and
supposedly there will be both FPGAs and ASICs "soon"
([http://www.lowrisc.org/](http://www.lowrisc.org/)). Plus you can run it
under qemu: [https://rwmj.wordpress.com/2015/06/11/booting-risc-v-
linux-w...](https://rwmj.wordpress.com/2015/06/11/booting-risc-v-linux-with-
qemu/)
------
unwind
This:
_There have been some minor additions, he said: the J2 adds four new
instructions. One for atomic operations, one to work around the barrel
shifter, "which did not work the way the compiler wanted it to [...]_
Is _so_ intriguing! Does anyone know what was wrong with the original barrel
shifter design? I tried reading up on it but failed to find much reference
material. I followed the link to the J-core community site to read the code,
but it wasn't immediately browsable, just available for download.
I assume there were compilers for SuperH back in the day, didn't they use the
shifter? Why not fix the compiler to teach it the existing instruction, rather
than adding an instruction just for this? How wrong can a shifter be, really?
The questions just heap up.
~~~
TapamN
Compilers did use the shifter. I don't know if this is exactly what he was
referring to, but one oddity with the SH4's dynamic shift instruction is that
it only shifts to the left (there are also a limited number of shift-by-small
constant (1,2,8,16) amount instructions). To shift to the right, you have to
first negate the shift amount, then preform a left shift. So if use did a
right shift by a non-constant, you would always see a negation of the shift
amount before the shift. My guess as to why it was implemented like this was
that since the SH4 had a fixed length, 2-byte instruction set, running out of
possible instructions for future expansion was a real hazard, and not encoding
both directions was done to save space.
On the original SH4 implementation, under certain conditions, there had to be
one cycle in-between when a shift-amount was generated and when it was used,
otherwise there would be a one-cycle CPU stall. A real right shift would avoid
the need to schedule around this stall. This isn't necessarily something that
needs an extra instruction to fix, the implementation could be designed to not
need the stall, but it might be difficult to work around. I don't do circuit
design, but dynamic shift instructions typically look at as few bits of the
shift amount as possible, to simplify and speed up the design of the shifter.
The reason for the delay in the original SH4 is probably that it analyzes and
tags each register with information for the correct shift direction and amount,
and certain units won't have this information ready for the shifter in time,
hence the stall if the shift is too close to the shift amount generation. (I've
read that certain CPU implementations have done similar work in tagging if a
register is zero or not, in order to help keep branch-on-zero/not-zero
instructions quick.) If the instruction talked about is a dedicated right
shift, it could be defined in a way that doesn't need a negation and extra
tagging, would be much more compiler friendly, and faster.
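A toy Python model of the dynamic-shift behaviour described above (a simplification, not a cycle-accurate or bit-exact description of the SH instruction):

    MASK32 = 0xFFFFFFFF

    def dynamic_shift(value, amount):
        # Positive amount: logical shift left; negative amount: logical shift right.
        # (Real hardware only looks at the low bits of the amount; ignored here.)
        value &= MASK32
        if amount >= 0:
            return (value << amount) & MASK32
        return value >> -amount

    x, n = 0x12345678, 3
    # With only this instruction available, a compiler lowering "x >> n" must
    # first negate the shift amount, then issue the dynamic shift:
    assert dynamic_shift(x, -n) == x >> n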
~~~
unwind
Thanks!
Does that mean that the shifter is actually capable of doing rotates?
Otherwise the negation part doesn't make any sense.
If you have 0xf0 and want to shift it three bits to the right to get 0x1e, no
amount of negated-amount left-shifting is going to do that unless the
instruction is a rotate.
If, on the other hand, you can do a 8-bit rotate left of 8-3 = 5 bits, that
would produce the same result and need that "negation" (which is actually an
inversion).
------
__david__
We used an SH2 for the main processor of a DDS (DAT) tape drive at a company I
worked at. We had a prototype that used an SH3 and I remember spending a few
days hacking Linux to boot on our hardware (I think we made it to user space
and then the project petered out).
GCC has supported the SH series since at least the 2.7 era (though Hitachi's
compiler seemed to produce better code in those days, but only ran under DOS).
------
cbd1984
From the comments there, early MIPS architectures are also patent-expired at
this point.
They might have had more actual work done with them back in the old days, so
their code might be in better shape now.
~~~
kevin_thibedeau
The key point is that this is the first widely deployed 32-bit RISC platform
with a 16-bit instruction set to come off patent. That has advantages for the
embedded applications being targeted in this case. You won't get that with a
MIPS or ARM clone because MIPS16 and Thumb are still under patent.
------
spydum
mm superH. reminds me of the old HP Jornadas.. they also ran on an SH3 processor
(well, some did). back in the day these were mindblowing to me.. a real
pocket-sized PC, with a modem no less!
------
hoggle
I always had high admiration for the SuperH architecture and now to read about
its potential to fuel the much-needed open hardware movement is _fantastic_
news.
------
nickpsecurity
Good to see them doing it. I included SuperH in my list [1] of non-Intel
architectures and old hardware to use post-Snowden. Additionally, I proposed
that it falling out of favor despite Japanese chip-makers backing it might
make it a nice candidate for trying to get them to open the design but still
sell it. Concept is a proven design which can be verified by third parties,
masked by whoever, and taped out at fab of their choice. Although, there's
work in moving to new nodes and that cost would be on whoever did it. The
precedent is Gaisler's SPARC-based processors and I.P. [2] that are dual-
licensed as commercial and GPL with tools for easy customization.
Alternatively, I proposed the security enhancements for processors showing up
in academia be applied to this or another processor with low market share as a
differentiator. Some of these enhancements take almost no chip real-estate,
esp simple tags & tag-checks. The chip designer could also make money for the
semi-custom work. Time has passed, that didn't happen for low market chips,
and did happen for AMD+Intel for non-security applications [that I know of].
Matter of fact, even though my scheme didn't happen, AMD is making so much
money off the other half of my proposal that they could be cited when trying
to convince chip-makers to do it for security enhancements with mass-market
availability. So long as they don't bear the cost of failure (huge in ASIC's)
they might go for it.
Finally, anyone wanting to deploy this or other things, remember the
Structured ASIC's with FPGA conversions. eASIC [3] has a long track record in
this with offers down to 28nm. Gigoptix [4] does S-ASIC's down to 28nm. Tekmos
[5] offers a similar product at 350nm (good for budget masks). Just make sure
you design in FPGA's with ASIC transition in mind from the start & follow
published advice on that (available with Google or consultation). The result
is you prove it in FPGA's, even use it on FGPA boards, and then move it to
S-ASIC later for reduced costs/power + maybe speed increase. Authors are right
that 180nm is a sweet spot although proving it at 350nm or higher first might
be smarter given costs.
[1]
[https://www.schneier.com/blog/archives/2013/09/surreptitious...](https://www.schneier.com/blog/archives/2013/09/surreptitiously.html#c1762647)
[2] [http://www.gaisler.com/](http://www.gaisler.com/)
[3] [http://www.easic.com/products/28-nm-easic-
nextreme-3/](http://www.easic.com/products/28-nm-easic-nextreme-3/)
[4] [http://www.gigoptix.com/products/asics/asic-
type/structured-...](http://www.gigoptix.com/products/asics/asic-
type/structured-asic/)
[5] [http://www.tekmos.com/products/asics/process-
technologies](http://www.tekmos.com/products/asics/process-technologies)
~~~
nickpsecurity
EDIT to add: Triad Semiconductor [1] has a mixed-signal ASIC take on S-ASIC's.
Interesting stuff. Just found it.
[1] [http://www.triadsemi.com/vca-technology/](http://www.triadsemi.com/vca-
technology/)
------
listic
Has Xilinx bitstream been reverse-engineered and reimplemented in open-source?
I haven't heard about it.
~~~
bri3d
I don't think so - at least according to their site, the J2 build chain uses
Xilinx ISE: [http://0pf.org/j-core.html](http://0pf.org/j-core.html) . The
only fully open FPGA toolchain I'm aware of targets Lattice/SiliconBLUE iCE40:
[https://github.com/cseed/arachne-pnr](https://github.com/cseed/arachne-pnr)
------
thrownaway2424
"Resurrecting" something that's not actually dead. SuperH is still a
commercially-used CPU that you can buy off the shelf, as the CPUs themselves
or inside many devices.
------
kjs3
I do like SuperH. Especially the later ones. The SH4 had the most delightfully
odd, fully pipelined 4x4 matrix X 4x1 vector instructions. If you could fit
your problem in that box, you could get some remarkable speed for the clock.
| {
"pile_set_name": "HackerNews"
} |
A Study of Key-Fingerprints: Hex vs. Base32 vs. Wordlists Vs - sufficient
https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/dechand
======
ketralnis
The paper recommends sentence based fingerprints.
I've used rfc1751[0] which is word-based rather than sentence-based, but it's
pretty convenient. I use it for my password sharing tool[1] which creates
prompts that look like
=== secrets.vm ===
common name: secrets.vm
fingerprint: b957e10c998faa9909cff3ba4ec35485d04708c3ecc7481fe14d7f07bc0229cd
public key: c15e697e4807793ef8a9461a7b2c6cf2266d1ec1480a594e83b54e7b75e07702
public sign: f1db594eb55fe97657c57f2aa01afd1210a46d42d80d5552ac4d548162d4968e
mnemonic: AM ROBE KIT OMEN BATE ICY TROY RON WHAT HIP OMIT SUP LID CLAY AVER LEAR CAVE REEL CAN PAM FAN LUND RIFT ACME
does that look right? [y/n]
where "mnemonic" is the rfc1751 mnemonic of the sha256 of the other fields and
is designed to be shouted across a room.
I'd definitely be interested in a standardised sentence-based fingerprinting
system akin to rfc1751
[0]:
[https://tools.ietf.org/html/rfc1751](https://tools.ietf.org/html/rfc1751)
[1]:
[https://github.com/ketralnis/secrets](https://github.com/ketralnis/secrets)
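Roughly how such a mnemonic can be derived (a sketch only, not the tool's actual code; the field serialization and the PyCryptodome RFC 1751 helper are assumptions):

    import hashlib
    from Crypto.Util.RFC1751 import key_to_english   # PyCryptodome helper (assumed)

    fields = [
        b"secrets.vm",   # common name; which fields go in, and how they are
                         # serialized, is an assumption here
        bytes.fromhex("c15e697e4807793ef8a9461a7b2c6cf2266d1ec1480a594e83b54e7b75e07702"),
        bytes.fromhex("f1db594eb55fe97657c57f2aa01afd1210a46d42d80d5552ac4d548162d4968e"),
    ]

    digest = hashlib.sha256(b"\x00".join(fields)).digest()   # 32 bytes, a multiple of 8
    print(key_to_english(digest))                            # 24 short words to read aloud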
------
nullc
My WAG at this problem a few years ago:
[https://en.bitcoin.it/wiki/User:Gmaxwell/visual_fingerprint_...](https://en.bitcoin.it/wiki/User:Gmaxwell/visual_fingerprint_comparison)
~~~
ketralnis
I'd really want to see that technique studied on actual users before trusting
it. I'm not convinced that users do anything more than glance at one or two
characters in hex passwords and even SSH's visual fingerprints are probably
insufficiently studied (but not totally unstudied[0]) to allow telling users
that glancing is enough. And if glancing isn't enough, using visual indicators
at all is probably actively harmful.
[0]: [http://dirk-loss.de/sshvis/drunken_bishop.pdf](http://dirk-
loss.de/sshvis/drunken_bishop.pdf)
~~~
nullc
In fact, I declined to post the implementation for that reason.
I'm not sure if you read my writeup but I attempted to address that "users
only glance at one or two characters" by suggesting the client show the users
which characters to compare. It's a little kludgy with a text UI, however.
The idea is that the field of characters is large enough that comparing only a
few is fine-- so long as they're selected in a way which isn't predictable to
the attacker.
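Some illustrative arithmetic on why unpredictable positions help (numbers assumed, not from the writeup):

    from math import comb

    fingerprint_len = 64   # hex characters in a 256-bit fingerprint
    checked = 6            # characters the client actually asks the user to compare

    # If the attacker knows in advance which positions will be checked, grinding a
    # key whose fingerprint matches the target at just those positions costs about
    # 16**checked tries:
    print(16 ** checked)   # ~16.8 million: cheap

    # If the positions are chosen unpredictably at comparison time, a forgery that
    # matches at only 32 of the 64 positions slips through only if every checked
    # position happens to land on a matching one:
    print(comb(32, checked) / comb(fingerprint_len, checked))   # ~0.012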
| {
"pile_set_name": "HackerNews"
} |
How to succeed in language design without really trying [video] - ingve
https://www.youtube.com/watch?v=Sg4U4r_AgJU
======
temuze
Just in case anyone here doesn't know who Brian Kernighan is:
\- He co-created AWK (he's the K) and AMPL
\- He literally wrote the book on C with Dennis Ritchie (the book with the big
blue "C" on it)
\- He authored some of the first UNIX programs (cron, for example) when he was
working at Bell Labs
\- He made "Hello World!" a trope and named UNIX
He's also quite possibly the nicest person ever.
~~~
unwind
I love it how you mentioned "The C Programming Language"
([https://en.wikipedia.org/wiki/The_C_Programming_Language](https://en.wikipedia.org/wiki/The_C_Programming_Language)),
the book commonly known as "K&R", without doing the K-reference and instead
doing that for AWK.
In my tiny little world, K&R is like a bazillion times more well-known than
AWK. Always fun with different perspectives. :)
------
pldrnt
"Notation matters"
"If a language is useful, you will want to generate it by program"
well said
~~~
j-pb
And then he leaves the slide about lisp blank. Even though it's the prime
example of a language that one can generate by programs.
~~~
ericHosick
but only because he admitted he knew little about the subject matter...
------
jaybosamiya
Biggest takeaways from the video (in my opinion):
\- "Notation matters"
\- "Start with domain specific languages [like regex/AMPL/etc], rather than
big general purpose ones - don't build the next C++"
------
andrewchambers
It felt like he was biting his tongue and really pulling punches when
mentioning the complexity of C++.
I also liked the comment about some of the good ideas being lost over time. I
would love to hear an in-depth talk about some of the serious regressions
we have in modern software systems and design.
------
0xdeadbeefbabe
I'm baffled and somewhat suspicious that awk became unpopular enough that I
didn't really understand it till now.
~~~
ferrari8608
Awk seems to have fallen out of favor, in my opinion, due to the rising
popularity of easier to use general purpose programming/scripting languages.
Awk is the bee's knees when it comes to processing text. You can do just about
anything to some text with it. However, you can do much the same with Perl or
Python, both of which can do more than just process text.
I still use awk for shell one-liners. It can do the work of grep, grep -v,
cut, and substring operations in one command, which makes things a bit simpler
(less processes, less pipes). Unfortunately, a grep piped to cut is almost
always faster due to awk being a very robust interpreter.
------
sharvil
Couple of Computerphile videos featured Brian Kernighan [1], which I also
enjoyed. I particularly liked the Bell Labs one where he talked about what it
was like to work there, pretty fascinating stuff.
[1]:
[https://www.youtube.com/user/Computerphile/search?query=Bria...](https://www.youtube.com/user/Computerphile/search?query=Brian+Kernighan)
| {
"pile_set_name": "HackerNews"
} |
You can now host HipChat internally - exogan
https://www.hipchat.com/server
======
breakingcups
$1800 for 12 months with a max of 25 users is a bit steep. Also, this was
launched in January already:
[http://blogs.atlassian.com/2015/01/hipchat_server/](http://blogs.atlassian.com/2015/01/hipchat_server/)
------
AppGirl2012
cool :)
| {
"pile_set_name": "HackerNews"
} |
Show HN: Phishing as a service - naftaliharris
https://cuttlephish.com
======
chrissnell
I wrote some Perl years back to take the fight to phishers. You would provide
my script with the field names and POST URL of the HTML form within the
phishing email, along with some generic types for each form field. There were
types for firstnames, lastnames, email, addresses, usernames, passwords,
social security numbers, and credit card numbers. The script would generate
fake but real-looking values for each of these things--the credit card numbers
would even pass a checksum test--and then post to the URL. It would do this as
fast as the remote end would accept them with the aim of filling out their
database (typically a text file on some compromised server) with bullshit
data, making it hard to pick out the legit data from victims.
It worked wonderfully. I used it through proxies when I could and watched the
phishers try to block me or even attack me back.
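(The original Perl isn't shown here, but the approach is easy to sketch. The
following is a hedged Python reimplementation of the idea, not the script
itself; the field names and URL are placeholders.)

    # Generate plausible-looking fake values for each field of the phisher's
    # form and POST them repeatedly, drowning real victims' data in noise.
    import random
    import string

    import requests  # third-party: pip install requests

    def luhn_check_digit(partial):
        # Compute the Luhn check digit so fake card numbers pass a checksum test.
        total = 0
        for i, ch in enumerate(reversed(partial)):
            d = int(ch)
            if i % 2 == 0:        # these become the "every second digit" once
                d *= 2            # the check digit is appended on the right
                if d > 9:
                    d -= 9
            total += d
        return str((10 - total % 10) % 10)

    def fake_card():
        partial = "4" + "".join(random.choice(string.digits) for _ in range(14))
        return partial + luhn_check_digit(partial)

    def fake_record():
        user = "".join(random.choice(string.ascii_lowercase) for _ in range(8))
        return {
            "username": user,
            "password": "".join(random.choice(string.ascii_letters) for _ in range(10)),
            "email": user + "@example.com",
            "card_number": fake_card(),
        }

    POST_URL = "http://example.com/compromised/form.php"  # placeholder

    for _ in range(1000):
        requests.post(POST_URL, data=fake_record(), timeout=10)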
~~~
badinker
Do you still have a copy of that script? Would love to look at it.
------
pspace
I work in security at a large Fortune 500 company. I know at first it sounds
like phishing your employees will give you good insight, but you realize
quickly that the data you get is not very useful. Here are the roadblocks I've
hit with these kinds of simulation phishing services:
1\. They rely on e-mail while phishing attacks come from multiple sources like
Facebook and LinkedIn. Sadly, using those services to simulate phishing
attacks violates their ToS.
2\. Simulation phishing only provides pass or fail data meaning you cannot
determine your weakest links in the organization. At best you get an "average"
snapshot.
3\. The data isn't very accurate or precise because there are too many
confounding variables involved. Time of day, subject matter, type of phishing
(attachment, social engineering, etc). Normally we ran our campaigns once a
month but this wasn't enough to produce stable results.
4\. Clicking doesn't mean they fell victim to the attack -- lots of people
click to investigate then report the links. Ideally, I'd like to specifically
know WHY the employee clicked the link and HOW MUCH was actually at stake.
5\. It pisses people off. There is enough animosity against us security folks
that tricking your employees really hurts that relationship. People feel taken
advantage of.
6\. It doesn't actually improve security in any meaningful way. I found that
it didn't actually improve people's ability to spot and report phishing
attempts. They either became paranoid to the point where they were no longer
productive in legitimate emails, or they had no improvements over time.
7\. There's a growing body of knowledge that dismisses the effectiveness of
this kind of phishing training
([http://www.govinfosecurity.com/interviews/training-doesnt-
mi...](http://www.govinfosecurity.com/interviews/training-doesnt-mitigate-
phishing-i-2148?)) .
With that being said, our company has tried about a dozen of these kinds of
services and the best one so far has been one called Apozy that is rather new.
It's a different approach but the data and insight you get back is actually
very useful.
~~~
lifeisstillgood
I get these (contracting at a Fortune 500) pretty regularly (last week for
example). They are pretty easy to spot and probably have some worthwhile
training value, but I can see that teasing out any useful data might be hard -
I suspect you will need a huge corpus of templates and a lot of employees.
Sadly I thought of setting up a company like this to do just this job. But
Apozy's gamification approach seems a good idea
------
jedberg
There are many sites like this and I love what they are doing for raising
awareness. As one of the first people to ever fight phishing (I worked at eBay
and PayPal fighting phishing before there was a word for it), I'm keenly aware
that awareness is the only way to really stop it.
That being said, I don't like these reports, because any time I get a phishing
email I immediately load it up in a protected VM to see what it does, so it
would count me as a victim. Since the page you go to isn't a real looking
login page, you can't differentiate between those who fall for it and those
who just clicked to see what it was.
You need to actually set up the fake page and see who puts in valid
credentials to get a true report.
~~~
twelvechairs
Not to dismiss your experience (perhaps you had not heard the term yet) but
the term 'phishing' has been around longer (mid 90s at least) than ebay and
paypal have been big enough to be phishing targets.
~~~
peteretep
I was deeply involved in the fledgling anti-spam industry in the early 2000s,
by way of the anti-virus industry, and it was not a common term then.
Wikipedia gives the first recorded use as '95, and that refers to it as
"fishing", and as being AOL-specific.
~~~
marpstar
I definitely remember old AOL "progz" referring to "fishing"/"phishing".
"Phreaking" was a very popular term before that, which is where I'm guessing
the f/ph replacement came from.
------
zensavona
the FAQ page is 10/10
[https://cuttlephish.com/faq](https://cuttlephish.com/faq)
~~~
ninjakeyboard
I noticed a serious issue with the documentation. I'm not able to go any
farther until this is corrected...
The documentation's FAQ page asks:
"How much phish could a cuttlephish phish if a cuttlephish could phish phish?"
This is not accurate based on my own testing. This should actually read:
" "How much phish could a cuttlephish phish if a cuttlephish could phish phish
phish?"
If you can correct this error, I would love to start using your service
~~~
Intermernet
Also:
>Are cuttlephish phish?
>No. The term "phish" is deeply offensive to cuttlephish, who are proud
cephalopods.
s/cephalopods/cefalopods
------
x0ry
Love it! My recommendation would be to offer an option for allowing the target
to be tricked through the whole process. (Even if credentials are discarded
completely.) The idea here is nothing is left to the imagination. What you
have is great, but it requires them to read and be observant, which is not the
type of person who falls for phishing emails. Clicking the link is "No-No" #1,
don't exclude "No-No" #2 from your process.
~~~
naftaliharris
Thanks and thanks for the suggestion! One thought I'd had was longer/more in
depth campaigns. It's good to know other people would be interested in that as
well.
One thing I was concerned about was that people might not trust some random
guy on the internet to properly discard those credentials.
~~~
bigiain
I think you are completely correct in your second sentence there - there's no
way I'd use this if there was any chance of my colleagues actually disclosing
real credentials to a third party.
(Suspicious me is wondering if you're evil - 'cause if evil-me was in your
position, I'd be selectively showing your "you've been phished, ha ha!"
landing page to most people, but mining LinkedIn/Rapportive/Google for key
contacts at any domains that sign up, and displaying genuinely evil
credential-collecting-login pages if I got a hit from senior sysadmins or a
CTO/CIO/CSO...)
~~~
ThrustVectoring
The phishing page could be set up to have a fake form that sends no data, and
says "you've been phished" when someone tries to submit information to it.
At that level, though, the pen-tester really ought to have control over the
phishing landing page.
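(A hedged server-side sketch of that idea - the landing endpoint records only
that a submission happened and never stores what was typed. Flask is assumed;
the route and messages are made up.)

    # The fake login endpoint ignores the submitted values entirely; it logs
    # only that a submission occurred (and the field names, never the values).
    import logging

    from flask import Flask, request

    app = Flask(__name__)
    log = logging.getLogger("phish-demo")

    @app.route("/login", methods=["POST"])
    def fake_login():
        field_names = list(request.form.keys())   # names only, values discarded
        log.warning("submission from %s (fields: %s)",
                    request.remote_addr, field_names)
        return "You've been phished! Nothing you typed was kept.", 200

    if __name__ == "__main__":
        app.run(port=8080)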
------
randomflavor
You should send the emails, and charge me to view the report.
~~~
TeMPOraL
That is an excellent idea! In fact, we've just implemented the billing
service, so please go to
[http://cuttIeph1sh.com/account/billing](http://cuttIeph1sh.com/account/billing),
log in to your account and provide your payment information to continue
receiving our phishing reports!
~~~
Intermernet
Cyrillic homographs[1] are your friend here :-)
[http://сuttlерhish.com/account/billing](http://сuttlерhish.com/account/billing)
(PunyCode [2]: [http://xn--uttlhish-f8g4if.com/account/billing](http://xn--
uttlhish-f8g4if.com/account/billing) )
Also, it seems that Firefox (v38.0.5 Windows) doesn't convert URL interpuncts
(mid-dots) into punycode, so clicking on something like
[http://www.billing·cuttlephish.com/](http://www.billing·cuttlephish.com/)
doesn't actually rewrite the URL in the address bar. Chrome converts it to
[http://www.xn--billingcuttlephish-c4a.com/](http://www.xn--
billingcuttlephish-c4a.com/) .
[1]:
[https://en.wikipedia.org/wiki/IDN_homograph_attack](https://en.wikipedia.org/wiki/IDN_homograph_attack)
[2]:
[https://en.wikipedia.org/wiki/Punycode](https://en.wikipedia.org/wiki/Punycode)
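(A quick way to poke at the same mapping from Python 3's built-in IDNA codec;
the exact output is approximate, and the third-party "idna" package applies
stricter IDNA 2008 rules.)

    # Encode a mixed Latin/Cyrillic look-alike domain to its xn-- (Punycode)
    # form, and decode the xn-- form back, roughly as a browser would.
    mixed = "сuttlерhish.com"   # the 'с', 'е' and 'р' are Cyrillic look-alikes

    print(mixed.encode("idna"))
    # -> something like b'xn--uttlhish-f8g4if.com'

    print(b"xn--uttlhish-f8g4if.com".decode("idna"))
    # -> the mixed-script form above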
~~~
Manishearth
Filed
[https://bugzilla.mozilla.org/show_bug.cgi?id=1178095](https://bugzilla.mozilla.org/show_bug.cgi?id=1178095),
thanks!
~~~
Intermernet
No problem.
Out of interest, do the Firefox team and the Chromium team compare notes on
decisions like this?
Purely in this one area (IDN homograph attacks), it might be an idea to look
at the Chromium Unicode vetting rules (Which characters and combos get
"punycoded") as they seem to be more conservative from a "Latin" perspective.
I'm not sure if a "blacklist" (mentioned in the bug report) is the best way of
handling this. Perhaps only direct-encoding the "exemplar characters" for the
language setting, and punycoding everything else? I'm pretty sure it would
have eliminated the mid-dot issue, but perhaps this "whitelist" is too
prohibitive.
------
watmough
Neat, but doesn't seem very IT/corporate, which would surely be the intended
audience.
My company uses these guys: [http://www.knowbe4.com/](http://www.knowbe4.com/)
------
Buge
I often intentionally click links to phishing sites, and sometimes enter in
fake usernames and passwords. (I even wrote several bots to auto enter
thousands of random usernames and passwords.)
I don't like the click link = you lose idea.
~~~
ta92929
What if the phishing site also has a 0 day?
~~~
Buge
If they have a 0 day for my browser, then they likely have an enormous budget
with tons of ways of getting it to me besides phishing. I click so many links
per day via reddit, HN, and other sites that the security gained by not
clicking a phishing like is likely less than the education value of clicking
it.
I think the actual danger for me of clicking a phishing link is opening a
phishing tab, then moving on to another tab, then a while later coming back to
the phishing tab but forgetting it was phishing and entering my password. 95%
of the time I remember to check the url before entering my stuff, but everyone
makes mistakes.
------
runn1ng
Hm. I often click on obviously phishing links to see what's there. Would this
tool classify me as a victim?
~~~
amjd
Me too. I often intentionally click on phishing links to see how well the page
is done and where it's hosted.
OP should probably consider adding login pages etc (discarding the
credentials) to actually find people who would fall for it, as someone here
suggested. Many people click the links just out of curiosity.
------
jwcrux
Neat! I really like the easy pricing model.
Quick question - are you concerned about trademarks (Amazon and such) being
included as the phishing templates? Reason I ask is that I'm working on a
hosted project [1] similar to this and have considered including default
templates. I've held off for this exact reason.
Edit - another question, your screenshot in the intro page shows an email (in
the Gmail client) coming from "support@github.com". Github has spf records
setup so I would be interested to know how you manage to spoof the actual
email address itself without getting flagged as spam.
[1] [http://github.com/jordan-wright/gophish](http://github.com/jordan-
wright/gophish)
~~~
naftaliharris
Thanks, and very cool project!
> Quick question - are you concerned about trademarks (Amazon and such) being
> included as the phishing templates?
I'm honestly not 100% sure, but I think in the context of a phishing site
using trademarks like that falls under fair use. But IANAL.
> Github has spf records setup so I would be interested to know how you manage
> to spoof the actual email address itself without getting flagged as spam.
I don't know much about spf records, honestly--for every site I had to try
multiple "From" and "Reply-To" addresses to get the emails past gmail's spam
filter. Some of them didn't even arrive in my spam folder, (apparently they
just got killed on some intermediate hop). support@github.com definitely
works, at least for me--you should try it yourself and see how it goes.
Hope this helps!
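(For anyone curious what the SPF side of this looks like, a hedged sketch that
pulls a domain's TXT records and prints the v=spf1 policy. It assumes the
third-party dnspython package; `dig TXT github.com` gives the same answer.)

    # Look up a domain's SPF policy. Assumes dnspython (pip install dnspython);
    # dnspython >= 2.0 uses resolve(), older versions use query().
    import dns.resolver

    def spf_records(domain):
        answers = dns.resolver.resolve(domain, "TXT")
        return [r.to_text().strip('"') for r in answers
                if r.to_text().strip('"').startswith("v=spf1")]

    for rec in spf_records("github.com"):
        print(rec)
    # SPF lets a domain list which servers may send mail on its behalf;
    # receivers that check it (together with DKIM/DMARC) tend to flag or
    # drop naive spoofing attempts, which is why deliverability varies so
    # much from one "From" address to another.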
~~~
afarrell
IANAL. I took a seminar freshman year on IP law.
The root of trademark law is preventing consumers from being confused or
deceived about brand affiliations. I believe using a trademark to refer to the
product/service symbolized by the mark is a protected case, so long as you are
clear that no endorsement exists. Looking at your language, this is abundantly
(and amusingly) clear.
You might have something to worry about with your insinuations about Dropbox
though. I'm quite sure they are strongly pro-cephalopod.
------
reagency
Consider changing pricing to $/click (pay per victim), so that companies are
paying for the value you provide (detecting security holes), and the CTO can
"bet" the CEO that employees need better training/protection.
Much more upside for you.
~~~
spydum
The problem there is that the person/group conducting the test (presumably
security team of a 500 person org) doesn't know if it will cost 500 x
PerClickRate, or 5 x PerClickRate.. They don't yet know the stupidity of their
users. Variable pricing like that can be a deal breaker for a small company.
~~~
mfenniak
You could address that by creating a control on the price. "I want to run this
campaign against 500 users. But my budget is $100." The service sends out
e-mails up to the $100 cost if they all clicked through, then deducts the
actual expenses from the budget. In a few days, it sends the next batch of
e-mails targeting the rest of the budget. Continue until either the e-mails
are all sent, or the budget is expired.
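(That batching idea as a tiny sketch, with made-up names and numbers: size
each batch to the worst case where everyone clicks, deduct the actual spend,
and repeat until the list or the budget runs out.)

    def run_campaign(recipients, budget, price_per_click, send_batch):
        remaining = list(recipients)
        while remaining and budget >= price_per_click:
            # Worst case: every recipient in the batch clicks.
            batch_size = min(len(remaining), int(budget // price_per_click))
            batch, remaining = remaining[:batch_size], remaining[batch_size:]
            clicks = send_batch(batch)           # returns how many actually clicked
            budget -= clicks * price_per_click   # only real clicks are charged
        return budget, remaining                 # leftover budget, unsent targets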
~~~
spydum
I suspect explaining that pricing model is a sales risk. flat fee or price per
contact is far more intuitive I suspect.
Even reading your explanation, I'm not clear on what it will cost me -- this
sounds more like pre-paying? how long should it wait between batches? how
effective will batching be? Rumors of phishing/testing could move quickly in
the organisation, making the report outcome misleading.
------
gitaarik
What if this site occasionally sends out real phishing mails? If a lot of
sites are using it, they would have interesting stats one could use to target
the right audience.
Not saying they would, but they could get hacked of course...
------
gnyman
Another service which does a similar thing that's been around some time, I
used them but the spam filter ate all my fake mail, as it should :-)
[https://phish5.com/](https://phish5.com/)
------
hrbrtglm
How do you send your emails ?
If your customer is using Google domains, Microsoft 365 or whatever else, and
the employees do not fall for your phishing attempt and report your mail as
spam, you may be heading for some trouble with delivery afterward.
~~~
naftaliharris
I'm sending the emails directly from my server with the unix "mail" utility.
Ending up in spam is actually what most concerns me about this idea, and in
fact this concern was what led me to choose the "you don't pay unless someone
clicks on a link" pricing--I was worried that some of the emails might
eventually start ending up in spam after a few customers and wanted to make
sure I wouldn't be charging people if that happened.
I'm planning to see what works once/if the phishing emails actually start
ending up in spam.
------
noobermin
In case anyone one was curious, the "phishing" urls in the phishing emails
lead to this page:
[https://cuttlephish.com/cuttlephished](https://cuttlephish.com/cuttlephished)
------
ahmetmsft
I was doing exactly the same project probably 8 years ago when I was still a
high school student. I used to have a lot of websites, too but I never
launched as I thought phishing is probably illegal and unethical.
------
fokz
This is a useful service. But I imagine there will be some nontrivial issues
regarding spam filtering, server reputation, legal, etc.
How do you do email authentication? What are the headers that you put on your
email?
------
mikeknoop
Love the brand and name (reminds me of
[https://www.youtube.com/watch?v=GDwOi7HpHtQ](https://www.youtube.com/watch?v=GDwOi7HpHtQ)).
------
it_learnses
Are you hiring?
~~~
avn2109
He just closed a series A with a $10 billion valuation, so he's hiring rock
star full stack data scientists.
~~~
gargarplex
Incidentally, I am a rock star full stack data scientist looking for work in
NYC.
------
reagency
Would a company want to give you a list of corporate email addresses?
------
talles
That's a refreshing idea for a change. Well done!
------
jmatthew3
It's a living.
| {
"pile_set_name": "HackerNews"
} |
Google Cloud Gold support worth it? - nartax
We're a startup on their silver plan and I'm having a horrible time with support. Their last response was over 3 days ago on an issue that I've been able to reproduce consistently. I hate to pay more money with this experience but if it helps not being ignored then we'll have to pony up the cash because this is really hurting us.
======
QuinnyPig
Disclaimer: I'm not employed by anyone in this space; I'm a consultant who
focuses on AWS bills, but I've no formalized business relationship with any of
these providers past tiny accounts on all of them for a few ridiculous
purposes-- mostly comedic.
Google has amazing technology-- but I fear they lack the ability to execute
operationally at a level that makes me comfortable trusting them with a
business's livelihood. They turn things off too frequently for comfort; they
fail to realize that sales are relationship driven in ways that algorithms
fail to satisfy; they don't demonstrate a dedication to customer success that
makes me take their cloud offering particularly seriously.
I really don't see that paying them for a higher support tier is likely to
mitigate your issue...
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Want help selecting a good course - brandimage
I am a marketing guy planning a career switch. I want to learn how to build web apps using python and django. Not sure what online courses are available which teach Python and Django. Also, what should I look for in the following while selecting a course - curriculum, instructors, mentor support? Your guidance will be very helpful.
Thanks!
======
pincubator
I think the best way to learn a programming language is to learn it yourself.
Imho, these kinds of things don't work well with courses; they often restrict
the way you think.
For Python, you can start from Learn Python the Hard Way (free)
[http://learnpythonthehardway.org/book/](http://learnpythonthehardway.org/book/)
After you finish this, start working on a small project and learn other things
on the way.
For mentor support; I think Stackoverflow is the perfect source to ask
questions and get feedback (but remember to search before asking a question).
~~~
brandimage
Thanks for the suggestions. My biggest problem is motivation and support.
Maybe I should have been more clear with my request - I think a 1:1 mentor
support is exactly the kind of accountability I need.
~~~
mjhea0
Check out RealPython.com. We offer custom mentoring/tutoring as well as a full
course taking you from the basics of the syntax up to advanced web development
with Django and Flask.
| {
"pile_set_name": "HackerNews"
} |
This Japanese hotel room is $1 a night. The catch? You must livestream your stay - herendin2
https://edition.cnn.com/travel/article/livestream-hotel-room-japan-intl-hnk/index.html
======
nriconalla
I could go with this deal, of course. But people who value their privacy would
definitely decline this kind of promo, as we call it. It is great for vloggers
who review their stays in hotels all over the world. And Japan is a nice
country.
------
renegadesensei
>Engaging in "lewd acts" with a romantic partner is not allowed during your
stay.
Sheesh. What's the point then?
| {
"pile_set_name": "HackerNews"
} |
Ask HN: SF over NYC? - podman
As a startup founder currently living in NYC, I'm considering a move to SF for the benefit of my company and for the tech community in general. Is SF still the startup mecca as it's always been portrayed? What are some of the pros and cons of both running and growing a startup, as well as living, in San Francisco? How does San Francisco compare to NYC as a city in general?
======
Aloisius
One thing to recognize about San Francisco is that the tech scene is actually
spread out across the whole peninsula, not just San Francisco.
While San Francisco has been experiencing a considerable renaissance (my last
three companies were HQed in SF), quite a bit of the tech scene is located
30-40 miles South in Santa Clara County.
San Francisco is physically smaller than Manhattan and has about half the
population. The rest of the peninsula is pretty suburban. Even in areas of SF
that approach Manhattan for density, it is not NYC and it is not trying to be.
If you like hyper-competitive people or any hint of pretension, look
elsewhere.
That said, yes, SF's startup scene is mecca. Really. The tech density is high
enough that you will see tech people literally everywhere, money flows
liberally and is well over critical mass for early adopters of technology.
The biggest cons to living here are:
* As a startup, hiring is hard because the best of the best often want to start a company, have started a company or do consulting for startups. I import a lot of people from across the country.
* Microclimates mean going from a t-shirt to a coat with a twenty minute drive.
* Rent is expensive. Maybe not Manhattan expensive, but no less competitive.
* A car is necessary if you live in the peninsula because of how spread out it is.
* You'll be in an environment where there are probably 3+ other startups doing your exact same idea.
Disclosure: I'm from SF.
~~~
impendia
> that the tech scene is actually spread out across the whole peninsula,
Is it? I was under the impression that it started around Redwood City or Menlo
Park and went south from there... and that there was also a lot in San
Francisco.
Are there startups around, say, San Mateo?
~~~
macros
Wikia was in San Mateo for a few years before moving to SF. Cheap place to
bootstrap, easy access to SF and the valley. Not convenient for people in the
east bay, but otherwise not bad.
------
YuriNiyazov
As someone who made a similar move years ago, the answer is, as always: it
depends.
What precisely do you expect to gain from moving? Do you already know people
in SF who are well connected and can introduce you to potential investors or
partners? Do you have such connections back in NYC?
Supposedly NYC is experiencing a tech startup renaissance, or, at least, so
everyone claims over there. Have you seen evidence of this? Have you
participated in it?
If you are just a founder with a laptop, a website and a mobile app, but with
no revenue, visitors, or users, and you are not really plugged in to the
larger tech community around you in NY, then what evidence is there that you
would be plugged in to the larger tech community around you in SF?
~~~
podman
I think I see the difference being that NYC is very big and tech is a very
very small part of what's going on here. SF, on the other hand, is relatively
small and tech is a very large part of what's going on there.
Now I could be very wrong about this, but I feel like if you're out in SF and
you meet people it's somewhat likely they might be in tech where as in in NYC
it's very unlikely. In NYC, you have to go out of your way to find those kinds
of people, possibly going to tech events which, in my experience, sometimes
feel like you're going on a blind date.
Now it's possible that I just don't know the right people in NYC, but from
what I've heard from friends I have in SF, it seems like you're just more
likely to run into tech people making it easier to make the right kinds of
connections.
My company has been in business for over two years and is profitable. I'm not just a
guy with a laptop and an idea. I'm just not happy with the kinds of
connections I'm making in NYC nor the speed at which my company is growing.
Both of these could obviously be linked to me and not my city, but I think
it's worth finding out.
~~~
YuriNiyazov
Ok. I think you might benefit from the move. You should do a trial run,
though. Come out here and rent a cheap place for 3 months, and see what kind
of connections you make here during that period of time.
Cheap places (and, as compared to NYC, very relaxed landlords with very easy
sublet requirements) are mostly found in the East Bay. I live in Berkeley -
compared to NYC and SF it is cheap and very close to downtown SF where you'll
be spending most of your time networking anyway.
As far as I can tell, hiring techies here is just as bad as it is in NYC, so
don't come here thinking all the programmers will line up to work for you.
~~~
podman
Awesome. I'm planning on going out to visit friends the first week of
September. Should help me get a sense of the city before my lease is up so I
can make a slightly more informed decision.
------
dangrover
I started my last business in SF, got acquired by a company in NYC, lived
there for two years, and just moved back. I keep meaning to write a blog post
about this.
NYC's tech scene wasn't bad, but it really doesn't hold a candle to what
you'll find in SF.
Where SF really shines is how hospitable it is to people who are on the
margins. There isn't such a wide gulf between "having a job" and "doing a
startup." You'll find people all over the entrepreneurial spectrum, and it's
so much easier to meet people for some reason. Having technical skills (in
addition to whatever other skills you possess) makes you more respectable, not
less, as it does in NYC. And everyone is _so_ much chiller.
~~~
_delirium
My limited experience agrees with that, especially the difference in "proper
job" / hacker gap. Imo it's partly scenes being clustered differently rather
than purely what exists/doesn't. There is a lot of fringe/weird tech scene in
NYC, but more clustered with the new-media art scene, and quite disconnected
from the VC/startup scene. NYC Resistor seems to be making some progress
towards getting interest from both crowds, but my outsider guess is that its
SF analog, Noisebridge, is closer to the SF tech-biz scene than Resistor is to
the NYC tech-biz scene.
------
activepeanut
Stay where you are! Don't come here! My rent's already too high! ;)
------
docwhat
Pittsburgh FTW! Voted repeatedly[1] as one of the best places to live and it
has an excellent Tech community[2].
Ciao!
[1]
[http://www.dailymail.co.uk/news/article-1359195/Pittsburgh-b...](http://www.dailymail.co.uk/news/article-1359195/Pittsburgh-
best-place-live-says-Economist-Intelligence-Unit.html) [1]
[http://www.forbes.com/2010/04/29/cities-livable-
pittsburgh-l...](http://www.forbes.com/2010/04/29/cities-livable-pittsburgh-
lifestyle-real-estate-top-ten-jobs-crime-income.html) [2]
<http://www.pghtech.org/> [2] <http://pghtechfest.com/>
------
whichdan
How much do you hate winter?
~~~
podman
I like snow. I generally dislike cold if there isn't any snow. I grew up in
New York so I'm used to it.
~~~
astrojams
New York is closer to London, Paris, Germany, Italy, etc.. So if your startup
is doing a lot of business in Europe, live in NY.
------
eli_gottlieb
Why not Boston?
| {
"pile_set_name": "HackerNews"
} |
Automated tests for your infrastructure code - mooreds
https://terratest.gruntwork.io/
======
moon2
Great tip. I've been using Molecule [1] to write my Ansible roles. It's good
to be able to write your tests before heading to your roles. It also makes it
easier to reuse roles.
[1] [https://github.com/ansible-
community/molecule](https://github.com/ansible-community/molecule)
| {
"pile_set_name": "HackerNews"
} |
Ask HN: List of bookmarking services? - michaelkscott
We've recently been seeing a good number of new bookmarking services showing up (or launching) here on HN. Most of them do the same things (sync, import, export) with some variation of added or redacted features.
There was an interesting one posted today called Tinmark. It looked good and promising but I was trying to remember what some of the other known ones were. Here's the list I came up with off the top of my head, please add more if you know...
Pinboard
Delicious
Evernote
Historious
Diigo
======
michaelkscott
Clickable:
<http://tinmark.com/>
<http://pinboard.in/>
<http://delicious.com>
<http://evernote.com>
<http://historio.us>
<http://diigo.com>
------
easonchan42
Kippt.com
<http://kippt.com>
------
dylanhassinger
Kiip
Zootool
xMarks
Dropmark
------
xr4tiii
Linkies
http://www.linkies.com
------
Repat123
Scuttle
| {
"pile_set_name": "HackerNews"
} |
Penniless hero of ransomware epidemic has written more decryptors than anyone - miles
https://boingboing.net/2019/10/28/give-this-guy-a-grant.html
======
Pick-A-Hill2019
Duplicate of
[https://news.ycombinator.com/item?id=21375487](https://news.ycombinator.com/item?id=21375487)
&
[https://news.ycombinator.com/item?id=21381155](https://news.ycombinator.com/item?id=21381155).
| {
"pile_set_name": "HackerNews"
} |
Ask HN: How much would you charge to mentor me in C++? - sedeki
I'm rather a novice in C++. I know the basics. Where I lack is in design, design patterns and a higher-level understanding.
How much would you charge for this? Per hour?
Please be realistic.
======
BSousa
Do you really need a mentor? Or just a list of good books and path to take?
What is your experience with other programming languages?
On the mentor charge, I think trying to get a mentor and pay a per hour thing
maybe wrong. I mean, I can charge you for an hour for a skype 'lesson' but
what about all the other work a mentor/teacher would have to do (checking code
you wrote, writing notes, writing assignments or doing 'lesson plans')?
If you are interested, you can email me (info on profile) and we can discuss
it further (for reference, I've written a book about C++ and have been using
it for more than 10 years as a developer).
~~~
ddorian43
I couldn't see any info on your profile
~~~
BSousa
Weird, email is there, anyway: brunomtsousa@gmail.com
~~~
jaredsohn
The email address field can only be viewed by you and Y Combinator. Put it in
'about' if you want it to be publicly visible.
------
neilxdsouza
Not a direct answer to your question but ...
You should probably find someone in the Open Source world who is willing to
mentor you for free and you could contribute code to their project (thus
building a profile for yourself).
I myself am not very good in C++, but the more you program, the better you
become. You may need to read Exceptional C++ by Herb Sutter and some other
style guides and other books including Modern C++ Design by Andrei
Alexandrescu and C++ Templates : The Complete Guide by Josuttis and
Vandevoorde. The Stanley Lippman book mentioned in another comment is also
very good.
I work on an Open source project in C++ (but it's also got yacc and it's a
compiler (so it may be a high barrier for you, but if you'd like to give it a
try I would be happy to help for free)), and I don't think I am good enough to
mentor someone, but you should be able to find an Open Source dev who will
mentor you, if you agree to contribute back. This way you can save yourself
some money and build a profile at the same time.
~~~
sedeki
Sounds interesting. Do you have a github and email?
~~~
neilxdsouza
Sorry about the late reply, I have no-procrast on, am going to disable it.
email: nxd_in@yahoo.com, github neilxdsouza (same as my HN username).
The compiler is not yet pushed to GitHub - I will do so and update you (but I
don't have your email id either).
------
fredophile
I write C++ for a living. I don't have time for Internet mentoring but I will
suggest a few resources and general ways you can improve your code.
Are you a C++ novice or a programming novice? If you're a programming novice
you should take some time and learn some standard algorithms and data
structures. Introduction to Algorihms is fairly standard for university
algorithm course. Make sure you understand complexity and big O notation.
Understand typical container classes and their trade offs (lists, queues,
stacks, deques, various types of trees, etc).
The best way to improve your code is to write code. Reading good code also
helps but when you start out you can't tell good from bad. Here are some
questions to ask yourself as you write: Is what I'm doing clear? Will I be
able to understand what I did and why I did it if I read this a year from now?
What could go wrong? What inputs do I expect and what happens if I get
something unexpected? How hard will this be to update? How hard will this be
to debug if things go wrong?
For a novice I'd suggest reading Effective C++ and More Effective C++ by Scott
Meyers. Read a point or two a week and really focus on using them in your
coding that week. This will help you build up good habits. A lot of good
coding is good habits accumulated over time. Herb Sutter is another author
with excellent books for beginning to intermediate coders. He's also got a
really good blog.
Once you've read and used techniques from Meyers and Sutter, get a copy of Code
Complete by Steve McConnell. This book is a great resource. Once you've read it
all the way through you can read Modern C++ Design by Alexandrescu. Anytime
you find yourself thinking you should use a technique from Alexandrescu stop.
You've probably gone too far and over thought something. Maybe ten percent of
the time this happens you'll be right and template craziness is the right
approach. You'll save yourself a lot of headaches the other ninety percent of
the time by thinking more about what you're doing.
I know that some of these books are expensive but you really don't need to get
them all at once. You can take a long time reading through each of them and
learning to apply their content.
~~~
sedeki
Thank you for this answer. I will look into all these things but in particular
the algorithms.
Do you think it's worth learning esoteric algorithms or just the fundamental
ones in order to get a job?
Also, what do you think about design patterns? Do you use it in practice
(often)?
~~~
fredophile
To start with it would probably be best to focus on fairly standard data
structures and algorithms. Here are some you should learn about: stack, queue,
deque, linked list, doubly linked list, binary tree, heap, B tree. You
should be able to identify some times when you'd want to use one vs another
from this list and what tradeoffs they have.
Esoteric algorithms aren't necessary but can be nice to know. If there's an
area you want to specialize in or learn about you should learn the basic
algorithms for that area. There are lots of things I've learned that I don't
use regularly. For these things it's enough to know about them and have a
rough idea. That's what books and the internet are for.
Design patterns can be useful if the people you work with know them. Their
biggest benefit is giving you a common set of terminology that you can use to
describe something. If I tell someone I'm using a factory they'll usually know
what I'm talking about without needing more explanation. However, I wouldn't
worry about them for now if I was you.
------
fogleman
I was once contacted out of the blue by a local student who wanted some
programming lessons. I met him at a coffee shop and worked with him for about
an hour for free. We only did it once, but I was glad to help out.
------
codemonkeymike
I am sure if you went to any big CS university in your area you could find
some grad student who is well versed in the requisite knowledge for being a
good C++ programmer (like algorithm design, linear algebra, discrete math) and
have them tutor you; generally the going rate is about $20-40 an hour. I would
also make it a point to make sure they are proficient in what they say they
are - a good set of questions or something of that sort. Find someone who
enjoys reciting what they know, and will let you pick their brain.
------
ddorian43
Also what would be a great resource to learn c++ ?
My goal is to contribute to an open-source database (like creating a data-type
extension (ex: hll) for Postgres (I know it's in C))
~~~
sedeki
"C++ Primer" by Lippman et al. is the best book in my opinion
[http://www.amazon.co.uk/C-Primer-Stanley-B-
Lippman/dp/032171...](http://www.amazon.co.uk/C-Primer-Stanley-B-
Lippman/dp/0321714113/ref=sr_1_1?ie=UTF8&qid=1393070675&sr=8-1&keywords=c%2B%2B+primer)
~~~
ddorian43
thanks
you probably should have posted the thread on a weekday
~~~
sedeki
Yes, probably :-)
| {
"pile_set_name": "HackerNews"
} |
Elixir as an Object-Oriented Language - weatherlight
http://tech.noredink.com/post/142689001488/the-most-object-oriented-language
======
lostcolony
"Object Oriented" is as loosely defined a thing, in practice, as "Functional".
Per the article, Alan Kay has given a definition of Object Oriented, that
Erlang arguably nails in a solid way, but while he was the founder, he's not
the only voice. Others have touted the importance of inheritance, and other
such things, which can be far harder cases to argue.
That said, I think the real value of Erlang is the actor model, and
specifically, the way it causes you to model things and concepts (or objects,
but without the baggage) as isolated processes. This is a direct parallel to
the kingdoms of verbs and nouns, but, many OO languages provide isolation for
their objects/things/nouns, but not for the processes/verbs. Or rather, the
process is not naturally related to the noun, but rather almost an Aspect
Oriented programming approach. The actual execution crosses the boundaries of
the objects/things/nouns (i.e., a single object's instance being accessed by
multiple threads, all contending to modify it; the verbs span the boundary of
the noun). But in the actor model, that doesn't happen; you have proper
isolation of state (the thing/noun/etc) in the actor, but also isolation of
execution (the process/verb) in the actor. That is, one actor = one isolated
chunk of both state AND execution.
It's this model that people are looking at when they say it's better at object
orientation. I think that's a bit misleading (per other comments), but there's
real benefit to it.
------
zzzcpan
This is of course not true, but PR. Here's what Joe wrote once:
"When I was first introduced to the idea of OOP I was skeptical but didn't
know why - it just felt "wrong". After its introduction OOP became very
popular (I will explain why later) and criticising OOP was rather like
"swearing in church". OOness became something that every respectable language
just had to have.
As Erlang became popular we were often asked "Is Erlang OO" \- well, of course
the true answer was "No of course not" \- but we didn't to say this out loud -
so we invented a serious of ingenious ways of answering the question that were
designed to give the impression that Erlang was (sort of) OO (If you waved
your hands a lot) but not really (If you listened to what we actually said,
and read the small print carefully)."
[http://web.archive.org/web/20020413132000/http://www.bluetai...](http://web.archive.org/web/20020413132000/http://www.bluetail.com/~joe/vol1/v1_oo.html)
~~~
imagist
Well, Armstrong may have been exposed to C++-style OOP like many (most?)
programmers, which would of course lead him to believe that Erlang was very
different from C++'s OOP. But languages that really orient toward object
orientation rather than a thin semantic layer over procedural programming look
a lot more like Erlang. Take Smalltalk for example: Smalltalk objects talking
to each other looks almost identical to Erlang threads talking to each other.
~~~
barrkel
Erlang uses asynchronous message passing. Smalltalk models method calls as
synchronous message passing. Therein lies an enormous amount of difference.
~~~
Myrmornis
Question about elixir: In the article's example, it looks like the HTTP.get
function call is turning the asynchronous message into a blocking call. Is
that common in elixir/erlang, for libraries to provide blocking utilities?
~~~
Arcsech
Yes, it's quite common for Elixir libraries to provide blocking calls. If your
program is well-designed though, the calling process will be "blocked" but
other processes will be executing at the same time. So having a process
waiting on a HTTP call has no impact on the overall performance of the system.
This is one of the big reasons why Erlang/Elixir's model of having many
processes is awesome.
It's also very easy to use the async/await pattern if you want to make a call,
do other work, and get the data from the call at some later point.
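(A rough analogue in Python - threads and queues, not Elixir/OTP - of what a
"blocking" call to a process amounts to: send a message carrying a reply
channel, then wait on that channel, while the process loops over its mailbox
holding its own private state.)

    import queue
    import threading

    def counter_process(mailbox):
        count = 0                       # state lives only inside this process
        while True:
            msg, reply_to = mailbox.get()
            if msg == "increment":
                count += 1
                reply_to.put(count)
            elif msg == "stop":
                reply_to.put("ok")
                return

    def call(mailbox, msg):
        # The caller blocks here, but every other thread keeps running.
        reply_to = queue.Queue(maxsize=1)
        mailbox.put((msg, reply_to))
        return reply_to.get()

    mailbox = queue.Queue()
    threading.Thread(target=counter_process, args=(mailbox,), daemon=True).start()

    print(call(mailbox, "increment"))   # 1
    print(call(mailbox, "increment"))   # 2
    call(mailbox, "stop")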
------
spedru
This sort of article and the sort of thinking underpinning it is a betrayal of
the uniqueness (relative to the more ``mainstream'' choices) of languages like
Elixir/Erlang. In fact, part of the point of using a language like Erlang or
Go (besides more production-relevant claims to fame like reliability or build
times respectively) is to break one's thinking out of the object-oriented
lens.
Not every abstraction is object-orientation. The notion of not caring about
the internals of something (as with the article's mention that "This is
possible because Elixir doesn’t care how the process works internally") is
something that goes deeper conceptually and further back in time than objects
as we know them.
It might be neat for weaning someone off an overwhelmingly OOP mindset, but
that's not what it seems to be trying to do. Instead, it's just demonstrating
how to use a new language without it forcing you to think differently, and
that's worrying. If every language is a fantastic object-oriented language,
why bother using any given one?
------
skrebbel
This entire article is about Elixir, not Erlang. Can someone fix the
submission title?
Also this fits with my experience with Elixir. I can apply all my OOP baggage
very well in Elixir; one active process maps beautifully to one object.
The only thing I miss is a clear _terminology_ distinction between the Elixir
versions of an object and a class. Both are called processes, which is a bit
messy. Strictly a process is just the runtime thing (the "object") and the
"class" is just a function (or, in practice, a module that follows a certain
convention). But I've heard many people refer to the latter as "processes" too
and I'd love a better word.
~~~
di4na
Sooooo
A process is a function running in its own context.
A module is just a namespace to group some functions. But there is no real
link to what an object brings.
At best you could try to compare an OTP behaviour to a class but even that...
~~~
fzzzy
A process is an object. It accepts messages and sends messages. It
encapsulates state.
~~~
di4na
Sure. But... Then you enter the place where you get overloaded semantics. In
particular... the fact that "Objects need Classes!" or "Objects are a thing
that encapsulate Functions with hidden state".
I have some ideas for a post about the fundamental difference between RPC and
message passing, and how it impacts how you think about your code (Java/Cpp vs
Smalltalk/Erlang). I think that is the main point.
~~~
masklinn
> In particular... the fact that "Object need Classes !"
That could hardly be less of a fact though.
------
weatherlight
When Alan Kay first created his OO language, Smalltalk-72, it didn't have
classes or inheritance. Those came later, in Smalltalk-80. The only thing Kay
cared about was encapsulated code that could pass messages around to other
encapsulated code.
Kay's thoughts on the matter are in the link below.
[http://programmers.stackexchange.com/questions/46592/so-
what...](http://programmers.stackexchange.com/questions/46592/so-what-did-
alan-kay-really-mean-by-the-term-object-oriented)
I made up the term 'object-oriented', and I can tell you I didn't have C++ in
mind
\-- Alan Kay, OOPSLA '97
------
programminggeek
I gave a talk on OOP and what I think Alan Kay was really talking about WRT
OOP [http://brianknapp.me/message-oriented-
programming/](http://brianknapp.me/message-oriented-programming/)
~~~
di4na
You, sir, are doing the talk I want to keep doing for the Elixir community. I
sometimes wonder if we do not need a way to talk about this type of thinking
more publicly...
------
weatherlight
Also, whoever changed the title: it's "Erlang - The Most Object-Oriented
Language," as per the quote in the article! \-- "Joe makes a comment about
Erlang being “the most object-oriented language.”"
If you are going to Change it At least change it to "Elixir - The Most Object-
Oriented Language"
------
shadeless
Somewhat related, there was a fun Lightning Talk at ElixirConfEU, "Advanced
OOP in Elixir" \-
[https://www.youtube.com/watch?v=5EtV2JUU0Z4](https://www.youtube.com/watch?v=5EtV2JUU0Z4)
------
yellowapple
I feel like stepping the reader through creating a whole Mix project was a
little bit excessive. Was a simple code snippet demonstrating a process or two
(and maybe a module or two) really insufficient for that?
Otherwise a good read. The idea of Elixir/Erlang being "object-oriented" by
way of their application of the actor model was one of the "eureka" moments in
my transition from procedural languages (in particular, Ruby).
------
GrumpyCoder
Looks unreadable
[https://i.imgur.com/MTJziXw.png](https://i.imgur.com/MTJziXw.png)
~~~
tqkxzugoaupvwqr
Works in Safari Technology Preview 12 on Mac OS 10.11.6:
[http://i.imgur.com/cdN7ky2.png](http://i.imgur.com/cdN7ky2.png)
~~~
masklinn
Seems to work just fine on all OSX browsers (latest stable Chrome, Firefox and
Safari)
| {
"pile_set_name": "HackerNews"
} |
Company Raises the Price of a Drug That Fights Infant Epilepsy by 85,000% - Parbeyjr
https://futurism.com/company-raises-the-price-of-a-drug-that-fights-infant-epilepsy-by-85000/
======
CamelCaseName
My line of thinking has always been that developing and selling drugs is very
expensive. Without the possibility of being acquired and selling a vial for
five figures, there's no way it would have existed in the first place.
Going off the numbers in the article, MNK sold $1B last year. At $34K/vial
that's almost 12 vials per infant (2,500 infants/yr) per year. (Assuming the
course of treatment is one year)
When the vials were selling for $40 each, they would have brought in $1.2MM -
I'm no expert, but that doesn't even sound like enough to rent a lab and hire
scientists/technicians.
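(A quick back-of-the-envelope check, using only the figures quoted above:)

    revenue = 1_000_000_000          # ~$1B in sales last year
    price_per_vial = 34_000
    infants_per_year = 2_500

    vials = revenue / price_per_vial
    print(vials / infants_per_year)  # ~11.8 vials per infant per year
    print(vials * 40)                # ~$1.2M/year had vials stayed at $40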
The real headline of this article should be, "MNK settles $100MM lawsuit over
anti-competitive drug pricing behavior" (or something more catchy) -- That, to
me, is the real interesting bit.
How regulations work around drug pricing is where media focus should be, and
there are just as many soundbites in that discussion for the media to latch on
to as well.
~~~
maxerickson
Of course, we know from history that it existed for $40 a vial. Looking into
it, it was identified in the 1930s and a synthetic analogue was developed in
the 1960s.
I imagine the one weird trick they were using to bring down costs is only to
devote a small number of days each year to producing the particular drug. It'd
be interesting to see the current production costs.
------
devoply
One way to deal with this problem is simply to take away the drug patent if a
company is found to engage in this sort of behavior, make that law. Their drug
becomes a generic to be manufactured by other companies if they are found to
be profiteering off a drug. Then under threat of losing it all hanging over
their heads, drug companies will behave themselves.
How do you decide what is profiteering, let the courts decide, just pass the
law and then pass the judgment to the courts. They will use common law and all
the other stuff they have under their belt to make that decision, economic
theory and what not.
The point is to get business strategists to understand that there is a risk of
bluntly increasing prices to make a profit.
~~~
FullMtlAlcoholc
They also would simply refuse to research and develop any new drugs under that
regulatory environment. Drug companies are not benevolent entities; they are
as cutthroat as, or even more so than, Wall Street.
The real issue is the cost and length of clinical trials. I know this can be
lowered somewhat by allowing computational models, but it is a difficult
problem to solve unless biotech goes through the same garage revolution as the
PC.
~~~
gwright
One way to address this is to create different standards of evaluation and to
adjust the legal liabilities accordingly.
What I mean (very roughly) is something like:
Untested => you are on your own, don't imagine you can sue anyone if something
goes wrong, this is the default Conditional => some basic efficacy/safety
protocols have been evaluated, some limited liability for the drug companies,
Approved => full suite of evaluation done, liability still capped but at a
much higher level then Conditional
There would still be legal safeguards around fraudulent or negligent practices
of course that wouldn't be constrained by the capped liability I suggested
above.
------
inetsee
I couldn't see anything in the article (or the linked press release) to
indicate whether the settlement included reducing the price to something less
outrageous.
| {
"pile_set_name": "HackerNews"
} |
How Twitter Should Make Money - fryed7
http://alxhill.com/2012/11/how-to-make-money-a-guide-for-twitter/
======
mschaecher
They're basically already doing all of these in some form or another.
1\. They'd never describe it as this, but sign up as an advertiser and you
essentially get all of this.
2\. They're essentially doing this with their new interest based targeting for
advertisers, i.e. mining users for interests and then letting advertising pay
to target those interests.
3\. Promoted tweets all the way. While they don't have follower count as a
targeting vector yet, they do have everything from device and location, to
gender and interests, to @handles and time of day.
------
tehwebguy
> 3\. Injected tweets.
They are already doing this
~~~
alxhill
Interesting - I wasn't aware of this when I wrote the article, and certainly
haven't seen anything like it myself. Do you have source?
~~~
proksoup
[https://support.twitter.com/articles/142101-what-are-
promote...](https://support.twitter.com/articles/142101-what-are-promoted-
tweets)
| {
"pile_set_name": "HackerNews"
} |
Introducing Launch Now and Simplified Rules - mecredis
https://www.kickstarter.com/blog/introducing-launch-now-and-simplified-rules-0
======
alxjrvs
The "Launch Now" portion looks so targeted at getting you to avoid the use of
community managers (bright green vs. dull blue, A checkmark, right side of the
screen, "get help if you need it" vs. "wooohoo!"), is kickstarter losing money
by reviewing all of the projects manually? I can imagine they might be popular
enough.
| {
"pile_set_name": "HackerNews"
} |
The Deal Is Simple. Australia Gets Money, China Gets Australia - cwan
http://www.businessweek.com/magazine/content/10_37/b4194044972388.htm
======
nicko
I worked in the Pilbara for one of the big miners in an 'automated' laboratory
for a couple of years after graduating. AMA.
Couple of quick points for anyone tempted to make the move: As an employee you
get to interact with some pretty amazing tech, but most of it is created by
other firms. Direct employees typically end up operating, maintaining and
fixing things when they break. It's a great place to get hands on experience
solving problems under time pressure. If you want to work heavily in the code
base of automated robots/ trucks / trains, find a firm that does the
contracting work to work for.
Secondly, don't do it just for the money, the job becomes your life. The
article wasn't really about employee benefits but they did mention a salary
without saying exactly what it was for.
Michael, after 35 years earns 145,000 p.a. including superannuation.A typical
truck driver on a FIFO (fly in fly out) roster will fly up to the mine and
work 7 12.5 hour day shifts, followed by 7 12.5 hour night shifts, and then
fly back to the city to recover for a week (6 1/2 days). This is repeated
about 17 times a year.
14days * 12.5hour shift * 17 stints = 2975 hours. $145, 000 / 2975 = approx
AU$48 dollars per hour, including super. For that 20 year old on $92,000 it
works out about $31 an hour, assuming the same roster. getting that job is not
exactly easy either. The money is not bad, especially since you have nowhere
to spend it on while at the mine, but the guys who do this long term have it
in their blood and live to work at the mine.
Your social and dating life ( if you have one at all) gets fairly messed up on
that roster, and fatigue from night shift can get to you after awhile.
Fun Fact: The mines tend to prefer female truck drivers as they are more
gentle on the equipment.
~~~
adsyoung
Hi nicko. Do you have an email address I could ping you on. Would love to quiz
you about your experience :)
~~~
nicko
No probs, I just updated my profile with my email.
------
ryanwaggoner
The geopolitics are interesting, of course, but this was _really_ interesting
to me:
_He now oversees a test project of driverless Komatsu trucks that cart out
300 tons at a time, moving 80,000 tons over two 12-hour shifts. When the
drilling is good, he says, they can shift 120,000 tons in 24 hours. Without
drivers, the trucks are more reliable, with radar sensors stopping them from
crashing. They roll 24 hours a day._
~~~
teyc
some drill rigs are semiautonomous, with GPS. The control systems were priced
around $120k the last time I looked about 10 years ago. Any HN'ers want to
disrupt this? I know some drillers out this way.
~~~
nl
They have similar technology for farming, too.
If you wanted to disrupt this, I suspect you don't want to make it cheaper
(the mining companies have plenty of money), but to do a better job somehow.
(Where are you?)
~~~
w00pla2
> They have similar technology for farming, too.
This is called "Precision agriculture". Tractors and equipment enabled with
this can throw the precise amount of fertilizer/seeds at specific points.
Probably quite a hot topic in farming... Go to any farming show and you will
see this.
<http://en.wikipedia.org/wiki/Precision_agriculture>
------
maxklein
The first page of the article uses various journalistic tricks to make the
Australians seem like nice homely people, while the Chinese seem like faceless
big corporations.
------
barredo
Alternative Title: Chinese (amongst others) companies make money on Australian
soil. Also known as "Multinationals _multinationing_ "
~~~
ww520
Alternative Title: Capitalism at work, trade instead of war, each got what
they want.
~~~
Estragon
I don't think it's that simple. Consider, for example, that the Australian
Prime Minister, Kevin Rudd, lost his leadership largely because mining
companies were upset by his plan to tax them further. From the article:
Economic and diplomatic advances, however, have not
fueled a warm national glow. Australia's proposal to
increase taxes on mining became a national issue,
precipitating a three-month barrage of anti-government
campaigns paid for by the mining industry, which in part
led to the removal in June of Prime Minister Kevin Rudd,
a Chinese-speaking former diplomat. In the August federal
election, the ruling center-left Australian Labor Party
lost its majority due in large part to massive swings
against it in the resource-rich states of Western
Australia and Queensland. Supporters of the tax argue
that the resources are not coming back and that Australia
should participate more fully in the outsize profits.
Those against include the mine operators and the many who
work in the mines; mining salaries are, on average,
Australia's highest, according to the Australian Bureau
of Statistics.
~~~
brc
There were other factors at work with Rudd's removal. The mining tax was the
last act in a long play of dropping promises, reneging on commitments and
policy backflips.
The mining tax was a disaster and, in its original form, was reverse
nationalism by stealth. The government was going to tax profits above 6% with
a 40% 'super tax' - terminology straight from a Marxist script. They were then
going to reimburse Miners for losses on projects. In reality, with the
government participating in 40% of the profits and 40% of the losses, it was a
part-nationalisation by stealth.
The reason the mining tax was so rejected, particularly by the mining states,
was that it was to replace state-based mining royalties with a Federal mining
tax. So instead of individual states receiving royalties for mineral wealth -
as per the constitution, the tax money would be funnelled to the Federal
government, which would then have power over where it was spent. The mineral
states stood to lose power over their own revenues, and the non-mining states
stood to gain income from activities that took place entirely outside their
borders.
The tax was beloved by pro-government tax-raising types and lovers of economic
theory and hated by pretty much everyone else.
~~~
nl
_There were other factors at work with Rudd's removal. The mining tax was the
last act in a long play of dropping promises, reneging on commitments and
policy backflips._
I agree with this.
_The mining tax was a disaster and, in it's original form, was reverse
nationalism by stealth. The government was going to tax profits above 6% with
a 40% 'super tax' - terminology straight from a Marxist script. They were then
going to reimburse Miners for losses on projects. In reality, with the
government participating in 40% of the profits and 40% of the losses, it was a
part-nationalisation by stealth._
I agree the mining tax was a disaster, but it wasn't as bad as you are making
out. The 40% thing was always something that got the headlines, but as you
point out it isn't as simple as that because it would have reduced state-based
royalties.
_The reason the mining tax was so rejected, particularly by the mining
states, was that it was to replace state-based mining royalties with a Federal
mining tax._
This is true.
_The mineral states stood to lose power over their own revenues, and the non-
mining states stood to gain income from activities that took place entirely
outside their borders._
This is also true, but not necessarily a bad thing. Australian urban centres
are a long, long way from the mines and some form of distribution probably
makes sense.
_The tax was beloved by pro-government tax-raising types and lovers of
economic theory and hated by pretty much everyone else._
I don't think the tax was loved by anyone except Rudd's kitchen cabinet (and
possibly only by 3 out of them too, judging from how quickly Gillard backed
off it).
But the goals of the tax are perhaps more broadly backed. The original article
touched on the fact that many mining companies avoid investing in the
communities (as you'd expect from a profit making entity), and this has caused
some issues. From the article:
_One of the local councils in the Pilbara, Shire of Ashburton, recently
refused permission to Rio Tinto to expand its camp around the Tom Price mine.
Instead the company has been asked to invest in facilities that will remain
after the mine is closed, to spend $247 million on housing, an air strip, and
other infrastructure at the town of Pannawonica. This kind of investment, and
the employing of locals, is the only move that is going to convince skeptics
like Tony Wiltshire.
"We were at a town meeting the other day when a representative from a mining
company said, 'Come on, we're all up here to make a dollar.' The locals in the
room looked at each other and thought, 'What?' We live here. We can cope with
the hot summers. We're here for the long term. We are actually better for the
multinationals than the contractors they fly in and fly out. We just need them
to wake up to the fact that they, and we, are all in it for the long haul."_
The idea of the tax was to formalize this kind of investment. The way they
went about it was probably the worst planned example of domestic politics in
Australia since WW2, though!
~~~
megablast
The tax was a great idea, and only hated by mining companies and people who
happened to believe everything they see on TV. Most other big mining countries
(Canada, Brazil, etc...) are also thinking about bringing in a mining tax like
this.
All Australians deserve returns from the investment, not just the tiny number
of miners, who earn ridiculous sums of money.
~~~
ShardPhoenix
If more people were willing to do those mining jobs, they wouldn't make so
much money.
~~~
nl
I think he meant the tiny number of mining companies which make all the money
- not the small number of employees who make good but not outrageous money.
------
jakarta
It's pretty interesting to see how intertwined China is with Australia.
If you ever want to short China, your best bet is to target Australia which
already has a bit of a property bubble, mining companies that are dependent on
Chinese demand, and a currency that is buoyed by commodity prices.
If China is indeed a bubble (at least in the medium-term) Australia is going
to experience a world of hurt.
~~~
mrtron
I don't see how China's demand for commodities could be a bubble.
They are building in huge volumes, but they have a huge volume of people.
There is a middle class developing. Check out Hans Rosling's talks about it.
There may be a temporary bubble in the stock market, but China is undergoing
an incredible shift towards becoming North American in every way. Beijing is
the best example of this - the entire city has been bulldozed and redeveloped
in a North American big box way. Shanghai is even further 'ahead'.
Traditionally Chinese have been very conservative with spending, preventing
quick economic growth. Credit cards are almost non-existent. You pre-pay your
bills, even electricity! But for some reason - perhaps the Internet's media
influence, perhaps the one-child policy - people are now spending money
quickly. Starbucks is extremely popular and more expensive than in Toronto,
NYC or SF! For the price of a coffee you could easily get a great meal with a
few drinks, yet coffee shops are filled.
If all of it is a bubble - it is so firmly entrenched at ground level that the
momentum of this bubble will burst through and last for decades. New
industries are developing rapidly. The insurance industry was almost non-
existent a decade ago, and is growing quickly in different ways there.
Part of what fuels the growth is international companies spending a lot of
money trying to establish new business in China. These companies are risking
money and bringing expertise into a really unknown market, and often losing
money. China in the end benefits, as they just sit there with their vast
market and allow foreign companies to try to create new
industries. If anything starts doing really well, the government could always
regulate that industry and take it over themselves.
I don't see anything stopping both China and India both developing a large
middle class. That will snowball growth for decades. Premium brands will
likely dominate - these folks want iPhones and Mercedes.
I think this Australian iron ore example is just the beginning of the scale at
which we will see commodities shifting to China. I expect timber to happen on
an enormous scale, as their forestry practices seem quite unsustainable.
~~~
btilly
The first problem that China has right now is that government policy has made
it easier to bring money in than to take it out. This has caused a significant
asset bubble. If that asset bubble pops, it will be painful. That's exactly
what happened to the Japanese juggernaut that everyone was scared about 20
years ago.
The longer term problems are demographic. The one-child policy means that the
upcoming generation is smaller than the previous. Their work-force will soon
be shrinking. Furthermore many parents chose to abort girls to make their one
child a son. But a generation with a significant surplus of men is bound to
have interesting social problems.
If I had to bet, I'd bet that 30 years out we'll be hearing more about India
than China.
~~~
riffer
_30 years out we'll be hearing more about India than China_
I have that bet on, and in good size.
Everything we know about top-down vs. bottom-up design says that China is
supposed to appear to be doing better than India, until eventually it suddenly
becomes clear that it is not.
------
adrianwaj
The key issue for me here is that Australia's not getting the same
infrastructure projects that China's African resource partners are getting (eg
roads, schools, hospitals.) Australia's probably getting more money instead.
For example: "Abuja — The Chinese government has concluded arrangements to
spend about $50 billion on development of infrastructure in Nigeria through
SINOSURE, the Export Credit Guarantee Agency of China."
<http://allafrica.com/stories/200803310705.html>
"Of some 900 projects China built in Africa, more than half are aimed at
improving local people's livelihoods."
[http://www.china.org.cn/opinion/2010-07/04/content_20416315....](http://www.china.org.cn/opinion/2010-07/04/content_20416315.htm)
~~~
etherael
Why would you think that Australia would have the same demand for basic
infrastructure that Africa does, considering the relative economic positions
of the vendors in each instance?
~~~
adrianwaj
_Unpredictability brings anxiety and hope in equal measure to the 8,000
Aborigines in the Pilbara, who hope their third-world living conditions might
be raised by the China boom. Tony Wiltshire, an indigenous mechanic who runs a
guild of Aboriginal businesses and tradesmen in the Pilbara, says the mining
boom's benefits could sidestep the local population._
------
Volscio
I know this is not exactly a new phenomenon for those who live in mining areas
(northwestern Canada, Australia, etc.) but I found it interesting when going
on a trip in the Outback that 3 of the men with us were miners (one from
Canada, one from the UK, one from Australia) who do the tough mining tour of
duty for part of the year and then travel for fun the rest of the year, rinse
repeat.
It's an odd life.
------
cletus
Honestly this kind of article annoys me.
It's split across 5 pages (more page views anyone?) and meanders through
largely meaningless anecdotes without ever really getting to the point.
The whole stream of anecdotes style of journalism is one I abhor. It's lazy,
can easily be abused (you can use anecdotes to prove anything) and is often
just cheap theatre to try and humanize something otherwise without substance.
~~~
bl4k
Blame Malcolm Gladwell and Chris Anderson. Those guys sold a ton of their
books (airport books) and now their style is creeping into journalism.
The stories are not used as supporting evidence, references or as facts to
build an overall conclusion to some new idea - but rather as infotainment,
where the anecdotes are the whole point of the story and there is no real firm
conclusion or over-arching theme that is proven.
The anecdotes are easy to digest and repeat, which helps not-so-smart people
sound smart when they tell the same stories to a group of people.
------
brc
I'm not sure what point this article is trying to raise. Is it that
Australia is not in recession? Is it some sort of suggestion of xenophobia on
the part of Australians?
Australians have all seen this before in the 1980's when the Japanese were
buying real estate, companies and generally splashing cash around. I think
this time around most people are more relaxed about it, and a feeling of 'make
hay while the sun shines' is probably the most prevalent.
While the imagery of large parts of Australia being dug up forever is
evocative, it's not very realistic for anyone who has visited this part of the
world to think it's all going to run out soon. It's a massive, virtually
uninhabited area and the ground you walk on is literally red from iron ore.
~~~
zmmmmm
> make hay while the sun shines
If only we were actually making some hay, or put some of what we do make in
the barn instead of gorging on it. I think a better analogy is "let's party
while the champagne is flowing".
Instead of long term investments in permanent infrastructure and educating our
citizens we are largely just padding out the middle class with spurious
welfare benefits. That's almost worse than just throwing the money away
because it is creating an enormous future liability and false expectation
among average folks.
~~~
brc
I mostly agree with this. While I don't think we should increase mining profit
taxes, I do think some of the mineral wealth should be locked away into a
sovereign wealth fund - the type that can't be gotten at by sticky fingered
politicians of any persuasion. The wealth fund could be used for apolitical
purposes, like education funding (scholarships?), national dual lane freeway
construction, large-scale water projects. But only after it reached critical
mass and was throwing off a couple of billion per year in investment returns.
~~~
dejb
Actually, wasn't it called the mining 'super' tax because it was to fund
superannuation? I know there was to be some increase in funding but I'm
actually not 100% sure because the govt did such a poor job of selling it.
Anyway I think this may be one of those cases where the XX cents we typically
get back from each dollar the govt taxes could be worth it to slow the party
down.
~~~
brc
It was called the MSPT or Mining Super Profits Tax.
Super Profits is a Marxist term (don't know if Marx coined it, or it just
became part of the canon) to reflect anything above a 'reasonable' rate of
return. In this case it was set at the government bond rate. Which makes you
question why any rational investor would invest money into a highly risky
mining venture when they could get the same returns from government bonds. I
suspect they thought they could kick off a nasty class war - which people seem
to have forgotten now that Swan was muzzled by Gillard and Rudd deposed. But
at the time Swan and Rudd were talking about 'evil foreign rich miners'. But
it backfired with some good PR from the mining companies, and a higher level
of awareness among the Australian people when someone is being deliberately
obtuse.
It was sleight of hand to link super profits tax with superannuation. In
reality the two have nothing to do with each other because employers pay for
employee superannuation out of their bottom line. The only superannuation the
government pays is for public service employees. The reason they did a poor
job in selling it was because if you actually looked at the details, the tax
and the superannuation actually had nothing to do with each other. Lindsay
Tanner actually quoted as much before changing his tune to the party line.
>slow the party down.
Imagine how much you would be upset if they decided to start taxing you extra
because they thought you were doing too well. It's the ultimate definition of
unfair. The miners struggled for decades - have a period of prosperity and are
whacked with extra taxes. That is not only unfair, it raises sovereign risk
and ultimately reduces foreign investment. You don't improve the economy by
knobbling your best performer.
~~~
zmmmmm
> Imagine how much you would be upset if they decided to start taxing you
> extra because they thought you were doing too well. It's the ultimate
> definition of unfair.
Umm, have you ever noticed how people are actually taxed? It's called a
"marginal" tax rate and it's a feature of just about every tax system in the
world, precisely because it is considered a "fairer" system to have people who
can afford to pay more share more of the tax burden.
One might even argue that the "super profits" tax is fairer than marginal tax
rates because it takes account of the amount of capital you have deployed to
achieve the result (to relate it back to people, if I have a family of 10
people to support I don't get a lower tax rate ... but a company with 10,000
people to achieve the same profit as one with 1000 people will get that taken
into account).
~~~
brc
I take it by those comments you've never actually invested any money? The
number of people employed has zero to do with making decisions about
investment returns. But that's been the problem all along - people with very
little idea about how capital markets and mining investment works deciding
they want some of the pie, and coming up with all types of justifications for
it.
I'm not talking about marginal rates - I'm talking about - hey you! You're
making too much money, I want some of that! Here's an extra 40% tax for you -
no consultation, no transition period, no questions asked. Would you defend
this tax if it was applied to technology startups because they made too much
money?
------
scn
"A 20-year-old off the street can come up to the Pilbara and earn A$92,000 a
year," says Boxy. The median household income in Australia is A$67,000.
Talk about inflationary pressures.
~~~
nl
Those jobs are well paid for a reason.
They are in the middle of nowhere (unless you've been there you have no idea
how isolated the Pilbara is), and the climate is less than ideal:
_The climate of the Pilbara is semi-arid and arid, with high temperatures and
low irregular rainfall that follows the summer cyclones. During the summer
months, maximum temperatures exceed 32°C (90°F) almost every day, and
temperatures in excess of 45°C (113°F) are not uncommon. The Pilbara town of
Marble Bar set a world record of most consecutive days of maximum temperatures
of 100 degrees Fahrenheit (37.8 degrees Celsius) or more, during a period of
160 such days from 31 October 1923 to 7 April 1924.[5]_
<http://en.wikipedia.org/wiki/Pilbara>
In the non-open cut mines, many jobs are underground and a lot of people don't
seem to cope well with being 1km underground.
A few well-paid jobs don't create inflation on their own. The inflation rate
remains a steady 3%.
~~~
scn
I'm not saying they're easy jobs, but two weeks on, one off, with no expenses
for the two weeks on, makes for a lot of disposable income even without those
salaries.
Inflation remains a steady 3% with a cash rate of 4.5%, way above comparable
economies, with a high chance of further rises. People not in the mining
industry can't expect to compete with the salaries they're earning and the
high interest rates and Australian dollar are hurting households and other
exports.
Housing prices in Perth: <http://reiwa.com.au/res/res-salesgraph-display.cfm>
Does that look sustainable?
~~~
nl
Our rates aren't high by Australian historical standards:
<http://www.loansense.com.au/historical-rates.html>
Regarding the house price graph, I think you are being misled slightly by the
scale.
If you magnified the 1978-1993 graph it would look very similar - a HUGE ramp
up in the prices in the late 1980's, and then stagnant in the early 1990's.
Compare that to how the current graph looks: HUGE ramp up in 2003-2007, and
then stagnant.
I agree with what you are saying about the inequalities of mining salaries and
the problems with the crowding out effect. I think that contributes more to
the stratification of society rather than inflation, though.
------
helmut_hed
I was struck by this quote (in regards to Rio Tinto): _Until last year, China
was trying to buy more of the company_
As far as I know "China" does not buy things. Their government, maybe. I'd
like to know if a Chinese government entity or connected company was truly
involved in this. If not, the reporter is speaking carelessly...
------
sliverstorm
Exporting all your resources, particularly a staple like iron, seems like a
bad plan for Australia in the long run. Particularly because it is a raw
material.
------
c00p3r
It is better to say: Australia gets all those Treasuries (American debt) and
China gets Australia's resources. It is really a good deal!
------
Charuru
A less sensationalist title: Australia gets Chinese Money, China gets
Australian Ore.
------
adrianwaj
_"My total salary and superannuation is A$145,000 a year, and when I'm in the
Pilbara I don't have to put my hand in my pocket. I have a house in the city
and an investment property and a 1972 Falcon pickup. I couldn't have had this
life without mining."_
and devastating the environment (and human and animal health) with all the
land-clearing, power consumption, water consumption, massive pollution and
toxic waste that comes with this and connected industries.
Please critique my launch page for Flourcast - fananta
http://fananta.github.io/flourcast
======
fananta
I'll set up domain forwarding correctly soon so ignore the oddity.
Ask HN: Wasn't e-mail deliverability affected today for domains hosted with dyn? - nodesocket
If you tried to send an e-mail to foo@twitter.com during the dyn outage, would the message be delivered? Perhaps delivered eventually (delayed).
======
pwg
> Perhaps delivered eventually (delayed).
This is how email is designed. Mail servers will continue to try to deliver
for quite some time before giving up.
The email transmission protocol was designed way back in the day of
intermittent internet connections, so it already has baked into its design
the concept of retrying some number of times before giving up.
FWIW, twitter uses google as their email:
twitter.com mail exchanger = 30 ASPMX3.GOOGLEMAIL.com.
twitter.com mail exchanger = 20 alt2.aspmx.l.google.com.
twitter.com mail exchanger = 30 ASPMX2.GOOGLEMAIL.com.
twitter.com mail exchanger = 20 alt1.aspmx.l.google.com.
twitter.com mail exchanger = 10 aspmx.l.google.com.
So anyone who had twitter's MX records cached in their local DNS servers would
never have noticed a problem from the dyn outage (at least not until the cache
expired).
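If you want to reproduce the lookup yourself, here is a quick sketch with
dnspython (assuming that package is installed; during the outage the query
itself would have failed unless the records were already cached):
    # MX lookup via dnspython (pip install dnspython).
    import dns.resolver
    answers = dns.resolver.query("twitter.com", "MX")
    for rdata in sorted(answers, key=lambda r: r.preference):
        print(rdata.preference, rdata.exchange)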
MacOS Sierra Code Confirms Thunderbolt 3 and 10Gb/s USB 3.1 in Future Macs - lelf
http://www.macrumors.com/2016/08/24/macos-sierra-code-thunderbolt-3/
======
tracker1
I just want to see a higher end Mac Mini that's closer in specs to the higher
end iMac... Going all the way to Mac Pro just isn't going to happen, and I
don't want an integrated screen.
A new model and dataset for long-range memory - atg_abhishek
https://deepmind.com/blog/article/A_new_model_and_dataset_for_long-range_memory
======
cs702
Another great blog post on great research by the DeepMind guys, who are also
simultaneously releasing a new dataset for long-range language modeling.
The post is worth reading in its entirety.
If I may summarize, the authors propose a transformer augmented with a short-
term memory mechanism (analogous to TransformerXL) as well as a long-term
memory mechanism (new) that learns to 'compress and memorize' embeddings from
the short-term memory. The model is trained on book-length samples (!!!!), and
seems to perform significantly better than prior models at generating language
with long-range contexts. To my eyes, text generated by the trained model is
virtually indistinguishable from human output, and qualitatively superior to
GPT2 samples.
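For intuition, here is a minimal PyTorch-style sketch of the compression idea
(my own illustrative code, not DeepMind's: the class name, sizes, and the
strided-conv compression function are assumptions, and the paper's auxiliary
reconstruction loss is omitted):
    import torch
    import torch.nn as nn
    class CompressiveMemory(nn.Module):
        """Keep recent activations verbatim; compress the oldest ones
        instead of discarding them, so attention can still reach them."""
        def __init__(self, d_model, mem_len=512, cmem_len=512, rate=3):
            super().__init__()
            self.mem_len, self.cmem_len, self.rate = mem_len, cmem_len, rate
            self.compress = nn.Conv1d(d_model, d_model,
                                      kernel_size=rate, stride=rate)
            self.register_buffer("mem", torch.zeros(0, d_model))
            self.register_buffer("cmem", torch.zeros(0, d_model))
        def update(self, new_hidden):             # new_hidden: [seq, d_model]
            self.mem = torch.cat([self.mem, new_hidden], dim=0)
            overflow = self.mem.shape[0] - self.mem_len
            if overflow >= self.rate:             # compress only full groups
                old, self.mem = self.mem[:overflow], self.mem[overflow:]
                c = self.compress(old.t().unsqueeze(0)).squeeze(0).t()
                self.cmem = torch.cat([self.cmem, c], dim=0)[-self.cmem_len:]
            return torch.cat([self.cmem, self.mem], dim=0)  # what attention sees
The real model also attends over these memories with relative positional
encodings, but the sketch above is just the bookkeeping.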
~~~
JRKrause
Agreed that the generated sample is superior to similar outputs from GPT-2.
Looking at the additional samples in the publication, my first thought is that
the model cannot easily stray from or modify the context. Once a fact is
stored within the compressed memory, it seems the model cannot easily generate
sentences contradictory to that fact. This is problematic because frequent
changes to relational information (e.g. where a character is standing) are
fundamental to storytelling.
------
ColanR
So is there a place to download the trained model? I don't see anything but
the dataset available.
~~~
gwern
They probably won't since DM doesn't open-source most of its work. The authors
claimed way back in November that they'd at least open-source the code
([https://openreview.net/forum?id=SylKikSYDH](https://openreview.net/forum?id=SylKikSYDH))
but nothing yet. (The model isn't so big that open-sourcing it is all _that_
important. It's no Turing-NLG [https://www.microsoft.com/en-
us/research/blog/turing-nlg-a-1...](https://www.microsoft.com/en-
us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-
microsoft/) that's for sure!)
In the meantime, there's always Reformer, which has Trax and PyTorch
implementations.
~~~
hooande
I believe the pre-trained Transformer-XL model can also be downloaded, to
provide long-term memory functionality similar to the Compressive
Transformer's. I don't have a direct link, but it's available via huggingface
[https://huggingface.co/transformers/pretrained_models.html](https://huggingface.co/transformers/pretrained_models.html)
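Something like this should pull it down (a hedged sketch; the exact output
format depends on your transformers version):
    # Load the pre-trained Transformer-XL checkpoint from Hugging Face.
    from transformers import TransfoXLLMHeadModel, TransfoXLTokenizer
    tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
    model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")
    input_ids = tokenizer.encode("Long-range memory is", return_tensors="pt")
    outputs = model(input_ids)  # prediction scores plus the recurrent "mems"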
~~~
gwern
Yeah. I didn't mention Transformer-XL because I'm not sure how much of a long-
range dependency it actually learns to handle. The only papers I've seen on
recurrency indicate that they tend to learn very short-range dependencies,
while something like Reformer with direct access to thousands of timesteps
seems more likely to actually be making use of them.
------
ganzuul
From the description in the research paper of how they compress the memory it
sounds like a form of meta-learning.
Perhaps a network like this would be interested in reading the same books more
than once. Perhaps it could find favorite books it wanted to read many times.
~~~
nloladze
So the simple version of it is that it links parts of memory together? Just
like human memory? Trying to keep the most valid parts of it together.
~~~
ganzuul
In principle a feature of compression is exactly this. Lots of potential in
this space.
------
zackmorris
Thank you, this just made a huge connection for me between sleep, memory, and
their role in decision making (in the "consolidated episodic memories" link):
[https://www.ncbi.nlm.nih.gov/pubmed/28641107](https://www.ncbi.nlm.nih.gov/pubmed/28641107)
I was suffering from sleep apnea at this time last year and was on call 1 out
of every 3 weeks so was not defragging my brain's hard drive. I got decision
fatigue and my productivity fell to 10%, which led to me being unable to work
for several months.
Show HN: VNote 2.0 – PlantUML editing via Preview Tunnel - tamlok
https://github.com/tamlok/vnote/releases/tag/v2.0
======
tvmalsv
This looks very interesting; how did I not know about it already? I was
searching for what was available in the uml/diagram space (such as PlantUML,
nomnoml, Mermaid, etc.) just last week. Obviously, VNote isn't just about
diagrams, but the timing of this Show HN was interesting to me.
Any other suggestions for diagram generators that parse a text DSL and
generate good looking charts & diagrams?
~~~
tamlok
VNote is a Markdown note-taking app. Markdown is excellent for holding scripts
such as PlantUML, mermaid, and so on. So it is possible to support DSLs, too.
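For example, a fenced block like this inside a note gets rendered as a
diagram in the preview (illustrative snippet; the exact fence label VNote
expects may differ):
    ```puml
    @startuml
    Alice -> Bob: authentication request
    Bob --> Alice: authentication response
    @enduml
    ```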
~~~
tvmalsv
Yes, and I really like the idea of embedding the text definitions of diagrams
in my markdown files, _and_ being able to view it and export to html/pdf. Very
nice.
I'm somewhat familiar with PlantUML, BlockDiag, Flowchart.js and a couple
others, but am simply wondering if there are any really outstanding ones out
there that I simply don't know about. For example, maybe one that's much more
aesthetically pleasing, or maybe one that built-in support for the entire AWS
ecosystem.
I recently saw a nice AWS diagrammer, but it's drag & drop; I don't think it
has a way to build a diagram from a text description.
------
baronseng
I just discovered another way too: Visual Studio Code plus Markdown Preview
Enhanced. With vim mode you get pretty much the same thing.
For the developers among us that is just one plugin away.
[https://shd101wyy.github.io/markdown-preview-
enhanced/#/](https://shd101wyy.github.io/markdown-preview-enhanced/#/)
~~~
tamlok
This plugin just shows you the graph, right? VNote can scroll the
corresponding element into view and highlight it, which is very helpful when
the class diagram is really large on a small screen.
By the way, when you double click an element in the preview graph, VNote could
highlight the originating definition code in the editor.
The Live Preview Tunnel, I think, is what makes VNote different from other
editors. :)
------
WhatIsDukkha
This looks like a pretty serious piece of software, very nicely done.
I don't see anything that would pry me away from evil/orgmode/emacs but very
compelling if you aren't using that combination yet.
~~~
tamlok
Give it a try! :)
VNote supports Vim mode down to the bone (Ctrl+J/K everywhere to navigate), Captain
Mode (from the leader key of Vim), Navigation Mode (each widget shows two
chars to select to focus), Universal Entry (just like CtrlP in Vim to search
and jump), and so on.
------
kbumsik
That's a great piece of software! I heavily use PlantUML but never found this
kind of nice editor before!
As a Qt newbie, I'm curious why you chose Qt Widgets instead of QML. Maybe
because it is hard to make a desktop-feeling app using QML?
~~~
tamlok
When I started VNote two years ago, I was just a traditional C++ player, so Qt
Widgets was the choice. I think it gives me the most freedom and power.
Before VNote supported PlantUML, I used VS Code and a plugin to edit and
preview PlantUML. It was annoying that I needed to scroll to the right element
every time I updated a line. Why not just jump to the element corresponding to
the current line? So VNote provides this now.
Big Breakthroughs Come in Your Late 30s - ghosh
http://m.theatlantic.com/health/archive/2014/02/big-breakthroughs-come-in-your-late-30s/283858/
======
lkrubner
A distinction I read somewhere, that I think is useful, is that people tend to
be either primarily conceptualist in their thinking, or they are empiricists
who learn from experience. Conceptualists have their big breakthroughs before
the age of 35, and empiricists have their big breakthroughs after 35.
In conceptual fields, such as math and physics, the big breakthroughs happen
young. Werner Heisenberg was 27 when he came up with the Uncertainty
Principle, and Einstein was 26 when he discovered relativity.
In fields where progress is primarily empirical, such as biology, the big
breakthroughs tend to happen later. Alexander Fleming was 42 when he
discovered penicillin and Jonas Salk was 40 when he invented the vaccine for
polio.
This distinction can be extended to artists. To write a great empirical novel,
one rich in observed life experience, one must live a long time, and therefore
Tolstoy was 41 when he wrote War and Peace. But to write a novel that
demonstrates new techniques for grammar and structure and pacing (a novel
noteworthy for conceptual innovation), one will be young, and therefore
Hemingway was only 26 when he wrote The Sun Also Rises.
~~~
jballanc
Einstein, in particular, is an interesting case. He was 26 when he had his
annus mirabilis (1905) in which he published 5 ground-breaking papers in
physics, including the Special Theory of Relativity.
_However_ it took another 11 _years_ until he was able to grasp enough of the
mathematics to finally formulate the General Theory of Relativity in 1916, at
age 37. Of course, if you're going to use statistics from Nobel Prize winners'
works, you have to keep in mind that Einstein won not for the General Theory
nor the Special Theory of relativity with which we usually associate him. He
won for his paper on the photoelectric effect, one of the other papers he
published in 1905...at age 26.
_Edit_ : It's probably also worth pointing out that, someday in the future,
it's not inconceivable that we may eventually place the Einstein-Podolsky-
Rosen Paradox in the same league as Relativity. That paper was published in
1935, when Einstein was already 56 years old. If it is not currently
considered of the same merit as Relativity, that is only because we still
don't have a good handle on the full implications of that work. (And, yes, I
realize that ultimately EPR will likely prove to be wrong, but a great
scientist teaches as much by being wrong as by being right.)
~~~
elteto
Could you elaborate a little on why is (or might be in the future) the EPR
paradox as important as Relativity?
~~~
jballanc
I can give it a shot...
So Einstein, Podolsky, and Rosen were sitting around (presumably) looking over
the mathematical foundations of Quantum Mechanics, when they noticed
something. In certain situations, you could end up with _one_ wave function
describing _two_ physically separated particles in a mutual superposition of
states. The consequence is that altering the state of one particle would
instantaneously cause the state of the other to become resolved. Effectively
they "discovered" quantum entanglement (which has since been verified as a
real phenomenon, not just a mathematical curiosity).
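To make "one wave function, two particles" concrete, the textbook example is
a Bell-type singlet state (standard notation, not from the EPR paper itself):
    |\psi\rangle = \tfrac{1}{\sqrt{2}}\,(|\uparrow\rangle_A|\downarrow\rangle_B - |\downarrow\rangle_A|\uparrow\rangle_B)
Measure the spin of particle A and you immediately know what a measurement on
particle B will show, however far apart the two particles are.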
What really makes the EPR paradox important, though, is what it implies about
reality. Einstein, et al. realized that there were only two ways to explain
quantum entanglement. One possibility is that the entangled particles contain
extra information, inaccessible to normal observation, about their respective
states and which way the superposition will resolve. This is the so-called
"hidden variables" solution. The other possibility is that an action on one
particle is, in fact, instantaneously causing an effect on the other particle.
While this is not, strictly speaking, a violation of General Relativity as no
information is exchanged, Einstein found this possibility so unsettling that
he famously coined it "spooky action at a distance". The conclusion from the
EPR paper was that the "hidden variables" solution was more likely, implying
that Quantum Mechanics was an as-yet incomplete theory.
Fast-forward a couple of decades, and we're just beginning to appreciate that
"spooky action at a distance" is actually more likely to be the correct
explanation. As I mentioned before, what this implies about reality _itself_
is pretty mind blowing. To go even further, we've since learned that
entanglement can not only occur between particles separated in space, but also
between particles separated in time. Quite literally, the future and the past
may be linked by this "spooky action at a distance".
We still don't fully grasp what, exactly, this means. One possible implication
of this is that our existence as sentient beings may simultaneously be a
consequence and the cause of a universe that can give rise to sentient beings.
Needless to say, even though it's likely that Einstein was wrong about "hidden
variables", the course of investigation that the EPR paper set the physics
community down is at least as important as Relativity.
~~~
cynicalkane
Not to be "that guy", but a lot of this is terribly incorrect:
* General relativity did not come from "grasping the mathematics" of special relativity. SR can be completely understood by an undergraduate and is an internally consistent description of mechanics and electromagnetic phenomena. Like Newtonian mechanics before it, it doesn't need anything more to be consistent.
* Local hidden variables are not "likely wrong", they are provably impossible. It is impossible to have hidden variables without breaking the principle of causality.
* Entanglement does not break the principle of causality. "Spooky action at a distance" is not an "effect" in a well-defined physical sense. It cannot be used to send information or cause things to happen.
* All of this could be--and was--understood without the EPR paper. Relativity was the most important thing since Newton. EPR was minor in comparison.
* I have no idea what connection you're trying to draw between sentient life and QM.
~~~
jostylr
Note that it is _local_ hidden variables that are impossible. The pilot wave
theory is the theory of "hidden variables" that led Bell to his theorem.
Basically, EPR
shows that either nature is nonlocal or there were hidden variables. Bell
showed that hidden variables had to have something nonlocal about them. So
Einstein created relativity and helped to highlight how nature has an aspect
that seems incompatible with it (basically, there is a "now" which, however,
may be undetectable and is not our "now").
For those curious, the nonlocal hidden variables are the positions of the
particles. Very hidden. So hidden, they are the only thing we see in
experiments!
The particles are guided by the wave function. This resolves all the weird
paradoxes such as Schrodinger's cat. It provides a great way to understand and
investigate nonlocality, spin, identical particles, etc. in a very precise
theory that even has broadly applying existence and uniqueness of solution
theorems, unlike classical mechanics.
[http://plato.stanford.edu/entries/qm-
bohm/](http://plato.stanford.edu/entries/qm-bohm/)
~~~
tomp
Hi! I'm a mathematician, I've always been curious about physics, but I've
never understood some of the concepts of QM, such as what is "observation" in
the Schrodinger's cat paradox, why QM means the universe is not deterministic,
why hidden variables cannot exist, ... mostly because every physicist that I
was able to talk to has been unable to properly explain these concepts. Do you
know any books/articles/sources about QM that could help me understand these
concepts, without going into unnecessary mathematical and physical details
(i.e. using the
least physics and mathematics necessary to explain the paradoxes of QM)?
~~~
jostylr
Tough question. The reason you probably did not understand them is that the
reasons are faulty.
The textbook QM says that when experiments happen, the continuous wave
function evolution stops and a new wave function is used in its place, chosen
randomly based on a prescription using probabilities coming from the original
wave function's decomposition in terms of an operator's (matrix) eigenvalues.
This is a postulate in their view and that's that.
It makes no sense since what is an observation? They don't explain. They just
know it. They use it when doing their experiments and it works well enough.
Attempting to formalize it leads to wrong conclusions.
Bohmian mechanics/pilot wave theory is a deterministic, hidden variable theory
that works. Within that context, you can understand the rise of operators as
observables and the entire collapse rule which turns out to be a convenient
approximation to reality in this theory; no actual collapse occurs. There is
just one wave function on configuration space (3n dimensional space, n being
the number of particles in the universe) evolving continuously via
Schrodinger's equation and the particles themselves being guided by the wave
function. It is the configuration space for the wave function where
nonlocality arises from. Understanding its role in relation to relativity is
the key question to understand.
The wave function evolves with lots of its branches being irrelevant which is
why we can effectively get rid of them, i.e., collapse the wave function.
As for resources, I recommend the stanford page I linked to above. There is
also [http://www.bohmian-mechanics.net](http://www.bohmian-mechanics.net)
which has a great deal of material including an introduction:
[http://www.bohmian-
mechanics.net/whatisbm_introduction.html](http://www.bohmian-
mechanics.net/whatisbm_introduction.html) and some faq videos
[http://www.bohmian-mechanics.net/videos_faq.html](http://www.bohmian-
mechanics.net/videos_faq.html)
I like Bell's book of his articles, Speakable and Unspeakable in Quantum
Mechanics. A wonderful read from a master.
~~~
selimthegrim
If you'd read the book and worked through it you would understand why a hidden
variable theory is not experimentally justifiable.
~~~
jostylr
I have read it and I must have missed the part you refer to. He was a strong
proponent of pilot-wave theory though towards the end he also started to like
GRW which itself consists of two distinct ontological possibilities.
In as much as QM makes predictions, pilot wave theory makes the same
predictions. But pilot wave theory has the advantage of being an honest theory
that actually does make predictions. QM suffers from needing an external agent
to collapse the system, an agent that is never specified, particularly on the
universal level. Bell put it very eloquently: did one need to wait for the
first form of life to do it, or perhaps one with a PhD, to collapse the
universe. He concludes it must be happening more or less all the time and that
the mechanism needs to be explained in the theory. We can either change
Schrodinger's equation as in GRW or we can add additional variables such as
positions of particles as in pilot wave theory. Or we need to accept that most
of reality is unlike our actual experience of a single reality such as in many
worlds.
------
onmydesk
As someone in my late 30s I think the main reason is this: it's time then to
stop fking around and just get it done. It dawns on you at this age where you
are in your life, how far you've got to go and it annoys you that thus far you
didn't get 'it' done yet.
You're not staring death in the face but you're close enough to feel its
influence. If not now when? That spurs you on, beyond any motivation you ever
had at any age before. The experience helps, the realisation that those before
you weren't any more special than you helps, but the ticking clock motivates
like nothing else.
Here's my own personal motivator that has meant the most at this age: 'Most
men lead lives of quiet desperation and go to the grave with the song still in
them.' - Henry David Thoreau. That should scare the hell out of you, unless
you're not old enough yet.
~~~
_random_
There is always a back-up plan of having kids, focusing on raising them well
and hoping that maybe they will come up with something cool.
~~~
nostrademons
That's a really heavy load to put on your kids. They should be free to live
their lives as they want to live them and not shoulder the weight of mommy &
daddy's unfulfilled dreams.
~~~
dllthomas
Eh, that partly depends on how narrowly you define "something cool".
------
sobes
Seems to be corroborated in tech by some nice examples:
Jimmy Wales: founded Wikipedia at 35 and Wikia at 38; Marc Benioff: started
Salesforce at 35; Mark Pincus: started Zynga at 41; Reid Hoffman: founded
Linkedin at 36; Robert Noyce: started Intel at 41 with a 39 year old Gordon
Moore; Irwin Jacobs was 52 and Andrew Viterbi was 50 when they founded
Qualcomm; Pradeep Sindhu: founded Juniper Networks at 42; Tim Westergren:
started Pandora at 35; Robin Chase: founded Zipcar at 42; Michael Arrington:
started TechCrunch at 35; Om Malik: started GigaOm at 39; Reed Hastings:
started Netflix at 37; Craig Newmark: started craigslist at 42
... and the list goes on and on. Check out this Quora post (source of the
above) for more interesting examples: [http://qr.ae/tG78W](http://qr.ae/tG78W)
------
jupiterjaz
I just finished Marvel Comics: The Untold Story by Sean Howe. One thing I was
really surprised to learn was that Stan Lee was in his 40s and had already
been a comic book editor for 20 years before he co-created all the famous
Marvel heroes like Fantastic Four, Spider-Man, and The Hulk.
~~~
oscargrouch
What about Bukowski, who was working in a post office until he was 49, when
he quit to start his first novel?!
~~~
vinceguidry
He didn't 'drop out', he was incentivized to the tune of $100 a month for life
to quit.
~~~
cwaniak
And that was when $100 was an equivalent of 3 ounces of gold which is about
$4k today. Barrel of oil and S&P500 had roughly the same price in gold then as
they do today.
~~~
dredmorbius
By CPI deflator, $100 in 1969 is worth $634.76 today.
[http://www.usinflationcalculator.com/](http://www.usinflationcalculator.com/)
Not sure if that calls into question your statement, or suggests that the CPI
is drastically understating inflation.
~~~
cwaniak
By CPI deflator we never doubled our (US) monetary base since 2007.
~~~
dredmorbius
I'm not following you. What's your point?
~~~
cwaniak
What exactly you don't follow? You need me to define 'monetary base' to you?
You need me to explain to you how to use google/wiki?
------
cscheid
But look at the variance on those distributions. "Late 30s" is a poor
description. How about "half of the winners are between 28 and 45"? Not so
exciting then, I would guess.
Better yet: look at the distribution of "age when paper was written", modeling
the generating process as: scientist X at age Y writes a paper, with Z=0 if no
award is won and Z=1 if an award is won. Is it obvious that this distribution
conditioned
on Z=1 is different from the unconditional one? Not to me.
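A toy simulation of that comparison (every distribution here is a made-up
assumption, purely to illustrate conditional vs. unconditional, not real
data):
    import random
    random.seed(0)
    ages = [random.gauss(38, 10) for _ in range(100_000)]  # paper-writing ages
    p_award = lambda age: 0.001       # assume award chance is flat in age
    awarded = [a for a in ages if random.random() < p_award(a)]
    mean = lambda xs: sum(xs) / len(xs)
    print(mean(ages), mean(awarded))  # nearly identical when awards ignore age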
------
phillmv
The corollary is Cheap, Easy To Exploit Labour Is Most Readily Available In
Your Twenties™.
(The VC lemma being, "Labour costs are a big majority of web startup input
costs")
~~~
Retric
That's assuming all labor is equivalent which seems ridiculous on the surface.
I mean start-ups are hardly shoveling snow.
Young founders get more free publicity seems far more realistic to me.
~~~
coldtea
> _That 's assuming all labor is equivalent which seems ridiculous on the
> surface. I mean start-ups are hardly shoveling snow._
They hardly do anything impressive computer-science or programming wise
either.
Programming like something Facebook, Twitter, Tumblr et co, including a lot of
the early scaling challenges, is not that much removed from "shovelling snow".
------
kev009
Correlation != causation
To me, a lot of these folks are artificially limited by slow and encumbered
education methodologies. The data seems to confirm that, as a physicist can
more easily begin independent work, while the others need to wait until they
have accreditation/equipment/funding/etc.
I think this delay of productivity would be especially avoidable in high
school and undergrad programs.
If you could begin advanced fields in your early 20s, instead of your 30s, I
speculate the distribution would shift left quite a bit.
------
ja30278
I am in my middle thirties now, and I feel that I'm probably the best I've
ever been. Part of this, I think, is that true understanding comes from the
ability to contextualize ideas into some larger framework of knowledge. I find
myself revisiting things that I've learned earlier in life, but am now able to
see them in a more meaningful way because of the experience and knowledge that
I've gained in the interim.
In some ways, the idea that knowledge is combinatorial is perfectly obvious,
but it can be really encouraging to realize that the things you learn today
are making you more capable of learning and internalizing new things later.
------
danso
I'd like to see this data evaluated across different time frames. Maybe
prodigy was more pronounced at a young age in the past century because people,
well, became full adults and died at earlier ages? Now that in today's Western
democracies, we've essentially delayed adulthood to at least the mid-20s,
marriage until the 30s, and retirement into the 70s...this time delay, plus
the fact that the discoveries we make now are more specialized and require
more domain knowledge...it seems that the average age for breakthroughs will
continue to rise.
~~~
judk
Dying between 15 and 40 was almost never statistic-perturbingly common in
human societies that were stable enough to support anyone doing creative work
that survived to be noticed later.
Most variation in average lifespan is explained by infant mortality and life
extension of the elderly.
~~~
sitkack
Flaw Of The Averages.
------
dredmorbius
Actually, Khazan is losing the plot. The whole point of the research she's
referencing through 3 levels of indirection ( _the Atlantic_ , _NBC News_ ,
_Nature News_ , and _Proceedings of the National Academy of Sciences_ ) _is
that key breakthroughs in Phyiscs are occurring at ever increasing age with
time._
Granted, the research looks only at Nobel Prize winners in physics, but the
general reason stated: that there's more information to learn and assimilate,
suggests a general principle of increasing complexity and decreasing returns
to innovation, which is a key point raised by Joseph Tainter ( _The Collapse
of Complex Societies_ ).
See Weinberg & Jones: "Age dynamics in scientific creativity"
[http://www.pnas.org/content/108/47/18910](http://www.pnas.org/content/108/47/18910)
Also highlighted in an earlier Nature News item on W&J is the increasing
reliance of breakthroughs on expensive equipment, not always accessible to the
most junior researchers, an observation also consistent with increasing
complexity and diminishing returns with time:
_Other experts in scientific creativity welcomed the study but note other
reasons why the age of laureates might have increased, such as improvements in
health or the fact that, in many fields, research now requires expensive
equipment. "21- and 22-year-olds simply don't get access to this kind of
equipment," says Paula Stephan at Georgia State University in Atlanta, who
provided some data for the Nobel study. She adds that it isn't always possible
to pinpoint "one magic date" when scientists made their discoveries._
[http://www.nature.com/news/2011/111107/full/news.2011.632.ht...](http://www.nature.com/news/2011/111107/full/news.2011.632.html)
------
callmeed
As a twenty-eighteen year old, this article made my Sunday.
~~~
noname123
As a 27 year old, I can't help but wonder how much of the article was written
to be aimed at the older millennial/Reddit crowd's insecurity of aging. But I
welcome the prognosis and look forward to my days as a 37 year old still
pwning young noobs in Dota2/League/CoD/2K.
~~~
shubb
My thought is that if the achievements of a 40 year old took 20 years to
accumulate, then they must have been working at something for 20 years.
That sort of achievement is one that is earned, rather than coming from luck.
Which means it is the sort of success one can emulate, but only if we have the
personality and determination to put the work in.
The 22-year-old that got bought out by Yahoo for 1bn probably did nothing
that other start-up founders didn't. There is little that can be learned from
him,
except to throw the dice more often.
The 44 year old who slowly built a property portfolio by living cheap and
seizing opportunities (cheap mortgages, cheap properties) that come around
only every decade or so, simply by playing the game so long... I could copy
his example, but then I wouldn't be able to buy nice things with that money,
or use it as startup runway. I'd have to make sure I was always in a stable
(boring?) job to support the credit rating needed for several mortgages.
So the difference between the two kinds of success is important, and we can
learn from the second. But if what we learn is that you can spend your life
buying success, is that cost too high?
------
kevinalexbrown
Most of these people were working extraordinarily hard up to that point,
though. Not necessarily hard at one thing. But hard.
------
sireat
It goes without saying those achieving breakthroughs have been building up to
that moment basically all their adult (and most likely teenage) life.
As a soon to be 40 year old jack of many trades but master of none, I am still
looking for someone achieving anything meaningful starting from scratch later
in life.
What I mean by this is someone achieving a mastery of some skill, when one has
not done deliberate practice previously.
I suspect the answer is that unless you have been building your inner pattern
recognition for basics of your field of expertise since late teens/early
twenties, you are unlikely to get very far starting at the later age.
For example Einstein already had fluid mastery of calculus at 15 (just like
Feynman), which was a nice building block for later work. I am not even going
to start on Von Neumann.
------
bhicks005
Is it possible that studying Nobel Laureates skews these numbers? It would be
extremely difficult to receive a Nobel for an achievement late in life because
of the usual lag time between the achievement and the award and the fact that
Nobel prizes are not awarded posthumously.
~~~
6cxs2hd6
It's possible. Furthermore, the article talks about _noted_ achievements --
the case where someone's work is _recognized_ as particularly good.
Your best or most-creative work isn't necessarily the most-recognized.
Being better-recognized probably also correlates with having built a network
of connections, understanding institutional politics, having built up social
capital (favors to call in), and so on. And it wouldn't be shocking if that
tends to skew stronger among older, more experienced people.
------
thewarrior
Here's an interesting exception : The poet Rimbaud wrote all of his poetry
before the age of 20 and is considered to be one of the greatest poets of all
time.
~~~
scarmig
Keats similarly wrote all of his extant poetry between 19 and 25. (He died at
25.)
~~~
to3m
"Last December 13th, there appeared in the newspapers the juiciest, spiciest,
raciest obituary it has ever been my pleasure to read.
"It was that of a lady named Alma Mahler Gropius Werfel, who had, in her
lifetime, managed to acquire as lovers practically all of the top creative men
in central Europe. And, among these lovers, who were listed in the obituary,
by the way, which is what made it so interesting, there were three whom she
went so far as to marry: One of the leading composers of the day, Gustav
Mahler, composer of "Das Lied von der Erde" and other light classics, one of
the leading architects, Walter Gropius, of the "Bauhaus" school of design, and
one of the leading writers, Franz Werfel, author of the "Song of Bernadette"
and other masterpieces.
"It's people like that who make you realize how little you've accomplished. It
is a sobering thought, for example, that when Mozart was my age, he had been
dead for two years."
(Tom Lehrer)
------
vidarh
I co-founded my first startup at 19, and several more before I was 25. I've
done a couple since. I'm now nearly 39. Not had the big payoff, but I'll try
again sometime.
To me, it feels like the biggest thing is the combination of willingness to
take risks coupled with outlook on life and experience.
At 19 I had no business experience, no experience at the business I went into
(4 of us started an ISP), had never set up a router or a Linux server. We all
had to learn everything from scratch. I ran board meetings, did phone sales,
configured Cisco routers, took support calls from users using Trumpet Winsock
- a program I'd never seen on anything but screenshot - using Windows 3.x, an
OS I'd never spent more than 10 minutes consecutively with. I negotiated with
suppliers, and creditors at times. I negotiated contracts with partners. And
so on.
Five years later, I knew just how unprepared we had been. Had I known what I
did then about how tough it was at 19, I would likely not have started the
company (but had I known what it led to in terms of contacts and opportunities
- I still would have; did not get an exit, but it still paid off). Had I had
the knowledge and experience I have _now_ on the other hand, with the
willingness to take a risk and life situation I had then, I'd have jumped
right into it.
What has changed apart from having had 20 years to learn is partly that I make
far more money and have far greater outgoings, and a family. I can't take the
same risks, and my potential loss if a company can't pay a good salary is far
greater.
I'm _also_ more risk averse simply because my experience makes me far more
likely to spot fatal problems with many potential startups I might have jumped
at in my youth. But of course there's also the risk that I'll overlook things
because of changes in perspective or because I misjudge the risks, or because
I would have gotten lucky if I'd taken the risk.
Another major change is simply life outlook. While I was never the totally
reckless type, and never all that obsessed with money, today the money just
isn't particularly important. I want enough to ensure security, and it'd be
nice to have enough to just work on my own projects, but I don't particularly
care if I get rich. That changes my assessment of any startups drastically -
I'm no longer prepared to jump at an opportunity to get rich if it's not
something I'm sufficiently excited by. I don't feel I'm in a hurry to prove
anything. I have what I need, and then some. I'm far more secure in myself in
every way than I was at 19. I'm not going to pretend like I wouldn't love to
get that multi-million exit, but it's not something that matters to me _now_
(I'm sure it'd matter to me if it happened, though).
Instead, what I _do_ think about are ideas that fascinate me. And some of
those ideas have been germinating for 20 years. Maybe none of them, nor any
new ideas, will ever _click_. If so, no big deal. But if something "clicks" I
am vastly better prepared, and I believe I'd be far more likely to succeed.
~~~
moens
I created 4 reasonably successful companies in my life, and about 12 that fell
flat. The first success was in high school. But the first million+ was at age
34. At first I thought the article was a little too academic, but... honestly
it fits my life pattern. So... there you have it.
~~~
bananamansion
12 fell flat? what made you keep going?
~~~
vidarh
Consider that the vast majority (something like 9/10) of companies fail within
2-3 years. If you're not willing to deal with failures, chances are you'll
stop trying long before you get to 12. If you accept the odds, you keep
going as long as you get something out of it.
Also, even companies that "fall flat" may be worth it. For my part, a large
part of my current salary is down to my experience and contacts from my
various startups. And each one of them has been fun for the most part.
------
nraynaud
\- Mom, it's not that I'm a slacker, it's just that I'm only 34!
------
jv22222
What I took away from the OP was... the curve keeps going after 35. Phew.
------
ekm2
For immigrants from relatively underdeveloped nations, add another 10 years.
------
netcan
We need to make some serious progress on extending youth!
------
olsonea
The timing of this submission is impeccable. My 36th birthday was yesterday.
Thanks for the inspiration!
------
loceng
It's mostly just related to having had enough time to work on a problem.
------
facepalm
Or not - I'm over 40 without any big breakthroughs.
------
arikrak
A 25-year old brain may have more raw "horsepower" than a 45-year-old brain,
which would make a big difference in math and physics, but wouldn't matter as
much in e.g. poetry.
~~~
taurath
It doesn't have more horsepower; a 25-year-old brain has a lot more gaps in
its knowledge that get skipped over, whereas a 45-year-old brain has more to
connect and consider.
------
flibertgibit
There may be hope for me yet according to this, but at the moment, I really
have no fucking clue what to do.
My early thirties were filled with great ideas for startups. My early forties
are filled with depression that I'm past my prime, don't have the energy, risk
tolerance, or money to do a startup, and don't have money to go back to
school, so it really doesn't matter what my passion is. Somehow I still need
to find a passion and change, though, because my death is impending. Maybe my
fifties will be the realization that I am who I am and I'm fine just being bad
at everything.
~~~
sobes
Just because you find a passion, doesn't mean you have to spend money on it.
There is an insane amount of free learning resources online (in tech/dev,
which I'm assuming is your professional background). Similarly, I'm seeing an
increasing number of bootstrapped startups (they just don't make as much noise
as the funded ones).
My twenties were about realizing and coming to terms with the fact that there
are people who are smarter, more clever, and more knowledgeable than me. My
thirties are about figuring out how to work with these people as much as
possible. I do feel that I'm late to the game on this. If I got over my ego
earlier and/or had a more collaborative mindset I'd be in a better spot.
Keep looking for that passion. But don't do it alone. Once you find it, it'll
be with a group of people "better" than you to help you figure out what to do
with it. :)
------
dhfjgkrgjg
Who would have thought? That in your late thirties, you have gained
experience, knowledge, contacts, maybe even a degree of financial support, all
of which lend towards the formation of breakthrough ideas. Now, where is my
prize?
~~~
poolpool
It's common sense that universal health care and higher taxes for everyone is a
great idea. Depending on who you ask, it's also common sense that universal
health care is a ridiculous concept and no one wants to pay for other people's
hospital visits.
~~~
basicallydan
I do! We've been doing it in the UK for decades, very few people complain
about it.
~~~
10feet
If you really lived in the UK you would know that lots of people complain about
everything, all the time. Especially the NHS.
Now, that doesn't change the point that universal health care exists in most
countries, and they are better off for it.
~~~
basicallydan
Actually, strictly speaking I do live in France at the moment, so your bizarre
implication is actually true - but I did just move here 7 days ago.
Anyway, yes people _do_ complain about the NHS in general, for all sorts of
reasons, but I'd wager that most people don't complain about the fact that we
_have_ the NHS, and realise that it's actually really great compared to the
situations in countries which don't have universal healthcare. Just a
clarification, really.
~~~
cwaniak
I'm a Polish guy with a US passport who spent many, many years in the USA, and my
feeling is as follows: in the US people complain about one single thing
regarding healthcare - cost. In Poland they'll complain about long waiting
lists, not enough doctors, insufficient care; basically everything is worse
than in the US - except the cost.
I for one don't think that the major objective of a healthcare system is to keep
people from going bankrupt. Its main objective is to save lives. I wish more
people worried about saving lives more than about money.
~~~
asabjorn
Most countries with socialized healthcare systems that I am aware of actually
have a longer life expectancy than the USA. This is true for Canada, Germany,
Norway, etc.
I do not know if life expectancy is a measure of the quality of health care, but
it is at least an indicator of how good the general health is.
On a personal note I would say that my experience with the US health care
system has left a lot to be desired, and I found the care better in my
native Norway. That said, my girlfriend is a doctor and her health plan
seems a lot better, so my experience may well be anecdotal.
~~~
cwaniak
> Most countries with socialized healthcare systems that I am aware of
> actually have a longer life expectancy than the USA. This is true for Canada,
> Germany, Norway, etc. I do not know if life expectancy is a measure of the
> quality of health care, but it is at least an indicator of how good the
> general health is.
I agree. Actually Poland, which is really a second-world country, has a better
life expectancy than the USA! However, this doesn't mean that the Polish
healthcare system is superior to the American one. It means that if all you do
your whole life is eat, stress out at work, have no friends (with family two
time zones away), and get zero exercise - even the best healthcare system in
the world won't help you.
So while I agree that people live longer in most civilized places than in the
US, I still claim that the US healthcare system - as long as money isn't a
major concern for you - is just the best system in the world, period. But even
they can't help you if you have been on a McDonald's diet for the past 20 years
and the most exercise you get is the 10 meters from home to car.
> On a personal note I would say that my experience with the US health care
> system has left a lot to be desired, and I found the care better in my
> native Norway. That said, my girlfriend is a doctor and her health plan
> seems a lot better, so my experience may well be anecdotal.
Not sure how good they would be about it in Norway, but in the US they
diagnosed me with Ehlers-Danlos type 2 in a matter of weeks, while in Poland
literally dozens of doctors I visited didn't know what was going on. In my
particular case, where the condition affects somewhere between 1 in 3,000 and
1 in 50,000 people, the doctors' major incentive to find out what was wrong
with me was really fear of me suing them. Again, money in this system really
works both ways. I told my insurance company: you test me genetically for
Ehlers-Danlos or I sue your ass. They paid $6k for testing and I tested
positive. In retrospect I think that in Poland some doctors at least knew or
suspected my condition, but they had no incentive to do anything. And money -
or losing it - is a great incentive.
~~~
asabjorn
Yes, I agree that the USA has the best healthcare system for people with money
and/or great insurance.
I do not know which insurance is great or how many people have access to it.
I do however know that when I tried to read my plan, the legalese was so dense
that it might as well have been written in Japanese.
My University of California at Davis group insurance plan costs $500 per
quarter, and that rate is given for a pool of generally healthy young
students. Do you know what a startup person has to pay for a great health care
plan? Is an Obamacare plan good?
~~~
cwaniak
Obamacare has many plans to choose from. I don't economize on "these things" -
I get the most expensive one I can.
------
notastartup
I wish I could have a breakthrough this year. At the age of 27, my life's
mission is to make it before turning 30 - time's running out and I'm super
anxious that I have not had much success.
------
kimonos
Great post! I agree with this!
Show HN: My self-published C# Programming book: Print, PDF, ePub and Mobi - fekberg
http://blog.filipekberg.se/my-book/
======
manojlds
Can you (or have you) blog about the experience? Tips and tricks? I am
planning on self-publishing a book on Powershell.
~~~
mattmanser
He already did, 2 days ago on HN:
<http://news.ycombinator.com/item?id=5128197>
I'm not sure he really should have titled this Show HN as he already has shown
it to us. Very recently.
~~~
vinkelhake
It's just a bit of self-promotion. Compared to the last time[0], it seems
pretty benign.
[0]
[http://www.reddit.com/r/programming/comments/okpoh/more_read...](http://www.reddit.com/r/programming/comments/okpoh/more_readable_code_with_anonymous_functions/)
~~~
fekberg
Is there something wrong with showing off work that you are proud of? If
something I share here gets upvotes then apparently some people find it
interesting.
Not trying to upset anyone here, I just want to share my work and share my
story, that's all.
~~~
RobertHoudin
Using several aliases to promote your enterprise is deceitful and yes, there
is something wrong with that.
~~~
fekberg
Right, but I am not talking about the reddit stuff which was one year ago and
I learnt from that mistake.
------
philliphaydon
It's a good book for any .NET developer. Introduced a few things I didn't know
about, well done fekberg.
------
mje
Great, to-the-point book. I purchased it around Christmas and read it the
first week of the year. I especially liked the Parallel chapter. One thing,
fekberg: have you tried purchasing the book via CreateSpace? I would have
given up on step 4 of 16 if the book was on sale.
~~~
fekberg
Thanks, I'm glad you're enjoying the book!
Yeah I've purchased my book on CreateSpace a couple of times, I didn't find it
much different from other e-stores, maybe it's because I had previously
created an account?
As far as I know you go through: Buy -> Shipping address -> Shipping method ->
Payment -> Receipt. Was it more steps for you than that?
------
candicorr
I'd like to try it, but I can't find any sample chapter.
~~~
fekberg
There's a "Look inside" feature available on Amazon (
[http://www.amazon.com/C-Smorgasbord-Filip-
Ekberg/dp/14681521...](http://www.amazon.com/C-Smorgasbord-Filip-
Ekberg/dp/1468152106/) ). It will let you read a part of a Chapter.
------
celticninja
what level of programmer is this aimed at?
~~~
fekberg
I've had both beginners and very advanced programmers read it and both found
things that they liked.
When I wrote it I had someone that knew a bit about C# in mind and wanted to
explore C# and get a grasp on when to use async/dynamic/roslyn/plinq/etc.
Check out the forewords over at <http://books.filipekberg.se> and check the
"Look inside" on Amazon to get a better understanding of what you will get
from the book.
~~~
celticninja
I shall do that, I am a beginner starting out in C# so if I do go the route of
buying it and learning from it I will let you know how it goes.
------
walke
Congrats!
~~~
fekberg
Thank you! It's a very special feeling when you receive the first package of
proof copies.
LetsLunch Launches in New York Today - rhartsock
http://techcrunch.com/2011/04/27/letslunch-launches-in-new-york-today/
======
chrisaycock
Hmm, Stack Overflow is no longer on the application page. Choices are now
limited to LinkedIn, Twitter, and Hacker News.
iPad Pro Concept - peterkchen
http://dribbble.com/shots/1504969-iPad-Pro-Design-Concept/attachments/226492
======
DannyBee
Uh, a monolithic component design would not "make them more durable".
Most of the non-monolithic design is strain relief. The point of things like
strain relief is often to avoid force being transferred to weak points like
electrical contacts (and sometimes to avoid going under the supportable
bending radius of the internal wire, though a lot of this is just amazingly
flexible these days).
Making it a monolithic component makes it all act as a single component. In
fact, now rather than straining individual points, you'll strain them all at
once!
This would not improve anything, it would make it significantly worse.
Also, coatings are not magic. If there was such a coating, everyone would use
it :)
------
iLoch
I like the mockups, but fail to see the purpose. As a professional, wouldn't
you want professional precision? A programmer/writer would hate typing on it,
a graphic designer would hate sketching/designing on it (if it has the same
responsiveness as the current models - I do like the idea of pressure
sensitive touch, Surface has this), a photographer would laugh at the camera,
a DJ/producer wouldn't even consider the speakers for any practical use. IMO
it misses the mark for every category of consumer it could possibly target,
bar the "Faux-Pro" users - the "Beats by Dre" users of the world.
------
chrisduesing
Interesting ideas, I would like to add pressure sensitive stylus support to
compete with the Wacom Cintiq Companion.
------
snowwrestler
If the camera is in the logo, where is the WiFi antenna?
The new elite: how startups are replacing resumes - sthomps
https://www.sokanu.com/blog/the-new-elite-startups-replacing-resumes/
======
doozy
The appeal of startups is they are the only chance to be paid what one is
worth. The alternative is to toil for the rest of your working life making
enough to live but not enough to ever be financially free.
The elephant in the room is that software developers are massively underpaid.
If so many are willing to take the risk, it is because a one in a hundred
chance of success is better than a 100% chance of going nowhere.
I've played the startup lottery, twice, and look forward to doing it again. I'd
rather own a percentage of a company that has a minute chance of success than
a contract signed in blood for a "market salary" for the rest of my life.
------
bernardlunn
I think it makes sense because programming is more of a power law than a bell
curve. The best are not just a bit better than average but 10x or more better.
~~~
dudul
And it's not the case elsewhere? Like in finance, for example?
Properly setting up Redis and Sidekiq in production on Ubuntu 16.04 - thomasrw
https://thomasroest.com/2017/03/04/properly-setting-up-redis-and-sidekiq-in-production-ubuntu-16-04.html
======
mperham
I disagree with the redis quickstart guide - making the user download a
tarball, compile source and install systemd service scripts is ridiculous. As
long as the Redis version is somewhat recent, they should install straight
from the distro. That should replace your first two sections with `apt-get
install redis-server`.
~~~
JdeBP
The problem for many third-party software projects, which motivates them to
bypass the package management, is that the lag time for Debian and Ubuntu LTS
releases is huge. People going the apt route are going to be stuck with redis
3.0.6 from 2015 for years.
* [http://packages.ubuntu.com/xenial/redis-server](http://packages.ubuntu.com/xenial/redis-server)
Some software, such as MySQL Community Edition, comes with third-party-made
packages, and the instructions are "Add us to your package manager's list of
repositories, and pull Debian/Ubuntu packages from us." The downsides of this
are:
... for the system administrator, the necessity of a degree of trust that the
third-party package repository will not sneak in bad versions of other
packages. Package managers are not good at allowing system administrators to
control and to limit what systems will pull from individual repositories.
... for the third party developers, the need to know how to build
Debian/Ubuntu packages, and to keep up to date with all of the integration
work that the Debian/Ubuntu maintainers would be doing for them, including all
of the Debian/Ubuntu patches.
* [https://sources.debian.net/src/redis/2:3.0.6-1/debian/patche...](https://sources.debian.net/src/redis/2:3.0.6-1/debian/patches/)
This highlights a problem with the quickstart guide that you got wrong. It
does not make the user install systemd service scripts. It makes the user
install _van Smoorenburg rc scripts_ , complete with PID file nonsense, run
levels, hand-rolled log management, and a lack of correct (or indeed any) LSB
headers. This is two init systems behind the times for Ubuntu, which was
upstart before it was systemd, and which hasn't been van Smoorenburg rc for
over a decade now.
Ironically, the systemd service units are some of the very things that are
added on by the Debian/Ubuntu maintainers that one loses by not going the
Debian/Ubuntu packaging route:
* [https://sources.debian.net/src/redis/2:3.0.6-1/debian/redis-...](https://sources.debian.net/src/redis/2:3.0.6-1/debian/redis-server.service/)
* [https://sources.debian.net/src/redis/2:3.0.6-1/debian/redis-...](https://sources.debian.net/src/redis/2:3.0.6-1/debian/redis-sentinel.service/)
Adding systemd service units to redis proper, in contrast, has been stalled
for a while now:
* [https://github.com/antirez/redis/pull/2004](https://github.com/antirez/redis/pull/2004)
* [https://github.com/antirez/redis/issues/3251](https://github.com/antirez/redis/issues/3251)
~~~
mperham
Yikes, that's even worse. init.d is a tire fire.
As someone who maintains software that uses Redis heavily, I view 3.0 as
somewhat recent and very usable for most. There haven't been significant
functional changes in Redis since 2.8 (the SCAN commands). Nothing significant
in 3.0; the GEO* commands are the major feature in 3.2.
I still have plenty of customers on Redis 2.8.
Show HN: Nesteggly – When will you retire? - yeahgoodok
https://www.nesteggly.com?src=hn
======
yeahgoodok
After a career in financial advisory space, I became convinced that financial
advisory companies are designed to keep you afraid and dependent on their
wealth management services. Their ultimate goal is to have you save as much as
possible and die with a vast fortune, which they’ll then have the privilege of
overseeing for a 1% annual fee. If this wasn’t true, then their incentive
structures wouldn’t be what they are.
Retirement planning doesn’t need to be so complicated that you pay someone
upward of $10,000 per year to answer simple questions like, “How much should I
be saving?” or “How much should I withdraw from my portfolio?” or “When can I
retire?” These are all questions that can be answered by formulas.
So, four months ago I debuted a proof of concept of what I wanted to be the
ultimate retirement calculator. It combines several useful ideas:
1. For those who aren’t retired yet, simultaneously models both the accrual AND drawdown phases. This eliminates the guesswork of manually calculating how much savings you’ll need in the future.
2. The ability to plan around social security and/or a pension (the typical thing to do is to either pretend that they don’t exist or almost exclusively depend on them).
3. And lastly, the ability to draw down your savings.
It ended up being pretty popular[0], so I incorporated everyone's feedback and
then some. Here’s the shortlist of enhancements:
1. Graphs to illustrate the balance of your portfolio and the components of income during the retirement phase.
2. A ‘pension’ parameter, for folks who have a non-inflation-adjusted pension (90% of pensions are like this).
3. A ‘size of estate’ parameter, to specify how big (or little) an estate you’d like to leave behind.
4. Numerous UI improvements, including slider bars, financial insights, and a HUD to facilitate experimentation and ‘what if’ analysis.
Looking forward to another great round of feedback.
[0][https://news.ycombinator.com/item?id=23146836](https://news.ycombinator.com/item?id=23146836)
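For the curious, the heart of idea #1 above is just a two-phase loop. Here's a stripped-down sketch with made-up numbers - real returns only, no taxes, and none of the social security/pension/estate handling the actual calculator does:
    def years_nest_egg_lasts(balance, annual_saving, years_to_retirement,
                             annual_spend, real_return=0.04):
        # accrual phase: grow the balance while still contributing
        for _ in range(years_to_retirement):
            balance = balance * (1 + real_return) + annual_saving
        # drawdown phase: spend until the money runs out (capped at 60 years)
        years = 0
        while balance > 0 and years < 60:
            balance = balance * (1 + real_return) - annual_spend
            years += 1
        return years
    # e.g. $50k saved today, $15k/yr contributions, retire in 20 years, spend $40k/yr after
    print(years_nest_egg_lasts(50000, 15000, 20, 40000))
Everything else in the calculator is about picking those inputs sensibly and layering the guaranteed income sources on top.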
Zoetrope: Back-Button to the Future (video) - prakash
http://www.technologyreview.com/video/?vid=183
======
prakash
Article:
[http://www.technologyreview.com/printer_friendly_article.asp...](http://www.technologyreview.com/printer_friendly_article.aspx?id=21769&channel=web§ion=)
Publication: <http://www.cond.org/zoetrope.html>
Python URL manipulation made simple. - grun
https://github.com/gruns/furl
======
xnxn
Neat! One problem I can see is that query strings aren't dictionaries, so
valid parameters can vanish:
>>> furl('http://www.example.com?foo=1&foo=2')
furl('http://www.example.com?foo=1')
See Werkzeug's MultiDict for one way to handle this correctly.
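To make it concrete, here's a quick sketch (the MultiDict behaviour is werkzeug's real API; the parse_qs line is the stdlib for comparison, not part of furl):
    from werkzeug.datastructures import MultiDict
    params = MultiDict([('foo', '1'), ('foo', '2')])
    params['foo']          # '1'  -- dict-style access still works
    params.getlist('foo')  # ['1', '2']  -- nothing is silently dropped
    import urlparse
    urlparse.parse_qs('foo=1&foo=2')  # {'foo': ['1', '2']}
Round-tripping URLs through a structure like that would keep multi-select parameters intact.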
~~~
grun
Query parameters as a dictionary was an ease of use tradeoff over the rarely
utilized flexibility of repeated query parameters.
https://github.com/gruns/furl/blob/master/furl.py#L142
Werkzeug's MultiDict looks like a good combination for the best of both
worlds, ease of use and flexibility. Thanks for the reference.
~~~
ubernostrum
Repeated query parameters aren't "rarely utilized" -- look at how any sort of
checkbox or other multi-select works.
------
greut
Most of those tools don't like IRIs like <http://müller.de/> or the most
famous <http://☃.net> (<http://xn--n3h.net/>), and that's a shame! Just like
many URL shorteners…
For example, Django does it wrong (even though I tried:
<https://code.djangoproject.com/ticket/11522>):
>>> print(django.VERSION)
(1, 3, 1, 'final', 0)
>>> print(django.http.HttpResponseRedirect(u"http://müller.de"))
Content-Type: text/html; charset=utf-8
Location: http://m%C3%BCller.de
While werkzeug is good!
>>> print(werkzeug.__version__)
0.8.1
>>> werkzeug.redirect(u"http://müller.de").headers
Headers([('Content-Type', 'text/html; charset=utf-8'),
('Content-Length', '247'),
('Location', 'http://xn--mller-kva.de')])
------
AlexeyMK
Interesting - would this make sense to be included into Requests
[<http://docs.python-requests.org/en/latest/index.html>] in some way?
~~~
kenneth_reitz
I've been contemplating the same thing, actually. I'm not sure if this exact
library belongs (maybe), but some of the functionality could definitely be
useful.
------
j4mie
Similar project: <https://github.com/zacharyvoase/urlobject>
I built a template library for Django on top of it:
<https://github.com/j4mie/django-spurl>
------
ramidarigaz
Looks awesome! I don't have a use for it at the moment, but I'll keep it in
mind for future projects.
Edit: Does anyone have more information on the license? I do plan to build
amazing things, but it would be nice to know under what conditions I can
release those amazing things to the public. I assume it means no restrictions,
but I'd hate to go stepping on someone's toes.
~~~
grun
No restrictions. Use however you deem fit.
~~~
securetoken
You should include an explicit license on github, if you want others to use
it.
~~~
kkolev
Can you share some pointers?
E.g. a page giving an overview of BSD, GPLv2, GPLv3, LGPL, the Creative
Commons bunch (no idea if they are usable for source code). Basically I'm
wondering if there's a "Intro to licensing your crap on github" post somewhere
on the net...
------
Toddward
Interesting, but, to be fair, I don't find urlparse or urllib to be all that
frustrating or tedious to use.
~~~
mikedougherty
Yea, this just seems like rearranging the tokens to achieve the same result.
Though, the same sort of thing could be said about path.py, which I absolutely
love using.
------
jparise
I wrote a similar Python library (named yuri) a couple of months back:
<https://github.com/jparise/yuri>
I was motivated to write it in response to one of Quora's Programming
Challenges (<http://www.quora.com/about/challenges>). It seemed like a fun
problem, and I had never had a good reason to read through the applicable
RFCs. I wonder if furl was similarly motivated?
I don't find the standard library modules very difficult to use, however, so I
haven't spent much more time on it since then.
------
perfunctory
This looks like a nice library. So is python-requests. Every time I see this
sort of library, though, I can't help but wonder why the standard Python
libraries are so bad that people need to create these helpers.
~~~
davbo
They aren't "so bad", you can't expect the standard library to meet the
requirements of everyone. Most of what this library achieves can be done with
urlparse, it just has some additional "nice to haves". Also, see the comments
about not using a MultiDict as a good indicator of why this problem isn't
solved in the stdlib.
------
pyre
I attempted something similar, but got caught up trying to make it span all
URIs and attempting to create an RFC-compliant implementation:
https://github.com/bsandrow/urilib
------
dorkitude
I love anything that adds developer productivity and reduces developer pain,
so I definitely love this. Here's hoping it catches on within the Python
community.
------
epynonymous
Very interesting - I ran into urlparse's shortcomings while prototyping a web
crawler just last week! I created something similar to identify domain,
subdomain, directories, pages, and FQDN. Note that if you run socket.getfqdn()
against most cloud servers you get some weird string with IP numbers, e.g.
182.43.210.102.static.cloud-server.com.
Let's merge some code!
Ask HN: How to break out from online freelance marketplaces to find quality work? - umenline
hello all<p>i experiment with freelancing as c++/java developer for 2 years mainly along with my day job
in well known freelance marketplace site.
i probably dont need to tell you folks how very hard it is to get good quality work there and get the pay that your work is worth.<p>i call it "red light district of freelances" . even though i managed to do very well there and got very high reputation for my work manly to learn how is it to be freelance.<p>but the pay is low .and i found out that employers that usually using those sites don't like to pay more if you keep working for them out side the site. (maybe a little more)<p>and the competition with the low bid'rs from developing countries is very hard.
BUT now as i want to go full time freelance i can't count on such site's. i want to get to known by people that needs contractors and willing to pay as i deserve.
not 20$ for hour for c++ job hi.. i got 15 years experiences. and from where i am
my type of developer worth 70$ and more.
how do i break out ?
======
thifm
Contribute to open source projects. As soon as you have commit access to gcc,
you will be making at least $200k+ a year, hehe.
But seriously, devote some time to building your image, whether by writing open
source or giving presentations. It's good and it makes you more skillful. I
actually love writing OSS, even more than making money.
A good freelance job offer usually COMES to you. The one I've been working on
I got from an HN spreadsheet. I contribute to one of the libraries that they
use internally in the project, so... they accepted me at first sight.
I don't have your amount of experience and I make ~$40/h (which is low by
market standards!). Having good communication skills and being able to let
your client have almost no overhead from managing you can clearly be VERY
VALUABLE. It's actually an employer's dream to find someone like this.
I've also worked bad freelance jobs: they didn't pay me on time, the codebase
was shitty, and so on. I advise you to drop those and look for something else.
I take pleasure in my work.
Aim to be a contractor who brings way more to the table than just bit-flipping
skills. :-)
~~~
umenline
yeah thanks, this is why it was important for me to try out to be freelancer
before i intend to be full time freelancer , i know its hard.
------
timjahn
We're building matchist (<http://matchist.com/talent>) to solve this problem
by creating a place for quality developers to find quality work.
We've found that our clients prefer developers who are great communicators and
can provide guidance on their projects. They're looking for partners, not just
somebody with Rails skills.
As a few have commented here already, your written communication could be
improved to help inspire more confidence in potential clients. (I mean this as
constructive feedback, not criticism.)
~~~
domrdy
Are you going to accept non-U.S. citizen developer sign-ups any time soon? I
guess this is coupled to Stripe offering their service outside of the U.S.
~~~
timjahn
We plan to but don't have a timeline of exactly when yet. It is indeed coupled
to Stripe at the moment, but it's also a conscious decision on our part to
start small with what we know (US culture, law, and how it relates to
freelancing) before growing into other countries.
------
wallzz
I recently started working freelance, and I think the pay isn't worth it, and
the employers are generally not serious. You keep waiting for the payment, and
they usually don't pay. Now I deliver work as a demo that only works for a few
days, just so I can be sure they pay.
PS: what websites do you use?
~~~
umenline
odesk
------
pm24601
A developer who makes $70/hr is more than a developer - they are an
architect, a persuader, a researcher and a leader.
In other words: soft skills, non-technical skills, communication skills. Yes,
there are some recluses who can get top dollar without being social, but they
are the exception.
~~~
umenline
how much c++ / java 15 years experience take ? (Linux/windows server/client )
~~~
pm24601
I hire on odesk regularly ( over 4100 hrs ). I would not hire based on the
English structure used in your question. I don't think you would be able to
communicate successfully. So personally not much.
So improve the English or find the subset of jobs that do not require English.
(apologies for English being such a pain to learn)
~~~
umenline
i have dyslexia its very hard for me to write in English but hi i craft great
applications (: and i do understand you , and if we where talking in Skype i
guess you could understand me very well also . 1200 hours in odesk and 5
star's reputations can prove it
~~~
pm24601
Great. Glad to hear it.
Most communication I have with developers is in the form of comments and email
- written communication.
High-end developers are doing more than creating their own code - they are
commenting on ideas and design documents.
Your dyslexia is a barrier to communicating and persuading. How are you going
to minimize its impact on your life?
Maybe you can try using a font to help -
<https://www.google.com/search?q=dyslexia+fonts>
but in the end - it is your problem to solve.
Motorola Releases Source for GPS Watch - Tsiolkovsky
http://sourceforge.net/motorola/motoactv/news/2012/11/motoactv-release-1711-source-code-available/
======
new299
motoactv source has been available for a while. I guess this is just an
update?
Mapping Motor Vehicle Collisions in New York City - lil_tee
https://toddwschneider.com/posts/nyc-motor-vehicle-collisions-map/
======
takk309
Nice work, I am glad that there is a paragraph talking about exposure. Crash
trends based strictly on total number of crashes are easy to predict just
based on where there is more traffic. Using crashes per vehicle mile traveled
for road segments or crashes per entering vehicle for intersections can help
tease out trends. Controlling for severity is also important.
When I do a crash analysis for a city, one of the tasks I do regularly for my
job, I generate a crash rate and severity index for each intersection. The
severity index is basically a weighted average based on severity, non-
injury=1, minor injury=3, and severe injury or fatality=8. The crash rate and
severity index are divided to create a Severity Rate. While not perfect or
statistically valid, it does help identify trends. Also, I am in a rural state
so it is rare that there are enough crashes to make any statistically valid
conclusions.
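For anyone curious, the arithmetic is nothing fancy - roughly the sketch below, with made-up numbers, my usual exposure measure (million entering vehicles), and just one of several ways to fold the two numbers into a single ranking value:
    # severity weights: non-injury=1, minor injury=3, severe injury or fatality=8
    WEIGHTS = {"no_injury": 1, "minor_injury": 3, "severe_or_fatal": 8}
    def severity_index(crash_severities):
        # weighted average severity of the crashes at one intersection
        return sum(WEIGHTS[s] for s in crash_severities) / float(len(crash_severities))
    def crash_rate(num_crashes, daily_entering_vehicles, years):
        # crashes per million entering vehicles (MEV)
        mev = daily_entering_vehicles * 365 * years / 1e6
        return num_crashes / mev
    # example intersection: 3 years of data, 12,000 entering vehicles per day
    crashes = ["no_injury", "no_injury", "minor_injury", "no_injury", "severe_or_fatal"]
    si = severity_index(crashes)             # 2.8
    cr = crash_rate(len(crashes), 12000, 3)  # ~0.38 crashes per MEV
    ranking_value = cr * si                  # one way to combine rate and severity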
~~~
mikeash
What’s the basis for the severity weights? I’d expect the weights to be way
more spread out, like 1/10/100/1000\. It would definitely not be a good trade
off to eliminate nine non-injury crashes at the cost of one additional
fatality. But I certainly could be missing something about this sort of
evaluation.
~~~
dsfyu404ed
Fatalities are a tiny minority of crashes and aren't really interesting to
study, because usually you wind up studying the behavior of drunk people and
people who don't wear seat-belts, and if you filter those out there's not much
data left, making meaningful conclusions hard to draw. Fatal accidents are
often just normal accidents with a couple of aggravating variables on top
(e.g. a person rear-ends a semi-truck instead of a normal truck, or a person
gets into a minor accident while not wearing a seat-belt), so it doesn't make
sense to fixate on them. Anything that reduces normal crashes by some amount
will also affect fatal crashes.
~~~
takk309
Drunk drivers and people that don't wear seat belts are still worth reviewing.
While there are rarely engineering solutions to the fatalities that result, it
can help inform education programs and initiatives. Amazingly, "buckle up" and
"don't drive drunk" advertising can make a difference.
~~~
apendleton
They absolutely are, but are rare enough that it's difficult to reach
statistical significance when talking in the aggregate. That a particular part
of town went from one fatality one year to zero fatalities the next year is
probably not evidence of the success of any particular safety-related policy
intervention, it's just noise. Studying all crashes provides a proxy that
hopefully helps decrease the odds that the fatal ones will occur will making
it possible to make robust, data-driven claims about success or failure.
~~~
takk309
On a project I am currently working on, we saw pedestrian fatalities shift
from 7 to 13 in consecutive years. It is a nearly 100% increase, but like you
said, it is just noise. This is in a city with around 100,000 residents.
Convincing politicians that it is just noise is a whole different story.
------
clhenrick
I've worked extensively with this dataset on a similar project,
[http://crashmapper.org](http://crashmapper.org), and through that process
found that the data is extremely error prone. Perhaps 20% of the collisions
recorded are not geocoded (e.g.lack lat, long coordinates) and don't contain
other location information such as street, cross street, and zip code that
could be used to geocode them. It appears that some precincts of the NYPD do a
better job at recording a crash location then others. Even more of the data
lacks values for "contributing factors" so it seems difficult to use as a
metric for analysis. Often there is a mismatch between the total number of
persons injured or killed and the number of pedestrians, cyclists, or
motorists injured or killed. Furthermore, whomever maintains this dataset will
periodically go back in time and update it seemingly at random, editing
existing data or adding new data, potentially months or years back in time.
Often it appears to be that the data maintainer is changing values for fields
such as the number of pedestrians, cyclists, motorists injured or killed.
Presumably this is because more information surfaced about an incident at a
later point in time and the city must go back and update it. However this can
result in stats from the data not aligning with the NYPD's or DOT's official
stats from a previous year. I would advise anyone to keep these facts in mind
if trying to use the data for analysis and policy recommendations, such is
open data.
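If you want to see the scale of this yourself, the checks are quick to run against the raw CSV export - something like the sketch below (the column names are from memory of the Open Data export and may have drifted):
    import pandas as pd
    df = pd.read_csv("Motor_Vehicle_Collisions.csv", low_memory=False)
    # collisions with no coordinates and nothing usable to geocode from
    ungeocodable = df[df["LATITUDE"].isna()
                      & df["ON STREET NAME"].isna()
                      & df["CROSS STREET NAME"].isna()
                      & df["ZIP CODE"].isna()]
    # rows where the per-mode injury counts don't add up to the total
    parts = (df["NUMBER OF PEDESTRIANS INJURED"]
             + df["NUMBER OF CYCLIST INJURED"]
             + df["NUMBER OF MOTORIST INJURED"])
    mismatched = df[parts != df["NUMBER OF PERSONS INJURED"]]
    print(len(ungeocodable) / float(len(df)), len(mismatched) / float(len(df)))
Running something like that before any aggregation at least tells you how much of the map you're not seeing.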
------
xyzwave
Having done something similar for the Long Beach, CA area in college, one of
the most interesting takeaways was the relative spatial distribution between
fatal and non-fatal accidents.
Non-fatal accidents clearly clustered around high traffic areas, but fatal
accidents didn’t reveal the same clustering. Instead they appeared to be
uniformly distributed across the city.
I’m sure there is an explanation in this, and this was only 10 years data for
a single city, but it always felt a little spooky that these accidents were
equally likely to happen anywhere (though most likely later in the night).
~~~
icsllaf
High traffic areas tend to move traffic much slower than lower density areas.
Getting into a fatal traffic crash when going 15 miles per hour in stop-and-go
traffic is much harder than when you lose control of a car going 50 on
an empty street.
------
jermaustin1
I'm not sure what constitutes a "collision", but in 2015, I lived on Lexington
between 121 and 122 and saw the investigation of a Hit and Run of a homeless
man. I talked to a couple of the witnesses who saw it happen.
This incident was at Lexington and 123rd. In the data, I do not see this
incident.
------
karussell
The question is whether the highlighted areas are really more dangerous or
whether there are just more visitors. Shouldn't one take into account the
traffic counts?
BTW: there is similar (open) data for Germany:
[https://unfallatlas.statistikportal.de/](https://unfallatlas.statistikportal.de/)
(It clearly shows the problem I mentioned)
Update: sorry, it seems that this issue is already discussed in this thread
~~~
djtriptych
Yup. This is pretty much a map of population density in NYC.
------
jdlyga
Lots of crashes in Hell's Kitchen. That area is full of people going out to
bars and restaurants, tiny sidewalks, and lots of impatient drivers trying to
get through Manhattan to New Jersey.
------
bonyt
The map of total deaths includes a significant blip on the west side near Pier
40 and the Holland Tunnel, which I think is from the 2017 truck attack.
[https://en.wikipedia.org/wiki/2017_New_York_City_truck_attac...](https://en.wikipedia.org/wiki/2017_New_York_City_truck_attack)
Map: [https://imgur.com/a/jNbOv7W](https://imgur.com/a/jNbOv7W)
~~~
kevin_thibedeau
That area has a higher rate of incidents in general because of Brooklynites
trying to avoid the excessive toll on Verrazano when leaving the city.
------
ryeguy_24
I would bet that the shadow/light patterns on Roosevelt Avenue & 94th Street,
Queens cause significant visual distractions to drivers and pedestrians.
------
dsfyu404ed
Drivers mostly hit other things when there's too many things demanding their
attention (poor visibility + difficult left turn + busy traffic + bikes +
pedestrians = high risk of accidents) so this is probably just a heat map of
intersections that are the busiest (in terms of things going on, not
necessarily throughput).
I'd like to see a month by month heat map.
~~~
magduf
>Drivers mostly hit other things when there's too many things demanding their
attention
And this is exactly why humans shouldn't be driving. Hopefully human driving
will be banned before too long, as machines can do it so much better.
------
brianbreslin
I would love to pay the author to do this for my city. I'm fairly certain I
could get the local govt to pay up for this.
------
slowhand09
Very impressive!
------
skizm
[https://xkcd.com/1138/](https://xkcd.com/1138/)
~~~
InitialLastName
To be fair, they mention that in the first paragraph after they introduce what
the data is actually doing.
There is still a value to looking at a population-correlated heatmap in order
to draw conclusions from the discrepancies between the two.
Android and the GPLv2 death penalty - sciurus
http://lwn.net/SubscriberLink/455013/7b3cbe56fc85e6d3/
======
click170
This paragraph from the bottom of the article presents a vector of attack
(albeit a small one) on the kernel that I hadn't considered before, but it
seems like one we may quickly encounter in the future.
"One would guess that a copyright troll with a small ownership [of kernel
code] would succeed mostly in getting his or her code removed from the kernel
in record time. Big holders could pose a bigger threat. Imagine a company like
IBM, for example; IBM owns the copyright on a great deal of kernel code. IBM
also has the look of one of those short-lived companies that doesn't hang
around for long. As this flash-in-the-pan fades, its copyright portfolio could
be picked up by a troll which would then proceed to attack prior infringers.
Writing IBM's code out of the kernel would not be an easy task, so some other
sort of solution would have to be found. It is not a pretty scenario."
~~~
vogonj
"IBM also has the look of one of those short-lived companies that doesn't hang
around for long."
this is probably the funniest sentence I have ever read.
~~~
sixtofour
It's conceivable that Motorola Mobile might have had GPL-licensed linux code.
That story might be playing out differently had Microsoft bought them instead
of Google.
~~~
jrockway
In the end, the code gets removed and rewritten. Most of Linux is drivers for
hardware that you don't have anyway, so the chances of this affecting the
average person are minimal.
Honestly, nothing but good could come from being forced to remove key parts of
Linux. A lot of it needs a good rethinking, but breaking compat would not be
tolerated. If it's legally required, then people don't have a choice.
~~~
sixtofour
"In the end, the code gets removed and rewritten."
Undoubtedly true, but what about deployed/sold devices? I wasn't real
impressed at the pace of the update getting to my Epic, which I believe was
caused in part by different motivations between Samsung and Sprint. What about
devices that are no longer supported by updates? They're "out there."
~~~
jrockway
I guess they get an emergency patch or Samsung pays damages. Just like they
already do for the eight billion patents various trolls claim their phones
infringe on.
------
tzs
The gist of it _seems_ to be that yes, once you violate GPLv2 you need
explicit permission to get your license restored (which would be hard to do in
the case of the Linux kernel), but hey, don't worry about it. The copyright
owners are nice people who just want the source distributed, so go ahead and
ignore the license as long as you intend to eventually comply--there won't be
any serious consequences.
------
aninteger
This is a subscriber-only article. By linking to a subscriber link we are not
helping LWN.
Quoting LWN: "Reader subscriptions are a necessary way to fund the continued
existence of LWN and the quality of its content."
~~~
jrockway
But LWN has a feature specifically for sharing subscriber-only articles on
social news sites, and that's why everyone can see this article. Presumably if
this is killing them, they'd just turn that feature off.
~~~
aninteger
Fair enough I guess. I don't work for LWN. I just felt slightly guilty by
accessing a subscriber link that was probably intended more for "friends of a
subscriber" than the whole internet community.
~~~
jandevos
Aren't we all friends here? :)
Honestly, though, if LWN was worried about this, they could put some sore of
limit on the number of times a subscriber link could be used (per time unit?)
or something. I don't think they are, and rightly so: the exposure to their
content (which is very good value for the small amount of money they ask
subscribers), as well as their liberal policy on these 'friend links' probably
lead to more subscribers. Personally, I want to read the LWN articles when
they appear there (on their RSS feed, to be precise), and not when someone
gets around to posting a link on Hacker News.
------
Uchikoma
What I did not understand in the original article: for what is the license
void? For BusyBox or for BusyBox 1.0.3? Is it valid again - as a new
distribution - with BusyBox 1.0.4? Or is it the "same" software?
[edit] Just saw this is also discussed in the linked article.
PS: In philosophy this problem is known as Ship of Theseus.
<http://en.wikipedia.org/wiki/Ship_of_Theseus>
~~~
lurker19
Software doesn't violate licenses, people do. Anyone who violates the license
of a work loses their license to the work. A more pointed question then is
about who violates/loses the license: an individual coder? A
corporation/foundation?
~~~
jandevos
I think the gist was: if you lose your license to distribute Busybox X, does
that impact your rights with respect to Busybox Y?
My take on this (but IANAL) is that you do not really lose the rights to
distribute a specific version, but the rights to distribute specific code (or
compiled versions of it), and that would carry through to all code in version
Y that was already in version X, but not any newer code that is unique to
version Y. That won't be very useful, though.
[Edit] Just read the part of the article that deals with this question.
Personally, I hold to my take on this matter -- that the scope is not a
particular version of the distribution containing the code, but to all the
bits of code of which the license was violated -- even if you also have access
to the code in another way. My reasoning for this is that this could otherwise
open up a pretty simple loop-hole: you'd only have to get someone else, whose
license is not yet revoked, to fork the project, and release a new 'version'
of the program, to get your rights to the code back. That can not have been
the intention of that clause in the GPL.
~~~
lurker19
Ah I thought you were naming BusyBox as an infringing work, not an infringed.
Anyway, it does seem, as mentioned in the article, that if the copyright
holder rereleases a similar work under a similar license, a past infringer
gets a fresh shot, unless the author specifically publishes their revision
history as a single work.
The point of the GPL really is about sharing, as the article suggests, not
about punitive damages or post-violation injuctions. GPL-using authors do not
want to prohibit infringers from future sharing, they want derived works
published freely.
Much of the point of GPL and Free Software is about freedom for users, not
making war against others. A past infringer gaining access to to code is not a
loophole, it is part of the goal of the Free Software movement, which is
unrestricted access for anyone to exploit software privately or to share it
publicly (but not to allow public exploitation of non-shared code).
------
nokcha
Perhaps I'm missing something, but it seems that under Section 6 of GPL, a
former violator could get a new license by simply receiving another copy of
the software from anyone who still has a valid license:
> 6\. Each time you redistribute the Program (or any work based on the
> Program), the recipient automatically receives a license from the original
> licensor to copy, distribute or modify the Program subject to these terms
> and conditions.
Ask HN: What is preventing rust from replacing c/c++? - bulldoa
======
jfaucett
Basically I think it boils down to the fact that the c/cpp tooling is actually
really sophisticated at this point and developers have already learned how to
deal with the pain points of c/c++ so there is little incentive to change.
Especially, since it means swapping out a highly advanced ecosystem i.e.
UnrealEngine for the chance to spend the next 5 years building your own not-
as-good system to then be able to start to program your own game. And its like
this in every C/C++ industry I'm aware of (embedded devices / games / desktop
apps).
I'm a rust fan, even more so of cargo (wish C/C++ had a cargo). But at the end
of the day it almost always boils down to transition costs vs. benefits, and
the benefits currently don't outweigh the transition costs IMHO.
~~~
memracom
I think you are correct in regards to the people who are currently using
C/C++. They already have done the hard work to make it usable for their
systems. But newer developers will look at the complexity of C/C++ development
and choose anything but.
Maybe it is like politics, rather than attacking your opponents you should
offer something that they do not. Then you will build your own constituency
and gain influence. It will take a long time, but if you keep on providing
benefits to users, then attrition might take you to the top. When someone
attacks your camp, ignore them. You do not need a better way to attack them,
you just need to stick to the knitting and make your offer provide benefits to
users. External forces will likely decide whether or not you overwhelm your
opponents.
~~~
AstralStorm
The problem with this approach is that you would really have to offer
something groundbreaking to displace C and C++ while also matching portability
(only matched by Java variants) and performance (often unmatched).
About the only languages that could compete right now are extended Ada for
high assurance environments and Fortran variants for HPC. They are old but not
dead and quite usable if you have the dollars to buy proprietary or invest
time.
Microcontrollers and constrained environments are another critical niche for C
and C++, especially given proprietary compilers.
Rust does not go far enough in safety to displace Ada (or the SPARK variant),
while offering neither improved performance nor comparable portability, and it
comes with all the pains of a new language. It also does not do the
build-once-deploy-everywhere stuff Java and JS attempt. Rust is neither here
nor there. A good effort but
not a solution.
~~~
smt88
> _you would really have to offer something groundbreaking to displace C and
> C++ while also matching portability_
If you offer something groundbreaking, you don't need to match portability.
Many people will be willing to make the tradeoff.
> _Microcontrollers and constrained environments are another critical niche
> for C and C++_
Why do you assume Rust can't be used for these niches? I understand the
potential issues with old environments, but what about modern/future ones?
> _It also does not do build once deploy everywhere stuff Java and JS
> attempt._
Rust compiles to JS today. It will eventually compile to WebAssembly.
~~~
steveklabnik
(It compiles to WebAssembly today too)
~~~
AstralStorm
Interesting, but is there any advantage there over more dedicated languages?
If not, then again, Rust has no clear selling point. It really has to be
better at something. Matching is not good enough.
It definitely is not easier to use nor embed. (To compete with the likes of
Python, Ruby or Clojure.)
~~~
smt88
> _easier to use nor embed_
"Easy" has no objective definition. Does it mean "getting code to run"? Or
does it mean "writing bug-free code"?
For me, easy means the latter. It means that human error is harder to
introduce into the code. A great language should not rely on the shaky pillars
of discipline or expertise, because neither of those are easy to find or
enforce, nor are they consistent. A disciplined expert might be too sleepy to
write safe code, for example.
Rust is much easier to use than C and C++ because the compiler helps you so
much and replaces discipline/expertise. That's the whole point. Rust prevents
you from doing something you don't intend to do (or at least it does it better
than most languages).
Python, Ruby, and Clojure don't have the same guarantees, and none of them can
be used without garbage collection, making them unsuitable for a variety of
cases where Rust can be used.
~~~
AstralStorm
Compiler does not help you. It is not a proof suggestion system like
Isabelle/HOL or the like.
In fact error messages rust produces rival the terrible nature of ones in
older C++ compilers thus far.
The compiler only prevents you from making mistakes. Intentions do not even
enter into it. See, some of the performance critical code in our apps has to
work around even the lax C++11 rules. It is completely impossible in rust
without using unsafe stuff liberally - specifically gets about 1000x slower
and this matters a lot. Compilers know nothing about intentions, nor can a
borrow checker enforce intentions unlike a type system.
~~~
burntsushi
> Compiler does not help you. It is not a proof suggestion system like
> Isabelle/HOL or the like
This seems like a gratuitous misrepresentation. A compiler does not need a
proof suggestion system to help you.
> In fact error messages rust produces rival the terrible nature of ones in
> older C++ compilers thus far.
No, it doesn't. The error messages are wonderful and constantly improving.
> It is completely impossible in rust without using unsafe stuff liberally -
> specifically gets about 1000x slower and this matters a lot.
Can you give specific concrete examples?
------
fulafel
What's preventing the tree you planted from replacing the old tree next to it?
Nothing really, ask again in 30 years.
Software is slow, your OS and browser have taken 20+ years of C/C++
development with top professional development teams to become the nearly
tolerable products they are.
~~~
mhink
> What's preventing the tree you planted from replacing the old tree next to
> it? Nothing really, ask again in 30 years.
That's... actually one of the best descriptions I've ever heard about these
tradeoffs. :)
------
Turing_Machine
Rust will never take off until someone (or some group) uses it as the basis
for a new, mind-blowing, must-have thingie.
For C it was Unix.
For JavaScript it was the web.
My guess for when something like that will happen with Rust? Never.
Unfortunately, verbose, anal-retentive languages like Rust are simply poorly-
suited for writing new, mind-blowing, must-have thingies. On the scale of
languages that support exploratory programming, I'd put Lisp at the top,
JavaScript fairly near the top, C somewhere above the middle (though it was
quite close to the top when it was new), and Rust somewhere down in the
FORTRAN and COBOL region of the spectrum.
Note that this is completely orthogonal to type-safety, running speed, or any
other traditional measure of programming language merit. The shoals of the
language ocean are littered with the wreckage of "better" languages (the
various Wirth languages, e.g.). What really matters is whether a programmer
can sit down and casually explore a cool idea in a few minutes of spare time,
and you can't do that in a language that requires half a page of code to
concatenate two strings.
Swift has managed to roll type safety (more or less) into a language that's
actually fun to use, or at least not actively painful. Rust hasn't.
~~~
frik
Elixir brought a sane syntax to Erlang.
A Swift or Go inspired simple syntax could revive Rust.
~~~
AstralStorm
Syntax is not even really a consideration. See how much people put up with in
the Perl and PHP world. PHP used to be the language for the web until it was
displaced by JS.
~~~
frik
That's your opinion. PHP and JS (nowadays to some extent also Go and Swift) are
so successful BECAUSE they have a simple, C-like syntax. Rust's syntax is
rather an afterthought, unfortunately.
------
fuwafuwa
Code already written in C/C++. I don't think there's anything stopping Rust
from snowballing at this time. It's built some very strong libraries for
certain problem domains, and because there's package management, it's possible
to reuse it. This has already substantially reduced the friction of "systems"
programming since you don't run into a challenging build process. But for the
majority of folks the answer for an immediate need is still a tool or
framework they already use.
------
memracom
1\. Rust is not C/C++.
2\. Lots of software is already written in C/C++.
3\. Rust follows the same convention as C/C++: you integrate functionality by
writing a library and compiling and linking it into a monolithic binary.
4\. Systems that rely on a monolithic binary are NP-hard to replace with
something different.
5\. Replacing a monolithic binary works best, in practice, when it is done by
refactoring into services that are integrated by something other than a link
editor. Although the buzz is all about web microservices, the reality is that
message queuing to tie together microservices, macroservices and monolithic
binaries provides better performance, more consistent with the
linker/monolithic model.
6\. Go, with its channels and packages, may be a more advanced model, and is
also competing with Rust. In other words, it is not just a question of
comparing Rust to C/C++.
7\. The JVM with its JIT has in many cases equaled or exceeded the performance
of C/C++ monoliths, but languages like Clojure and Scala take it well beyond
mere Java.
8\. It may be that replacement of systems is driven more by economics than by
technology. In other words, it is not Rust that will determine the fate of
Rust, but the economic success of companies using Rust for mission-critical
systems. In 20 to 30 years, the businesses that have succeeded will determine
which technology is better.
~~~
AstralStorm
Java's Achilles' heel has always been memory use and really compute-constrained
environments. Things like high-end video game engines are not being written in
Java, Clojure or Scala for this very reason.
I would be very careful about JIT performance being even close. See, the
language does not even expose SIMD well. Doing anything hard realtime (heck,
even more complex UI) with a JVM in the way is a big pain.
------
twobyfour
Inertia. You don't rewrite an existing codebase unless the current one is
simply not capable of doing what you need it to. And there are a lot of
existing codebases in C and C++.
~~~
AstralStorm
Even when you do rewrite, there has to be a clear benefit. Rust does not offer
a clear one.
(No, reducing amount of bugs does not do anything when pitted against well
tested mature software that already exists.)
~~~
mmstick
Rust does much more than reducing bugs. It is able to reduce memory
consumption, decrease the required amount of CPU cycles, and increase
maintainability / adaptability. ADTs aren't a joke, and neither is Cargo, move
semantics by default, and the borrow checker / borrowing and ownership
mechanics as a whole. Shipping solutions in Rust can be done at record speeds,
and integrate with existing software. Rust is wildly successful, and there's
no way of stopping it.
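(As an illustration for readers who haven't touched Rust: a minimal sketch, not
from this comment, of what ADTs plus move semantics and the borrow checker look
like in practice. The types and numbers are made up.)

    // An ADT (enum) with exhaustive pattern matching, plus move semantics.
    enum Shape {
        Circle { radius: f64 },
        Rect { w: f64, h: f64 },
    }

    fn area(s: &Shape) -> f64 {
        match s {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { w, h } => w * h,
        }
    }

    fn main() {
        let shapes = vec![
            Shape::Circle { radius: 1.0 },
            Shape::Rect { w: 2.0, h: 3.0 },
        ];
        let total: f64 = shapes.iter().map(area).sum();
        println!("total area = {total}");

        let s = String::from("owned");
        let moved = s; // move by default: `s` is no longer usable here
        // println!("{}", s); // the compiler rejects this use-after-move
        println!("{}", moved);
    }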
------
nickm12
This question is both leading and ill-posed. What is preventing Rust
from replacing C/C++ _where_? In existing applications? For new code
development? In some particular application (e.g. game engines)?
There are lots of good answers in the comments here. I think Rust has a good
story for why you might choose it over C/C++ for some uses today, but it's
still early days. If Rust is going to take share from C/C++, it will do so
over the period of decades, not years.
------
kennywinker
Just one data point, but: I'm currently learning about embedded
microcontrollers. The standard starting point for this is arduino, which is
C/C++ based, so there is a strong pull towards C++ already. But I really
wanted to avoid C/C++, and I spent a long time researching alternatives. Rust
was one of the most mature for MCUs, but a) it's not very mature and b) you
have to pick an MCU with a lot of flash memory as even the smallest rust
binary is >100kb. For reference, one of the chips I've been testing out for
this project has 127kb of flash. This evening I loaded on a medium-sized C++
project and it took just 24kb.
So that's what's preventing ME from using rust instead of C++
~~~
35bge57dtjku
There's really no way to trim down the rust binaries??
~~~
steveklabnik
There absolutely is; the smallest known Rust binary is 151 bytes.
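(A sketch of how that kind of trimming usually starts, for anyone curious. This
is the common no_std skeleton; getting to the extreme sizes also depends on the
target, linker flags and stripping, so treat it as illustrative rather than a
recipe.)

    // Skeleton of a tiny Rust binary: no standard library, no default main.
    // Reaching very small sizes also needs target-specific linker settings
    // (e.g. `-C link-arg=-nostartfiles` on Linux, or a linker script on bare
    // metal) plus a size-oriented release profile such as opt-level = "z",
    // lto = true, panic = "abort", strip = true.
    #![no_std]
    #![no_main]

    use core::panic::PanicInfo;

    #[panic_handler]
    fn panic(_info: &PanicInfo) -> ! {
        loop {}
    }

    #[no_mangle]
    pub extern "C" fn _start() -> ! {
        loop {}
    }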
~~~
AstralStorm
How about the smallest real application that does something useful? Say, basic
networking on the level of static pages, like uhttpd (used in many routers).
Or a busybox ash equivalent at least.
------
jstewartmobile
The chorus of Rust zealots who can't let a week pass by without writing yet
another "Why won't you stupid plebeians stop using C++?" article.
That alone makes me want to crack open some STL code just to spite them, and I
_hate_ C++!
------
afuchs
Why hasn't another language, like Ada with similar goals to Rust, replaced
C/C++?
~~~
acomjean
It's the operating system's libraries. They're in C.
I did a lot of Ada programming (radar) and we had a fairly large library of
wrappers to the C network/ ipc / process control that the operating system
provided. It was a royal pain. So much so that at least a couple subsystems
moved to C (mainly to use the math libraries......)
~~~
AstralStorm
More modern styles of Ada work pretty well. The trouble is in getting the code
to run in restricted environments. MCU compiler authors tend to provide C
only.
Perhaps a transpiler would be better.
------
AnimalMuppet
Changing languages isn't free. Rust isn't enough better than C++ for most
developers (or their management) to switch.
Or, if you're more of the skeptical sort, Rust doesn't _seem to be_ enough
better. You could argue that C++ developers don't think they write code that
creates memory issues very often. So they underestimate the benefits. (And the
borrow checker is a different concept, so they may also overestimate the costs
of switching.)
The net result, though, is that they perceive the benefits to not be worth the
costs.
~~~
JoeAltmaier
Perhaps they estimate the benefits correctly. In that case, they may never
switch.
~~~
AstralStorm
More modern styles of C++ just wholesale avoid most problems that are solved
by the borrow checker. Smart and owning pointers, dynamic and checked data
structures thanks to operator overloading. There are remaining warts but not
bad enough in practice.
~~~
jjpe
That's not really all that relevant for most existing commercial code since
legacy code tends to not get rewritten at all unless it's absolutely
necessary, and thus remains stuck with the Old Ways.
~~~
AstralStorm
Indeed, but unlike a rewrite, you can refactor staying in the same language at
your leisure, piece-wise.
Plus the argument was mostly pertaining to new code.
------
wbond
For me, equivalent support for the MSVC toolchain on Windows. Obviously it
won't ever be 100% without support from Microsoft, and a lot of work has been
done, but some items on [https://github.com/rust-
lang/rfcs/issues/1061](https://github.com/rust-lang/rfcs/issues/1061) feel
important enough to give me pause before spending much time with Rust.
~~~
steveklabnik
I use it daily, and it works well. I'm still a newbie Windows person so I
probably don't notice these pain points...
------
goodplay
Another reason that I don't see given its fair chance might be politics. The
human race has a wide variety of values that are often counteracting. Forcing
a specific set of values participants must adhere to in order to interact with
the community will undoubtedly alienate adopters.
That's why most projects, companies, and even governments tend to strip down
the rules that govern them to what is necessary for them to function (e.g.
separation of religion from state for governments).
The common reason given by the rust community to justify their behavior is
that technology does not exist in a vacuum, and that technology can not be
separated from politics. I completely agree. However, politics should be
tackled on a layer completely independent from technology. Mixing the two
only causes instability and uncertainty for rust. Here are two risks that
arise from this mixing. I'm sure there are many, far more serious, risks
besides these two:
\- What happens when (not if) a significant shift in values occurs in the
community? Will it collapse?
\- What about technical or legal changes to the project that were driven by
community values rather than technical or legal merits? This is not far
fetched when technology and politics are made inseparable in the way it was
done in Rust.
I use rust in a professional context, and I appreciate what it brings in terms
of technical benefits, but I would be lying if I said this aspect of it didn't
worry me. I can not recommend its adoption to colleagues from other companies
if asked for this very reason (I was asked once so far).
Community instability or the mere perception of such is a big barrier to
adoption, at least for companies.
~~~
mmstick
Using a programming language does not force you to join a community; and what
is the Rust community, anyway? Which Rust community? The Code of Conduct
pertains to the official venues of communication, not the language itself. All
communities have rules. And the rules are quite normal of any community that
wants something resembling civility.
------
jlarocco
Why would it?
As a C++ developer, I think Rust's main selling point (memory safety) isn't as
big of a problem as it's made out to be. For every unsafe memory access that
results in a crash or exploit, there are millions and millions of memory
accesses that work just fine and never cause a problem. Tools like Coverity
and Valgrind can catch a ton of errors, and it's not clear Rust offers any
significant advantages over C++ with Coverity.
Parallelism is nice, too, but C++ already has a ton of options there: OpenMP,
MPI, TBB, OpenCL, standard library threads, boost/standard library async
tools, etc.
On the other hand, as an end user, "written in Rust" or "written in <any
language>" isn't something I care about. I don't even remember the last time I
had a piece of software crash due to a seg fault, so a vague promise of being
"safer" isn't good enough to accept an otherwise inferior product.
I guess to summarize, C and C++ developers are going to keep writing C and C++
and if Rust is going to take over the world, the people advocating for it will
have to write better software in it and beat out software written in the other
languages.
~~~
xfer
Coverity is expensive and the majority of software people use is open source,
which doesn't make use of it (hence all the CVEs).
As an end user running a server, I _very_ much care about whether my web
server has vulnerabilities. And the promise about safety is not vague, it has
clearly defined semantics.
To comment on your last point, it's seeing some light already: ripgrep is
already beating grep in performance and is purely written in Rust. And I agree
that this is how Rust will beat C++ (if ever), by writing "better" tools than
those that already exist.
~~~
jlarocco
> Coverity is expensive and the majority of software people use is open
> source, which doesn't make use of it (hence all the CVEs).
It's definitely an area where the closed source tools are ahead of the OSS
alternatives, but there are open source (and free closed source) alternatives
to Coverity. PVS Studio, for example, has a free download:
[https://www.viva64.com/en/pvs-studio/](https://www.viva64.com/en/pvs-studio/)
> As an end user running a server, I very much care about whether my web
> server has vulnerabilities. And the promise about safety is not vague, it
> has clearly defined semantics.
You say that, yet you're still running a ton of software written in C and C++.
~~~
mmstick
> You say that, yet you're still running a ton of software written in C and
> C++.
Nice assumption. Much of the software I use is actually implemented in Rust.
And soon, I'll be using an operating system that's 100% written in Rust, with
a 100% Rust Servo-powered web browser. It's only a matter of time.
In addition, I've also implemented my servers (web server included) in Rust,
and they have zero dependencies upon C/C++ software, so your entire point is
rendered invalid.
------
kierank
People to write code instead of sitting on their high horse. That and
performance as good as C, not C-like, not selective benchmarks, not other
nonsense.
------
trelliscoded
Greenfield systems work by startups isn't really done these days. If it's not
greenfield, it's probably already in c/c++. If it's not a startup, they
probably already have a bunch of expensive c/c++ tooling (static analyzers,
IDEs, CI systems, ...) that they aren't going to get in rust.
~~~
mmstick
Rust has made major strides in the cryptocurrency industry. Cryptocurrencies
are largely becoming powered by Rust, because with cryptocurrencies, there is
no room for error.
------
altotrees
I would've started using Rust way sooner if academia hadn't forced me to use
c++ or c for anything systems related. Upon graduation I felt like a kid in a
candy store with all the languages and platforms I hadn't tried in my four
years of fun.
------
IshKebab
Partly time, but also it's much more difficult to get "something that works"
in Rust than it is in C++ due to the borrow checker.
Of course your C++ code will probably not be as safe as your Rust code, but
many people don't care about that.
~~~
AstralStorm
It probably will be just about as safe. The benefits of the borrow checker
only become apparent when you're doing complex things. C++ does allow you to
write bad code easily, though the most obvious modern way is typically safe.
At least until multiple threads or processes happen or you have hard realtime
requirements.
------
childintime
I guess that, if you can dislodge C in the embedded space, everything else
will follow.
So my question is: when can I use Rust on the Cortex M0 (or on a 32-bit
RISC-V), with an interactive debugger, preferably from a simple IDE?
~~~
steveklabnik
Cortex M works today, but since I'm not in the space I can't comment on
debuggers or IDEs.
~~~
AstralStorm
Rust produces quite fat code even without safeties on and when using a tiny
libc...
If you have megabytes of flash to spare, it could probably be fine...
~~~
steveklabnik
> even without safeties on
Most of Rust's "safeties" are only compile-time, and so have no effect on
binary size.
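(A small illustrative sketch, assuming nothing beyond the standard library: the
borrow and type checks leave no trace in the compiled output, and the main
runtime safety cost, bounds checking on explicit indexing, can usually be
avoided with iterators.)

    // Borrow checking happens entirely at compile time, so it costs nothing
    // at runtime. Indexing like v[i] is bounds-checked at runtime (though the
    // optimizer can often remove the check here); iterators sidestep it.
    fn sum_indexed(v: &[u32]) -> u32 {
        let mut total = 0;
        for i in 0..v.len() {
            total += v[i];
        }
        total
    }

    fn sum_iter(v: &[u32]) -> u32 {
        v.iter().sum()
    }

    fn main() {
        let data = vec![1, 2, 3, 4];
        assert_eq!(sum_indexed(&data), sum_iter(&data));
    }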
~~~
AstralStorm
Many of them cost nothing over the base Rust tax, but that is not the issue.
------
symlinkk
Over 30 years of momentum.
------
cbanek
It doesn't need to replace it (it's not like c++ is being deprecated), and
there has to be real reasons to port over existing code, because there are
real costs.
The technically best format doesn't always win. Just look at Betamax vs VHS.
------
solidsnack9000
If it happens, it will take some time. Java has been replacing C++ for many
years now...
~~~
szemet
A big difference is that Rust has zero-overhead interop with C, so it allows
mixed projects or gradual porting.
So Scala or Kotlin replacing Java, or F# replacing C#, may be a better analogy.
What happens there is that Java and C# are improving just enough that many
people are simply not considering the change... And C++ is trying really hard
nowadays too - a new version with interesting features every few years, etc...
(By the way - I'm a Rust fan, and also a functional programming fan, so I
would be happy if C++, C#, Java would slow down a little to let the
alternatives really catch up popularity;)
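(A minimal sketch of what "zero-overhead interop" means from the Rust side:
declaring and calling a plain libc function directly, with no marshalling
layer. It assumes a hosted target where the C library is already linked in.)

    use std::ffi::CString;
    use std::os::raw::c_char;

    // Hand-written binding to the C library's strlen; the call is a direct
    // foreign function call with no runtime glue.
    extern "C" {
        fn strlen(s: *const c_char) -> usize;
    }

    fn main() {
        let s = CString::new("gradual porting").unwrap();
        let n = unsafe { strlen(s.as_ptr()) };
        println!("strlen = {n}");
    }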
~~~
solidsnack9000
I am actually on your side on this one. But the changeover, I think you will
agree, will be somewhat slow. So no one should judge Rust too harshly, for not
replacing C++ in the Unreal Engine, Microsoft Office Suite, &al. even in the
next 5 years.
------
flukus
Binary compatibility. As a developer I want to be able to:
1\. Write an app in rust.
2\. Use a rust library in that app, via the native rust interface (using c
interfaces negates a lot of the pros of rust).
3\. Dynamically link that app to the distro repository version.
4\. Have the distro apply security updates to that library without recompiling
my app.
5\. Ignore cargo completely.
As a user I want to:
1\. Not download and install multiple copies of common rust libraries (as is
the cargo norm).
2\. Not be in doubt whether a particular application is using the latest
patched libraries.
It seems that the Rust community has learned from the mistakes of C, C++, and
Java. But they failed to learn from the successes as well.
~~~
jjpe
While your use cases are definitely valid for you, this is not a Rust issue,
but something that's 100% decided by your distro. They decide if, when and how
they integrate Rust into it.
As for the multiple copies issue with Cargo: I generally agree with you that
this definitely feels like 1 source of major blow up in project compile times,
and it would be nice and useful if that were reduced as much as possible. But
there is a reason that this system is in place: each crate defines how it
compiles and consumes its dependency crates, so if crates A and B both depend
on crate C, then A might compile and consume C differently than B does. This
results in effectively different libraries once compiled, and ATM there is no
way to predict in advance what the effect of any given compile flag will be
since any flag can switch on/off arbitrary code. Hence Cargo keeps A's C
dependency separate from B's.
Then there's not having to doubt whether an application uses the latest patches.
Up to a point, this can be automated through exploitation of SemVer: if Cargo
could figure out that a new library version is API and ABI compatible with the
previous version then I'd argue that Cargo should perhaps just auto-update it.
A bigger issue is what happens when a new major version of the library is
released, as those tend to break APIs in order to perform maintenance. That is
something that as of yet has not been automated.
~~~
flukus
> While your use cases are definitely valid for you, this is not a Rust issue,
> but something that's 100% decided by your distro. They decide if, when and
> how they integrate Rust into it.
It's a rust issue because it's up to the rust compiler to generate ABI
compatible binaries, at the moment there is no ABI so there isn't much distros
can do. It seems they have no intention of adding one any time soon:
[https://github.com/rust-lang/rfcs/issues/600](https://github.com/rust-
lang/rfcs/issues/600)
This is why system libraries are nearly always in c and not c++. The ones that
aren't (Qt) have a long history of broken language bindings.
~~~
jjpe
This is not as much of a problem as you believe. Rust is perfectly capable of
making objects that have a C API and ABI precisely for this reason, and the
most popular distros only work with supported compile targets anyway. That
said, a distro like Debian might indeed encounter some issues at this time for
the less prominent platforms eg MIPS or PowerPC based chips. That shouldn't
prevent Debian from including Rust altogether though, they merely should
exclude it on those less or even unsupported platforms.
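(To make that concrete, a sketch of the mechanism: a Rust function exported
with an unmangled C symbol and the C calling convention, built as a
cdylib/staticlib so a C program, or a distro package, can link against it. The
function name is made up for illustration.)

    // In Cargo.toml: crate-type = ["cdylib"] (or "staticlib").
    // A C caller would declare: uint32_t add_u32(uint32_t a, uint32_t b);
    #[no_mangle]
    pub extern "C" fn add_u32(a: u32, b: u32) -> u32 {
        a.wrapping_add(b)
    }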
------
Sag0Sag0
Also, some people just do not like coding in Rust, and that, combined with the
fact that it's new, means they refuse to switch.
------
joshAg
Time.
| {
"pile_set_name": "HackerNews"
} |
A platform for large-scale neuroscience [video] - mousetree
https://www.youtube.com/watch?v=Gg_5fWllfgA&index=5&list=PL-x35fyliRwjCR-gDhk1ekG4jh2ltgKSV
======
vesche
While I think the presentation was interesting, it was fundamentally just more
proof that "the brain has patterns". I wish the talk had more depth in
terms of understanding or implementing artificial neural networks.
Understanding that the presentation was primarily to endorse Spark for more
data mining applications, however, the talk seems successful.
~~~
frozenport
>>terms of understanding or implementing artificial neural networks
There is no motivation for a connection between artificial and biological
computational networks. A production ANN such as those used by missile
interception systems is not required to match what the neurons in your brain
do. In fact, the biologists may find that our brains are not very efficient
and often make mistakes. As a practical matter, the behavior of neurons is
still up for debate with many models of interaction proposed.
There are often more effective models for machine learning, especially if you
have lots of data, such as Random Forest.
~~~
vesche
>>There is no motivation for a connection between artificial and biological
computational networks.
I would disagree, and say that future ANN research has high motivation to
become as powerful and complex as BNNs. And I think maybe you get ahead of
yourself by saying that our brains are not very efficient when the fastest
supercomputer we currently have built is still 100,000x less computationally
powerful than a human brain.
[http://www.wired.com/images_blogs/wiredscience/2013/05/neuro...](http://www.wired.com/images_blogs/wiredscience/2013/05/neurologist-
markam-human-brain3_f.jpg)
"Neural Networks attempt to bring computers a little closer to the brain’s
capabilities by imitating aspects of information in the brain in a highly
simplified way. Although neural networks as they are implemented on computers
were inspired by the function of biological neurons, many of the designs have
become far removed from biological reality."
[http://scholarsresearchlibrary.com/EJAESR-
vol2-iss1/EJAESR-2...](http://scholarsresearchlibrary.com/EJAESR-
vol2-iss1/EJAESR-2013-2-1-36-46.pdf)
------
frozenport
How do they segment neurons? How can they be sure they are ablating a single
neuron?
~~~
etrautmann
some of their analyses are performed on raw voxels, but individual neurons for
ablation experiments can be identified either manually or using video image
processing to identify single neurons. It is a hard problem and a number of
people are working on improving it, but for many analyses it may not be
strictly necessary as a preprocessing step.
------
mousetree
Definitely the most interesting presentation at the Apache Spark Summit 2014
~~~
closetnerd
I didn't understand exactly why the forms of research this allows for are so
interesting or important, even. He just claims that the high-dimensional
visualizations are "obviously important and powerful" but doesn't explain why.
I'm always skeptical of researchers who come out with pretty visualizations
actually just looking for funding and/or recognition.
~~~
mousetree
It's interesting as it shows that people are using Spark for something other
than for the traditional web/enterprise analytics. Most of the summit was
focused on BI related use cases so it was really nice to see something
different for a change.
It's important as the speed and interactivity of Spark has apparently helped
the lab quite a lot in their research efforts. Some of the things shown in the
video, particularly the real-time refocusing at the end of the video, would
have been a lot more difficult, if not impossible, without something like
Spark (or similar)
~~~
closetnerd
I think Spark is from the Berkeley AMPLab, which is primarily focused on
machine intelligence, data mining and artificial intelligence in general.
Most, if not all, of their projects
([https://amplab.cs.berkeley.edu/projects/](https://amplab.cs.berkeley.edu/projects/))
are geared at solving similar problems.
I just don't see how this particular application is so important. At most, it
seems to be a demonstration of Spark's capacities, but again, it was geared for
high-performance cluster computing anyways.
------
reader5000
Let's be honest. The rat is not "enjoying" it.
Also I'm in the camp that trying to reverse engineer the brain by studying
neural activity is like trying to reverse engineer Angry Birds by studying
register activity on an iphone.
Nevertheless, very cool visualizations.
~~~
closetnerd
Quite. The Blue Brain project did a similar thing by mapping all the neurons
in the neocortical column of a cat (or rat?) and visualizing the neural
activity, and it hasn't yielded anything groundbreaking.
Here's Henry Markram's Ted Talk:
[https://www.youtube.com/watch?v=LS3wMC2BpxU](https://www.youtube.com/watch?v=LS3wMC2BpxU).
But honestly, it's really just something to impress people who don't have any
idea of what kind of implications this would have.
But I do think that understanding neural activity is obviously important, but
only if it's going to help yield an understanding of the basic structural units
of the neocortex. Trying to understand the neural activity of millions upon
billions is definitely not going to go anywhere.
~~~
etrautmann
The approach taken by Markram is quite different from what Jeremy is
presenting here. Markram is attempting to simulate large systems, and this
work is controversial, largely because many people don't think we understand
neural systems well enough to model them with appropriate fidelity at that
scale.
The work presented here represents a new take on a more classically accepted
approach. Record an animal's behavior as well as the neural activity, and
attempt to understand how neural activity controls the observed behaviors. The
difference here is a dramatic increase in scale and computational capability
by recording the entire brain of a zebrafish at once, and being able to run
analyses in a few seconds instead of hours, which has real implications for
experimental neuroscientists.
~~~
closetnerd
You're right, I made a mistake in implying that "Markram did a similar
thing...".
Specifically, I'm criticizing trying to analyze the neural activity of entire
brain regions. Neural simulation or actually trying to record neural activity
wasn't really my point. I assume both currently have drawbacks, whether it's
"not having accurate enough neural models for simulations" or "not
sufficiently or accurately being able to record neural activity".
Yes being able to run analysis in a few seconds instead of hours has real
implications for experimental analysis but again the benefits of the
technology were never under my criticism or skepticism.
I suppose what I'm specifically alluding to is my criticism of trying to do
such large scale analysis of neural activity itself given how little we know
about what we should be trying to look for at such a scale.
I say "we" but I should clarify I'm no neuroscientist.
| {
"pile_set_name": "HackerNews"
} |
Vector Math in Google's Dart using SIMD - tosh
http://www.youtube.com/watch?feature=player_embedded&v=CKh7UOELpPo
======
tosh
If you are interested in Google's Dart and web application performance these
are 27 minutes worth watching.
SIMD support will enable massive speed-ups for 3D graphics, 3D physics, WebGL,
Canvas, physics simulations, Bullet physics, et al. An important step for the
web as a platform.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Keep Python or switch to Rust/OCaml/Haskell? - mccajm
Using Python makes hiring easy, but it's hard to verify and bugs are frequently missed when code is pushed to production. Safe languages with strong typing like Rust, OCaml, Haskell catch more bugs before deployment and are easier to verify, but are hard to learn and hard to hire for.<p>Does any one have experience of introducing these in mission critical environments? Was the shift worth the effort or is sticking with the status quo a better policy?
======
trcollinson
Before going down this path, I suggest you pick up and read The Mythical Man-
Month. The experience the author has closely matches my own experience.
What you’re describing has very little to do with python and a ton to do with
your software developers and your environment. If you have bugs that are not
caught before you reach production, how is your testing? Do you have a good
level of unit test coverage which can easily catch type safety issues? Do you
have integration tests and regression tests?
As the mythical man-month states there is no silver bullet. No change in
language will solve any significant portion of issues without further changes
to your process. Nor will it increase productivity in any significant way.
It also mentioned the theory of irreducible numbers of bugs. Changing
languages will change where you see bugs but not the number or severity of the
bugs.
Of course you also have the second system effect which can be quite
interesting. It basically states you’ll have many of the similar issues from
your first system but also a bunch of new ones from your second.
So what do you do? All is not lost. I would focus on making small but
meaningful changes within your current team and environment. You’ll find the
most long term success there.
Going with a new language is new and exciting but it’s also extremely risky.
------
cjbprime
Are you using the optional types in Python? Plenty of people (i.e. most) are
using non-functional languages in production. There are probably many best
practices you could adopt to cut down your bug count without throwing away
your code base.
------
a-saleh
It depends.
If I would have at least one other developer excited about more advanced
static types, I would try to sell to the project lead something along the
lines "Hey, me and $OTHER_DEV would like to take
$SMALL_PART_OF_THE_APPLICATION and try to rewrite it in
$LANGUAGE_OF_YOUR_CHOICE ... we think the type-system should catch quite a lot
of those bugs we have been dealing in prod!" If the lead will take you up on
that, you might have a nice way to introduce the language and figure out if it
really is that much better.
If that is not an option (i.e. no easily identifiable sub-component/service)
you still might be able to add types through mypy, and maybe even investigate
something like MonkeyType [1].
If there is really strong pushback against static type checking, you can still
make mission-critical systems in an untyped language. Erlang does this, and you
can take the lessons learned from Erlang and apply them in your Python
code-base. Based on my experience, limiting local state and using a more
functional approach does wonders for testability and stability.
If the team is not a fan of functional programming, maybe even throw in some
advanced testing techniques, like fuzzing endpoints, doing property-based
tests or, if it is applicable, something like Chaos Monkey :)
If there is no time for that, then focus on the process. Good review culture,
good test-writing culture, have CI and integration tests ...
To be honest, I would try all of these, starting from the last suggestion,
making my way to the first :D
[1]
[https://github.com/Instagram/MonkeyType](https://github.com/Instagram/MonkeyType)
------
kjeetgill
Naturally there are tons of aspects to consider for a change like this; the
biggest probably being your team's existing experience, preferences, and the
kind of software you're building.
A ton of Python programmers love Go (aka golang), and at least where I work
that's what our Python codebases are turning into. Those are mostly command
line tools, but Go should work well for services.
If you're building a service I'm going to throw out an alternative that you
might not want to hear: Java or Kotlin. I personally prefer Java but coming
from Python it is going to feel repetitive and verbose. They're a kind of
sweet spot between straightforward, battle-tested, and popular.
Compared to Rust, OCaml, Haskell, either Java or Kotlin will probably:
\- Have a larger talent pool to hire from, similarly to Python. Java
programmers won't have a ton of trouble getting on board with Kotlin.
\- Have a large ecosystem of library support similar to Python. Kotlin has
access to Java's substantial library ecosystem.
\- Have fewer new programming language concepts to learn. (I think Java just
has primitives and generics?) Kotlin has more going on but you won't need it
all at once.
\- Produce a codebase more architecturally similar to your Python code base. I
imagine it will be fairly straightforward to migrate.
Nothing against Rust, OCaml, Haskell, I've dabbled in them and they're all
compelling. But as a frequent python programmer myself I'd say Java and Kotlin
make more sense given your concerns about hiring and migrating python.
Disclaimer - I'm a big fan of JVM land. I suppose most of what I said could
apply to C# and F# too but ... ehhh.
Hope this helps! Let us know what you do!
------
oftenwrong
Define "mission critical". Are people going to die if bugs get shipped? If so,
be aware that static type checking will not get you there alone, but it should
be considered mandatory.
Beyond that: If your team is most familiar with Python, it is probably best to
stick with Python. [http://mypy-lang.org/](http://mypy-lang.org/) might be the
best option: you get static type checking for new code, you can add static
type checking to your existing codebase, and you get to use something you're
already familiar with.
I recommend reading
[http://boringtechnology.club/](http://boringtechnology.club/) \- especially
the "let's switch" part.
------
rs86
I used elixir at work for a few projects, and we will use it in more projects.
We rewrote a rails app using the Phoenix framework and now we are using elixir
in services.
From my anecdotal experience functional programming leads to code that is
easier to understand and test. You can make quality code in OOP, but I think
functional programming makes it easier.
We also used Elm for a few prototypes and we're immensely pleased. We do not
use it in production because static typing + purity is not something you can
easily find devs for.
Also, Erlang's OTP and BEAM are a pleasure to use.
------
steveklabnik
[https://www.rust-lang.org/friends.html](https://www.rust-
lang.org/friends.html) is a list of companies using Rust in production
environments, and [https://www.rust-
lang.org/whitepapers.html](https://www.rust-lang.org/whitepapers.html) is a
list of whitepapers we're building up. Both may be of interest!
| {
"pile_set_name": "HackerNews"
} |
Google Code Jam 2015 - faza
https://code.google.com/codejam
======
analognoise
So I see the simple (naive) solutions, but what about when the problem sets
get very large - I definitely need practice there. Any good resources that
others have found for these types of puzzles?
~~~
faza
[https://www.topcoder.com/community/data-science/data-
science...](https://www.topcoder.com/community/data-science/data-science-
tutorials/)
| {
"pile_set_name": "HackerNews"
} |
Bitcoin mining on a vintage Xerox Alto: very slow at 1.5 hashes/second - darwhy
http://www.righto.com/2017/07/bitcoin-mining-on-vintage-xerox-alto.html
======
indescions_2017
It actually puts compute power in perspective for me quite nicely, as
GigaHashes per second (GH/s) is the most common metric in mining pools. My
core i5 laptop gets around 10MH/s. A gaming desktop with GTX1050 pushes above
100MH/s.
Compare that to a state-of-the-art Antminer at ~11Th/s using 1.1kW. Estimating
a large pool size of ~250 Ph/s. And the Alto's rough payout calculation of
about a billionth of a penny for 2017 looks right!
Any chance ethereum and zcash mining are up next for that poor old Alto that
just wants to retire and dream of electric sheep?
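(For context on what one of those "hashes" actually is: a double SHA-256 over
an 80-byte block header, compared against a difficulty target. A rough sketch
below, assuming the `sha2` crate; the header bytes and the toy difficulty check
are placeholders, not real block data.)

    use sha2::{Digest, Sha256};

    // Bitcoin's proof of work: SHA-256 applied twice to the block header.
    fn double_sha256(data: &[u8]) -> Vec<u8> {
        let first = Sha256::digest(data);
        let second = Sha256::digest(first.as_slice());
        second.as_slice().to_vec()
    }

    fn main() {
        let mut header = [0u8; 80]; // placeholder block header
        for nonce in 0u32..1_000_000 {
            header[76..80].copy_from_slice(&nonce.to_le_bytes());
            let hash = double_sha256(&header);
            // Toy target: require two zero bytes. Real targets are far
            // stricter and are compared as 256-bit integers.
            if hash[30..32] == [0, 0] {
                println!("nonce {nonce} -> {:02x?}", hash);
                break;
            }
        }
    }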
~~~
kens
If I understand Ethereum and Zcash mining, they use algorithms designed to
require a lot of RAM so they are ASIC-resistant. (I.e. they want mining to be
practical on a PC, not requiring custom hardware.)
Unfortunately for the Alto, the gigabytes of data required for mining won't
fit into the 512kB memory. Even swapping to the hard disk won't work, since
that only holds 2.5MB. Thus, while Bitcoin would have been possible in the
1970s, Ethereum and Zcash wouldn't have been.
~~~
miahi
Not only lots of RAM but lots of very fast RAM. This is what makes it ASIC-
resistant for now (it's not hard to attach a huge RAM to an ASIC, but going
for hundreds of GB/s is really expensive). So unfortunately it's GPUs for now.
~~~
xj9
\> unfortunately
i think you are missing the point. the idea is to _prevent_ ASICs from taking
over. if you _could_ build one that could mine ETH or zcash, it would be a
failure from their perspective.
~~~
miahi
It's unfortunate because the GPU prices and availability right now are going
nuts. I wanted to buy a GTX 1070 and it's out of stock with weeks of waiting
on preorders on most European websites, with price increases of 40-50% in the
last 30 days.
~~~
ven_bug_trap
In the short term it may seem more unfortunate, but the problem with expensive
ASICs is that they make centralisation far easier, by making it less feasible
for ordinary people to do the mining.
~~~
Klathmon
The people behind sia coin believe differently [0] and after reading that I'm
more inclined to believe them.
The short version is that since "commodity" hardware can mine many things, you
can't gauge how much power there could _possibly_ be. Someone with 100,000
GPUs could turn their whole system to your coin and have over 50% of the
hashing power for a day, then go back to whatever else the day after.
With ASICs, the devices can't be used for anything other than that specific
coin, so to "save up" your hashing power to sneak in a day that you can get
majority hashing power would be stupid. You'd have to be giving up the income
from them while you waited.
The article explains it much better than I can, I really recommend it.
[0] [https://blog.sia.tech/choosing-asics-for-
sia-b318505b5b51?gi...](https://blog.sia.tech/choosing-asics-for-
sia-b318505b5b51?gi=43997fdbccfc)
~~~
kens
What's stopping a government agency from "saving up" their hashing power so
they could get majority control over Bitcoin if they needed to? Seriously, if
you were the NSA, wouldn't this be a sensible thing to spend a few million
dollars on? And they could use their mining hardware now, hidden through
pools, so it would almost pay for itself. (Yes, I know I'm getting into
tinfoil hat territory.)
~~~
Frogolocalypse
In practical terms a 51% attack is not good, but it actually has a fairly
straightforward solution: change the PoW. A 51% attack can't steal your coins,
only restrict you from spending them for the length of time they are
conducting the attack. So the issue isn't that you have a 51% attack
possibility, but that people think you do, and furthermore think you're going
to abuse it. If a government attacked it, they'd simply hard-fork away from
the mining algorithm, and millions of dollars of hardware investment gets
flushed down the toilet.
------
rjsw
My copy of The TTL Data Book suggests that the ALU was able to do XOR, an
earlier blog post stated that microcode opcode bits go through a table instead
of being fed directly to the 74181 function pins.
I wonder why Xerox didn't want to use all the ALU features.
~~~
kens
In microcode, the Alto provides the 16 most useful functions of the 74181
(calling most of the 74181's operations "mostly useless"). This includes XOR
and OR. However, the Alto copied the Data General Nova's instruction set,
which doesn't include OR and XOR, so you can't use these instructions from
machine code. I think there's extended microcode that provides "extra"
instructions for XOR and OR and an improved BCPL compiler to make use of them,
but I haven't tracked it down.
See the Alto hardware manual (page 4) on bitsavers for details:
[http://bitsavers.informatik.uni-
stuttgart.de/pdf/xerox/alto/...](http://bitsavers.informatik.uni-
stuttgart.de/pdf/xerox/alto/AltoHWRef.part1.pdf)
~~~
userbinator
Here is a document I found on the DG Nova instruction set:
[http://users.rcn.com/crfriend/museum/doco/DG/Nova/base-
instr...](http://users.rcn.com/crfriend/museum/doco/DG/Nova/base-instr.html)
If you scroll down to "Arithmetic/Logic Instructions" you'll see that they did
not have room for XOR nor OR, since several of the 8 opcodes that fit into the
3-bit field are what we'd normally think of as "one-operand", but have been
expanded to be "two-operand" (oddly enough, there is an increment but no
decrement instruction either.)
It's interesting to compare to two other well-known CPUs with a 3-bit ALU
opcode field:
The Z80's (and 8008/8080/8085) ALU opcodes are: ADD/ADC/SUB/SBC/AND/XOR/OR/CP
The x86's ALU opcodes are: ADD/OR/ADC/SBB/AND/SUB/XOR/CMP
~~~
kens
That's an interesting comparison with microprocessor ALU operations. (6502 is
similar to Intel but doesn't have add/subtract without carry.)
Note that the Nova uses two additional instruction bits for the carry. Thus,
the Intel instruction sets use two of the 8 opcodes for add with carry and
subtract with borrow/carry, but the Nova doesn't. So it should be easier for
the Nova to fit in additional useful ALU instructions. (Not to mention the
Nova has 16-bit instructions.)
------
dijit
To be fair, I'm impressed it's able to do it at all.
I compiled a "hello world" in C statically on my laptop the other week as a
demo of how things have grown; to my horror it wouldn't have even fit in the
memory of my first computer.
~~~
kens
Well, I got SHA-256 running on a 1960s punched card machine with 16K of memory
so the hardware requirements aren't too much.
[http://www.righto.com/2015/05/bitcoin-mining-on-55-year-
old-...](http://www.righto.com/2015/05/bitcoin-mining-on-55-year-old-
ibm-1401.html)
------
yuhong
I wonder about Ryzen's new SHA1/SHA256 instructions, not only for Bitcoin but
also things like SHA1 collisions.
~~~
mrb
AMD's and Intel's SHA256 instructions are utterly useless to compete with
ASICs: they use approximately 10 000 times more energy (4 orders of
magnitude!) than a 16nm Bitcoin mining ASIC per hash: 1000 J/GH vs 0.1 J/GH.
Source: I wrote one of the first GPU miners, founded a Bitcoin mining ASIC
system integrator company, etc.
~~~
le-mark
From your perspective, how should an individual with a non trivial sum of
money ($20k for example) get started mining BTC? Do you favor mining or
investing by purchasing?
~~~
mrb
Large professional miners spend on average $0.05/kWh on their mining ventures,
including all other non-electrical overheads. So as a small miner with
virtually zero overhead, you should only mine if your electricity cost you
$0.05/kWh or less. If you do, buy an Antminer S9.
But personally I prefer simply investing in BTC.
------
Animats
They're using the British Cruddy Programming Language? Don't they have the
Mesa compiler up?
BCPL is about halfway between assembler and C.
~~~
kens
Still working on getting Mesa running...
BCPL isn't as primitive as I expected. It's surprisingly similar to C, except
lacking types. C's structs, unions, bitfields are almost a direct copy of
BCPL, along with the ternary ? operator. C's lvalues, rvalues, and pointers
are also just like BCPL.
BCPL has way more control flow statements than C: if EXP do STATEMENT, unless
EXP do S, test EXP then S1 or STAT2, test EXP ifso S1 ifnot S2, while EXP do
S, until E do S, S repeatwhile EXP, S repeatuntil EXP, S repeat, switchon EXP
into CASES, etc. The C language trimmed out a lot of the redundancy. BCPL's
switchon statement is just like C's with fall-through cases unless you use a
break.
------
janci
Does it run DOOM?
~~~
lostgame
I know this is a joke, but the Alto (IIRC) was the first computer to have a 3D
maze-like game - I believe they even played it over LAN!
| {
"pile_set_name": "HackerNews"
} |
Ask HN: How do you come up with unpopular, unintuitive solutions in your job? - behnamoh
In my field, I've noticed most brilliant solutions are those that challenge a well-established <i>intuition</i> about which things work or how things are. But most times it's hard to find such counter-intuitive ideas because we're already trapped in the mindset that finds these ideas bizarre.<p>A good example is RAF airplanes during WWII:
https://www.motherjones.com/kevin-drum/2010/09/counterintuitive-world/<p>My question is: how do you find things that people seem to agree upon, and how do you challenge that? If you can also bring up examples, that would be great.
======
ggm
That WWII example is operations research. The key point here is that the
'counter-intuitive' part has solid data science behind it. P.M.S. Blackett was
a Nobel-class scientist. So I guess the key learning is: be formidably smart.
It might also need to be said both Blackett and Dyson (I believe) bitterly
reflected on their advice being misunderstood.
| {
"pile_set_name": "HackerNews"
} |
What do you think of these websites? - PendulumMoves
I am a 49-year-old woman and up until spring had been a teacher for 15 years. I have been off sick, but in that time I did my business plan to deliver food hygiene certificates. Just as I thought I was ready to launch, I ruptured my tendon and have just come out of a leg plaster and am using my vaco boot to get about. In this time I said I would design a website or 2 for a friend. I used a Bootstrap theme called Rival. I have done 2 websites using the same theme but got more out of the creative one. Let me know your thoughts. Could I do this for a living? hungarianinterpreter.london and youaretheone.london ... I would be interested in any questions you may have in relation to these. I had previously used WordPress and a blog theme. Best wishes, PendulumMoves :-)
======
triast
Not bad. Check out Reddit, maybe /r/webdesign, to get some good feedback on
your approach.
| {
"pile_set_name": "HackerNews"
} |
iPhone X Notch Remover - adamlinscott
https://itunes.apple.com/us/app/notch-remover/id1277467873?mt=8
======
ZeroGravitas
Neat idea for a little app, though I see some confused commenters thinking it
does more than it really does.
Another option is to use a wallpaper with some pure black elements and some
kind of broken up pattern near the top to disguise the notch.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Apple webserver hardware - mechanician
I am trying to decide what type of hardware to buy to use as a personal webserver. I am an Apple guy, so my initial thought was to go with a Mac mini. Is there value in going with a laptop instead? Yes it is more expensive, but it also seems like a more compartmentalized solution as well. Regarding the mini, is it worth it to spend an extra $400 for the server version that comes with OS X Server?
======
jrwoodruff
I don't have the mini, but I'm running OS X 10.4 Server on an old G4 Xserve.
Coming from a graphic design/Mac background, the initial server setup was dead
simple. The workgroup and server admin tools are excellent and allow you to
administer every function of the server remotely. It ships with Apache, Tomcat,
JBoss, MySQL, etc. and provides nice interfaces to administer all services.
When I bought the Xserve it was running Server 10.2; I considered
updating it to a regular 10.4 or linux install, but found the tools and other
utilities (and stability) convenient enough to purchase a copy of 10.4 server.
Hope that helps.
All said, I love my mac server :)
------
towndrunk
I'm using a mini for a server in the home office. Basically, it's just a file
server but I do have Tomcat and MySQL on there as well. Like the other poster
said, it's dead simple to set up and work with.
| {
"pile_set_name": "HackerNews"
} |
Keyboard Not Found. Press F1 to Continue. Why? - samdung
https://alphahole.net/?p=1011
======
qubex
This is spectacularly badly written, it’s truly bafflingly bad.
| {
"pile_set_name": "HackerNews"
} |
Blade Runner 2049 underperformance at box office a 'mystery' to Denis Villeneuve - doener
https://www.yahoo.com/entertainment/blade-runner-2049-underperformance-box-office-mystery-denis-villeneuve-despite-career-best-reviews-140047845.html
======
tqh
After Ghost in the Shell, I don't watch remakes in the cinema. I'll watch it
when it comes to Netflix.
~~~
anotheryou
Anything is better than that Ghost in the Shell butchery :)
I really liked the sound on this one, might be worth going to the cinema for.
------
fractallyte
Hmm... perhaps because it was rubbish?
I'm deeply suspicious of anyone who claims to have loved it.
| {
"pile_set_name": "HackerNews"
} |
Why Unicode Won’t Work on the Internet (2001) - jordigh
http://www.hastingsresearch.com/net/04-unicode-limitations.shtml
======
lcuff
This article was written before UTF-8 became the de-facto standard. According
to Wikipedia, UTF-8 encodes each of the 1,112,064 valid code points. Much more
than Goundry's (the author's) 170,000. Goundry's only complaint against UTF-8
is that at the time, it was one of three possible encoding formats that might
work. Since it has now been widely embraced, the complaint is no longer valid.
In short, Unicode will work just fine on the internet in 2016 as far as
encoding all the characters goes. Problems having to do with how ordinal
numbers are used, right-to-left languages, upper-case/lower-case anomalies,
different glyphs being used for the same letter depending on the letter's
position in the word (and many other realities of language and script
differences) all need to be in the forefront of a developer's mind when trying
to build a multi-lingual site.
~~~
WayneBro
> In short, Unicode will work just fine...
> Problems............all need to be in the forefront of a developer's mind
> when trying to build a multi-lingual site.
It will work. Just fine though? It sounds like way too much work!
~~~
Dylan16807
Unicode handily solves the problem of storing text.
Manipulating text, though, is inherently nightmarish. No format can prevent
that.
------
TazeTSchnitzel
UTF-16, and non-BMP planes, were devised in 1996. The author seems to have
been 5 years late to the party.
> The current permutation of Unicode gives a theoretical maximum of
> approximately 65,000 characters
No, UTF-16 enables a maximum of 2,097,152 characters (2^21).
> Clearly, 32 bits (4 octets) would have been more than adequate if they were
> a contiguous block. Indeed, "18 bits wide" (262,144 variations) would be
> enough to address the world’s characters if a contiguous block.
UTF-16 provides 21 bits, 3 more than the author wants.
Except they're not “in a contiguous block”:
> But two separate 16 bit blocks do not solve the problem at all.
The author doesn't explain why having multiple blocks is a problem. This works
just fine, and has enabled Unicode to accommodate the hundreds of thousands of
extra characters the author said it ought to.
Though maybe there's a hint in this later comment:
> One can easily formulate new standards using 4 octet blocks (ad infinitum) –
> but piggybacking them on top of Unicode 3.1 simply exacerbates the
> complexity of font mapping, as Unicode 3.1 has increased the complexity of
> UCS-2.
They would have preferred if backwards-compatibility had been broken and
everyone switched to a new format that's like UTF-32/UCS-4, but not called
Unicode, I guess?
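(A small sketch of the mechanism being described, in Rust: a code point outside
the BMP becomes a surrogate pair in UTF-16 and four bytes in UTF-8. Purely
illustrative.)

    fn main() {
        let c = '\u{1F600}'; // an emoji, outside the Basic Multilingual Plane
        let mut buf = [0u16; 2];
        let units = c.encode_utf16(&mut buf);
        println!("UTF-16 code units: {:04X?}", units); // [D83D, DE00]
        println!("UTF-8 bytes: {}", c.len_utf8());     // 4
    }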
~~~
jordigh
Maybe the errors in the article are more a statement of how complicated and
improperly communicated Unicode was... and mostly still is! While I think I
understand most of how UTF-8 works, I still have to read and re-read how
codepoints and planes and encodings and decodings work together. It's a pretty
complicated beast that could very easily be misunderstood when it was less
popular than it is now.
It's still widely misunderstood today.
~~~
TazeTSchnitzel
I'm not sure that's fair, Unicode's encodings are pretty straightforward,
particularly compared to some other character sets. Most of the complexity
comes above the encoding level.
~~~
0x0
It also doesn't help that the classic LAMP stack has very confusing defaults
and badly named functions:
* PHP has functions named "utf8_encode()" and "utf8_decode()", when they should have been called "latin1_to_utf8_transcode()" and "utf8_to_latin1_transcode()"
* MySQL for the longest time used latin1 as a default character set, then introduced an insufficient character set called "utf8" which only allows up to 3 bytes, not enough for all possible utf8 encoded codepoints, then introduced a proper implementation called "utf8mb4".
* mysql connectors and client libraries often default their "client character set" setting to latin1, causing "silent" transcodes against the "server character set" and table column character sets. Also, because their "latin1" charset is more or less a binary-safe encoding, it is very easy to get double latin1-to-utf8 transcoded data in the database, something that often goes by unnoticed as long as data is merely received-inserted-selected-output to a browser, until you start to work on substrings or case insensitive searches etc.
* In Java, there are tons of methods that work on the boundary between bytes and characters that allow not specifying an encoding, which then silently falls back to an almost randomly set system encoding
* Many languages such as Java, JavaScript and the unicode variants of win32 were unfortunately designed at a time where unicode characters could fit into 16bits, with the devastating result that the data type "char" is too small to store a single unicode character. It also plays hell on substring indexing.
In short, the APIs are stacked against the beginning programmer and don't
make it obvious that when you go from working with abstract "characters" to
byte streams, there is ALWAYS an encoding involved.
~~~
jordigh
Does any programming language get Unicode right all the way? I thought Python
did it mostly correctly, but for example with combining characters, I
would argue that it gets it wrong if you try to reverse a Unicode string.
~~~
deathanatos
My basic litmus test for "does this language support Unicode" is, "does
iterating over a string get me code points?"¹
Rust, and recent versions of Python 3 (but not early versions of Python 3, and
definitely not 2…) pass this test.
I believe that all of JavaScript, Java, C#, C, C++ … all fail.
(Frankly, I'm not sure anything in that list even has built-in functionality
in the standard library for doing code-point iteration. You have to more or
less write it yourself. I think C# comes the closest, by having some Unicode
utility functions that make the job easier, but still doesn't directly let you
do it.)
¹Code units are almost always, in my experience, the wrong layer to work at.
One might argue that code points are still too low level, but this is a basic
litmus test (I don't disagree that code points are often wrong, it's mostly a
matter of what can I actually get from a language).
> _try to reverse a Unicode string._
A good example of where even code points don't suffice.
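(A sketch of both points in Rust, since it's one of the languages that passes
the litmus test; the example string is made up.)

    fn main() {
        let s = "noe\u{0301}l"; // "noël" written with a combining acute accent
        println!("UTF-8 bytes: {}", s.len());            // 6 code units
        println!("code points: {}", s.chars().count());  // 5 scalar values
        let reversed: String = s.chars().rev().collect();
        // The combining accent now attaches to the wrong base character, so
        // even code-point-level reversal isn't "reversing the text".
        println!("naive reverse: {reversed}");
    }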
~~~
TorKlingberg
I basically agree with you, but note that code points are not the same as
characters or glyphs. Iterating over code points is a code smell to me. There
is probably a library function that does what you actually want.
~~~
deathanatos
I explicitly mention exactly this in my comment, and provide an example of
where it breaks down. The point, which I also heavily noted in the post, is
that it's a litmus test. If a language can't pass the iterate-over-code-points
bar, do you really think it would give you access to characters or glyphs?
------
jimjimjim
[http://utf8everywhere.org/](http://utf8everywhere.org/)
a very useful site especially when having to explain what utf8 is to other
devs when working in a windows shop.
~~~
wmccullough
'working in a windows shop.'
Surely you're flattering yourself.
~~~
jimjimjim
no seriously. I'm a windows application dev, and I have been for >decade.
If all you see around you is wchar_t and LPCWSTR then that is what unicode
means.
------
herge
Man, UCS-2 is the pits. I still remember fighting with 'slim-builds' of python
back in the day.
Any critique of Unicode while not assuming UTF-8 (which allows for more than 1
million code points) is a bit suspect in my opinion. The biggest point against
UTF-8 might be that it takes more space than 'local' encodings for Asian
languages.
~~~
mjevans
Wikipedia has a summary of comparisons:
[https://en.wikipedia.org/wiki/UTF-8#Compared_to_UTF-16](https://en.wikipedia.org/wiki/UTF-8#Compared_to_UTF-16)
Advantages
* Byte encodings and UTF-8 are represented by byte arrays in programs, and often nothing needs to be done to a function when converting from a byte encoding to UTF-8. UTF-16 is represented by 16-bit word arrays, and converting to UTF-16 while maintaining compatibility with existing ASCII-based programs (such as was done with Windows) requires every API and data structure that takes a string to be duplicated, one version accepting byte strings and another version accepting UTF-16.
* Text encoded in UTF-8 will be smaller than the same text encoded in UTF-16 if there are more code points below U+0080 than in the range U+0800..U+FFFF. This is true for all modern European languages.
* Most communication and storage was designed for a stream of bytes. A UTF-16 string must use a pair of bytes for each code unit:
* * The order of those two bytes becomes an issue and must be specified in the UTF-16 protocol, such as with a byte order mark.
* * If an odd number of bytes is missing from UTF-16, the whole rest of the string will be meaningless text. Any bytes missing from UTF-8 will still allow the text to be recovered accurately starting with the next character after the missing bytes.
Disadvantages
* Characters U+0800 through U+FFFF use three bytes in UTF-8, but only two in UTF-16. As a result, text in (for example) Chinese, Japanese or Hindi will take more space in UTF-8 if there are more of these characters than there are ASCII characters. This happens for pure text[nb 2] but actual documents often contain enough spaces and line terminators, numbers (digits 0–9), and HTML or XML or wiki markup characters, that they are shorter in UTF-8. For example, both the Japanese UTF-8 and the Hindi Unicode articles on Wikipedia take more space in UTF-16 than in UTF-8.[nb 3]
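A quick way to see the size trade-off described above, using Python's codecs (the sample strings are arbitrary):

    samples = {
        "English":  "The quick brown fox jumps over the lazy dog",
        "Japanese": "いろはにほへとちりぬるを",
        "Markup":   "<p>いろは</p>",
    }
    for name, text in samples.items():
        u8 = len(text.encode("utf-8"))
        u16 = len(text.encode("utf-16-le"))   # 2 bytes per BMP code unit, no BOM
        print(f"{name:8s}  utf-8: {u8:3d} bytes   utf-16: {u16:3d} bytes")

The pure-kana line is smaller in UTF-16, while the English and markup-heavy lines are smaller in UTF-8, which is the point made above.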
~~~
deathanatos
The biggest disadvantage of UTF-16, IMO, is that programmers blindly assume
that they can index into the string as if it were an array, and get a code
point out — which you can _not_ ; you'll get a code unit, which is slightly
different, and might not represent a full code point (let alone a full
character).
UTF-8's very encoding quickly beats this out of anyone who tries, whereas it's
easy to eke by in UTF-16. The real problem is that the APIs allow such
tomfoolery. (Some have historical excuses, I will grant, but new languages are
still made that allow indexing into _code units_ without it being obvious that
this is probably not what the coder wants.)
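A small Python illustration of the code-unit trap with an astral (non-BMP) character:

    ch = "\U0001F600"                        # 😀 GRINNING FACE, U+1F600
    print(len(ch))                           # 1 code point (Python 3 string)
    print(len(ch.encode("utf-16-le")) // 2)  # 2 UTF-16 code units (a surrogate pair)
    print(len(ch.encode("utf-8")))           # 4 UTF-8 bytes
    # A language whose s[i] returns UTF-16 code units hands back half a surrogate
    # pair here instead of a full code point.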
~~~
int_19h
Strictly speaking, this is a disadvantage of languages that have strings as
first-class types _and_ allow indexing on strings in the first place (and
specify it to have this semantics).
For the most part, the developer shouldn't really care about the internal
encoding of the string, but the language/library should also not expose that
to them.
------
mcaruso
More like, "Why UCS-2 Won’t Work on the Internet".
------
khaled
An extensive and very informative, though a bit sarcastic, rebuttal (from 2001
as well): [https://features.slashdot.org/story/01/06/06/0132203/why-
uni...](https://features.slashdot.org/story/01/06/06/0132203/why-unicode-will-
work-on-the-internet) (via
[https://twitter.com/FakeUnicode/status/786324531828838400](https://twitter.com/FakeUnicode/status/786324531828838400)).
------
oconnor663
> Thus is can be said that Hiragana can form pictures but Katakana can only
> form sounds
That sounds really weird to me. Does that sound right to any native Japanese
speakers here?
~~~
xigency
This writing is sort of a strange metaphor, but I guess the point the author
makes is that kanji can be transliterated as hiragana but not katakana. The
writer goes on to talk about traumatic brain injuries so I guess he's aiming
at the cultural value of each syllabary.
I'm not a native speaker, but if I were to make an equally strange metaphor as
the author, katakana feels like writing in all capital letters.
~~~
achamayou
Kanji can be transliterated either way, and both forms are lossy since there
are so many homonyms and kana only encodes sounds. It's traditional to
annotate difficult Kanji pronunciation with small Hiragana called Furigana,
for example in children's books. But it could be done all the same in
Katakana. Modern Chinese words that Japanese borrows are usually translated in
Katakana for example.
~~~
klodolph
Kana mostly contain just sounds, but do contain some morphological
information—there are homonyms in kana as well, after all. This is a bit rare,
however.
------
zoom6628
The paper is far more interesting for its informative background on the use of
the character sets in the CJK region.
------
david90
Good to see we've had a breakthrough after 15+ years.
------
jbmorgado
Why "640K ought to be enough to everyone"
~~~
Dylan16807
The opposite, really. It's arguing why 16 bits is not enough. But it does this
while casually dismissing the solutions we already had.
------
reality_czech
Hilarious, a document from 2001 talking about why Unicode is unsuitable to
"the orient." At the end, I half expected to read that "Negroes have also
proved to be most unfavorable to it."
~~~
darkengine
Simply because of his use of an outdated synonym of "Asia"? If anything, I got
the impression that the author was critical of Westerners for being
insensitive to the needs of Asian computer users. I think this is because of
Han Unification, but he does not mention it by name.
~~~
sundarurfriend
In this author's case, I agree that the intent doesn't appear to have been
malicious. But calling it just "an outdated synonym of Asia" is like saying
Negro is just an outdated synonym for Black (after all, negro is just Spanish
for black). In both cases, it ignores heavy historical and cultural
implications that come with the words.
~~~
hueving
What are the cultural implications of using Orient as a synonym for Asia?
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Any developer who transitioned from Tech to Quant Trading? - mightymosquito
As a developer what did it take for you to transition from Technology to Quantitative trading, and how does it compare to your life working in Technology?
======
praeconium
It's bound to be extremely hard because even though quants code and there should
be many similarities with software dev - it's a different beast. Many scientists
make the switch and run major firms, but someone like a front-end dev making the
switch would be interesting to see. More likely it's something outside of the
software dev role that got them in.
Quant crowd hangs on NuclearPhynance forum, but for what You are asking
WallStreetOasis is the place.
But..
1\. Not really a PhD, but You have to learn math somewhere.
2\. Start trading crypto Yourself.. there are so many exchanges giving You
direct access for free.. fetch a market feed, reconstruct the limit order book in
real time, backtest some strategies, do deep learning models. They do the
same. (A minimal order-book sketch follows below.)
3\. You have to be lucky, just like anywhere. There are lots of unsatisfied
people in tech though it pays more than ever.
4\. No idea.
5\. Two years or so, but previous roles were quantitative.
I am trying to make transition myself, let me know if You need more info..
www.vladovukovic.com
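For the order-book reconstruction mentioned in item 2, here is a minimal illustrative Python sketch; the (side, price, size) update format is an assumption, and real exchange feeds (depth snapshots, sequence numbers, etc.) differ in detail:

    # Toy L2 order book: apply (side, price, size) updates; size 0 removes a level.
    class OrderBook:
        def __init__(self):
            self.bids = {}   # price -> size
            self.asks = {}

        def apply(self, side, price, size):
            book = self.bids if side == "bid" else self.asks
            if size == 0:
                book.pop(price, None)
            else:
                book[price] = size

        def best_bid(self):
            return max(self.bids) if self.bids else None

        def best_ask(self):
            return min(self.asks) if self.asks else None

    book = OrderBook()
    for update in [("bid", 100.0, 2.0), ("ask", 100.5, 1.0), ("bid", 100.0, 0)]:
        book.apply(*update)
    print(book.best_bid(), book.best_ask())   # None 100.5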
~~~
mightymosquito
What would you suggest I read up on in quantitative finance and mathematics to
get started with algorithmic trading.
My background: I have a computer science graduate with about 5 years of
experience building very high throughput consumer facing applications as a
backend developer.
------
thiago_fm
You can do it by yourself: if you create any algo that is able to perform
above a 50% hit rate, you can get infinitely rich. Why work for a company if you
can do it? Or why would a company that does it even exist if somebody solo can
do this?
I'm a fundamentalist, so I hardly believe anybody can reliably make money
doing this. Maybe with HFT or insider information, which are both fields I
don't really wanna compete in, as they both feel like cheating to me (and the
second is illegal).
If anybody could reliably make money doing trades of any kind that don't
abuse either insider information or system latency, they could just as well
predict the future, the lotto numbers, etc.
The people I think work in this field are working for big companies and
investing other people's money; they generally have a huge amount under
management and by themselves can sort of try to manipulate the market, usually
have a taste of inside information, and generally get outperformed in the long
term by somebody who just reads balance sheets.
But this is a comment from an outsider who has been consistently making more
gains than the S&P (by a tight margin, dunno if this will last, maybe I'll just
regress towards the mean) for a decade, just checking balances sometimes. I
just invest in companies which are well managed, by my criteria, while I work
fulltime as a software dev.
Just try to imagine how many publicly traded companies exist, and how bad
management can be. If you could find the good management in good markets, and
also try to understand more about how the government will influence it, you
could create much more powerful methods.
I've done a fair bit of trading though, mostly flipping IPOs, but I don't believe
somebody can make money doing trades.
------
godelmachine
By Quantitative Trading, you mean full-time trading, right? Like buying and
selling stocks. I don't think this designation requires knowledge of
algorithms.
~~~
mightymosquito
By quantitative trading I refer to people who do algorithmic trading for a
living.
People from tech generally end up becoming quantitative developers (implementing
trading strategies other people make), and some go even further and make their
own trading strategies (which is what traditional quants do. Correct me if I am
wrong here).
I would like to know the experiences of people who transitioned to this field
from pure technology. Some key points are:
1\. What did you have to learn as an experienced developer to break into this field (do you really NEED a PhD in Mathematics or Finance?!?!)
2\. What's the work like?
3\. Does it pay as much as it's hyped to be?
4\. How does it compare to your life back in Tech?
5\. How long after working in tech did you make the switch?
| {
"pile_set_name": "HackerNews"
} |
How to send a link from your laptop to a whatsapp friend? - Husain_Hashem
http://www.linkwik.co
======
desbo
web.whatsapp.com
| {
"pile_set_name": "HackerNews"
} |
‘Blockchain’ is meaningless - wglb
https://www.theverge.com/2018/3/7/17091766/blockchain-bitcoin-ethereum-cryptocurrency-meaning
======
SeanLuke
Cryptocurrency, welcome to AI's long nightmare.
AI has forever had buzzword-compliant terms. Consider "Agent", which describes
a very specific thing (namely autonomy), but that didn't stop everyone and
their dog outside the field from calling their software a "database agent" or
a "compiler agent" or whatnot, equating "agent" with "program". Indeed there's
an entire non-AI _field_ called "Agent Based Modeling" \-- of which I am a
participant -- which completely misunderstands the term. So AI gave up and
switched to "Intelligent Agent", which was unfortunately an even _more_
buzzword-compliant word. Now we started seeing "intelligent graphics agent" or
"intelligent login page agent". Finally in the late '90s AI moved to
"Autonomous Agent", a word straight from the Department of Redundancy
Department. But it worked! Nobody outside of AI seems to use it: my theory is
that most people don't really know exactly what autonomous means.
~~~
camillomiller
I mean... "crypto" !!! Can you imagine what cryptographers had to witness
every day for a couple of years now, while the term was increasingly skewing
towards a completely misleading meaning?
~~~
nailer
It's like if people involved in:
\- ATMs
\- eCommerce
\- Foreign intelligence and signals
or any other similar field where crypto is essential started calling
themselves 'crypto'. Yes, these field use crypto, but no, they're not crypto.
Just like cryptocurrencies aren't crypto.
This is crypto: [https://cr.yp.to](https://cr.yp.to)
This is crypto:
[https://crypto.stackexchange.com/](https://crypto.stackexchange.com/).
Things that use crypto are not crypto.
~~~
bdamm
The meaning of words changes when the rest of humanity takes them for its own.
Claiming "crypto are not crypto" will just leave you behind as the guy with
his fist in the air cursing the world.
Nuclear. Hacker. Patriot. Rubber. The audience matters. Besides, I doubt that
any cryptanalyst would mind writing out cryptography any more than they'd mind
writing out TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256.
~~~
nailer
I'm stating crypto is cryptography. There's not multiple cryptographies as far
as I'm aware so 'crypto are' doesn't make sense.
And yes, words change over time, but in that case I can just refer to (all)
cryptocurrency as 'Bitcoin' since that's the common definition. In fact it's
far more popular to refer to cryptocurrency as Bitcoin than it is to call that
'crypto'. Short version: this one isn't settled yet.
How did nuclear, patriot and rubber change?
~~~
bdamm
Nuclear was a branch of physics, then became a weapon, then became a US senate
procedural change "going nuclear". Patriot was an anti-government rebel, then
a pro-government supporter, then a missile, then an act to boost government
and corporate controls within digital content, and sometimes an American
football player. Rubber was a tree, then a commodity, then a condom. "A
rubber".
~~~
nailer
Nuclear still means to do with nuclear energy: it being used as a simile
doesn't change that meaning.
'patriot' means someone who stands for what is perceived to be American
values. A missile being named after it doesn't change that.
Rubber is more interesting and perhaps a better comparison: 'are you wearing a
rubber?' during sex or 'when the rubber hits the road' in a speech, or 'do you
have a rubber' during an exam.
So let's say someone says they 'specialise in rubber': it seems
reasonable to believe they mean the material: they don't mean condoms or tyres
or erasers.
------
sheepz
The blockchain hype is crazy.
My favorite truism that I see people spouting is that Bitcoin (or
cryptocurrencies in general) is worthless, but blockchain is a huge innovation.
Little do these people understand that the true innovation is Bitcoin's proof
of work as a consensus mechanism with the economic incentive of mining. Most
of these "blockchains" are just distributed databases, which are not
trustless, decentralized etc.
~~~
tylersmith
Yeah it's really weird to me because while you can do cool things with
blockchains, on their own they're just a data structure. Without a trustless,
decentralized, and permissionless consensus model it's basically just a big
git repo. The blockchain data structure, a tree of blocks of state changes, is
not at all a recent innovation. And not something that would typically get
hyped by the average non-programming related media.
~~~
tomtheelder
When people use the term blockchain they don't just mean a chain of state
changes, they mean one that is backed by the sort of decentralized consensus
model you described. The nomenclature isn't ideal, but the convenience of
having one word to describe a somewhat complex topic wins out.
I mean even the Wikipedia article on blockchain treats it this way [1],
mentioning the consensus model in the initial blurb, and asserting that
Satoshi invented the blockchain.
[1]
[https://en.wikipedia.org/wiki/Blockchain](https://en.wikipedia.org/wiki/Blockchain)
~~~
tylersmith
"Distributed ledger" [1] is a far better term for here IMO. If there's
quibbling about "distributed" or "decentralized" then just change those terms
as appropriate for the system under discussion.
[1]
[https://en.wikipedia.org/wiki/Distributed_ledger](https://en.wikipedia.org/wiki/Distributed_ledger)
------
Ajedi32
git has been using a blockchain for the last 12 years, since long before
Bitcoin:
Each commit includes the hash of its parent commit. These commits form a
chain. Changing the contents of any past commit would break the chain. Each
user of the repository keeps a copy of the blockchain on their PC, and they
can fetch new "blocks" from any other peer with `git fetch` or `git pull`. The
consensus model is based on human review of the underlying code; humans get to
decide whether to include a block in their version of the repository.
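A toy sketch of that hash-chain property (not git's actual object format, which hashes trees, authors and messages; just the linking idea):

    import hashlib

    def block_hash(parent_hash, payload):
        return hashlib.sha256((parent_hash + payload).encode()).hexdigest()

    chain, parent = [], ""
    for payload in ["commit A", "commit B", "commit C"]:
        h = block_hash(parent, payload)
        chain.append({"parent": parent, "payload": payload, "hash": h})
        parent = h

    chain[0]["payload"] = "commit A (edited)"   # tamper with history

    parent = ""
    for i, blk in enumerate(chain):             # recompute every hash from scratch
        recomputed = block_hash(parent, blk["payload"])
        print(i, "ok" if recomputed == blk["hash"] else "broken")
        parent = recomputed
    # prints: 0 broken / 1 broken / 2 broken -- editing one block invalidates
    # every block after it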
~~~
tomtheelder
Common usage of the term blockchain implies a trustless, distributed consensus
model. Even the Wikipedia article on blockchain references this in its
opening blurb, and asserts that Satoshi invented the blockchain [1]. Google
trends also shows that the term practically didn't exist prior to widespread
recognition of Bitcoin (even if you adjust the window to hide the recent
spike, it was virtually unused) [2].
It's not the best nomenclature in the world, but that's just what it has come
to mean.
[1]
[https://en.wikipedia.org/wiki/Blockchain](https://en.wikipedia.org/wiki/Blockchain)
[2]
[https://trends.google.com/trends/explore?date=all&q=blockcha...](https://trends.google.com/trends/explore?date=all&q=blockchain)
~~~
Ajedi32
> trustless, distributed consensus model
Still sounds like git to me. The main difference is that Bitcoin's consensus
model is fully automated, whereas git's relies on humans making decisions on
which commits to include in their copy of the repo. (Though I suppose "trust"
can indeed be one factor humans use when making such a decision.)
~~~
spookthesunset
> The main difference is that Bitcoin's consensus model is fully automated
Except when it isn't. For example, the DAO "hack" resulted in humans
overriding the automation to roll back history to bail out certain
"investors". Or all the bitcoin forks. Or the fact that bitcoin mining is
largely done by about three mining groups in China.
So Bitcoin is "trustless", but really believing in bitcoin involves every bit
as much trust (or dare I say, faith) as any other thing in your life does.
> Though I suppose "trust" can indeed be one factor humans use when making
> such a decision.
Watching the bitcoin/blockchain hype-train has shown to me that humans require
trust to thrive. To thrive you need to trust that you have a stable
government, that the breakfast cereal you eat doesn't contain lead, that you can
walk past 99.99999% of people without getting stabbed.
Take away trust and suddenly the world get massively more expensive.
Transaction costs go through the roof. You have to personally inspect every
aspect of your breakfast cereal maker and all their vendors (eg: the folks
that made the fertilizer used to grow the grain (don't want toxic chemicals),
the hygiene of the employees operating the production line (don't want piss in
your cereal),etc ). If you don't rigorously verify everything, you'll get
fucked and die. In bitcoin this happens all the time - you can't trust
_anything_ in bitcoin--you can't trust your wallet maker, the printer used to
print a paper wallet, the exchange you use, the software you use, the OS you
run on, you can't trust any of it. Any break in the system _will_ result in
total loss of your bitcoin (and then you'll be called a moron by other
bagholders because you didn't follow all 231 "simple" steps required to secure
bitcoin)...
In short, society can't exist without trust. Bitcoin and the blockchain are
attempts to remove the need to trust each other. The result is a cumbersome,
non-scalable, deliberately inefficient design that _still_ requires human
trust to function. Which is why I personally think they are worthless
technology in search of a problem. Every problem they try to solve still
requires human trust, which renders their use pointless... Might as well just
use git. It is far more efficient, can handle vastly more data, and doesn't
require more electricity than many countries use in order to operate.
~~~
wan23
This is sort of a strawman. No one is saying that trust should be removed from
society in general. Bitcoin proponents think it's desirable to have a way to
store and transfer money without having to trust a single party. Anyone who
has had Paypal freeze their funds or had a customer request a chargeback after
receiving service could tell you why that might be desirable.
------
wintom
It’s not meaningless. Saying Blockchain is meaningless is like saying the web
is meaningless or the internet is meaningless. Just because you either don’t
understand it or you feel it’s too broad does not make it meaningless.
Blockchain is a distributed ledger that guarantees consensus without any
central party. It has many other properties, like making the transactions
public in most cases, and many other important things that are different from
centralized databases and non public databases we have been using for the last
30 years.
~~~
nailer
I don't know why you're being downvoted. That seems a reasonable definition of
blockchain, in much the same way one could give a reasonable definition of an
array as a sequence of items.
The article's premise that 'The term blockchain is misused and frequently
misunderstood' (which is true) means 'Blockchain is meaningless' doesn't hold
water and this is very typical for a Vox technology article.
~~~
jgh
I think gp's definition is still too broad. Consensus is not an inherent
property of blockchains, for example. You can make blockchains without any
consensus at all, or with some central authority prescribing reality. A
blockchain is simply a kind of data structure, like a linked list or a hash
map or an array.
~~~
seanalltogether
Exactly. Is "blockchain" just the data structure, or is it the data structure
+ the protocol for communication + the software to validate it?
Another example might be bit torrent. I personally wouldn't classify bit
torrent as just the data structure. I would say its the data structure, the
protocol and the end clients. However if someone used these technologies to
create a different format and a different protocol, I wouldn't think of it as
bit torrent.
------
AndrewDucker
_In general, if the transactions are gathered together in blocks, and it is
blocks that are secured on the chain using cryptography, and it is designed to
be tamper-resistant and produce immutable records, the system qualifies as a
blockchain_
That works for me.
~~~
tylersmith
"designed to be tamper-resistant and produce immutable records" is really the
only thing important here as far as most people are concerned. The part about
chaining blocks of data together is an implementation detail that may or may
not last long term.
------
Sangermaine
"Blockchain" means a 394% surge in your stock price:
[https://www.bloomberg.com/news/articles/2017-10-27/what-s-
in...](https://www.bloomberg.com/news/articles/2017-10-27/what-s-in-a-name-u-
k-stock-surges-394-on-blockchain-rebrand)
~~~
rtkwe
It also potentially means trouble with the SEC:
[https://www.investopedia.com/news/sec-may-crack-down-
firms-u...](https://www.investopedia.com/news/sec-may-crack-down-firms-using-
blockchain-name/)
------
cornholio
"Blockchain" has come to designate a very well defined concept: any technical
and financial construction designed for speculative investment in unregulated
securities out of reach of traditional authorities, usually having a very
shady, deflationary or ponzi-like economic structure and strong money laundering
potential.
There is nothing fundamentally novel in distributed databases or their
application in finance; we have known how to build "blockchains" since the '70s.
Satoshi's proof of work chains were the first anonymous implementation and
that's what started the "crypto revolution".
~~~
Theodores
I am of the same conclusion, however, the dreaded blockchain has those that
are true believers. They will cite the Estonia example and they will cite
Wikileaks and how that show was able to be kept on the road thanks to Bitcoin
payments that were provided when Visa and their ilk in 'fiat land' decided to
take their payment methods away. They will cite a land registry in some poor
African country and tell you how good it is that their evil government cannot
take the 'peasants' land away.
If that is not enough to suppress your questioning of blockchain they will
also roll out the commerce examples, e.g. the Walmart supply chain or whatever
it is.
You will be told about some of these '541t coins' (as you call them) in
venerated ways, as if the goals of their founders put the efforts of everyone
from Mother Theresa to Elon Musk in the shade as far as noble ambition for
humanity is concerned.
It is like they are trying to fool ninety percent of you for ninety percent of
the time.
Maybe you work in some computing discipline that is tediously dull compared to
the parsnip science that is blockchain. Perhaps you are a small cog in supply
chain management or maybe you are at the cutting edge of some new 3D printing
technology that is revolutionising something important, e.g. dentistry. Up
until now you have felt that you may have been pleasing your clients and
delivering value not just to your company but to the wider world. You think
deeply about any conceivable way that blockchain technology could
revolutionise things for you. Or maybe do what you are doing already but
better because of the magic blockchain.
Even though you know your sector inside and out and your industry pretty well,
your mind draws a blank. This is despite your code being used in many
different parts of business applications. Your only conclusion is that there
is no use whatsoever for blockchain in what you do.
The blockchain bore who has not spent 'a career' doing some useful industry
thing will be able to declare that you are just not getting it. Clearly your
mind is just too limited, you are old in the head and unable to grasp what
blockchain is about, how it works and how it is going to change everything. If
you can't even see one use with all that stuff you know and work with then
clearly you have not understood what blockchain is all about.
By now you have also had time to think about the examples of blockchain use
presented by the blockchain bore. With Wikileaks it was all a scam, it
conveniently kicked off the Arab Spring, confirmed a few things we knew
already, led to the Assange sideshow but lead to little change.
Plus how much does it really cost to host a few web pages? £200000 a year in
hosting fees? What fools they were to sell their bitcoinz!
So what remains?
As you say, gambling on unregulated securities. Paying for illegal things
online is the second use for the dreaded blockchain.
------
IncRnd
The world "blockchain" is not really meaningless. Originally, a blockchain was
a cryptographically chained chain of blocks in a proof of work system. It is
still used that way, but it is also repurposed by other marketing schemes of
many cryptocurrencies.
------
nixpulvis
I think we all agree it's some form of "secure" ledger. Otherwise yea, it's not
a super well defined term. Regardless it's obviously over hyped and buzzed.
Git for example could be called a blockchain if you really wanted to.
------
philipodonnell
> The term has become so widespread that it’s quickly losing meaning.
I mean, that's how language works, right? Words mean things at certain points
of time and then over time those things sometimes change and new words appear
to better distinguish between the new things. Or words get contextualized
within a specific industry and take on different meanings depending on your
perspective.
Such an evolution does not mean a word is "meaningless" it just indicates
you'll have to provide more context in order to communicate your idea, which,
again, is how language works.
------
CodexArcanum
"Blockchain" is a data structure, a pretty well defined one. The buzzword is
basically a synonym for "distributed (decentralized?) system." Buzzwords are
usually annoying and misused. Object-Oriented Programming as practiced isn't
really what was intended by the term. Memes with cat pictures are only vaguely
a subset of Dawkins' memes. "The Cloud," ugh, let's not even.
------
granaldo
The killer app of blockchain is people trading coins in and out speculating on
altcoins [https://www.coingecko.com/en](https://www.coingecko.com/en)
------
brwsr
Well, at least less meaningless than the widely used word "Economy"
representing our financial system of waste, where consumers have to keep
buying stuff to keep the system intact.
------
xg15
While the general gist is true, so far you could separate the buzzword uses
from the non-buzzword uses pretty well by checking whether or not they use an
article before the word "blockchain". Though this bit then kind of worries me:
> _Bitcoin, Ethereum, and other cryptocurrencies have entered the mainstream
> discourse, but they’ve also been joined by a concept that is widely
> circulated, but poorly understood: “the blockchain” or just “blockchain.”_
I fear they are learning...
~~~
bertil
I honestly can’t tell from context whether the article is a sign of
understanding. To be blunt, I’m not sure I can tell for people who talk about
“(the) (I/i)nternet” either, and I met some of those who invented it.
------
arisAlexis
Anyone remember the Newsweek title "Internet? Bah!" from around 1995?
~~~
flukus
The internet is a communication platform that allows you to send messages and
share files between millions/billions of machines in a global network. A simple
definition that explains its capabilities, something blockchain lacks.
~~~
arisAlexis
a blockchain is an immutable ledger that keeps transactions transparent and
untampered. It can be used for all money transactions and also for asset
ownership and log transparency by governments.
It is very simple.
------
_pmf_
Well, not meaningless ... it can mean whatever you want, and it will not be
strictly wrong.
~~~
BinaryIdiot
I think that is the point in the article; it is used in many different ways so
simply saying you're using the blockchain doesn't have much meaning and yet
people look at it as a sign of a forward facing company.
------
empath75
It's interesting that an article that provides at least 6 different meanings
for it says that it's meaningless.
~~~
rs86
Journalists suck
------
jacinabox
With all due respect, that's crap. Blockchain means exactly what Satoshi
Nakamoto defined it to mean in his seminal paper.
~~~
organsnyder
"Means exactly"... to whom?
~~~
astrodust
Coinbeards!
------
jbob2000
It's about as meaningless as the word "communism". Tough to come up with a
definition for that, but I still know what it "means".
I guess you just have to read a bunch of different sources to get the full
picture of what a blockchain is. It's kind of hard to communicate abstract
ideas, so you need a variety of sources to paint the picture in your head.
~~~
sheepz
I think communism is pretty well defined, in its ultimate form its the
abolition of private property. This is in strict opposition to capitalism.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: How to monitor activity of remote employees? - kull
We are a fast-growing company, but 100% remote, with team members spread across the world. With fewer than 10 team members, everybody was working 24/7 and it was easy to spot when somebody was dropping the ball. With a team of 20, it is now difficult to keep an eye on people, and some employees are taking advantage of being remote. This is especially true of managers, whose responsibility is to help others and keep track of things, so the number of emails, calls, and chats is not a good indicator of their engagement.
The ideal tool would be super simple and take screenshots a few times a day. Anything you can recommend?
======
nherment
Hi Daniel,
I won't comment on your thought process and won't try to change your mind. I'm
running a remote team as well (software development). Objectively, any tool
that you install on your employees computers can likely be circumvented by the
most motivated worker.
Monitoring will set the expectations for employees to spend XX hours on the
computer doing work related tasks. This is only a proxy for what you are
really trying to achieve: high productivity.
There are a few caveats to the approach of monitoring employees for hours
logged:
\- it sets the expectation as time spent instead of productivity
\- maximizing hours in front of a computer is counter productive to maximizing
productivity. This is mostly true for creative work (programming, design,
problem solving, etc.). I won't post links, don't take my word for it, do your
research.
\- you will miss out on a lot of very good employees. Top performers can
(usually) easily get a job. Monitoring employees has a negative connotation.
This is anecdotal but I know a few top programmers and they will run away as
fast as possible from your company.
Rather than intrusively monitoring, look at your high performers and use that
as your base productivity. That could be your expectations for all employees
(again, I don't necessarily condone this but you seem to have very high
expectations). Add a bit of flexibility to this as not everybody is always top
performing (personal life, preoccupations, etc.).
Then, the non-top-performers will need your help in getting to top-performer
level. By coaching them, you can then focus on these lower-performing team
members and get them on a growth path to meet your expectations. You will have
to cut people whom you think will never meet your expectations.
best of luck
------
ohjeez
As detaro wrote: Measure results, not behavior.
Are deadlines missed? Products sub-standard? Then you have something to worry
about (and remote work is not a factor).
Are team members regularly having one-on-one meetings with their managers? Are
the employees comfortable enough that they can be open with the manager about
any blocks or problems encountered? That's a good sign. If you want to count
anything, keep track of one-on-ones.
This may be of some use (free reg required).
[https://www.safaribooksonline.com/library/view/the-remote-
wo...](https://www.safaribooksonline.com/library/view/the-remote-
workers/9781491995129/)
~~~
dailyvijeos
Exactly. Treat adult employees like adults, Netflix-style.
Another aspect is near-realtime, organization-visible projects, issue status
(and a ticket system) and KPIs that show attribution, so people gain
visibility and credit and you avoid “what does person X do all day?” This also
makes performance reviews more automated for both sides of the table.
Having unified, uniform tiers of performance management processes that ensure a
minimum of staff and management fudging, plus including qualitative, 360
feedback, also helps scale the management and HR layers.
~~~
ohjeez
I've always found that if you trust people, and you communicate openly with
them (such as explaining what you want), you are rarely disappointed.
~~~
dailyvijeos
I wouldn’t say “always.” Also, trust isn’t binary: it’s earned and varies. You
usually wouldn’t give a new employee sole, unrestricted access to megadollars
and you wouldn’t micromanage microtransactions of a 10 year manager. Delegate
with risk management, e.g., common-sense. If a person starts making big
mistakes, then you take non-nuclear corrective actions/find out what’s going
on. On the other side, if someone improves, reinforcing actions are possible.
------
11thEarlOfMar
> some employees are taking an advantage of being remote.
Would be interested in what you're measuring or observing to support this.
I worked remote for 10 years.
Two things come to mind:
\- Work Backwards: Determine what the results are that you expect are
reasonable, discuss and agree with each employee what those results are, and
then check back periodically to see where they are in achieving those results.
Being remote means exactly that you have to trust them to be committed and
diligent in doing their work. Each employee is in a different environment,
different time zone, different family situation. Trying to apply a common
behavior pattern will just frustrate you.
\- Manage it in the front end: When hiring, specifically look for people who
are self-motivated. Yes, this can be difficult to judge in an interview or
two, and again, especially remote, but there are markers such as what they've
got on git hub, patterns of completing prior work, excitement they exude for
technology, expressions of curiosity towards your products/services, entering
constructive solution mode when presented with problems, and others.
Above all, good luck!
------
stephenr
> With below 10 team members, everybody was working 24/7
Wut?
> Anything you can recommend?
This isn't a technical problem and trying to solve it by completely invading
your employee's privacy is a fucking horrible "solution".
------
al2o3cr
With a team of 20, it is now difficult to keep an eye on people
Don't worry, whatever tool you find should definitely help solve your "too
many people in the company" problem.
------
dozzie
Erm... why the hell do you care about them _working_? You should care about
them _producing results_. Even more so if the team is remote. Results should
be much easier to see than the process of working, since they are already
delivered to the company anyway.
------
detaro
> _The ideal tool will be super simple and take screenshots few times a day.
> Anything you can recommend?_
That sounds like an incredibly bad idea. That's something the cheapest-rank
freelancer websites can do (they are hated anyways, but people have little
alternatives), you don't want your employees to feel like that. If they don't
reach the results expected of them, address that, not when and how they do it.
------
kull
Thank you all for your answers.
Few things to add: we are very technical co-founders, dealing with 'people' is
something we are still learning.
Also, the trouble is with customer support, not dev team.
But I see what you all saying, and I do agree, this is not a great solution.
It is new to me, to try to monitor employees, I had never had any issues of
this kind. On top of that the problem is really with one or two employees, so
making the entire team suffer is silly.
We will talk to those employees and probably just monitor the productivity
with weekly goals etc.
~~~
gregjor
> we are very technical co-founders, dealing with 'people' is something we are
> still learning.
Maybe employee 21 should be someone with people skills. Successful and
productive teams and companies focus on people, not on tech.
------
dikaio
Hi Kull. I’ve tried numerous solutions out there and while none of the ones
I’ve tried coompletely satisfied my requirements I’ve found ActivTrak to be
the most reliable product. There may be better products I haven’t found but
for small to medium sized teams this might fit.
[https://activtrak.com](https://activtrak.com)
| {
"pile_set_name": "HackerNews"
} |
Changing Education Paradigms - a wonderful animation - Fuzzwah
http://www.youtube.com/watch?v=zDZFcDGpL4U&hd=1
======
Fuzzwah
I hope I know everything and am rich by the time I have kids, so I can quit my
job and home school them.
| {
"pile_set_name": "HackerNews"
} |
One Thing You Can Do to Jack Up Your Web Traffic - nickbarron
http://www.nickbarron.co/one-thing-that-can-jack-up-your-web-traffic-and-a-free-tool-to-help/
======
zrail
Copyhackers[1] is a _great_ set of ebooks that teach you how to write better
copy. They have an entire book just about headlines, iirc. Highly recommended.
[1]: [http://copyhackers.com](http://copyhackers.com)
------
sam6
The link for the excel tool/plugin is not working, I will unsubscribe to the
newsletter Nick is sending.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: is it ethical to copy ideas from a similar site? - alain94040
I need the advice of the HN community: a few days ago, a site similar to ours came up with a much nicer web interface and flashy Ajax. It made some shortcomings of our site very clear. So we re-implemented our key page, clearly inspired by this other site.
You can judge for yourself: the site that inspired us is http://www.builditwith.me . Our old page is here: http://fairsoftware.net/publicProjects and our new implementation here: http://fairsoftware.net/startup-ideas-find-co-founders.
I initially tried to contact the author of BuiltItWithMe to see if they would like to share or discuss, but didn't hear back after several days.
I believe in transparency and openness, but I'd really like to hear the opinion of people on HN. What is the right thing to do? I wouldn't want the other guys to believe that we are stealing from them (I'm pretty sure that we aren't, legally speaking). I don't think there should be any hard feelings. Am I just naive? Where do you draw the line, when you hold yourself to higher standards than just "anything goes on the Internet as long as you can get away with it?" Asking for permission doesn't seem practical. What else?
Thanks! (I'm somewhat optimistic based on the recent comments on http://news.ycombinator.com/item?id=989120)
======
oneplusone
It's fine to steal interface ideas. However, I think you took a step backwards
in doing so. Yes, what you had wasn't very good. However, what you have now is
even worse. Your original implementation was on the right track and just
needed some refinement.
~~~
gommm
I agree, the old version gave more information on what positions were needed
at a glance and was a bit more clear...
------
mgrouchy
I don't think its unethical at all.
I think you did more than most would in trying to contact the buildItWith me
guys and that is certainly commendable.
I also don't think there is any confusion between the two products(From a look
and feel perspective) so no harm no foul. The internet is filled with people
taking "inspiration" from other sites and products. This can be evidenced
simply by the number of websites that look like
<http://www.campaignmonitor.com> .
------
Tawheed
Blatantly copying is a bad idea. Understanding why they're different, and
figuring out the motivations behind their (different) choices and then
reflecting on the choices you've made for your product is a good idea.
Jason F at 37Signals wrote about this a while back:
[http://37signals.com/svn/posts/1561-why-you-shouldnt-copy-
us...](http://37signals.com/svn/posts/1561-why-you-shouldnt-copy-us-or-anyone-
else)
------
gte910h
Yes, it's called competing.
~~~
pierrefar
And someone will eventually do the same to you. Be flattered when it happens.
------
overgard
From a philosophical standpoint, I don't think you were wrong to borrow
interface ideas.
(I hope I'm not overstepping my bounds to say though, that while copying some
of their better ideas, I think you may have also copied some of their worse
ideas. The way scrolling works on both sites is very confusing -- your
original site was better in that regard. )
~~~
PebblesRox
I agree about the scrolling problems. I can't see the entire description for
an app because I can't move anything in that panel on the right. I can zoom
out to view all the text, but then it's too tiny to read.
------
tortilla
Honestly, I think your (original) site looks better.
It (updated site) has a similar layout, but that's about it. You didn't copy
their look, maybe their layout. But if you asked someone on the street, I
doubt there would be any confusion.
edited: in parentheses to clarify
~~~
KWD
I agree. The original site was better in my opinion. The other site that
inspired you is a nightmare to view. Design is as much about function as it is
style.
------
felideon
Obligatory quote? "Good artists copy, great artists steal." - Attributed to
Picasso
~~~
steveklabnik
Then there's this: [http://www.myconfinedspace.com/wp-
content/uploads/2009/07/th...](http://www.myconfinedspace.com/wp-
content/uploads/2009/07/the-bad-artists-imitate-the-great-artists-steal-
banksy-500x333.jpg)
~~~
cschep
yes yes yes, love bansky.
------
hxa7241
Yes, copying is fundamentally moral/ethical. (Although where it conflicts with
a law, like copyright, it is another question -- does the law have
precedence?)
(As I have written in other places:) To evaluate something morally, we can
follow Kant (in 'Groundwork For The Metaphysic Of Morals'), whose fundamental
moral rule is: Act only if the maxim of your action can be willed as a
universal law. That is, we ask: would we want an action to become a general
law? If a digital object is good, then copying it duplicates and spreads that
good. And the incidental cost of copying is practically nothing. We can
certainly wish this were a general law: if everyone copied freely and widely,
we would all benefit – we would all receive very much more good, and at
negligible cost.
You might say there is a loss to the originator, in losing an exclusive
advantage. But that really has a hidden presupposition of some social or legal
construction like copyright: there is nothing intrinsic in the actions or
materials that suggests or requires exclusivity. Abstracts are naturally
copyable. Everyone knows and expects that. And ultimately everyone benefits
from it.
The copyability of things is like a free natural resource. It should be
exploited and used as much as possible. That is the message of the ethical
argument. The sad thing is that the copyright/patent/IP attitude has become so
inculcated that many people no longer see the underlying truth.
------
mattmaroon
It's not only ethical, it's dumb not to. It's what businesses have been doing
since time immemorial. To watch your competitor improve and not do anything to
compete is stupid. Ask Yahoo or KMart.
From the legal angle you're probably alright. We recently built a game largely
based on another game (sort of the way omgpop built blockles, which is really
just Tetris) and I've spoken to attorneys about this extensively. Other than
crossing certain lines (trade dress, trademark infringement, etc.) you're
probably safe. If you're using their art, or their branding, you may have
trouble, but they certainly can't claim to have invented AJAX and a better UI.
You should seek legal counsel of course if it ever becomes a problem.
------
imp
IMHO I hate their interface. That flashy Ajax is too overwhelming. Your plain
list is easier for me to read. Have you tried doing user testing with your two
pages and then also testing how users interact with your competitor's site?
------
apsurd
Aren't you supposed to be developing _your users_?
In other words, what does it matter what _their_ site does? Your users are on
_your_ site are they not? This builditwithme "looks nice"; it has 129 people;
what are you worried about?
Get some feedback from the people on your site. Talk to them, find out what,
how, why, where they are looking for and (not finding) help with their
project. It makes the least sense, at least in my opinion, to try to answer
those questions by looking at some other guy's site design.
------
jamie_ca
Seriously, that's a step backwards. I get no overview, can't just browse, and
with my browser window at 1024x768 your non-scrolling sidebar sometimes
doesn't have a more info button at the bottom.
------
TrevorJ
I find that it is important to not just blindly copy something, but to truly
unpack the principles behind it and attempt to improve upon it. This allows
you to be inspired by great work but to (hopefully) take it even further and
polish it even more.
If you can't change or tweak anything about the original inspiration then it's
a good bet you don't fully understand _why_ it works or how, so you aren't
going to grow in the process.
------
carbocation
We have had people inform us that they were going to mimic the core idea of
our site (and now they have, with venture backing). I wouldn't be worried
about borrowing an interface.
------
makeee
What are you copying from builditwithme? I don't see much of a similarity.
Anyway, I think your original site is much better than the new one.
| {
"pile_set_name": "HackerNews"
} |
A radio station that no one claims to run (2017) - bookofjoe
http://www.bbc.com/future/story/20170801-the-ghostly-radio-station-that-no-one-claims-to-run
======
segfaultbuserr
Related: North Korea has a secret diplomacy network which transmits messages
from Pyongyang to North Korea embassies worldwide. Radio hobbyists have
successfully reverse-engineered its modem and protocol. This is quite unusual,
we don't know the internal workings of any similar systems used by other
governments, so reading about the North Korean system can give us an idea about
what a typical, sophisticated shortwave digital communication system used by a
government might look like.
Another fascinating fact is the system sometimes transmits cleartext messages!
The system allows two types of messages to be sent. The first type is fully
encrypted normal messages, but one can also send "chat" messages, which allows
the operators of two systems to communicate about their working schedule,
system maintenance, etc, and they are unencrypted plaintext, which revealed a
lot of about the internal of this system. Opsec failure? They should really
just use a hardcoded symmetric key for this purpose...
Network Structure:
* The North Korean diplomatic shortwave network follows a forwarding tree structure, as they limit contacts between their stations to hops of usually no more than 5000 kilometers. Pyongyang sits at the root of the tree structure, as ultimate origin or recipient of all messages between it and embassies. Messages from Pyongyang are transmitted and relayed by North Korean embassies across the world, hop by hop along each branch of the distribution tree, until they are received by their final recipients. Messages from embassies are relayed and forwarded back to Pyongyang in the reverse way.
[http://priyom.org/diplomatic-stations/north-
korea/network](http://priyom.org/diplomatic-stations/north-korea/network)
Protocol:
* North Korean diplomatic communications utilize a proprietary ARQ modem of unknown name, unofficially known as DPRK-ARQ. The modem is based on BFSK bursts transmitted in lower sideband mode. There are two possible waveforms: 600Bd/600Hz as the default, and 1200Bd/600Hz as an option.
A complete protocol analysis is given by: [http://priyom.org/diplomatic-
stations/north-korea/dprk-arq-p...](http://priyom.org/diplomatic-
stations/north-korea/dprk-arq-protocol)
Message Format:
[http://priyom.org/diplomatic-stations/north-korea/message-
fo...](http://priyom.org/diplomatic-stations/north-korea/message-format)
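For a feel of what such a waveform looks like, here is a toy BFSK burst generator in Python. Only the 600 Bd rate and 600 Hz shift come from the description above; the sample rate and the two audio tone frequencies are assumptions, and none of the real DPRK-ARQ framing or ARQ logic is reproduced:

    import numpy as np

    FS = 8000                      # sample rate in Hz (assumption)
    BAUD = 600                     # symbol rate from the description above
    MARK, SPACE = 1500.0, 2100.0   # example tones 600 Hz apart (assumption)

    def bfsk_burst(bits, fs=FS, baud=BAUD):
        n = int(fs / baud)                       # samples per symbol (~13 here)
        phase, out = 0.0, []
        for b in bits:
            f = MARK if b else SPACE
            t = np.arange(n)
            out.append(np.sin(phase + 2 * np.pi * f * t / fs))
            phase += 2 * np.pi * f * n / fs      # keep phase continuous at symbol edges
        return np.concatenate(out)

    burst = bfsk_burst([1, 0, 1, 1, 0, 0, 1, 0])
    print(len(burst))   # 104 samples for 8 symbols at this sample rate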
------
Jonnax
According to the Wikipedia article some urban explorers found this logbook
from one of its previous transmission sites.
[http://www.numbers-stations.com/media/sample-
uvb76-logbook.p...](http://www.numbers-stations.com/media/sample-
uvb76-logbook.pdf)
------
tgsovlerkhgsel
Because the article never mentions it, many people probably know this as
"UVB-76".
More info:
[https://en.wikipedia.org/wiki/UVB-76](https://en.wikipedia.org/wiki/UVB-76)
[https://www.sigidwiki.com/wiki/The_Buzzer_(ZhUOZ_MDZhB_UZB76...](https://www.sigidwiki.com/wiki/The_Buzzer_\(ZhUOZ_MDZhB_UZB76\))
~~~
mopsi
The second link leads to [http://priyom.org/military-stations/russia/message-
format](http://priyom.org/military-stations/russia/message-format), which
reveals a fairly simple format that relies on one-time pads. Each military
unit has envelopes with unique word grids, e.g.
[https://i.imgur.com/ItpKnNy.png](https://i.imgur.com/ItpKnNy.png) The radio
station transmits messages as envelope identifiers and line/column numbers of
those grids, and recipients piece them together.
Simple and impossible to break unless envelope contents leak. Given how rarely
the system is used, it probably serves as a low-tech backup for high-tech
radio systems.
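A hypothetical miniature of that lookup scheme in Python -- every envelope ID, grid word and message below is invented purely to show the mechanics:

    ENVELOPES = {
        "ENV-83": [                       # envelope id -> rows of code words
            ["BRAMIRKA", "KOLOVRAT", "SETKA"],
            ["GRANIT",   "PROGNOZ",  "VOLNA"],
        ],
    }

    def decode(message):
        # each item: (envelope id, row, column), 1-based as a human would read a grid
        return [ENVELOPES[env][row - 1][col - 1] for env, row, col in message]

    print(decode([("ENV-83", 1, 2), ("ENV-83", 2, 1)]))   # ['KOLOVRAT', 'GRANIT']

Without the envelope contents, the transmitted coordinates reveal nothing, which is the one-time-pad property described above.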
------
bnchdrff
[https://en.wikipedia.org/wiki/The_Conet_Project](https://en.wikipedia.org/wiki/The_Conet_Project)
is a great compliation of numbers station samples
------
teamonkey
Here's a BBC radio documentary from 2005 on the Lincolnshire Poacher
[https://youtu.be/Wvr6o7fBcTY](https://youtu.be/Wvr6o7fBcTY)
------
dang
Discussed at the time:
[https://news.ycombinator.com/item?id=14988362](https://news.ycombinator.com/item?id=14988362)
------
monkpit
[http://priyom.org/](http://priyom.org/)
------
diebir
Ha-ha I had to check whether this was the place where I served in 87-88. No,
my unit's work location was about 10 miles south along the same road. Also a
bunch of truck-based radio stations, buildings, huge antenna fields, etc. We did
radio surveillance on NATO countries in northern Europe, Unit 75752. Our
transmitters were yet a few miles south of that location (you normally want to
separate the transmitting and receiving as far as practical).
~~~
asutekku
Any theory what this one might’ve been?
------
drdeadringer
Number Stations (and similar like the station described in the article here)
have a place in my heart. One of the items on my "hobby projects list" is to
make a small version of one -- nothing the FCC would knock on my door about,
clearly -- since the topic fascinates me so.
------
jonnydubowsky
COMMAND 135 ISSUED
~~~
markbnj
If that command really were used as the article suggests then you'd think it
would be followed by a bunch of other sigint derived from an increase in
activity by the units that received the command.
~~~
gmueckl
Maybe that is how they arrived at the suggested meaning of the command? The article is
short on attributions for these claims though.
------
grueblur
deadman snitch.
| {
"pile_set_name": "HackerNews"
} |
Apple releases MacBook Pro Update - preek
http://store.apple.com/us/buy-mac/macbook-pro
======
atrilumen
I just bought one last week. I hate it when that happens.
~~~
osxrand
You have two weeks in a lot of countries to return it, I'd exercise that, and
pick up the new model.
~~~
atrilumen
That's the plan. Thanks!
------
avoidwork
I'm let down by the decision to use an obsolete GPU.
~~~
TD-Linux
They _all_ have Iris graphics now. That seems like a big improvement.
~~~
duskwuff
With the exception of the top-end 15" model, which also has a Nvidia 750M.
------
dandruffhead
Just bought mine in April :/
| {
"pile_set_name": "HackerNews"
} |
The bicycle is making a comeback in US cities - antongribok
http://www.bbc.com/news/world-us-canada-36778953
======
michaelwww
The true revolution hasn't hit public consciousness yet and that is the rise of
electric bicycles. I own one and nearly every time I ride it cross town I
think again that it's nearly the perfect form of human transportation. It is
said that pedal bicycles are the most efficient form of human transportation.
Add a motor and a lithium ion battery that charges up at a rate of 150W for 3
hours and then delivers 12-15 miles of transport at 20 mph, depending on how
much pedaling is done to assist uphill, and I imagine that you only increase
that efficiency. Electric bikes are allowed to use the bike lanes in California,
while fossil fuel powered motors are not, so bikes can take routes that beat
all the car and truck traffic. I don't break a sweat riding an electric bike
so I don't need to take a shower when arriving at work. Take the optimistic
tone of this article and add electricity and you really have a revolution.
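A quick back-of-the-envelope from those figures (the car comparison uses rough, commonly cited numbers, not a measurement):

    charge_wh = 150 * 3            # 150 W for 3 hours -> 450 Wh per charge
    for miles in (12, 15):
        print(f"{miles} mi per charge -> {charge_wh / miles:.0f} Wh per mile")
    # 12 mi -> 38 Wh/mile, 15 mi -> 30 Wh/mile; a battery-electric car is commonly
    # quoted around 250-300 Wh per mile, roughly an order of magnitude more.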
~~~
artursapek
Are electric bicycles desirable mainly because of lower physical effort, or
higher top speed? I am an avid cyclist in NYC and I don't think I would be
comfortable going any faster than my current top speed. And I really enjoy the
exercise. So why would I choose an electric bicycle over my traditional one?
~~~
Frondo
In my experience (as someone who cycles 50-75 miles a week) people go for the
electric bikes for less exertion. Probably 3-4 times a week people on e-bikes
pass me on hills, zooming along.
They always seem to be heavier-set than your typical bike commuter, so if the
motor's getting them out on the road and getting some exercise and fitness in,
I think that is just fantastic. Anything to get people out of their cars!
Maybe they'll even upgrade to a real bike some day :)
~~~
buovjaga
> Maybe they'll even upgrade to a real bike some day :)
Several stories I've read about ebikes point to stats about nearly all ebike
owners getting hooked and continuing to buy them instead of regular ones.
That said, an ebike turns into a regular bike immediately after you turn off
the assistance.
~~~
Piskvorrr
Regular and heavy as hell; those batteries won't pull themselves (well they
_could_ , but you get the point ;))
To give my anecdote, I'm considering getting an e-bike for commuting, or
perhaps retrofit onto my regular bike: the ride is not long (~10 miles x2),
but hills on both sides are significantly easier with an assist.
~~~
buovjaga
One way to think of it: assistance for commuting to avoid sweating / no
assistance with battery left at home when wanting to exercise.
Btw. I've had an ebike for 4 years and I'm a skinny 34-year-old man. It can
still be plenty of work to ride it during the Finnish winter :)
------
intrasight
I've commuted by bicycle while in school in Toronto and at a couple of jobs
since. But I've ceased to be interested in sharing roads with cars. The
weight-scale of the two vehicle types is just too different, and I don't
bounce as well as I used to. I do love biking on trails though, and welcome
the additional trails being built around the nation and here in Pittsburgh.
------
cauterized
If there were protected bike lanes along my commute (about 3 miles), I'd cycle
to work in a heartbeat. (Well, showers at the office, preferably clean ones,
would also help.)
Instead, only about 25% of the trip has bike lanes at all. And they share a
curb with the buses (which might be the worst arrangement possible, since
cyclists and bus drivers have to keep dodging one another in and out of the
lane - endangering the cyclists and slowing the buses).
Instead, I walk when the weather is right. Cycling right beside cars would
scare the shit out of me.
~~~
dominotw
I had 3 of my coworkers killed in Chicago in the last 2 years. One to a drunk
driver, 2 to visibility issues (one was an extreme downhill curve where a cyclist
would suddenly appear out of nowhere; the other was a truck making a left turn
between two cars, no way the truck driver could have seen the cyclist).
Only people with a deathwish would ride a bike on streets shared by other
vehicles.
Don't be stupid, think of your family.
~~~
beardicus
> Only people with a deathwish would ride a bike on streets shared by other
> vehicles.
I'll grant that the risk of death in an accident is probably higher on bike,
but the risk of death by being sedentary should also not be ignored in the
calculation. I don't have a deathwish, and I commuted 30 miles a day on a bike
for years. It was great. I've never been in better shape. We have a problem
with cars killing people with alarming regularity. The solution probably isn't
to avoid riding bikes or being a pedestrian.
~~~
dominotw
surely there are better ways to get in shape that don't involve risking your
life on a daily basis.
~~~
gleenn
I think you're being pretty harsh. I don't have a death wish, I ride my bike
frequently in SF because I love to ride, it's great exercise, it's by far the
fastest way to get to work, and it's one less bicyclist-killing machine on the
road. From a philosophical point of view, if everyone did what I did, there
would be a lot of upsides for everyone. If everyone decided to ride a car,
there would be many downsides.
------
djhworld
I live in London (UK), cycling is fairly popular here but I'm terrified of
being killed so don't bother with it.
I'm wondering how the US approaches bicycle safety as a whole, it's all well
and good building a cycle lane, but in some places that's not feasible so you
have to merge with traffic. How do US drivers treat cyclists?
~~~
steveax
If you need to take the lane, take the lane. Do not offer what might look like
a "squeeze by" to auto drivers but rather ride in the center of the lane and
move to the right when it is safe to do so.
~~~
crispyambulance
I agree, I think many novice cyclists inadvertently put themselves in dicey
situations by being "too nice" to car traffic. It truly is safer to be visible
and temporarily "in the way" than to ride in the door zone or gutter and risk
getting doored, side-swiped or right-hooked.
Being nice might momentarily appease an impatient driver, but it puts the
cyclist at risk and makes them appear erratic to other traffic.
------
api_or_ipa
I don't understand why so many people use their cars for every chore in the
Bay Area. A decent bike commuter can do 16 mph which is plenty fast for most
trips. It's also 'free' in that you burn lunch instead of fuel.
~~~
SirensOfTitan
I would love to ride a bike around SF, but the thought of being hit by a car
is sort of terrifying to me. As a result I ride the bus everywhere: it's non
ideal.
~~~
abrkn
I was looking into buying a bike for commuting from Mission to SoMa until I
noticed how there are multiple hit and runs reported every week. Several times
have I witnessed Uber/Lyft drivers (with me as a passenger) endanger
bicyclists during the morning commute. How do they solve this in other cities?
~~~
Maultasche
The Netherlands does this very well. Car traffic is nicely separated from
bicycle traffic, which is nicely separated from pedestrians. There's even an
entire nationwide network of bicycle paths.
I suggest using Google street view to look around. Usually, you can see a bike
path off on the side of the road that's nicely separated from the cars. I
don't think that Google has gotten around to taking street view pictures in
all the bike-only paths in more rural areas, although I hear they've done that
in some places.
The Netherlands has been committed to improving bicycle use and safety since
the 1950s, so they've built up a lot of infrastructure with that in mind. In
fact, I hear that in any accident in the Netherlands involving a car and
bicycle, the car is always at fault unless they can prove that the bicycle was
being reckless. It makes cars a lot more careful, and encourages riding a bike
instead of driving.
From what I hear, the effect of all this is that automobile traffic is
pretty congested and slow. The Dutch government seems to be making a general
policy of discouraging car driving and encouraging alternate forms of
transportation, as a means to unclog the roads and reduce pollution. The
Netherlands is very dense, so pollution would be very bad if all those people
were driving everywhere.
I'm not Dutch, but sometimes I wish I were, because I would very much like to
bike to work on a regular basis. My city in California has some bike lanes,
but they aren't very wide and are right next to the cars. They've been
starting to build bike paths that are completely separate from roads, but we
don't have a city-wide bike path network yet. I suspect we'll eventually
get one, but it will be decades away. I'll go biking with my children on quiet
residential streets or the bike paths, but there's no way I'd take them on the
bike lanes on the bigger roads.
------
nashashmi
In New York, one thing I have strongly noticed is the decrease in smog even on
rush hour crowded avenues. This alone has made it drastically healthier to go
outside, whether by bike or on foot. Just a few more years and we will be
able to avoid long trips to the parks and do recreational walking immediately
outside of our doors.
------
Havoc
Just bought a bike this weekend. Great fun thanks to cycle lanes along the
beach. Don't see it being a viable mode of transportation for day-to-day
though.
>a bike lane next to [...] church infringed on religious freedom by preventing
members from parking.
That's a pretty bizarre interpretation of "religious freedom".
------
pasbesoin
I want three things: 1) Stay dry. 2) Not wear out my normal or "dressy"
clothes. 3) Ride safely.
I can even forgo # 1 if I get to change and shower at the end.
So... you want cyclists? Find out what's really keeping them off bikes, and
fix it.
P.S. And theft. You can just park your car and walk away from it -- mostly.
~~~
Zigurd
Some of these things could be solved with better electronics. Like providing
tagging/tracking technology that would keep thieves away, and a proximity
warning that would flash a bright taillight at cars coming too close or
approaching too quickly.
------
Zigurd
There are some bright spots, like Cambridge, but there needs to be vastly
better enforcement of having drivers keep a distance from cyclists. If
cyclist-aware autonomous braking were required for cars, that might do it in a
few years. But American drivers are angry and incompetent.
------
marknutter
I really hope that motorcycles make a comeback as well. They are incredibly
fuel efficient, they don't take up nearly as much room, are far cheaper to buy
and maintain than cars, and a heck of a lot of fun.
~~~
justinator
> They are incredibly fuel efficient,
Sadly, this is a myth.
[https://practicalfrugalliving.wordpress.com/2013/02/02/busti...](https://practicalfrugalliving.wordpress.com/2013/02/02/busting-
the-myth-that-motorcycles-save-gas-and-money/)
[http://latimesblogs.latimes.com/greenspace/2011/09/mythbuste...](http://latimesblogs.latimes.com/greenspace/2011/09/mythbusters-
motorcycle-emissions.html)
[http://motorbikewriter.com/motorcycles-fuel-
economy/](http://motorbikewriter.com/motorcycles-fuel-economy/)
~~~
marknutter
None of those links refute my claim that motorcycles are fuel efficient. The
first link says that large displacement bikes are "no more efficient than a
prius" which is one of the most fuel efficient cars on the road. Take any new
250cc motorcycle and compare it to any new compact car and the motorcycle will
have higher fuel efficiency every single time.
The only article that seems to support your assertion is the mythbusters one,
which doesn't refute the claim that motorcycles are more fuel efficient, but
instead cites the fact that proportionally more pollution is emitted by
motorcycles than by cars which is a completely separate issue altogether. That
would be easily solved by stricter pollution control standards on bikes which
would absolutely happen if more people started riding. The article then goes
on to admit:
"Despite the MythBusters' findings, emissions are only part of the story of a
vehicle's true greenness. According to the Motorcycle Industry Council,
motorcycle manufacturing requires thousands fewer pounds of raw materials than
automobiles. They require less fossil fuel, so they require less energy to
pull that fossil fuel out of the ground. They use fewer chemicals and oils
than cars. And motorcycles produced today are 90% cleaner in California than
they were 30 years ago."
So I stand by my original statement. Did you even read the articles, or did
you just whip off a quick google search and link the first three articles that
seemed to support your claim?
~~~
justinator
> but instead cites the fact that proportionally more pollution is emitted by
> motorcycles than by cars which is a completely separate issue altogether.
But a pretty big one, that you're side-stepping by quoting a special interest
group from the motorcycle industry. Fuel efficiency could be better, but
emissions are always worse.
Here's another source for you, Ana-Marija Vasic and Martin Weilenmann.
Comparison of Real-World Emissions from Two-Wheelers and Passenger Cars
[http://josiah.berkeley.edu/MiniProjects/Vasic2006.pdf](http://josiah.berkeley.edu/MiniProjects/Vasic2006.pdf)
I would wager that a bicycle will have better fuel efficiency, emissions,
and environmental cost of production than a car, or a motorcycle.
~~~
marknutter
I was never comparing motorcycles to bicycles. Of course bicycles are more
energy efficient. But compared to cars motorcycles are far more efficient -
especially electric motorcycles which are starting to hit the mainstream now.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Engineers turned into marketing people, how and why? - mezod
======
nealrs
Oh wow. I feel like I just asked the reverse of this question 3 min after you
posted this.
I was a mechE at Caterpillar for 4 years before I decided that my position was
a local max in the organization. So I studied, took the GMAT, and applied to
b-school. Ended up at NYU and did lots of internships in branding / publishing
/ marketing and eventually found a route into tech marketing via account
management at an ad-tech company that needed someone with solid excel skills &
was willing to get on the phone.
Good marketers are eager to talk to people and figure out what their product
can/should be doing for others. Then they feed that back to Product. I think
Product/Marketing are two sides of the same product development coin.
One way to position yourself as a marketing candidate is as an excellent SME
on tech -- you get tech better than your colleagues, so you become invaluable
to the team.
~~~
mezod
hehe, nice!
I'm actually an engineer who likes to create products but who really sucks at
marketing. I never looked over marketers' shoulders, but now that I am
trying to market my own products I realize what hard and important work it is.
I think I did a good job up to the point of finding a real need that people
are willing to pay to have solved, but it's one thing to get early traction
and a whole other to actually be able to live from it... the struggle is
real :P
| {
"pile_set_name": "HackerNews"
} |
Unix: A History and a Memoir, by Brian Kernighan - f2f
https://www.cs.princeton.edu/~bwk/
======
Natales
I've always felt we don't appreciate history very much in our industry.
I've been lucky enough to work and hang out with some of the co-founders of
very impactful projects, such as OpenStack and Cloud Foundry, and there are so
many stories I've heard that I'm sure would be insightful and valuable lessons
for whomever is embarking on new ideas. And yet, we all move so fast, that
there is no time to stop and write them down.
I'm glad BK did. UNIX is foundational to essentially all software-driven
technology today, in one way or another. His book (along with Dennis Ritchie)
on the C Programming Language made a huge impact for me as a CompSci student
in the 80s, as did UNIX itself (Ultrix and DG/OS were my fist UNIX variants).
I look forward to read his book.
~~~
msla
There's also the unavoidability of narratives, and how they influence what
people look up to begin with. For example, there's a Unix history narrative
which begins at Bell Labs goes to Berkeley and then out to the world; this is
already extremely limited, in that it ignores Wollongong (where the first Unix
port was done, to the Interdata/32, and where important work on TCP/IP
networking was done) and what AT&T did with Unix after they closed the sources
and what the Research Unix people were up to after Seventh Edition, but I
think the biggest loss is that it completely sells Multics short: Unix began
when Bell Labs left the Multics project, so Multics, in this narrative, is
frozen in time as this unfinished thing that Our Heroes are already bailing
out of, and that's what gets handed down, as if Multics never progressed an
inch beyond 1969. Heck, you can even see this as Myth #1 on the multicians.org
site:
[https://multicians.org/myths.html](https://multicians.org/myths.html)
> 1. _Myth: Multics failed in 1969._ Bell Labs quit, Multics survived.
Now that we can use Multics about as easily as we can use Ancient Unix
versions under emulation, you can spin up a perfectly functional 1980s-era
Multics and see that, no, really, Multics evolved into something you can do
stuff on.
That's the problem with narratives: They're _both_ inevitable _and_ inevitably
limiting, narrowing the focus to what makes a comprehensible story as opposed
to a day-by-day list of what happened. Humans create narratives as naturally,
and as unavoidably, as breathing, but we have to be aware of what they do to
our comprehension of history.
~~~
pjmlp
Especially annoying about that narrative, for those who care about computing
history, is how C and UNIX are sold as the first of their kind, invented in a
magic moment, hand-waving away what everyone else was doing.
Since history belongs to the winners, if it weren't for the accessibility of old
conference papers and computer manuals, that would indeed be the only version we
had to believe.
~~~
zantana
I always thought one of the most interesting insights into this was the Unix
Haters Handbook
[https://web.mit.edu/~simsong/www/ugh.pdf](https://web.mit.edu/~simsong/www/ugh.pdf)
especially Dennis Ritchie's anti-foreword:
_The systems you remember so fondly (TOPS-20, ITS, Multics, Lisp Machine,
Cedar/Mesa, the Dorado) are not just out to pasture, they are fertilizing it
from below.
Your judgments are not keen, they are intoxicated by metaphor. In the Preface
you suffer first from heat, lice, and malnourishment, then become prisoners in
a Gulag. In Chapter 1 you are in turn infected by a virus, racked by drug
addiction, and addled by puffiness of the genome.
Yet your prison without coherent design continues to imprison you. How can
this be, if it has no strong places? The rational prisoner exploits the weak
places, creates order from chaos: instead, collectives like the FSF vindicate
their jailers by building cells almost compatible with the existing ones,
albeit with more features. The journalist with three undergraduate degrees
from MIT, the researcher at Microsoft, and the senior scientist at Apple might
volunteer a few words about the regulations of the prisons to which they have
been transferred._
------
sprachspiel
Everyone interested in the history of computing should read The Dream Machine
by M. Mitchell Waldrop. The book pretends to be the biography by a little-
known, but highly-influential guy named Licklider, but is in fact maybe the
best general history of computing. It covers Turing, von Neumann, ARPA,
Multics, DARPA (the internet), and Xerox PARC. Alan Key recommends it as the
best history of PARC.
------
koffiezet
I'm always amazed how well Brian Kernighan can explain things. I love the
episodes on the Computerphile youtube channel with him.
Recently I discovered the AT&T history channel, with this gem:
[https://www.youtube.com/watch?v=tc4ROCJYbm0](https://www.youtube.com/watch?v=tc4ROCJYbm0)
There is a massive difference in appearance and clarity between him and the
other people appearing in that video, even the "presenter"...
~~~
gftsantana
> Recently I discovered the AT&T history channel, with this gem:
> [https://www.youtube.com/watch?v=tc4ROCJYbm0](https://www.youtube.com/watch?v=tc4ROCJYbm0)
I go back to this video every once in a while ever since I discovered it a few
years ago. I just think it is super relaxing. When I first watched it, I was
beginning to use Linux and, when I opened my terminal emulator, I was like:
"It's a Unix system, I know this!" The pipelines explanation was incredibly
clear.
I also love the Computerphile episodes with professor Kernighan.
------
greatquux
Regarding availability of an ebook version, I just wrote to bwk and got back a
reply within a few minutes:
-----Original Message-----
From: Brian Kernighan <bwk@cs.princeton.edu>
To: Mike Russo <mike@papersolve.com>
Subject: Re: please publish ebook of Unix memoir!!
Date: Mon, 28 Oct 2019 11:12:48 -0400
Mike --
I have just uploaded a Kindle version, but it has to go through Amazon's
approval, which could take a day. I also can't see a preview on a physical
device, so I have no idea whether it will actually look right. If it doesn't,
I'll have to pull it and figure out an alternative.
Brian K
On Mon, 28 Oct 2019, Mike Russo wrote:
if you get an ebook of it out there it will sell even more!! and thanks for
writing it!
--
Michael Russo, Systems Engineer
PaperSolve, Inc.
268 Watchogue Road
Staten Island, NY 10314
Your random quote for today: ..you could spend _all day_ customizing the title bar. Believe me. I speak from experience. -- Matt Welsh
~~~
stevej_cbr
It's now available. Big thanks to BWK for doing this.
Australia: A$11.99 [my local site]
[https://www.amazon.com.au/dp/B07ZQHX3R1/](https://www.amazon.com.au/dp/B07ZQHX3R1/)
or US$8.20
[https://www.amazon.com/dp/B07ZQHX3R1/](https://www.amazon.com/dp/B07ZQHX3R1/)
vs printed book US$18
[https://www.amazon.com/dp/1695978552](https://www.amazon.com/dp/1695978552)
------
segmondy
I look forward to reading this
I own and have read his following books and they were all superb!
The Go Programming Language
The Practice of Programming
The C Programming Language
The AWK Programming Language
~~~
QualityReboot
Just out of curiosity, why awk? I've only ever used it for simple text
splitting and didn't really know people did more with it. Is it a tool worth
learning?
~~~
troydj
To really appreciate the need for awk, imagine writing one-liners and scripts
in the late 80s where Perl or Python weren't present. The associative arrays
in awk were a game changer. Of course, today there is no need to use awk for
multi-line, complex scripts because Python or Perl does the job better (and
both languages are more scalable). However, awk is still quite useful for one-
liners. But for those developers who never use the one-liner paradigm of
pipelines on the command line, this is something they don't realize they're
missing.
Brian Kernighan mentions in the book that awk provides "the most bang for the
programming buck of any language--one can learn much of it in 5 or 10 minutes,
and typical programs are only a few lines long" [p. 116, UNIX: A History and
Memoir]. Also keep in mind Larry Wall's (inventor of Perl) famous
quote/signature line: "I still say awk '{print $1}' a lot."
More background on awk from Brian Kernighan in a 2015 talk on language design:
[https://youtu.be/Sg4U4r_AgJU?t=19m45s](https://youtu.be/Sg4U4r_AgJU?t=19m45s)
------
jasoneckert
I read this one on Saturday (bought it from Amazon after I saw it posted here
earlier in the week). It's very good at detailing how UNIX was developed in
the early days and how it exploded after 1979 with V7 - and in a way that
isn't difficult to read whatsoever. There are some sections about the inner
workings of UNIX I already knew - but skimming through those allowed me to
catch a few historical gems I didn't know.
------
aasasd
The ‘Customers who bought this item also bought’ on Amazon is pretty telling:
the Snowden book, a Yubikey, an electronics testing tool, coolers for
Raspberry Pi, retractable Ethernet cable, wire-type soldering iron tip cleaner
(what even is that), and sandalwood shaving cream.
~~~
LameRubberDucky
What I saw on the first two pages of Customers who bought.... list was:
Database Internals, Snowden book, Algorithms book, BPF Performance tools, Your
Linux Toolbox, The Go Programming Language, The Pragmatic Programmer, Quantum
Computing, A Programmer's Introduction to Mathematics, An Elegant Puzzle
I had to go up to page 14 of the items list to find all of the items you
listed and to find the wire-type soldering iron tip cleaner.
Edit: Took out unneeded snark. It seems I fell prey to Amazon algorithms.
~~~
aasasd
> _Isn 't this a bit of a cherry picked list?_
I only see these items plus another cooler and a Macbook stand.
~~~
LameRubberDucky
We've been analyzed and found wanting. Dang marketing algorithms.
------
Myrmornis
Anyone able to comment on his recent non-programming-language books? For
example, how's "Understanding The Digital World"?
[http://kernighan.com/udw.html](http://kernighan.com/udw.html)
~~~
danso
Was just about to comment that in lieu of buying an ebook version of the
currently discussed Unix book, I discovered he had written "Understanding the
Digital World" and bought it on impulse. Currently going through the first
chapter now. Like all of his books, extremely well-written. I was surprised at
how much the introduction focused on data sharing and the Snowden-revelations,
which he segues into after talking about how he and his wife weren't using
Airbnb, Uber, and Whatsapp, on a recent vacation trip.
------
Aloha
I bought the book, and read it, enjoying it greatly.
Amusingly, Alexa notified me when it arrived, and the notification was "Your
purchase UNIX has arrived."
~~~
jhbadger
However, you had also ordered a book about castrati singers, so the
announcement was ambiguous.
------
softinio
Is there a kindle/digital/ebook version of this book? I can't seem to find it.
~~~
kernelsanderz
This is very frustrating. From what I understand of the publishing industry,
the Kindle sales are not counted towards the main NYT best-seller list, so all
the focus on the initial launch is around selling the physical copy. This is
said to be done to prevent gamification and manipulation, but this still goes
on with the hard copies also, where would-be best-selling authors often pay
third-party companies to buy a lot of copies of their books from certain
bookstores.
Source: A friend of mine who has been trying to get onto the NYT best seller
list.
~~~
danso
Ebook sales are counted as part of the NYT bestsellers list. In 2010, they
initially did separate ebook listings, but it’s all combined into the general
categories: [https://www.vox.com/culture/2017/9/13/16257084/bestseller-
li...](https://www.vox.com/culture/2017/9/13/16257084/bestseller-lists-
explained)
I can’t remember a single mainstream title that hasn’t had an ebook version
for preorder and first day sales. This includes “Bad Blood”, “Super Pumped”,
and “Catch and Kill”.
------
bexsella
I haven't received my copy just yet, but I am keen to get reading it. Although
there is some fun conversation to be had with people from a non-tech
background when you're excited for your copy of Eunuchs: A History and a
Memoir to arrive.
------
sp0ck
Watching Brian Kernighan and Professor Brailsford on YT Computerfile is a pure
pleasure: you get to hear and see how the world looks from their perspective,
and quite often you can apply the same principles today. We believe so much has
changed in this industry, but some foundations are the same as 50 years ago :)
~~~
throw56
correction. Isn't it computerphile?
------
arduinomancer
I've seen a lot of Brian Kernighan on the YouTube Computerphile channel and
he's very well spoken and interesting to listen to. Might have to pick this
up.
------
jen_h
Looking forward to reading this! Had the pleasure of seeing Brian Kernighan
and Ken Thompson speak at VCF East earlier this year, totally worth a watch:
[https://www.youtube.com/watch?v=EY6q5dv_B-o](https://www.youtube.com/watch?v=EY6q5dv_B-o)
I highly recommend hitting up a Vintage Computer Festival if you've got the
opportunity
([http://vcfed.org/wp/festivals/](http://vcfed.org/wp/festivals/)). Not only
did Kernighan and Thompson speak, but also Joe Decuir of Atari & Amiga fame.
------
nindalf
The link says “Published by Kindle direct publishing” but I can’t find the
kindle version on amazon. Or indeed any ebook version. Do Kindle books not
show up for pre order usually?
~~~
davidgerard
CreateSpace (paperbacks) is now part of Kindle, so you can release a paperback
with no ebook on Kindle.
------
rajesh-s
I agree with you that we don't tend to appreciate history or try to be
aware of how long things have been around. I'm looking forward to reading this
too!
On that note, this post reminds me of another great book (eye-opener of sorts)
that I read a while ago that takes the reader through the history of some
important milestones in hardware and software
[https://www.amazon.com/Computer-Book-Artificial-
Intelligence...](https://www.amazon.com/Computer-Book-Artificial-Intelligence-
Milestones/dp/145492621X)
After I read this, I was surprised to learn that concepts/ideas which seem
very recent because of the interest/research around them have actually been
around for a long time. For instance, I learnt that "Secure Multi-Party
Computation" has been around since 1982, Verilog since 1984, and AI medical
diagnosis since 1975.
------
unlinked_dll
I find a lot of books on subjects I'm interested in are quite dry. Is this a
departure from that? I'd like something I'd enjoy, and something I can buy as
a gift for some family members who enjoy nonfiction on topics they aren't
necessarily versed in.
------
notpeter
As others have noticed, there is no Kindle version. Look Inside is enabled so
you can start reading in-browser:
[https://www.amazon.com/gp/reader/1695978552/ref=as_li_ss_tl?...](https://www.amazon.com/gp/reader/1695978552/ref=as_li_ss_tl?tag=prod-
detail-20&link=reader)
While the story/writing is interesting, like most self-published titles this
book needs some professional layout/editing help. The second and sixth pages
of chapter 1 are full page Google Maps of central NJ and Google Satellite view
of Bell Labs. Eeek.
This may also be why there is no Kindle version yet. Many pages have full-
color images and would need significant changes for a decent Kindle reading
experience.
------
dcminter
For those interested in the subject I can recommend the very readable "A
Quarter Century of Unix" by Peter Salus.
------
davidgerard
Amazon Best Sellers Rank: #243 in Books
like, that's _all_ books on Amazon US
------
xiaodai
Purchased right away!
------
iamjk
You'd think he would have found someone to design a nicer cover for him.
~~~
jmarcher
You have no idea. I received my copy a few days ago. The whole cover is a low-
resolution bitmap for both the image and the text.
The book is a great read otherwise!
~~~
imwally
Seriously, I thought I received a knock-off when I first opened the package.
------
vymague
Probably not true. But it's fun to imagine that even Brian Kernighan was not
able to find a publisher, and needed to resort to self-publishing.
~~~
sritchie
I can report that this is not true in this case - Brian wrote his book over
the summer and, I think, wanted to get it out for the Unix 50 event at Bell
Labs this past week.
------
f2f
Currently #1 in "Best Sellers in Computing Industry History" on amazon, which
is somewhat of a weird category :)
~~~
tomhoward
It's almost as if the more "best seller" lists there are, the more authors
will make efforts to get onto them :)
------
nankomo
old is gold fits here
~~~
mseepgood
The book is brand-new.
| {
"pile_set_name": "HackerNews"
} |
Move over, Kindle. iriver Story HD is Google's e-Reader, $139.99 - uberstart
http://www.zdnet.com/blog/gadgetreviews/move-over-kindle-iriver-story-hd-is-googles-e-reader-13999/26083
======
JonnieCache
Unless things have changed, I can vouch for the quality of iRiver's kit,
particularly the high quality of their firmware. I have owned a number of MP3
players from them over the years, and all were superb products.
------
acabal
That is one ugly-looking device. But if we're trying to compete on e-ink
readers, I think the device to compete against now is the Nook Simple Touch. I
loved my Kindle when I had one, but after using the NST I realized how rough a
lot of the edges on the Kindle are. It really is a clear and solid step above
the Kindle, and it's the device you need to compete with for e-ink. Wifi vs 3g
doesn't matter; nobody buys books often enough to really truly NEED 24/7
connectivity across the globe. The NST touchscreen is shockingly responsive
and totally obviates the need for a physical keyboard.
As for integration with Google Books--while I haven't used GB myself, I read a
lot of Project Gutenberg books, and I'm assuming the two are similar
platforms. The problem with PG books is that the formatting is just awful for
a lot of the ebooks. Sometimes the person compiling the book will slavishly
stick to a print edition, going so far as putting page numbers in or other
print ephemera; or sometimes the internal coding is so bad that it only looks
good on a small subset of readers; or sometimes the book is OCR'd and thus
riddled with typos. What I'm saying is, you get what you pay for--and a lot of
the free public-domain ebooks are just awfully compiled.
Hell, even some books I've bought from the Amazon store have been very
obviously OCR'd with no second thought to ebook presentation or editing--a big
example is one of Stephen King's Dark Tower books, which are huge sellers that
you'd expect to have been ported to digital with at least a little attention
to detail. Not so: the one I bought from the official publisher was obviously
OCR'd and so riddled with typos and errors that I returned it to Amazon
without finishing it.
So I guess what I'm saying is, I'm not impressed with this device. It's a
generation behind the Nook Simple Touch, and integration with Google Books
isn't enough to interest me given the awful or non-existent editorial
oversight in most public-domain ebooks.
~~~
merryandrew
"Wifi vs 3g doesn't matter; nobody buys books often enough to really truly
NEED 24/7 connectivity across the globe."
Totally agree, personally. Some people use the 3G for other stuff, but I only
read books. Btw, purchased Google eBooks are not like PG, though, they are
formatted like purchased Amazon and B&N eBooks.
"It's a generation behind the Nook Simple Touch, and integration with Google
Books isn't enough to interest me given the awful or non-existent editorial
oversight in most public-domain ebooks."
I'm interested in the new nook too. I saw it in the store, but didn't bother
with checking it out. Next time I will. Integration with Google is also
irrelevant to me, but access to Google eBooks (and all the other ePUB books
with DRM via Adobe) totally matters to me, and this is why I won't buy a
Kindle, yet.
When it comes to well-formatted public domain eBooks, I like Feedbooks.
~~~
moskie
I used to have a 3G Kindle, but it was stolen, and now I have a WiFi Kindle.
I agree with what you guys are saying, but without the 3g, my impulse
purchases have really gone down. For example, I used to buy periodicals while
on the train with some regularity, but now I hardly ever do.
So providing 3g on an e-reader might be a good investment in that sense.
------
davidw
Nothing jumps out at me as a Killer Feature that would make me want to own
this rather than my Kindle.
I can see Google competing with Facebook, but Amazon? Seems like they should
rethink that battle - going head to head is not going to be a winning
strategy.
~~~
camiller
Yeah, but I don't own a Kindle. If I owned a Kindle I probably wouldn't be
inclined to switch either having invested in books locked (albeit easily
unlocked) to that platform. I had been considering the Kobo reader since Kobo
is the only vendor that also offers an app for my phone as well. I might also
consider this iRiver reader since I know that the Google Books website works
fine on my phone.
~~~
kenjackson
_I had been considering the Kobo reader since Kobo is the only vendor that
also offers an app for my phone as well_
What phone do you have?
~~~
camiller
Palm Pre
~~~
davidw
Ok, so that's a killer feature for you and two other guys...
~~~
davidw
Joking aside, there are serious economic issues: the Google device needs some
Big Advantage over the Kindle in order to win market share, and compatibility
with a phone no one owns is not it.
------
merryandrew
Google's eBook strategy is excellent. Google licenses eBooks at prices that
rival Amazon and B&N, and the eBooks can be read in browsers, on cell phones,
on eReaders from B&N, Sony, and now iRiver, etc., and if and when Amazon
enables the Kindle to read ePUB eBooks that employ Adobe Content Server DRM
(e.g., eBooks that Google, B&N and Sony sell, and public libraries lend), then
Google eBooks will be accessible practically anywhere. Pretty shrewd move on
Google's part.
The iRiver device looks pretty good, but as duiker101 points out, the Kindle
has the unique 3G advantage.
------
nickpp
Is Google the least innovative high-tech company? I cannot think of a single
Google product that wasn't either an improved copy or a takeover of another
company's product.
From search to gmail, maps, android, latitude, g+, tv, and now this.
I AM glad they usually make them better and more polished competitors, but
still... not an ounce of originality?
~~~
capnrefsmmat
This isn't a Google product -- it's designed by iriver. What's new is that it
uses Google Books as its online bookstore, which potentially gives you access
to the millions of books they've scanned from university libraries and such.
------
winsbe01
is it just me, or does it look shockingly similar to the Kindle (overall
shape, screen size, etc.)? I am a Kindle owner, and love it, my only quibble
being the DRM they use. on that front, the 3mill+ public domain books on
Google Bookstore is a winner. But I don't think it will be a "Kindle killer".
------
moskie
Is there a hope that with more (successfully) competing e-book vendors, more
DRM-free books will be in the works? As a Kindle owner, the biggest thing
preventing me from considering buying a Google books based device would be the
fact that I couldn't read the books I previously purchased through Amazon on
it.
Is there any hope that DRM'ed e-books will have a similar trajectory as MP3s
did?
~~~
winsbe01
I have thought about this as well, and I hope (and think) there will be. The
Kindle DRM wasn't a problem initially, because it was one of the first widely
sold e-readers with a huge company backing it. However, as more devices come
out that can compete with the quality of the Kindle, a change will need to
happen.
I was about to type "No one will buy Amazon e-books without a Kindle, and no
one will buy a Kindle if they can't read the books elsewhere", but I'm
backstepping. The current Kindle business model reminds me of Apple/iTunes,
something that makes me shudder. If they can still make enough money off DRM
content, then maybe the incentive to make it open is small.
~~~
moskie
And the problem with saying "No one will buy Amazon e-books without a Kindle,
and no one will buy a Kindle if they can't read the books elsewhere," is that
it's already untrue. With the Kindle app on iOS and Android devices, plenty of
people are doing just that. Don't really have numbers to back that up,
though...
So another option came to mind: E-readers that can run apps. Apps like Kindle,
or Google Books. That might just be wishful thinking, though.
~~~
CrazedGeek
The Nook 2nd Edition is apparently really easy to root and install apps on:
<http://forum.xda-developers.com/showthread.php?t=1138564>
~~~
moskie
Right, that's true, but I was thinking just of e-ink devices. It's a tough
sell, though, since e-ink devices are really only good for reading, so there
might not be much incentive for creating an open e-ink device that can run 3rd
party apps.
Amazon has started to allow 3rd party apps on the Kindle, but I don't think
it's been very effective or popular, and I don't expect a Google Books app on
there any time soon.
~~~
CrazedGeek
The N2E is an e-ink device :)
<http://www.barnesandnoble.com/nook/index.asp?PID=35699>
I agree on the openness. I don't get the impression that many manufacturers
are interested in an e-ink Android device though- B&N and Notion Ink are
basically the only ones, IIRC.
------
duiker101
As long as Kindle has free 3g there is no way i'm going to change it. It can
be bugged as hell, but the possibility of browsing my email or HN and
someother sites from almost _everywhere_ for _free_ cannot be easilly
replaced.
------
iloktr
They call it Google's ebook reader, but did Google have anything to do with it
other than providing the content?
| {
"pile_set_name": "HackerNews"
} |
Searle's Chinese Room: Slow Motion Intelligence - Liron
http://lshap.blogspot.com/2011/02/searles-chinese-room-intelligence-in.html
======
SamReidHughes
However: For all we know, it could be that consciousness only happens at
certain speeds or for certain physical layouts of computation.
They don't have similar structures -- one uses parallel computation, the other
is sequential.
~~~
Liron
Agreed. Regarding parallel vs sequential...
I'm pretty convinced that any Turing-complete system in the physical universe
can exhibit consciousness. And I expect that a description of consciousness
will take the form of a map from descriptions of qualia to descriptions of
_algorithms_ , rather than mapping to descriptions of something lower level.
In other words, I expect that the best description of consciousness will talk
about mathematical substructures of the theory of computation.
Talking about parallel vs sequential architecture is an example of such a
substructure, and therefore may be worth including in a description of
consciousness.
| {
"pile_set_name": "HackerNews"
} |
Twitter Has A (Secret) Reputation Score For Every User - obilgic
http://techcrunch.com/2010/11/17/twitter-has-a-secret-reputation-score-for-every-user/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29
======
dotBen
When I went to the launch/welcome party for Klout in the building for Twitter
HQ, I invited a Twitter employee friend to "come down from the top floor" to
check it out.
He walked in and said "yeah we have this internally already, these guys are
fucked".
I think the idea of a reputation score around your twitter account makes total
sense, and is definitely a marketable product. And the Klout team are a great
bunch of guys.
But when Twitter offers it there is no real point to a 3rd party option which
is going to be inherently less accurate.
Yes, Klout now calculates Facebook authority too but unless you have created
an account and linked your profiles its meaningless _(and what %age of folks
have done that?)_. Klout dodged a bullet today in that Twitter didn't announce
this info going into the API, but I don't think that's far out...
~~~
urbanjunkie
"inherently less accurate"?
I can see that there's potential for Twitter's offering to be good, but no
guarantee that it's perfect.
'Reputation' is a nebulous concept, and Twitter certainly don't have a lock on
how it's defined and calculated. Who knows which centrality measure they've
chosen, how they're valuing and determining influence, or any number of other
measures. The main advantage they have is of being closer to the raw data (and
potentially better access to the entire data set, although a rigorous sampling
methodology could take care of that)
~~~
dotBen
_"The main advantage they have is of being closer to the raw data"_
That's my point. Both companies have the ability to come up with the most
amazing formula for working out reputation but applied equally, Klout is going
to be "inherently less accurate" because they have to trickle feed the data
via the API and many interesting data points that may be stored in the Twitter
database are not exposed on the API.
Twitter has direct access to all data points.
~~~
urbanjunkie
Like what though - if you have enough information to generate a social graph,
and you have access to the firehose, then you're going to have what you
need. Social scientists will often use a sample set as a very acceptable proxy
for full data set analysis
(<http://www.insna.org/PDF/Connections/v18/1995_I-1-9.pdf>)
There's little evidence that the Twitter guys have come up with an
outstandingly amazing reputation algorithm and that this niche has been
filled. The opaqueness of Klout doesn't fill me with confidence that their
methodology is valid or truly meaningful.
------
abraham
Really TechCrunch? You are surprised by this?
~~~
arfrank
That was my response too. It's like when Apple revealed their iPhone testing
labs. Obviously these companies have internal tools/stats that people have no
need to know about, but definitely make their company better able to perform.
~~~
protomyth
The iPhone testing lab story at least had some cool pictures.
------
Xuzz
I'm also guessing this is used in the Top Tweets algorithm, which seems to
favor lesser known users in the amount of interest needed to get chosen, but I
have no way to back that up.
------
bkudria
If this news is surprising to you, you should rethink your mental model of,
well, the business of tech startups.
------
joshu
one assumes that this is just the converse of a spam score...
| {
"pile_set_name": "HackerNews"
} |
JavaScript right on the hardware - chinchang
http://technical.io/
======
eob
First off, this is cool. It is tremendously exciting to see the bar for
hardware hacking getting lower and lower. To the people complaining "Why JS? C
is fine!", remember that once the complaint was "Why C? ASM is fine!".
At the same time, I can't help but grin that we on the CS side find a way to
erase all the gains in performance and efficiency as soon as the EE guys make
them.
There has to be some kind of universal constant: the limit, as technology
proceeds into the future, of the execution time of "Hello World" is some fixed
number. Because as soon as we get better hardware, we invent an even weightier
runtime environment to slap on it. ;)
~~~
10098
I don't have much problem with programming an embedded device in a high-level
language, but why JS specifically? Why not Lua, which just as fast, and is
better than javascript in many respects?
~~~
coreyja
Slightly off topic, but is it just me or has Lua gotten a lot of attention on
HN in the past weekish? I haven't heard much of Lua before but recently it
seems to have come up a lot more.
And even more off topic (sorry), what makes Lua better/different than other
scripting languages?
EDIT: On jacobwcarlson suggestion, I Googled Lua vs Python, and found this
Wiki, [http://lua-users.org/wiki/LuaVersusPython](http://lua-
users.org/wiki/LuaVersusPython) Not a speed comparison, but gives a fair
amount of difference in the actual languages. And yes it is on a Lua users
site, so may be biased, but on my very light reading didn't seem too bad.
~~~
Aldo_MX
Lua is a language designed to be embeddable; Javascript is a language that
didn't even have the chance to be properly designed.
Although I'm one of those Lua "unadopters", for me it's better to simplify
stuff for my audience, and I have seriously considered using JavaScript for my
next project instead of Lua.
~~~
TheZenPsycho
Both javascript and lua were designed for very similar goals. It just so
happens that the designers of Lua had more time to do it, plus they aren't
burdened with the need to always be backwards compatible, so Lua (now at
version 5) breaks scripts written for older versions.
So they each have their trade offs.
~~~
Aldo_MX
I'm pretty confident that if Lua were as widespread as JavaScript is, a
backwards incompatible change would be less likely to happen, although with
Corona SDK, I'm also confident that Lua will have an opportunity to shine :).
~~~
TheZenPsycho
I"m not sure that "widespread" really would do it. The other difference
between Javascript and Lua is that- while Javascript has been embedded in
adobe software, modified to become actionscript, and is the scripting language
in some games, its primary use has always been the web- with its specific DOM
api. The web carries with it the backwards compatibility burden.
Lua doesn't have to be backwards compatible because nobody _has_ to upgrade to
the latest version of lua- and there isn't much of a standard library to break
anyway- All the things you'd traditionally use a library for are provided by
the outer application lua is embedded in- an application likely not written in
lua itself- and so if you do decide to upgrade your app/game the only thing
you break is individual scripts.
~~~
Aldo_MX
With widespread adoption, more security-related issues and bugs get
discovered; with new security-related issues and bugs come new fixes, and
those fixes (especially the security-related ones) IMO are the incentive to
update.
Whenever you release a new version with backwards-incompatible changes you
discourage your audience to update and they fragment.
Your product exist because it solves a problem, and the moment your product
gives more problems than the ones it solves, you discourage your users to use
your product and they begin to look for options, and if an option that meets
their needs doesn't exist they will stall (ex. Windows XP).
This means extra effort, since you have to maintain at least the most used
major versions, and at the very least provide security updates to your users.
Fragmentation is a <s>headache</s> migraine that you want to avoid whenever
it's possible...
With widespread adoption you have to evaluate deeply whether the
fragmentation troubles are worth the value the backwards-incompatible changes
will provide. It's not the same trouble having a hundred users as having a
billion users.
~~~
TheZenPsycho
You're not wrong, but I think with lua the fragmentation problem is pushed out
to the app developers who embed lua in their various games and applications.
Do the developers of lua concern themselves much with this? I suppose
slightly, but mostly in the form of documenting what's different and how to
convert old code to new code. The lua philosophy seems to be that a
simpler/smaller implementation trumps backwards compatibility.
~~~
Aldo_MX
You are also right, but when a project has small adoption, usually the users
are people that like the project and feel identified with it.
But suddenly the project becomes a product, and the product gets adoption
(more and more all the time).
The users will use your product for different reasons (e.g. it's what
other people use / it's what solves problem X best / it's what I got
recommended / etc.), and the users that like and feel identified with _the
project_ become a small niche in comparison with the people that need the
product.
The users become the ones that shape the product, and you have to balance
between Feature A, which is what you want to do for the project, and Feature B,
which people are telling you they need for the product.
Will you develop exclusively "Feature A"s, because you are following a
philosophy?
Think for a moment what would have happened if Microsoft had adopted Lua for
IE3 instead of reverse engineering Netscape's JavaScript (assuming that IE
would have still won the 90's Browser Wars thanks to Lua). Are there flying
cars in that parallel universe? Is Lua uglier than today's JavaScript? Has
Douglas Crockford written _Lua: The Good Parts_ yet? :)
------
farnsworth
Forgive me for not being hip but why try so hard to put JS in new places? It
seems to be just an unfortunate historical accident that JS is one of the most
popular languages in the world - does anyone actually _like_ it compared to
other modern scripting languages? Could we invest in CoffeeScript instead, at
the very least?
~~~
benihana
Because everyone* knows JavaScript. I'd wager that the vast majority of
professional software engineers work on the web, or very close to the web, and
JavaScript is the _lingua franca_ of the web. I mean just look at the front
page of Hacker News on any given week. It's full of talk of JavaScript and new
JavaScript libraries and new things to do in JavaScript. When you think about
how many more people this can reach without them having to learn yet another
language for a very specific thing, it makes sense.
*hyperbole, please forgive me for not being completely literal.
~~~
blackhole
I may know javascript, but my god I wish I didn't have to.
~~~
javis
Many people do enjoy writing JavaScript and view it as just as good as Python
and others.
~~~
PommeDeTerre
I've dealt with this kind of programmer on numerous occasions. While they
may claim to know other programming languages, they usually don't know much
about them at all, in reality. If they do know another language, it's PHP,
which is just about as bad as JavaScript in most respects.
Any programmer who has real experience with a variety of different programming
languages will become very aware of how inferior JavaScript inherently is.
JavaScript's problems go much beyond the quirks or oddities we see with
programming languages in general. They're severe deficiencies (the lack of
proper modules or namespacing, and the lack of proper class-based OO), or
inexcusably stupid design flaws (semicolon insertion, its broken scoping
rules, its broken type system, its broken prototype-based OO, its broken
comparison operators, and so on).
No intelligent, experienced, self-respecting programmer will want anything to
do with such a ridiculously flawed and broken language. They surely will not
see it as good as Python or any other language that isn't rife with the
unjustifiable stupidity that permeates every aspect of JavaScript.
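To be concrete about the comparison operators and the type system, here are a
few standard examples (you can paste these into any browser console; the results
are what the language actually returns, the commentary is mine):

    [] == false        // true  ([] coerces to "" and then to 0)
    "0" == false       // true
    null == undefined  // true, yet null == 0 is false
    NaN === NaN        // false
    typeof null        // "object"
    "1" + 1            // "11"  (string concatenation)
    "1" - 1            // 0     (numeric subtraction)

None of this is exotic; it's the everyday behavior of == and the implicit
coercions.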
~~~
thedufer
Well, that's just unnecessarily offensive. I'm proficient in Python, Java, and
C, and was at some point also fluent in PHP, Ruby, and Lisp. While I wouldn't
particularly enjoy coding in Javascript, CoffeeScript is, to date, the best
language I've worked with. I debug in JavaScript, so I'm not that far away
from it.
Why is "class-based OO" necessary? There's nothing wrong with prototypal
inheritance. The way modules are done in Node is pretty powerful compared to
other languages I've used. The scoping rules are different, for sure, but I
don't really see why you'd call them "broken". They're internally consistent
and easily comprehensible.
Semicolon insertion is, admittedly a problem. The solution, of course, is to
put in your own semicolons. If you do that, and use something approaching
reasonable whitespace conventions, there isn't really a problem. JavaScript's
`==` the like are, for sure, broken, but that's nothing a `===` can't fix.
It's not like the other languages you mentioned wouldn't have issues without
reasonable conventions.
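For what it's worth, here's a minimal sketch of what I mean by prototypal
inheritance being fine on its own terms (plain ES5-era JavaScript, names
invented for the example):

    // Objects delegate directly to other objects -- no class declaration needed.
    var animal = {
      describe: function () { return this.name + " is a " + this.kind; }
    };

    var dog = Object.create(animal);   // dog delegates to animal
    dog.kind = "dog";

    var rex = Object.create(dog);      // rex delegates to dog
    rex.name = "Rex";

    rex.describe();                    // "Rex is a dog", resolved via the prototype chain

CoffeeScript's class syntax is essentially sugar over this same mechanism, which
is part of why I'm comfortable with it.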
~~~
PommeDeTerre
There's nothing "offensive" about pointing out realities, even if they may be
painful for some people to accept.
I find your arguments somewhat odd. You do openly admit that you "wouldn't
particularly enjoy coding in Javascript". People don't say such things about
good programming languages, especially when arguing in favor of them to some
extent.
I also find it odd that you argue that there's nothing wrong with prototype-
based OO, yet claim that CoffeeScript is the best language you've worked with.
One of CoffeeScript's most useful and important features is that it adds very
simplistic class-based OO to JavaScript. Go look at the example code in the
"Classes, Inheritance, and Super" section of the CoffeeScript home page to see
what I'm talking about. The CoffeeScript code is tolerable; the JavaScript
that's outputted is horrendous. Hand-written JavaScript is often just as bad,
if not worse.
The various JavaScript "module" systems are purely hacks. They abuse existing
language features to fake modularity, poorly. They're nothing like the proper
module support of other languages. And at least you admit that semicolon
insertion and the broken comparison operators are serious issues. Many other
JavaScript advocates refuse to, for whatever reason.
There's nothing wrong with admitting that JavaScript is a really bad language.
I think you know that it is, and want to admit it, and I think you should. It
doesn't deserve to be defended, because its problems are generally inexcusable
in every respect.
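To illustrate the class point: the pattern below is a hand-written approximation
of the kind of prototype plumbing that CoffeeScript's class syntax generates for
you. It is not the actual compiler output, and the names are made up for
illustration:

    function Animal(name) { this.name = name; }
    Animal.prototype.move = function (meters) {
      return this.name + " moved " + meters + "m";
    };

    function Snake(name) {
      Animal.call(this, name);                          // "super" constructor call, by hand
    }
    Snake.prototype = Object.create(Animal.prototype);  // wire up the prototype chain
    Snake.prototype.constructor = Snake;
    Snake.prototype.move = function () {
      return Animal.prototype.move.call(this, 5);       // explicit super dispatch
    };

    new Snake("Sammy").move();  // "Sammy moved 5m"

Every piece of that wiring has to be gotten right by hand in plain JavaScript,
which is exactly the kind of boilerplate I'm calling inexcusable.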
~~~
mwcampbell
> There's nothing "offensive" about pointing out realities, even if they may
> be painful for some people to accept.
The problem is that your comments about JS tend to contain more hyperbole and
opinion than undisputed reality.
JS obviously has flaws. But so does English. It's good to have a natural
language that a large percentage of the world's population, across
nationalities and ethnic groups, can speak. I think the same applies in
programming. Programming languages are not just for telling a computer what to
do; they're also for collaborating with other programmers. And once a code
base is written in a particular language, it's often hard to make a case for
rewriting it in a different language. So why not use a language that is
popular, is cross-platform, is vendor-neutral, has multiple optimized
implementations, and is likely to remain popular and well-supported for many
years to come? JavaScript is that language.
FWIW, I have much more experience with Python and Lua than with JavaScript. I
also do some work in C++. Yet, despite JavaScript's flaws, I'm defending it as
a general-purpose programming language.
------
nawitus
From the title I expected the CPU to actually run JavaScript, like a certain
decades-old computer (whose name I can't recall).
~~~
phpnode
you probably mean a Lisp Machine -
[http://en.wikipedia.org/wiki/Lisp_machine](http://en.wikipedia.org/wiki/Lisp_machine)
~~~
nawitus
Found what I meant: B5000 directly supported higher level languages.
~~~
unwind
Thanks, that
([http://en.wikipedia.org/wiki/Burroughs_large_systems#B5000](http://en.wikipedia.org/wiki/Burroughs_large_systems#B5000))
seems like a very interesting processor. A few tidbits that caught my eye:
- All code automatically reentrant
- Partially data-driven tagged and descriptor-based design
- First commercial implementation of virtual memory
------
twog
So much negativity in this thread & on HN in general. Not everyone has
hardware experience, and this looks great for newcomers.
~~~
hardwaresofton
To add to this, abstracting away from hardware is exactly what C (which most
people consider low-level) was built for. No one wants to write assembly, and
even fewer people want to write 0s and 1s or punch cards.
Successfully executed abstraction is a wonderful thing; people have been
trying to abstract away from C (while retaining its performance) for decades. I
welcome newcomers - maybe someone will get it right.
~~~
damian2000
Agreed, the state of the art for most microcontroller manufacturers these days is
to push out a (usually buggy) Eclipse-based IDE with undocumented C libraries.
They all have their quirks and issues - no one has got it right yet from the
developer's point of view. This seems like a step in the right direction,
albeit taking a big performance hit.
------
chad_oliver
This is very cool, but I don't see how it can compete with boards like the
BeagleBone Black. The BeagleBone Black is $45 for a 1GHz CPU and 512 MB of
memory, yet the access to low-level hardware is just as good.
~~~
Florin_Andrei
Very, very different things.
The BBB and the Raspberry Pi are designed to be relatively high-power (both
computational, and power draw from DC) devices running a true multiuser OS.
This thing is more akin to an Arduino Micro or a Teensy - a low-power
controller that could run a very long time on a tiny battery, no OS to speak
of, just a single loop of essentially real-time code.
I just made a hardware clock for my PC (7-segment LED display mounted in a CD-
ROM bay slot). I used an Arduino Micro to drive the display.
I may build a dedicated media server at home. A RasPi or BBB would be perfect.
I'm thinking to launch a stratospheric balloon. I need something to hold
together and drive a GPS sensor, temperature sensor, VGA camera, SD card, and
radio transmitter. Total weight and power consumption are severely limited. An
Arduino or Teensy would be great.
Do a wall-mount big LCD screen at the office, showing the vital stats of our
website in real time, for all to see? A RasPi or BBB.
Or you could go even more bare-metal and do everything with an AVR that costs
$1 and a few components that you recover from the last floor sweep, like this:
[http://florin.myip.org/blog/how-make-halloween-creepy-
blinki...](http://florin.myip.org/blog/how-make-halloween-creepy-blinking-
eyes-atmel-avr-microcontroller)
See the differences? Horses for courses.
~~~
ippisl
I agree about the arduino and batteries, but this board is a bit different.
It uses external memory which takes more power. It uses javascript which could
take much more power. And the wifi also might not be low power. So it's not
clear yet how low power it is.
------
taurath
I'll take "most terrifying things you could tell an electrical engineer in
2005" for $2000, Alex.
~~~
kyzyl
I had this thought as well. As a physics major and an EE major, it's a
(somewhat humorously) terrifying concept, at some level.
Now this has more or less been discussed to death in the threads above and
below, but while I don't necessarily think running JS on a uC is a _great_
idea, I don't see a problem with letting non-hardware-versed folk write low
level code. People who know hardware will probably not use it for anything
critical, and if it makes someone's life easier without endangering others or
utterly diluting the community (knock on wood...) then so be it. There's no
reason to be elitist about low level programming.
At this moment my workspace consists of a bit of code for an 8bit
microcontroller, a FPGA layout + VHDL, and a mixed signal high speed PCB
layout I've been laboring over for the better part of two weeks. Take that,
"low level" C! Now I could go off about how superior and/or necessary my low
level methods are, but my friend who's working on an ASIC design might say
something similar to me. There's always a lower level. As always it's a little
bit about preference, and a lot about using the appropriate tool for the job.
I might have a problem if web developers started cramming JS onto arduinos and
calling themselves competent hardware engineers, but (and I think most people
in the field would agree with me) I suspect the likelihood of that happening
is negligible. It becomes apparent very, very quickly when someone is
pretending to know how to hardware.
------
malandrew
So a lot of people have been complaining: "Why not language X, Y or Z instead
of JavaScript?"
This to me makes me think that there is a space out there for a board that has
some equivalent to vagrant/docker but for microcontrollers, where you can just
flash the device with a image supporting a language of your choice.
Near as I can tell, there is nothing I read on the product launch page that
says that JavaScript is supported at the physical hardware level.
------
proee
Javascript doesn't support integers, so it doesn't seem like a good language
for programming low level hardware. What about bit manipulation (XOR, AND,
Shifting, etc)? This is critical for a lot of serial data communications and
I/O controls.
~~~
teebrz
JS supports integers; it just doesn't have an integer type so you can't always
store, operate, etc on them efficiently (though a smart engine can do some of
that for you). It has bitwise manipulation operations which treat the Number
as a 32 bit int. There are also Typed Arrays which make it easier to work on
raw binary data. Though yes, it can occasionally get a little finicky trying
to interface a high level dynamic language with lower level stuff.
Anyway, for this I'm sure there will be a lot of libraries that handle that
for you and expose a fairly high level api; like the blinking led in the
example.
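A rough sketch of what that looks like in plain JavaScript (nothing here is
specific to any particular board; it's just the standard language facilities
described above):

    // Bitwise operators coerce the Number to a 32-bit integer internally.
    var flags = 0x0F;
    var masked  = flags & 0x03;   // AND        -> 3
    var toggled = flags ^ 0xFF;   // XOR        -> 240 (0xF0)
    var shifted = flags << 4;     // left shift -> 240 (0xF0)

    // Typed arrays give a fixed-width view over raw binary data,
    // handy for building up packets for serial protocols.
    var packet = new Uint8Array(3);
    packet[0] = 0xA5;                   // sync byte
    packet[1] = shifted & 0xFF;         // payload
    packet[2] = packet[0] ^ packet[1];  // simple XOR checksum
    console.log(masked, toggled, packet);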
~~~
esailija
And when the engine cannot do that for you, there is always Math.imul
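For example (just a sketch of the difference; Math.imul is available in newer
engines):

    // Plain * goes through double-precision floats; Math.imul does a
    // true 32-bit integer multiply with wrap-around, like C's int32 math.
    var a = 0x7FFFFFFF;            // 2147483647
    console.log(a * 3);            // 6442450941  (double result)
    console.log(Math.imul(a, 3));  // 2147483645  (wrapped to 32 bits)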
------
tehwebguy
This is cool!
If you are a veteran programmer it may seem dumb but there are plenty of
people who only know JavaScript (& HTML). Some of those people will be utterly
blown away that they can control actual, physical "stuff" with those skills.
------
jevinskie
Nifty! I presume this is running on a Linux kernel? How much memory does node
use? I would think that kernel + JS VM would chew up quite a bit of your 32 MB
of RAM.
~~~
zokier
> I presume this is running on a Linux kernel?
I doubt that. It has M-series ARM microcontroller, Linux is usually used with
A-series ARM CPUs.
~~~
zhemao
How would you run Node on it otherwise? If it's not running an operating
system that Node already supports, it means they've done the very large amount
of work needed to port a Javascript interpreter to a bare-metal environment.
I'd be highly impressed if they had actually managed to do that, but I'm
doubtful. More likely they are using some stripped-down Linux kernel and
userspace. A combination of uCLinux and BusyBox should work rather nicely.
------
hardwaresofton
It is very rare that I actually put my email into one of those "sign up for
updates" things, this is pretty awesome, can't wait to hear back from you guys
------
peterwwillis
This is the most elaborate troll i've ever seen. Kudos to the team for
creating what is literally the physical embodiment of everything I hate about
technology.
~~~
delluminatus
In contrast, this troll is quite transparent.
------
knodi
Please, I don't need more JS in my life, I need less JS in my life.
~~~
smilekzs
Many others do seem to need more, though.
------
tambourine_man
180 MHz ARM Cortex-M3 LPC1830
32 MB SDRAM
I'm amazed that such wimpy hardware can run modern JS satisfactorily.
~~~
test-it
It won't run satisfactorily.
Reminds me of that time Sun made the Java CPU and the JavaStation. A hardware
implementation of the JVM. It ran 20 times slower than the Microsoft JVM on an
average Windows PC.
------
ontouchstart
JavaScript is the most deployed dynamic embedded programming language in the
world (consider all the web browsers on desktop computers and mobile devices).
If we expand this environment to consumer hardware, we also expand our view
of UI/UX to a different level. We should thank Moore's law.
~~~
nfoz
Are you implying that JS developers know more about UI/UX than others?
Most websites have horrible usability, ranging from "everything in CSS
popups!" to "every site uses a different widget toolkit, deficient in new and
exciting ways".....
I hope I just don't understand your comment.
~~~
ontouchstart
I am not a big fan of the visual UI/UX either. My point is that there is more
to the UI/UX than widgets and popups. With JS on hardware, we might have a new
way to design HCI other than those pretty wireframes in Photoshop.
IMHO, JS is a language designed for APIs, which is good for exploring ideas and
prototyping. There is always a way to push the performance to another layer,
like CSS boosted by GPU.
------
tn13
This is precisely what I wanted to work with.
~~~
dkuntz2
Why? Because it uses JavaScript? I'd rather stick with my Arduino and use C. I
don't get the whole "JavaScript Everywhere" meme, I'd rather use almost any
other modern language (note the almost).
~~~
Tichy
You prefer C over JavaScript? Header files? Pointer arithmetic hell?
Preprocessor Macros? Manual Memory Management? Seriously?
To each their own, though.
~~~
dkuntz2
No. I prefer C over JavaScript for embedded devices. And actually in most
places, too.
I know I'm weird, but I really like pointers, because I really like being able
to manually setup data structures, and enjoy the power provided by pointers.
Manual memory management isn't horribly fun, but for embedded devices I would
rather be in charge of that over the chip having to do it for me. It means
that if there's ever a memory overflow, it's my own damn fault, and I have the
ability to fix it.
JavaScript is an acceptable scripting language. It's not my favorite, but it
works. I just don't think scripting languages should really be used on
embedded devices.
~~~
lttlrck
Not even for scripting them?
~~~
zhemao
There's not really much need for scripting on an embedded device. Embedded
systems are meant to do a single task and run continuously. If he is already
comfortable with C, he clearly doesn't have much use for a language like
Javascript when it comes to microcontrollers.
------
aufreak3
It was both a pleasure _and_ a pain for me to read about this! Pleasure -
because a language with closures is finally getting "closer to the metal"
(even if it means an abstraction layer sitting in between). Pain - because
that language is not a Scheme/Lisp!
This would be so much cooler if I could tap into a REPL remotely and blink
out Morse code on the LEDs :)
PS: I have no idea what makes most of us go "cool!" whenever some form of
remote control of a hardware device is presented :)
------
zdw
I'm curious about performance on this, compared to something like the Arduino
YUN, which is basically an OpenWRT MIPS system (which uses a Lua UI by default)
and an Arduino tacked on the side:
[http://arduino.cc/en/Main/ArduinoYUN](http://arduino.cc/en/Main/ArduinoYUN)
The CPU in particular seems quite underpowered, assuming they're comparable in
clock speed, but I'm not familiar enough with the M-series ARM cores to give a
proper opinion.
~~~
errordeveloper
This is way better than the Yun, IMO. The Yun is an awkward design; I find very
few reasons they had to put an 8-bit AVR and an AR9 on one board. WTF. I really
like this solution with a beefy LPC18xx and CC3000, it's totally cool by me.
------
gfwilliams
I've been working on another embedded JavaScript project - first posted on
here almost a year ago, and soon to go on KickStarter:
[http://www.espruino.com](http://www.espruino.com)
This is really interesting though. It looks like they've got Linux and node.js
into 32 MB of RAM, which is seriously impressive. It seemed as if people were
trying and failing on Carambola (but that may have been because it didn't use
an ARM CPU...)
------
outside1234
Did anyone find details (or have an idea) on power consumption?
<selfishPlug> I'm the maintainer of nitrogen
([http://github.com/nitrogenjs/service](http://github.com/nitrogenjs/service)),
which is a node.js based project to provide web services and client libraries
for devices like this. Check it out if you are interested in devices like
this! </selfishPlug>
------
mistercow
Hmm, I can see some potential difficulties with making JS work efficiently on
that MC. It only has single precision floating point instructions, for one
thing, and I can't tell how many cycles those instructions take. I'm guessing
they'll just do a variant of JS that uses single precision, but even so, it
seems like it would be hard to squeeze any kind of performance out of it.
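If you want a feel for what a single-precision variant would cost in accuracy,
Math.fround (present in newer engines) rounds a value to the nearest 32-bit
float. This is purely illustrative and says nothing about what the board
actually does:

    console.log(0.1);                    // 0.1 (nearest double)
    console.log(Math.fround(0.1));       // 0.10000000149011612 (nearest float32)
    console.log(Math.fround(16777217));  // 16777216, since 2^24 + 1 has no exact float32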
~~~
diydsp
I took a brief glance at the LPC1830 and it seems it doesn't have hardware
floating-point at all. (None of the mainstream Cortex M3 products that I've
seen do). Do you have better info on this?
So, I believe its FP support is through software and could be written to be
double or greater precision. If this is true and if JS can only do floats, it
would be bad news for many basic mathematical operations running on this chip.
But hey, as someone pointed out above:
People don't want performance. They want libraries.
But with no FPU, count me out. I'd rather have my stm32f4. Just sayin'.
~~~
thedufer
> If this is true and if JS can only do floats, it would be bad news for many
> basic mathematical operations running on this chip
By spec, JS only has double-precision floats. However, since that means that
32-bit integers can be exactly represented, V8 optimizes to integer
calculations unless actually using floats becomes necessary. I would guess
that on this device floating point operations are done in software, but also
that they rarely occur in practice.
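A small illustration of that behaviour (a sketch only; the "|0" idiom is simply
a common way to tell the engine you intend 32-bit integer math):

    // Integers survive as integers until a float sneaks in.
    function addInt(a, b) {
      return (a + b) | 0;        // |0 truncates to a 32-bit int, a hint engines can exploit
    }
    console.log(addInt(5, 7));        // 12
    console.log(Math.pow(2, 53));     // 9007199254740992: integers up to 2^53 are exact in a double
    console.log(Math.pow(2, 53) + 1); // still 9007199254740992, past that point precision runs out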
------
bcent
Certainly an interesting concept ... and I'm sure there will be some cool
things that come out of this project.
I'm not sure I like the idea of a high-level language trying to control low-
level hardware (it seems almost counterintuitive)... however, if you wanted
such a thing, Node seems to be the way to do it, from both an accessibility
and a speed perspective.
------
mcdougle
Very interesting. I feel like I've been seeing a major trend towards
Javascript-based-everything the past few months!
I wasn't sure if I should link this here, but this article really sparked a
train of thought that brought me to write this:
[http://blog.mcdougle.net/?p=54](http://blog.mcdougle.net/?p=54)
------
eplanit
Language-specific hardware has been done before, but do these really make
sense? I remember excitement at the prospect of "Java CPUs". Did any of them
become commercially or even technically important?
[http://en.wikipedia.org/wiki/Java_processor](http://en.wikipedia.org/wiki/Java_processor)
------
6ren
So, it's not actually JS in silicon (would that add much performance anyway,
since JS is so dynamic? at best it would be asm.js-like, I'd guess...), but a
supported language for an embedded device. I had a ghastly bizarro moment
there: JS strikes me as bash + C syntax (not tcsh)... in silicon...
------
ulisesrmzroche
I for one am really happy the JS era is here. Finally! Also don't listen to
the haters. Y'all know most of us make web apps, so stop fronting. Though, I
guess now I can make a battlebot too! JS is shaping up to be a great language
and environment as more and more people get on board.
~~~
ulisesrmzroche
Actually, on second thought, if anyone wants to join me in starting a HN
battlebot league, hit me up!
------
damian2000
Great idea and marketing, but I just wish they'd chosen Go - I would have
bought one in a second. Correct me if I'm wrong, but the lack of sane, static
typing in JS just seems crazy to me when it comes to writing low-level code.
------
Sealy
That looks very, very cool. I'm really excited to see how this will change the
game.
I just wanted to ask the readers here... anybody notice the sticky tape
holding it together in the second picture?
------
SlaterVictoroff
Had the pleasure to try it out last week and I found it incredibly easy to get
things up and running. I think it has a lot of potential. Excited to see where
this goes.
------
K0nserv
As far as I understand, not having an Android or iOS device will severely
cripple the user? Is this the case? Are there any plans for Windows Phone?
------
kenster07
There will always be a need for performance beyond what JS can provide, in
time and memory.
So don't worry, JS will not take over... completely.
------
marcamillion
Wow....this is soo cool. Even though I am no fan of JS, I can't wait for Ruby
to be embedded on hardware like this.
Oh the possibilities!
------
smanuel
"the assembly language for the web" just got renamed to... "the assembly
language". Very cool.
------
oscargrouch
A good example of hardware hobbyism and of the modern-day electronics
renaissance.
Maybe it's not this one, but the next computer hardware revolution will be born
like this, the same way Jobs and Wozniak did it in the 70s - from pure passion.
I hope my kids create their own gadgets like we did with Legos in our day.
Long live the hacker spirit!
------
dnautics
will it support asm.js?
------
dpweb
whole world goin JS crazy
------
kingmanaz
It would be interesting to see scheme implemented in hardware, complete with
primitives intended to be wired to low-level pointer manipulation functions.
Further, it would be interesting to see someone seasoned in low-level C sit
down with a copy of Lions' Unix Commentary and the NetBSD sources and attempt
to implement a minimal Unix atop the scheme hardware using only said
hardware's scheme dialect ("SysScheme"?).
~~~
chewxy
So, lisp machines of the 80s?
~~~
kingmanaz
No, scheme machines of the 10s.
------
marssaxman
I feel sorry for the people who need this.
~~~
Widdershin
You know when people talk about negativity in Hacker News comments? This is
what they mean.
~~~
marssaxman
Not sure what you mean by that. I meant exactly what I said: I feel sorry for
anyone whose linguistic toolbox is so limited that Javascript is their best
option for writing low-level code for manipulating electronic devices. It is
hard to imagine a more profound mismatch between the tool and the problem it
is supposed to solve.
~~~
test-it
I feel sorry you lack a basic understanding of linguistics. Any major
programming platform today is too complex for a single individual to
understand completely. One can spend his time learning several languages (and
the platforms they run on) or concentrate on a single language. I'd say it's
obvious the second choice leads to higher productivity.
You don't see professional musicians trying to play 10 different instruments
at world level. Or Olympic athletes competing in 10 disciplines. Or a lawyer
practising in 10 domains. But some programmers like to pretend they have 10
languages in their "linguistic toolbox".
~~~
zhemao
> You don't see professional musicians trying to play 10 different instruments
> on world level.
Yeah, but a violinist wanting to learn how to play the piano doesn't try to
rub the keys with a bow. If you are a Javascript Web Developer, picking up a
microcontroller is already going pretty far out of your domain. You might as
well learn how to use the tools that were already suited for that task.
| {
"pile_set_name": "HackerNews"
} |
Language as a Window into Human Nature (RSA Animate) - trbecker
http://www.youtube.com/watch?v=3-son3EJTrU
======
RiderOfGiraffes
Please, _please,_ don't anyone follow the link to "Drive: the surprising truth
about motivation" and think "Woah! I'll submit that to HN" - and then submit
it.
Please, don't. It's been submitted _many_ times before.
| {
"pile_set_name": "HackerNews"
} |
The Surprising Power of Virtue Labeling - rbanffy
http://nautil.us/issue/61/coordinates/why-you-should-tell-everyone-theyre-honest
======
nstart
I appreciate the ideas of this article (and book). It's a hard one to digest,
and possibly controversial as well. But at least with raising children, it
does seem to work. Especially in the negative way.
There's a pattern parents are supposed to follow now. When a child does
something undesirable, you criticise the individual action. When they do
something desirable, you praise the overall quality.
Eg - Child attempts to pull glass off table. Instead of calling them naughty
or careless, we explain to them that doing things that could break glass is
dangerous to them and the people around.
When a child picks up their toys and puts them back where they belong, we say
"thank you for being so helpful", or "thank you for being so neat".
At least anecdotally, calling children mischievous tends to have this kind of
reinforcement on them where they then start to adopt the label of being
mischievous.
Admittedly, I don't know if it's just confirmation bias. It could be me
thinking of their behaviour differently by avoiding any labels in my head too.
~~~
ThomPete
If that were the case we would live in a very different world. Discipline is
necessary for children; they can't reason about why something is supposed to be
a certain way, so speaking to them as if they could at the age of 5 is the real
danger here.
As they grow older and understand more complex concepts you can start to
educate them on the nuances and reasons, but in my experience they will
automatically start asking you at the right age, and you should be able to just
use that as your trajectory, IMO.
~~~
jacobolus
I’m not really sure what you mean by “discipline”.
Anecdotally, as the parent of a not-quite-2-year-old, I have found that when I
treat 2- to 4-year-old kids at the playground with basic human respect, and
pay attention to their interests and feelings, they end up following me all
around subsequently (and e.g. running over to me when they see me the next
week, wanting to show me their toys and tricks, ...), as if they were craving
non-controlling non-judgmental human interaction they otherwise were not
receiving from other people they interact with on a regular basis.
Watching other parents and caretakers, there are a substantial proportion who
constantly deny kids’ feelings, tell them what they are supposed to feel and
think, baby them by doing things for them that they could do for themselves,
prevent them from doing not-very-risky things ostensibly for safety reasons,
pointlessly force them to do irrelevant things they don’t want to do, etc.
YMMV.
~~~
ThomPete
Thank you for asking. I don't mean "discipline" in a physical way, of course. I
simply mean that I am not going to explain to them why they are not allowed to
do certain things, or why I want them to do certain things, when they are too
young to understand why. I was responding to the idea that you need to explain
to kids why, or we would get mischievous kids.
With regards to the feelings: it's not as simple as you seem to portray. Kids
will use that to get things their way; there are plenty of examples of parents
who accept all their kids' emotions at face value and end up being controlled
by their kids.
So a balance is needed.
And yes I agree that they should do as many things themselves which also for
our kids means getting a lot of bruises for falling down from things (within
reason obviously)
It's all about balance. Don't deny your kid their emotions when they are real
but "call them on their bluff" when they are not. That's my approach at least.
~~~
jacobolus
> _I simply mean that I am not going to explain to them why they are not
> allowed to do certain things or why I want them to do certain things when
> they are too young because they don 't understand why._
You might be surprised. Even when kids don’t fully understand the explanation,
they can understand tone of voice, and might still appreciate that the adult
has reasons for acting and is trying to explain instead of making decisions
capriciously. The act of explaining also forces the adult to think about the
reasons for their decisions, which gives a chance for reconsideration when the
decision was in fact arbitrary. Finally, explaining things gets both parties
in the habit of talking about their feelings and decisions instead of
expecting everyone around to guess them and then getting mad when they are not
understood.
(For a simple example, if we just read the same book 3 times in a row, and I
want to read something else, I can either (a) explain that I am bored with the
book and need some variety to keep my sanity, and let the kid pick the next
book, or (b) realize that I don’t actually care which book we are reading, and
just dive in for time #4; either of those is better than just “we aren’t
reading that book again, we are reading this one” which is just a unilateral
decision.)
> _With regards to the feelings. It 's not as simple as you seem to portray.
> Kids will use that to get things their way there are plenty examples of
> parents who accept all their kid's emotions at face value and end up running
> around being controlled by their kids._
For instance, when someone falls down and is fussing, either “oh get up, you
are fine, that didn’t hurt” or “oh my gosh! you must be really hurt! let me
smother you in kisses and make it better!” are telling the kid what to
feel/think, and if the response doesn’t actually match the severity of the
fall, then this is quite confusing for the kid. A better response is “are you
alright?”; then the kid can decide if they are actually just surprised, or if
they are in pain and want comforting, or if they are seriously injured and
need more significant help.
Again, anecdotally, I have observed that the parents who get run over by their
kids tend to be the same ones who are consistently ignoring or denying the
kids’ feelings (or sometimes largely ignoring the kids’ existence unless they
are misbehaving). Demanding things and throwing tantrums is one of the few
ways the kids find to get the attention they crave.
Accepting someone’s feelings doesn’t mean giving them whatever they want. It
just means listening to what they are actually saying, and then responding to
that in an empathetic way (a good start is to just echo back what they are
saying or what you think they might be feeling).
To any parents out there: I recommend the book _How to Talk So Kids Will
Listen & Listen So Kids Will Talk_.
~~~
ThomPete
Besides the first part, which I just don't agree with, I agree with the other
things, and as I said it's not either/or but finding a balance.
The kids are looking to you to find out what maximises your attention, so
that's what you have to manage before anything else. Of course you are not
just saying "oh get up, you are fine" to someone who is actually crying
because they hurt themselves, but there is plenty of room between that and
someone who cries over every little thing.
So again, balance. Read the situation and react to that.
~~~
jacobolus
In my opinion it is never appropriate to tell someone who just fell down and
is making a crying sound “oh get up, you are fine, stop crying” etc. No
response at all would be better. But you do you.
~~~
ThomPete
Where have I said it was appropriate? Strawman much.
------
westoncb
> Beyond these preliminary observations, though, there does not seem to be a
> well-developed and widely accepted model in the psychology literature to
> explain how character labeling makes such a difference.
Sure there is. The results can by explained by how the brain resolves/reduces
cognitive dissonance:
[https://en.wikipedia.org/wiki/Cognitive_dissonance#Reduction](https://en.wikipedia.org/wiki/Cognitive_dissonance#Reduction)
A central feature of many contemporary cognitive models is the notion of a
'schema,' which can be thought of as a record of beliefs/assertions on some
subject. Cognitive dissonance arises when observations create a contradiction
within a schema; reduction of cognitive dissonance involves modifying the
schema in a way that (as best as possible) accounts for all observations.
(Sometimes there isn't a good way of doing it and some pretty strange behavior
may result.)
Another way of looking at it is that your schemas constrain your behavior so
that the actions you take, insofar as it's possible, are consistent with both
your goals and your relevant schemata.
So if you can actually modify someone's schema for the subject of their self,
you can influence their behavior so that it becomes consistent with that
modification.
Hence, theoretically, you tell someone they're "so compassionate" etc. and
they start playing the part. Now, in practice, I doubt this will do much. Your
statements are a bit of data supporting some assertion that can become part of
their 'self schema,' but there's a whole lot of other data coming from other
places that will conflict with the b.s. you're telling the person.
~~~
yosito
I believe there are also studies about the psychology of not wanting to let
people down and about loss-aversion which also have an application here.
~~~
westoncb
That's a good point. Could definitely be an amplifying factor.
------
peatmoss
I’m not necessarily a fan of telling people “you’re an X person” (other than
whimsically like as in “you’re a gentleman and a scholar”). But I am a fan of
biasing feedback in favor of praising good actions rather than condemning the
faults.
I had two music teachers back in the day. One would call me on every flaw in
a fairly negative way. I got to the point of dreading my weekly lesson. I got
to college and the prof there gave targeted attaboys for things I was doing
well, or (this is key) showed signs of improvement at. I worked much harder
for the attaboys than I did to avoid disparagement.
That’s not to say flaws can’t be corrected ever... it’s just that positive
encouragement is typically more effective than negative signaling in my
experience.
~~~
slededit
There are people that respond better to (constructive) criticism than praise.
To me praise has zero meaning and I can't really do anything with it. A good
criticism however gives me information on how to improve.
------
jdoliner
Follow this author's advice and a generation from now you may find yourself in
a world where none of these virtuous labels have any meaning at all. That's
the real hell right there, imagine a world not where everyone is dishonest,
but where it's impossible to even distinguish honesty from dishonesty, your
mind has been compromised, you can't even articulate the difference between
the two. This is what we've seen happen with the self-esteem movement over the
past 3 decades, no one even really knows what that phrase means, it's just
this weird mantra people like to recite.
~~~
robotkdick
I had the same reaction after reading the article. I have a feeling we're in
for more and more of this sort of "meta" research, which leads to more and
more meaninglessness.
It feels like something out of a Kafka or Orwell or PK Dick novel.
Next up: compassion training!
~~~
andrei_says_
I _do_ teach compassion. Mainly to people who need it in their relationship
with themselves and their children.
It has a lot to do with un-teaching (self-)oppression and habitual cruelty.
It’s a wonderful superpower I have, limited by the fact that I can only teach
it to someone whose request is sincere.
~~~
robotkdick
My comment was a joke about people being required, through mass social
pressure, to self-oppress, as presented in novels such as "1984."
Philip K. Dick also described a machine in "Do Androids Dream of Electric
Sheep?" which injected the right chemicals to emulate emotional
predispositions, such as compassion.
That said, I'm certain your one-on-one training is very helpful to those with
an open heart and mind. My brother had to go to AA after drinking and beating
his son. The whole time he complained, and I couldn't help thinking to myself,
who beats their child and cannot muster up enough compassion to admit they
were wrong. He was later diagnosed as manic, and I've noticed he also
demonstrates a predilection for self-cruelty (per your comment).
He works at Oracle (for example). No disrespect meant to people who work at
Oracle, but I've heard it's not a very easy-going place to work.
~~~
andrei_says_
Thank you for the clarification, I did miss the reference.
Your brother’s story brings me sadness. I see a lot of suffering, both for him
and for the ones around him. I hope he is able to find a way toward healing.
Introspection can be an important step of this process.
Labeling is a violent process as it replaces the person in our relationship
with the label we project onto them, thus confining them to that projection,
taking away from their humanity. It is easy to recognize it when applied with
the explicit purpose of dehumanization (Jews as “vermin”, immigrants/refuges
as “illegals”).
Mental health diagnoses do a bit or a lot of this, too, unless one is able to
intentionally look at the diagnosed as a human being, especially in difficult
situations.
The mention of Oracle makes me think that Pieter Hintjens’ book, Psychopath
Code, may be of interest for you — it looks into the mental health of
organizations.
[https://legacy.gitbook.com/@hintjens](https://legacy.gitbook.com/@hintjens)
------
dgreensp
If you don’t prejudge everyone as lacking virtue, you have no reason to
insincerely tell them they are virtuous to try to make them more virtuous.
If you want to do some good in the world, sincerely appreciate the good in
people and help them feel seen in it.
------
everdev
> The idea is to attach a label to people you know. Labels make a big
> difference. For with that label attached to them, there is a good chance
> that they will try to live up to it. And perhaps the more they care about
> living up to the label, such as honest person, the more they will actually
> become that honest person.
I could see this working in some contexts, but in others I'd be worried about
trying to influence someone's identity or self-awareness.
I'd imagine the pressure of trying to live up to a label you didn't choose for
yourself could be overwhelming for some people.
~~~
_jal
I know people who do this, and I have to resist the urge to bait them.
Frankly, I find behaviors like this manipulative, and have enough of a
cantankerous streak to manipulate back, if I notice it and become irritated
enough.
Another side of this is that I've seen people do this when frustrated, and
after a while it can start feeling nasty and passive-aggressive and difficult
to respond to. Or be around.
------
whatshisface
Hidden beneath the ridiculous claims and the n=3 studies, there's an
observation of something most of us know: if you treat people like they're
going to steal something, they'll pick up on the idea that a lot of other
people must have thought stealing was a good idea, and steal a little
themselves. What I really want to know is, do infantilizing school and college
programs reduce or increase responsible behavior?
------
jl2718
The famous Jim Rohn quote: “you are the average of the 5 people you spend the
most time with” was once clarified to me by a psychologist as “you become what
the 5 people closest to you think you are”.
------
api
I had a related thought a while back: that maybe our extreme political
cynicism is actually fueling a culture of corruption. If you're guilty until
proven innocent why bother?
~~~
jamiek88
That’s the entire plan and point of the propaganda we’ve been under.
[http://www.businessinsider.com/russia-undermine-west-
democra...](http://www.businessinsider.com/russia-undermine-west-
democracies-2016-10)
It’s working a treat.
Never before have intelligence agencies had such direct access to populations.
------
Pete_D
Nit: I would be _astonished_ to hear that any recent clinical trial involving
placebos had taken place without all participants knowing that they might be
given one. FDA guidelines on informed consent[0]: "Procedures related solely
to research (for example, protocol-driven versus individualized dosing,
randomized assignment to treatment, blinding of subject and investigator, and
receipt of placebo if the study is placebo-controlled) must be explained. ...
The description should also provide relevant information about any control
used in the study. For example, whether the control is a medically recognized
standard of care or is a placebo (including an explanation of what a placebo
is)." Searching the web for "placebo IRB" turns up more detailed institutional
guidelines; UCI's is one of the most in-depth[1].
I believe this is relevant because the author's comparison of placebos and
virtue labelling hinges on whether or not the practices are deceptive. But the
norms around informed consent mean that using placebos in clinical trials does
not, IMO, involve deception. I'm not sure if there's a comparable mitigation
to make virtue labelling not deceptive (can you imagine handing someone a
consent form saying "I will sometimes lie to you about whether or not I think
you are a good person, for your own good"?)
[0]
[https://www.fda.gov/RegulatoryInformation/Guidances/ucm40497...](https://www.fda.gov/RegulatoryInformation/Guidances/ucm404975.htm#description)
[1] [https://www.research.uci.edu/compliance/human-research-
prote...](https://www.research.uci.edu/compliance/human-research-
protections/researchers/placebo.html#Protocol)
------
somberi
Tempted to quote Rumi:
"Either seem as you are or be as you seem."
In Turkish:
"Ya olduğun gibi görün ya da göründüğün gibi ol."
------
darkerside
I can't agree with the execution as described. It's manipulative and
dishonest. Instead of squandering virtuous labels at every opportunity, do
something much simpler and more genuine. Simply notice when people do
something good, or right. With time, they'll internalize that feedback, and I
believe the positive effects will last longer because the recipients make the
connection to an actual action they have taken and can sense the authenticity
of the statement in a way that coheres to their actual worldview. Much
different from telling people, "you're honest", when they've done nothing to
indicate it.
~~~
yosito
You're very good at reading the whole article before commenting. ;)
------
Jarwain
It's the idea of the self-fulfilling prophecy!
I feel like horoscopes and zodiac signs do this, in a sense. If you prescribe
traits to an individual based on some quality out of their control, like their
birth date, and those traits are told to them repeatedly throughout their
lifetime (if they're surrounded by the kind of people who would do so), that
individual will grow into those traits.
~~~
jeffdavis
I've always wondered how much your birthday does matter. There are fundamental
reasons, like seasonal changes that may directly or indirectly impact
development. And there are also arbitrary cutoffs for things like school.
~~~
benji-york
[https://www.sciencedaily.com/releases/2010/02/100202101251.h...](https://www.sciencedaily.com/releases/2010/02/100202101251.htm)
------
Sean1708
> In the process the doctor does her best to come across as medically
> authoritative and confident, so that the participants believe they are
> getting an effective drug treatment. Otherwise the placebo won’t work.
I was always under the impression that placebos worked even when patients knew
that they were getting placebos. Is this not true?
~~~
verbify
Presumably the placebo effect is stronger when patients do not realise they
are getting placebos.
~~~
carapace
Nope. There is no magic in the word "placebo". Nor in the fake pills. The
effect is solely due to "set and setting" establishing belief. The "placebo
effect" is on a spectrum with hypnosis. It has nothing to do with the actual
pill or injection whatsoever, those are just _props_, like in theater.
(Sorry to badger the point, it just seems so under-appreciated and
misunderstood.)
~~~
verbify
Do you have studies to back up the claim that placebo is binary and not a
spectrum?
~~~
carapace
I don't understand your question. What do you mean by "binary"?
In any case, I'm saying that the so-called "Placebo Effect" is activated by
the patient's belief that they will heal, which is created by the "story" told
and acted out by the people around them. The word "placebo" has no inherent
effect, nor does the pill, it's all a kind of _hypnotic suggestion_ that
activates a somewhat mysterious ability of people to heal themselves.
The GP said, "Presumably the placebo effect is stronger when patients do not
realise they are getting placebos."
This reflects a common misconception, usually born out of the "fact" that
"fake pills can't cure you".
But it turns out that this "fact" is, in fact, not true. _Fake pills can cure
some people some of the time._ This fact is as established as a fact can be,
over and over again, in thousands of studies and cases.
Imagine someone with an illness and who has never heard the word "placebo" or
anything about it. Maybe you two are stuck on a deserted island. The point is,
you can pick up a bit of bark or shell or a pebble or something, wave your
hand over it and mumble some magic words, and tell them _with total
conviction_ that it has a good chance of curing them... _because it's true._
So my first point is that it doesn't directly matter whether people know
they're getting a fake pill or not, it depends on what they've been led to
believe about the efficacy of fake pills. Anything that lends credibility
(ability to believe) to the possibility of healing for the patient can be
expected to "amplify" the "effect".
My second point, which I think you're asking about, is that the whole healing
is done by the "story" engendering belief which then somehow activates the
patient's own ability to heal, and that this is a kind of hypnosis, or at
least related to hypnosis. Because, again, there's no magic in the word
"placebo" either to strengthen or weaken the "placebo effect" and "fake pills
can't cure you". Right? So it has to be "all in your head".
I mean what would you call it if I told you a story and gave you a piece of
candy and your illness cured itself? Is that not hypnosis?
There's a joke people tell to disparage "alternative medicine": If it works,
it's just called "medicine".
My third point is that this should happen for the "Placebo Effect"! The power
of _theater_ -- of storytelling and the mind -- to engage healing should form
a third pillar of medicine along with Biochemistry and Surgery.
~~~
verbify
> The power of theater -- of storytelling and the mind -- to engage healing
> should form a third pillar of medicine along with Biochemistry and Surgery.
Let's say hypothetically that two patients received this 'theater treatment'
- but one had an amazing actor and the other had a mediocre actor. The
amazing actor convinced Patient A of the efficacy of the treatment better than
Patient B.
If you believe that 'theater treatment' or the placebo effect is binary then
it doesn't matter how good the actors are, and therefore they will get better
at the same rate. If you believe 'theater treatment' isn't binary, then the
more the person believes in the treatment, the faster their condition will
improve. That's what I mean by binary vs a spectrum.
There's evidence that placebo treatments aren't binary - e.g. placebo
injections work better than pills -
[http://journals.plos.org/plosone/article?id=10.1371/journal....](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0067485).
Therefore, with reference to my original statement 'Presumably the placebo
effect is stronger when patients do not realise they are getting placebos'. In
both cases, the placebo effect will work, however it will work better if the
theater is more convincing - and it's more convincing when they aren't told
it's sugar pills.
------
mirimir
Damn, "virtue labeling". I guess that this is a riff on "virtue signaling".
But seriously, there's no dishonesty involved if you simply acknowledge people
for being honest etc. It is true, on the other hand, that only people who are
thinking about lying will say that you can trust them.
~~~
phyzome
It's not a riff on it, they just have the same word in them.
~~~
mirimir
It's more than just the same word. It's the same word, used in the same ironic
and mocking way.
~~~
phyzome
What? No. Both of these come out of the academic literature.
People on some parts of the internet _use_ the term "virtue signaling" in a
mocking way. I haven't seen "virtue labeling" used that way.
------
tw1010
Crazy shit happens when you expose the rules of the social game we're all
playing.
------
CookieMon
I have heard the Nobel Peace Prize awards explained this way - more an attempt
to sway than to congratulate.
------
phyller
Every time I open a nautil.us page it opens a bunch of processes that max out
several cores on my CPU and eventually my cooling fans boost up to maximum.
Hmmmm.
I should email them and tell them that they are a respectable and responsible
company that ensures that neither they nor their employees nor their
advertisers would ever run unnecessary code on their visitors' machines.
------
carapace
What a weird, almost sociopathic article. If you are nice to people and
encourage niceness they tend to act nicer. Reifying this blindingly simple
thought into "virtue labeling" isn't scientific, it's scientism. The author is
advocating insincerity and _deception_ as a way to promote good character.
This sort of neurotic thinking always has such obviously absurd
contradictions.
Insincerity is in contradiction to good character, and attempting to foster
good character in others by deliberately lying to them is obviously
hypocritical and foolish.
Virtue is a greater solvent than water. You can always find a way to
_honestly_ encourage better character in others, if only you start with
yourself. (Someone once hugged Stalin and he _cried_. Stalin! He said
something like, "You're the only one who has treated me like a human being."
It didn't stop him doing what he did, but my point is even he was frail before
virtue.)
------
Mikhail_Edoshin
In the 1950s, I think, there was extensive research on behavior and how it
changes in response to external influences, run chiefly by B. F. Skinner. The
overall conclusion of that was that punishing the undesirable behavior does
not work; that is, it will likely change the behavior, but not in the desired
way. The only reliable way to change the behavior was to reward the desirable
behavior; there's also some details on how to dispense the rewards (first
regularly and then over irregular intervals). Skinner even wrote an utopian-
genre book _Walden Two_ about the future society that engineers its own
culture using behaviorist method. (For example, quite a few diseases can be
prevented or detected early simply by undergoing regular medical observations;
yet not that many people do this regularly. What if we could change our
culture to shape the desired behavior? Wouldn't it result in much more
efficient health maintenance?)
For quite some time I regarded this as a humanist approach, but then I changed
my mind and consider rewards used in this way not much different from
punishments. Even if you don't try to use it manipulatively (which is
obviously evil) but actually care about another person, it's still too
dangerous and, I believe, eventually detrimental to the person's well-being.
Besides, it's very easy to deceive yourself that you're not trying to
manipulate while you actually are.
Consider that, for example, many video games follow Skinner's model to the
letter: they reward the desired behavior (playing) by dispensing some virtual
rewards from time to time. Not regularly, so it's not boring, but often enough
so that you get the boost and continue; and from time to time they give
"better" rewards (more rare or in exchange to a number of smaller ones, so
they look subjectively more expensive). It's easy to see people addicted to
these games.
------
bitwize
I imagined Retsuko calling Mr. Ton "honest", and he became the shiny-eyed,
smiling Ton everyone is scared of.
~~~
Jarwain
Reminded me of the end of Kubo and the Two Strings, where the humanized
antagonist is bestowed virtues that he then embodies
------
musgrove
Why is this person so worried about manipulating other people to be "virtuous"
in his eyes? Clean your own house first, the community will thank you for it.
| {
"pile_set_name": "HackerNews"
} |
The Blind Man Who Taught Himself to See (2011) - bjhoops1
http://www.mensjournal.com/magazine/print-view/the-blind-man-who-taught-himself-to-see-20120504
======
ColinWright
Submitted and discussed at length many times - here's just one:
[https://news.ycombinator.com/item?id=2284007](https://news.ycombinator.com/item?id=2284007)
They're all fairly old, so if you have something new to add you'll have to do
it here. If you value HN wisdom at all, it might be worth reading previous
discussions.
| {
"pile_set_name": "HackerNews"
} |
Greek PM calls referendum on bailout terms - antouank
http://www.theguardian.com/world/2015/jun/26/greece-calls-referendum-on-bailout-terms-offered-by-creditors
======
pjc50
Well, this is going badly; lots of grey top-level comments.
The real underlying issue is: who has budgetary authority in Greece, and what
does that mean?
State debt does not usually come with strings attached. Even if the repayments
cannot be made, creditors don't get to make policy. The big exception is the
IMF. The ECB is now also trying to take on that role.
The ECB is trying to run Greece like a chapter 11 bankruptcy. Not only a
repayment plan, but dictating who wins and who loses in Greek society. The
critical issue is pensions, which are considered "high" despite that not
really being true: [http://blogs.wsj.com/brussels/2015/02/27/greeces-pension-
sys...](http://blogs.wsj.com/brussels/2015/02/27/greeces-pension-system-isnt-
that-generous-after-all/)
There is an extra factor in the pension system that out-of-work benefits are
really not generous or even functioning at all:
[http://greece.greekreporter.com/2014/06/16/85-in-100-unemplo...](http://greece.greekreporter.com/2014/06/16/85-in-100-unemployed-
do-not-receive-benefits/) So it has become common for elderly people to
support younger ones. Cutting pensions too deeply will blow up the
unemployment problem.
The question is really poverty+independence vs vassal status. Quite a lot of
people are saying they are willing to put up with poverty to preserve their
dignity.
~~~
chappi42
It's not poverty+independence, it's
- poverty+bad_government+continued_corruption+leftish_dreaming+failure
vs.
- poverty+listen_to_advices+reform+better_life
The 'quite a lot of people' might want to think about their ego and should not
confuse it with dignity. It's Greece vs. the whole of EU. And do you really
think one is independent if one is broke?
~~~
DominikR
Wow, you simply presume that they (the Greeks) are wrong and that the West is
always right with its good advice and reforms.
Just look at what kind of better life the "good advice" and "reforms" of the
West have brought the people in Africa, the Middle East and the various
Western-supported military dictatorships around the world.
This way of thinking (we are always good and mean good) is the fundamental
underlying issue in the conflict between the west and the east that has been
raging now for more than a thousand years.
Western way of thinking is all about superiority, cruelty and power disguised
as good advice because we know best whats good for others.
~~~
cpncrunch
I don't think that word means what you think it means. Greece is actually the
founder of western civilization, so Greece is the "West".
~~~
johnchristopher
I don't think OP implied Greece was the east in its comparison and
description.
That way of thinking also applies to west entities: the powerful one is always
right.
------
rsp1984
What is actually happening here is that Tsipras is simply trying to save face.
Attempting to get the Troika (EU, IMF, ECB) to bend in the negotiations was a
suicide mission in the first place.
Now he realizes that his plan has failed and is trying at the last minute to
shift responsibility to the Greek public (i.e. "let them decide what the right
path is"). In either outcome (Greece leaving the Eurozone or Greece accepting
the Troika's terms) he would be able to point to the referendum and tell
everyone that it wasn't "his fault".
~~~
TillE
Given that he and his entire party are loudly supporting a "no" vote, I don't
think this is the case. Their position is clear.
They simply don't have a democratic mandate to accept continued, worsened
austerity. Everyone basically agrees that if the vote is "yes", that will be
the end of the Syriza government. If it's "no", well, then they have the
support to continue rejecting austerity.
~~~
return0
> They simply don't have a democratic mandate
they were elected this January (democratically)
~~~
vacri
A mandate is an authorisation to do something, not an open-ended power. The
rest of that quoted phrase references what the mandate was about - they were
elected on a campaign to do the _opposite_ course of action.
~~~
return0
Then what's the point of the referendum? If they insist on non-austerity, make
a plan outside the euro.
------
Alkim
It is interesting to think about the motivation of the Greek elected
officials. To me it looks like they are trying to avoid the responsibility of
making a painful decision. It also appears quite likely that the Greek
citizens will vote against reducing pensions, etc.
So with that in mind, what happens after 5 July? The rest of the EU will be
placed in the uncomfortable position of either caving to the demands of the
Greek citizens or somehow taking the position that the referendum is
irrelevant--which will look very undemocratic. But to cave to Greece would
mean the rest of the EU taking on an indefinite financial drain.
I don't see how this ends any way other than Greek banks collapsing or Greece
leaving the EU.
~~~
return0
They are looking for the easy way out. The Greeks will predictably vote for
the euro and they'll be called on to resign, which they will. Bubble popped,
new coalition government accepts the bailout terms, and reality resumes.
~~~
junto
And if that happens we are back to square one. Greece can't pay this back.
Whether it was wrong of them to accept a loan they couldn't pay back is now a
moot point. Greece is bankrupt. Everyone just needs to accept that fact.
A new government that accepts the terms of yet another loan they can't pay
back is just going to end up in the same mess two years down the line.
Germany wants them to turn into some German-style mini-clone. Germany needs to
accept that other states are like children; naively, you think you can bend
them to your will and force them to behave in the manner you think is
acceptable, but you are a fool if you believe that to be the case.
------
lifeisstillgood
This is a huge deal, and is a make or break time for the European Project. We
are now asking (a second time) for a democratic mandate to negotiate against
an unelected (but appointed by elected officials) economic central bank.
I honestly did not expect this one :-)
I have spent some time trying to wrap my head round this, and it does seem that
the Greek government, fairly or unfairly, is going to have its legs
broken over this, but if serious fiscal union or other steps towards solving
the fixed exchange rate and debt issues across all of Europe are not included
then the ECB is at fault - they need to turn round to the rest of the council
of ministers and say "you need to include this - we don't have the authority"
------
acd
I picture Newton's apple falling towards the ground, with gravity standing in
for the rising debt relative to Greece's ability to pay it. It is certain that
the apple will fall to the ground; it's just a matter of time versus distance
until it gets there, and the same goes for the ability to pay off the debt. A
Greek debt default is inevitable unless the Greek people suddenly match German
productivity, which is quite unlikely. The latest wording was that Greece
needed the new loan to be able to make the payment on the old IMF loan, which
is coming up at the end of June. That to me sounds very much like a Charles
Ponzi scheme: you need new participants in the scheme (in this case a new
loan) to keep it rolling.
A programmer spelled it out in a blog entry: it's the same currency, the Euro,
but German productivity is rising by x percent per year while Greece's is
falling in comparison. Combine that with an unwillingness to pay taxes in
Greece and you have bad funding and an inefficient government with high debt.
My suggestion: let the Greek people default on their debts and let the original
bondholders take their losses on the investment. Give the Greek people back
their local currency, the Drachma; it will then become a cheap country to
vacation in and people will go there instead of to other countries.
~~~
ZeroGravitas
The original lenders were in so deep they'd have crashed the French and German
economies, and taken the rest of the EU with them, if not bailed out. But the
politicians don't like talking about this as a northern European bank bailout,
preferring to blame the Greeks.
The Greeks have issues, just as a lot of homeowners got in too deep, but if
your bank, or your banking sector, can't deal with bankruptcies then it's not
really the bankrupt's fault. They took the deal they were offered, with
interest on one side balanced against risk on the other.
------
evanpw
There is no moral dimension to this situation: Greece is not morally obligated
to pay back its existing debts, and the rest of Europe is not morally
obligated to loan more money to people they don't believe will pay them back.
(If you think that Greece "needs" the money and Europe should give it to them
because they're suffering, there is a list of 140+ countries poorer than
Greece here:
[https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(PPP)...](https://en.wikipedia.org/wiki/List_of_countries_by_GDP_\(PPP\)_per_capita)).
It seems like a bad idea to _me_ to default and leave the Euro. It will almost
certainly be economically worse for the Greeks in the near-term than accepting
the Troika's conditions for more loans, but it could be rational if the Greeks
value independence very highly compared to maintaining their current standard
of living. If you're going to do something with such dire consequences,
though, a referendum seems like the way to go.
------
return0
Amateur move at best. Should the Germans and the French follow suit with a
referendum on whether they should bail out Greece again?
The government that acted like nagging teenagers doesn't want to grow up, and
chooses to bail. It saddens me that i voted for them.
Truth be told, the EU is forced to look in the mirror this time: Despite all
the rhetoric and despite most europeans liking the idea, the greeks do not
become more like germans and the germans do not become more like spaniards.
~~~
konstruktor
> Should the Germans and the French follow suit with a referendum on whether
> they should bail out Greece again?
Yes, and they should have done so much earlier. If the other Eurozone
governments had listened to popular demand, Greece would not have been bailed
out, and the default would not affect their taxpayers as much as it does now.
The whole bailout programme was hubris, and it is time to call the bluff.
~~~
bornabox
Well, it's politics. A lot of the big lenders were German banks, so it made
sense for the German government to support more loans to Greece, because those
were used to keep paying back the German banks. Grossly simplified, but that's
what has been happening in recent years.
------
venomsnake
That is the right move. Also, Germany should stop talking about fiscal results
and demand political reforms instead. Clean up the corruption in the
administration and good fiscal results will follow on their own.
And Germany should finally elect someone competent. Merkel is a joke.
~~~
chappi42
You are right, but 'demand political reforms' is easier said than done. Greece
has been paralyzed for 7 years without progress. Motto: blame others, do
nothing, cry.
Merkel is no joke; she is the most intelligent and gifted politician around.
(Of course this is as subjective as your statement, but I'm right :-)).
~~~
venomsnake
> she is the most intelligent and gifted politician around.
She is entertaining Cameron's crazy talk of secession, instead of just saying:
go on, let's see you try. And that means she has no idea what she is doing.
~~~
ZeroGravitas
Cameron doesn't want to leave the EU. He represents business interests, who
overwhelmingly benefit from EU membership so he's fighting to stay in.
But his party also has lots of people who cling to the dream of the British
Empire, don't like social democracy, and dislike what the EU stands for.
Those people were threatening to defect to a smaller party, which under the
UK's ridiculous first-past-the-post system meant that those single issue
voters could heavily influence policy despite their relatively small numbers.
As a result we're going to have a referendum, that Cameron didn't actually
want to have, and which he's going to campaign on the pro-EU side for.
So Cameron needs to make it _look_ like the EU is making concessions, so that
he can return to the UK, campaign to stay in, and make it seem like he's won
some great victory. Note that he's being very vague about what he wants from
the EU; this is because he's going to claim victory whatever happens, and he
knows probably nothing will happen.
Merkel also knows this, and knows he knows, and so the only thing she needs to
do is not embarrass Cameron too much in order for the UK to vote strongly to
remain in the EU. Calling his bluff and exposing this nonsense for what it is
would be fun, but not particularly smart.
------
tomp
It's really sad that it has come to this.
------
nextweek2
The real solution is that Germany leave. The Euro could then devalue for the
benefit of all the weaker Eurozone members: Greece, Italy, Spain, Portugal and
Ireland.
~~~
XorNot
Germany really really doesn't want to leave the Eurozone. A strong Deutschmark
would tank their export economy.
~~~
mafribe
That is rather questionable. German exports did very well before the Euro. And
neighbouring countries not in the Eurozone are also doing well. Switzerland,
UK, Denmark etc.
~~~
XorNot
It's not like they'd take a hit. But that's not the point - investors who
leverage themselves into the millions on the basis of fractional %
improvements or decrements care very much about any possible, natural
contraction you might expect from a German mark holding very high value.
------
tsotha
Of course he wants a referendum. That way whatever happens people can't blame
it on him.
IMO that's the sign of a weak leader.
~~~
mrweasel
I don't think so, quite the opposite actually. He can't give the Greeks what
they want; it's not within his means, not without breaking promises he made
earlier on. Arguably he may have made promises that were never realistic.
The people want two contradictory things, judging by the anti-austerity and
"We want to stay in the Euro" protests. It seems fair to ask the people "Do
you want the Euro, regardless of what austerity measures it might require?" or
"Do you want to avoid austerity and risk having to leave the Euro?"
~~~
tsotha
>I don't think so, quite the opposite actually. He can't give the Greeks what
they want; it's not within his means, not without breaking promises he made
earlier on. Arguably he may have made promises that were never realistic.
If you put it that way he's not a leader at all. He's just an opportunistic
liar taking advantage of a bad situation.
| {
"pile_set_name": "HackerNews"
} |
Dogecoin Soars $40M in Value Following Chinese Exchange Opens - schenecstasy
http://www.ibtimes.co.uk/dogecoin-value-soars-40m-follow-chinese-exchange-opens-1436085
======
pkulak
I don't really get the appeal of Dogecoin. I really like its intention to keep
some inflation going forever. That, combined with the low transaction fees,
seems like it could be a currency that actually ends up being spent instead of
hoarded. But it's still 100% proof of work, so if it really does take off, a
good chunk of the world's energy is going to be spent finding hashes with
enough leading zeros (or whatever work function Doge actually uses).
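For the curious, that "work" is essentially a brute-force search for a hash
below a difficulty target. Here is a toy sketch in Python using SHA-256 and a
leading-zero-bits threshold purely for illustration - Dogecoin's real proof of
work is scrypt against a compact difficulty target, so this is not its actual
algorithm:

    import hashlib

    # Toy proof of work: find a nonce whose SHA-256 hash has the required
    # number of leading zero bits. Illustrative only; Dogecoin really uses
    # scrypt, not SHA-256.
    def mine(data: bytes, bits: int) -> int:
        target = 1 << (256 - bits)  # hashes below this value "win"
        nonce = 0
        while True:
            h = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(h, "big") < target:
                return nonce
            nonce += 1

    # Each extra bit of difficulty roughly doubles the expected number of
    # hashes tried, which is exactly where the energy cost comes from.
    for bits in (8, 16, 20):
        print(bits, mine(b"example block", bits))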
| {
"pile_set_name": "HackerNews"
} |
How to Hire Better Marketers Using This One Simple Technique - uladzislau
https://blog.drift.com/hire-better-marketers/
======
jeffshek
One thing that's slightly unfair is that you disproportionately weed out a lot
of good candidates who don't have the time to fill out the entire
screener-style questionnaire.
However, you do get the "diamonds in the rough" who go the extra mile and do
the research to find the right groups to relate to.
| {
"pile_set_name": "HackerNews"
} |
What clients are proven to be vulnerable to Heartbleed? - Angostura
http://security.stackexchange.com/questions/55249/what-clients-are-proven-to-be-vulnerable-to-heartbleed
======
patio11
This is _particularly_ of interest to those of us who e.g. have a web
application with an embedded HTTP client for e.g. processing web hooks,
hitting APIs, downloading image files for avatars, etc. If your application
can be coerced into fetching either a) an attacker-chosen URL or b) _any_ HTTP
URL, you can be sent to a malicious server which heartbleeds you. (If the
attacker can specify the URL it's trivial, if you get any HTTP URL then the
attacker can use a privileged vantage point to MITM the HTTP connection then
301 redirect you to a better URL.) Can you imagine any freed memory in your
appserver's process which you wouldn't want an attacker to have? Good answer!
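To make that attack surface concrete, here is a minimal hypothetical sketch of
the fetch-a-user-supplied-URL code path being described (the function name and
the choice of the requests library are assumptions, and it does not
demonstrate the exploit itself - the point is only that the TLS handshake
happens against a server the user, not you, picked):

    import requests  # or urllib, httplib2, etc., atop the system OpenSSL

    def fetch_avatar(user_supplied_url: str) -> bytes:
        # Typical "fetch whatever URL the user gave us" code path. If the
        # TLS library underneath is Heartbleed-vulnerable, the remote
        # server - which the *user* chose - can read freed memory out of
        # this worker process during the handshake.
        resp = requests.get(
            user_supplied_url,
            timeout=5,
            allow_redirects=True,  # a plain-HTTP URL can be 301'd to HTTPS
        )
        resp.raise_for_status()
        return resp.content

If the OpenSSL sitting underneath that handshake is unpatched, every such
fetch is an opportunity for the remote end to bleed your process.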
------
bradleybuda
We _just_ pushed out a tester (we wanted it for ourselves and decided to make
it available to others):
[https://reverseheartbleed.com/](https://reverseheartbleed.com/)
Thanks to @patio11 and others for pointing out the 'other half' of this
vulnerability and motivating us to get a quick fix out.
------
beachstartup
yes, don't forget to restart your applications after updating openssl
libraries. this includes clients!
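On Linux, one rough way to spot processes that still have the old library
mapped after an upgrade is to look for "(deleted)" libssl/libcrypto entries in
/proc/*/maps. A quick sketch of that idea (not from this thread; the regex and
paths are assumptions, and you'll want root to see other users' processes):

    import glob, re

    # Processes still mapping a deleted libssl/libcrypto need a restart.
    pattern = re.compile(r"lib(ssl|crypto)\S*\s*\(deleted\)")

    for maps_path in glob.glob("/proc/[0-9]*/maps"):
        pid = maps_path.split("/")[2]
        try:
            with open(maps_path) as f:
                if any(pattern.search(line) for line in f):
                    with open("/proc/%s/comm" % pid) as c:
                        print(pid, c.read().strip())
        except (PermissionError, FileNotFoundError):
            continue  # process exited, or not ours to inspect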
| {
"pile_set_name": "HackerNews"
} |
Unpatched KDE Vulnerability Disclosed on Twitter - ga-vu
https://www.zdnet.com/article/unpatched-kde-vulnerability-disclosed-on-twitter/
======
pbhjpbhj
I posted this too, I'm surprised it has no traction.
| {
"pile_set_name": "HackerNews"
} |
The Revival of Concorde - cryptoz
http://www.telegraph.co.uk/luxury/travel/83904/concorde-flights-planned-to-resume-and-aircraft-proposed-for-display-in-london.html?curator=MediaREDEF
======
idlewords
For anyone with a few hours to spare geeking out about Concorde, I recommend
this thread:
[http://www.pprune.org/tech-log/423988-concorde-
question.html](http://www.pprune.org/tech-log/423988-concorde-question.html)
It starts off slow with a technical question, but gradually pulls in some of
the plane's designers (and even a flight attendant) discussing every aspect of
the plane's design and operation. One of the most amazing aviation threads on
the Internet.
~~~
yitchelle
It was fascinating to see the prototype drawings on the wing designs on this
thread, plus many other design aspects of the Concorde. Thanks for sharing.
------
johngalt
The Concorde is one of those early implementations that is just big enough to
barely work, yet never really succeeds, while also sucking all the air out of
the room for any other ideas. The aviation fan side of me would love to see
the Concorde fly again but my gut says that trying to resurrect the Concorde
will only delay any chance of regular supersonic flights in the future.
~~~
idlewords
There's nothing about Concorde that ruined it for anyone else. The problems
that killed SST programs—high fuel costs and noise—remain unsolved. The plane
was a pretty remarkable piece of engineering given the physical constraints.
~~~
qq66
One more thing that shut down the Concorde was the post-9/11 increase in
security procedures at airports, greatly diminishing the relative advantage of
supersonic flights. If getting on the plane takes an extra (unpleasant) 45
minutes on each end, then the percentage time savings from supersonic flight
is reduced.
~~~
hn9780470248775
Are post-9/11 security times really longer? Security screening actually seems
pretty efficient (i.e. quick) to me. Even before 9/11 we had baggage x-ray and
passenger metal detection. These measures were introduced in December 1972.
~~~
miah_
Yes. I have to opt-out every time I fly now which adds ~20-30 minutes to my
visit to security.
~~~
hughes
Opt-out of what?
~~~
diyorgasms
Presumably the millimeter wave body scanners, which are both invasive and
relatively unstudied in terms of their health effects.
~~~
toomuchtodo
Apparently millimeter wave radiation can be ionizing under the right
circumstances. That's sort of terrifying.
[http://arxiv.org/abs/0910.5294](http://arxiv.org/abs/0910.5294)
~~~
oakwhiz
The paper you referred to doesn't make any claims with regards to ionization -
the effects considered are mainly having to do with the molecular dynamics
behaviors of DNA.
------
sandworm101
With the increased number of ultra-rich these days, Concorde or another SST is
probably more viable than ever. It's certainly less ridiculous than talk of
suborbital rocket planes.
The kid in me wants to see Concorde fly as an aircraft, but the working adult
would see it as yet another toy for the very rich, undeserving of any special
consideration from my ilk. When I see a ferrari drive by I think "cool car"
but I certainly would be against any and all tax breaks or special treatment
to keep it on the road. Mounting a Concorde on a special platform in the
middle of the London Thames seems like a cheap tourist trap.
~~~
gaius
_The kid in me wants to see Concorde fly as an aircraft, but the working adult
would see it as yet another toy for the very rich, undeserving of any special
consideration from my ilk._
Did you ever hear the term "jet set" used to describe the glamorous social
elite? Well, that term comes from the days when international air travel was
the exclusive preserve of the super rich. Nowadays you can jump on a plane for
less than the cost of a meal at the airport (+ taxes of course). Small-minded
attitudes like yours would have kept us living in caves because people in mud
huts were the "undeserving rich".
~~~
superuser2
As far as international air travel, I've never seen a flight from the US to
Europe for less than $1,000. That's a damn expensive airport meal.
Short-haul commuter flights are a different thing entirely.
~~~
gaadd33
Really? I routinely travel from the US to Europe for between $600 and $800. Still
not an airport meal but not that insane.
------
dankohn1
I flew Concorde when it came back into service in late 2001, shortly after the
crash. Highlights were feeling the heat on the windows and seeing the
curvature of the Earth. Downsides were the tight seats, and the fact that it
didn't save that much time vs. a conventional jet. But it was amazing, and I
still wear my Concorde cufflinks. It still seems completely uneconomic.
------
PhasmaFelis
It's really weird to see a frequent fliers' club come together and open their
wallets with the sort of dollar values normally seen at, say, alumni events
for a major university, or perhaps a charity fundraiser for an ongoing natural
disaster.
Then again, when one reads things like "A particularly extravagant excursion
was a one-day visit to the pyramids in Cairo in 1982; priced at £780, it was
marketed as the most expensive day trip in the world," and ponders the sort of
mindset where "most expensive in the world" is considered a marketing point,
F. Scott Fitzgerald does come to mind: "Let me tell you about the very rich.
They are different from you and me."
------
VeejayRampay
The French and the English built the Concorde in the late 60's and the French
also built the HST known as "TGV" in the early 70's, a train that to this day
(with newer iterations) holds speed records for a conventional train (proof
that the initial design was excellent). After all this, people still like to
joke about how the French are "not good engineers".
~~~
maus42
Well, usually people who are saying that are thinking about the everyday
results of French engineering, like the electronics of Renaults and Citroens.
------
userbinator
Interesting trivia: the maximum external dimension of the Concorde's fuselage
is 3.32m, whereas the overall diameter of the GE90-115B, the engine used on
some 777s, is 3.429m. It's a very narrow plane.
~~~
privong
For another point of reference, the max cabin width of a Boeing 737 is 3.53m
and the fuselage width is 3.76m[0].
[0]
[https://en.wikipedia.org/wiki/Boeing_737#Specifications](https://en.wikipedia.org/wiki/Boeing_737#Specifications)
~~~
brc
So you could fit a Concorde fuselage inside a 737? wow.
------
grecy
"To see the future, go to a museum and look at the Concord" \- Jeremy Clarkson
------
bambax
> _the club is aiming to purchase a Concorde currently stationed near Orly
> Airport in Paris_
Here it is on Street View:
[https://www.google.fr/maps/@48.715764,2.3727748,3a,75y,274.5...](https://www.google.fr/maps/@48.715764,2.3727748,3a,75y,274.58h,85.91t/data=!3m6!1e1!3m4!1s1mVqhC-
jLCt-tdB-SaKOQA!2e0!7i13312!8i6656)
and here's a picture I took of it a year ago:
[http://imgur.com/UtD294i](http://imgur.com/UtD294i)
It doesn't appear to be in great shape.
------
billiam
A sign of our imminent extinction, no doubt. Let's double down on JP-4! Party
like it's 1989, or til the seas are flooding the runway!
------
cstross
Supersonic commercial passenger travel is almost certainly dead for the
foreseeable future. (Small supersonic bizjets are another issue entirely.)
Here's why:
To start with, Concorde was a high-maintenance airframe, more like a military
aircraft than an airliner. Each airliner averaged about a day of maintenance
in the hangar per two hours of flight, so made one return trans-Atlantic
crossing per week. It also burned roughly 100 tons of fuel shipping 100
passengers between NYC and London in 3h30m, compared to a 747 burning the same
amount of fuel to ship 450 passengers between London and San Francisco; about
5-10x the fuel burn per passenger-mile.
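(A quick back-of-the-envelope check of that last figure, using approximate
great-circle distances of roughly 3,460 miles for JFK-LHR and 5,370 miles for
LHR-SFO - assumed round numbers, not figures from this comment:

    # kg of fuel per passenger-mile, using the tonnages and seat counts above
    concorde = 100_000 / (100 * 3_460)   # ~0.29 kg per passenger-mile
    b747     = 100_000 / (450 * 5_370)   # ~0.04 kg per passenger-mile
    print(round(concorde / b747, 1))     # ~7x

which lands around 7x, comfortably inside the 5-10x range.)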
But the reason it eventually tanked in the market ...
Suppose you're flying London-NYC, post-9/11. You can queue up for security
checks 2 hours before you take your seat, then either fly on a regular
subsonic airliner or Concorde. After your flight you spend 1 hour getting
through immigration and customs at JFK. Total time on security/queueing: about
3 hours. Total time in flight: 3h30m or 7h. So Concorde only cuts your _end-
to-end_ travel time from 10h to 6h30m.
Note that a Concorde seat is a cramped, narrow coach-class seat. Okay, there's
first-class food and drink and a buffet in the departure lounge: but it's
still coach-class leg-room. Meanwhile, a first class seat on a 747 gets you a
lie-flat bed along with your posh nosh ...
But if you have the money to fly Concorde, you have the money to pay for a
seat on a private bizjet or a charter service like Netjets. It's still
subsonic, but you by-pass the _entire_ check-
in/security/boarding/immigration/customs mess. Just drive through a gate and
up to your bizjet, board it, and it takes off _when you're ready for it_, not
vice versa. And on arrival, an immigration officer comes out to meet you and
stamp your passport (if you pay extra -- part of the service). Travel time:
7h.
The takeaway is that the super-rich/first class jet-set passengers deserted
Concorde because they could get the same travel time for the same money on
private business jets, without being treated like cattle by the TSA.
Even if you could wave a magic wand and streamline the queueing/bureaucracy for
Concorde passengers, there remains the fact that if you re-started services
tomorrow they'd only be able to manage one flight in each direction per day
(if they had a fleet of six hulls magically preserved and ready to fly).
Whereas the bizjet is ready to fly whenever the passenger wants it.
Upshot: mass supersonic jet travel is dead for the time being. A market _may_
exist for supersonic bizjets (and indeed Aerion claim to be bringing a Mach
1.5 bizjet to market within the next 5 years -- but someone or other has been
saying some variation on this theme since about 1990). The only way we'll see
supersonic passenger airliners the size of or larger than Concorde again is if
we get a propulsion technology breakthrough that makes it feasible to run a
scheduled daily-or-more-frequent service across distances that make it
desirable -- LAX to Tokyo or Beijing, London to Sydney, and so on. Nuclear,
anyone?
------
swehner
Something called global warming? From what I understand, this is quite a
wasteful machine.
~~~
andreasvc
All manner of transportation and industry is contributing to global warming.
Unfortunately, that is typically not an important factor in these kinds of
decisions.
| {
"pile_set_name": "HackerNews"
} |