Should software be patentable? That's the wrong question to ask - llambda http://www.zdnet.co.uk/news/intellectual-property/2011/10/29/should-software-be-patentable-thats-the-wrong-question-to-ask-40094152/ ====== nemoniac The central argument holds: a machine process implemented in hardware is in essence no different from one implemented in software. But you can drive that argument in two directions. 1. The hardware invention is patentable, therefore the software invention should be patentable. 2. The software invention is unpatentable, therefore the hardware invention should be unpatentable. Because it suits his purposes, the author of the article opts for the former.
{ "pile_set_name": "HackerNews" }
Ask HN: Who are working on young open-source projects? (or know such?) - pankratiev I mean projects that just started recently and are not so popular.<p>I am working on a flexible discussion platform for open-source projects. If you are interested, could you please drop me a few lines at vladimir@tagmask.com ====== cperciva I released kivaloo just recently. As far as I'm aware, it currently has zero users (Tarsnap isn't using it yet), which probably qualifies it as "not so popular". ------ rkalla I see a lot of node.js work here... I'll be the super-uncool guy in the group and admit that so far I've only ever released Java libraries under Apache 2. I just released a CloudFront log parsing library: <http://www.thebuzzmedia.com/software/cloudfront-log-parser/> and a simple XML parsing library (speed of pull-parsing with the ease of XPath-esque expressions): [http://www.thebuzzmedia.com/software/simple-java- xml-parser-...](http://www.thebuzzmedia.com/software/simple-java-xml-parser- sjxp/) and one library that seems to be picking up is an image-scaling library I have made a few releases of: [http://www.thebuzzmedia.com/software/imgscalr-java- image-sca...](http://www.thebuzzmedia.com/software/imgscalr-java-image- scaling-library/) There are a handful of other libraries: <https://github.com/thebuzzmedia/> I just haven't finished the project-pages for them. ~~~ pankratiev Do you have any interest in a flexible discussion platform for your projects? ------ m0hit PGfb is an extremely nascent project (only one day of work), that started at a hack-a-thon at Berkeley. Aim is to bring PGP encryption to facebook using browser extensions. <https://github.com/m0hit/PGfb> ~~~ pankratiev Looks very promising. Can I contact you? ------ JamesChevalier This is my first open source project... It's an open source "LaunchRock"-type site: <https://github.com/JamesChevalier/Launch-Soon> ------ __david__ I wrote a program called daemon-manager to scratch an itch of mine (<http://porkrind.org/daemon-manager/>). It lets you manage non-root daemons from your user directory without requiring root permissions to start and stop them. I've been dogfooding it for about 6 months and just can't live without it any more. I think other people would be interested in using it but I'm not very good at marketing. I also co-wrote and maintain commit-patch (<http://porkrind.org/commit- patch/>) and it's another that I can't imagine living without. I've been using it for about 8 years and it has slightly more recognition--but not much. ------ kstenerud I created a universal framework template for iPhone/iOS (lets you build static frameworks that work on device and simulator): <https://github.com/kstenerud/iOS-Universal-Framework> Also, a nicer interface to iPhone audio: <https://github.com/kstenerud/ObjectAL-for-iPhone> I also put out my Objective-C programmer's toolbox: <https://github.com/kstenerud/Objective-Gems> ------ olalonde Just wrote a small native Node.js extension for displaying desktop notifications: <https://github.com/olalonde/node-notify> Tutorial available here: [http://syskall.com/how-to-write-your-own-native- nodejs-exten...](http://syskall.com/how-to-write-your-own-native-nodejs- extension) Node.js has a lot of potential but is still pretty young. ------ metachris I've recently started appengine-boilerplate, which makes setting up new appengine projects much quicker and more fun. Includes html5-boilerplate, openid-authentication, memcaching, etc. 
<https://github.com/metachris/appengine-boilerplate> If anyone wants to help, authentication with OAuth and MailChimp integration would be great places to start :) ------ excid3 While I started it a while ago, Keryx for Ubuntu/Debian is pretty new to the Linux community and not near as popular as it could be. <http://keryxproject.org> It's a GUI tool I built in Python to help offline users update and install new software on Linux. I no longer have time to work on it and would love it if someone would like to take over maintaining it for me. ------ baudehlo I recently started creating an SMTP server using Node.js: <https://github.com/baudehlo/Haraka> ~~~ pankratiev I saw it. It's very interesting. I am working on flexible discussion platform for open-source projects. If you are interested, how can I contact you? ------ indexzero We open-sourced our fullstack application server for node.js at NodeConf last week: haibu. <http://github.com/nodejitsu/haibu>. The project had been internal for a year and as such is pretty mature technically, but has a nascent community. If you're thinking about running node.js in production definitely check it out! ------ pestaa I'd like to point out Hyde (github.com/hyde/hyde), it's a static website generator written in Python, and I find it pretty well executed. ~~~ pankratiev It's very interesting! Thank you very much. ------ wess Friend and I are currently working on an embeddable library for server-side javascript called CoreJS (<http://github.com/frenzylabs/CoreJS>). It's not to compete with node, as it's meant to be embedded. It's threadsafe, and is, optionally, async. ------ hsmyers For no particular reason that I can think of I write modules relating to Chess on CPAN---probably not what you were thinking of, but it fits the description in that I've recently (last few weeks) updated/revised all of them. As to their popularity, well, this is chess, so what do you think? :) ------ madhouse There's like.. hundreds starting each minute. You might wish to be a tiny bit more specific. ~~~ pankratiev I think there are not so much people who will write a comment here about their project. However, I am interested in any project. ------ cfinke I'm almost always starting new projects. One of my more recent undertakings is a client-side JavaScript implementation of a Hunspell-style spellchecker: <http://github.com/cfinke/Typo.js> ------ clark-kent Mine is Ragios - Ruby based system monitoring framework: <https://github.com/obi-a/Ragios> A good way to find recently started projects is to follow the keyword 'github' on twitter. ------ simonsarris I am about to start a few small <canvas> game engines for my own use, chiefly a point-and-click adventure/puzzle game engine. I probably won't start coding in earnest until July though. ~~~ pankratiev Do you have any plans to share it on github? ~~~ simonsarris Yeah, lemme make the repo for it right now. You can watch it if you're interested. <https://github.com/simonsarris/canvas-adventure> ------ dinesh_oi I have created a write though cache for mongodb based on memcached. ( mongoid supported currently. ) <http://bit.ly/mfdIN7> ------ Rinum Aiming to create a multiplayer sim ant <http://github.com/rinum/openant> ~~~ pankratiev Interesting! Is there any way to contact you? ------ wrburgess Score is a new open source project for fantasy sports gaming. <http://scoreos.org> ------ helwr <http://thechangelog.com/>
{ "pile_set_name": "HackerNews" }
Incident Report – DDoS Attack - alainmeier http://blog.dnsimple.com/2014/12/incident-report-ddos/ ====== latch I need to learn to let things go, but: [https://news.ycombinator.com/item?id=4280515](https://news.ycombinator.com/item?id=4280515) I've been a DnsMadeEasy customer for a while (they had an outage ~4 years ago from a 50Gbps attack), but once my year is up, I'm switching to Route53. The addition of the Geo DNS Queries was key for me. It isn't clear to me why I shouldn't pick Route53. DnsSimple's unlimited queries seems nice, but I kinda like having actual scaling costs forwarded to customers. ~~~ kyledrake I've had a similar thought RE using Route53 for Neocities. Here's the problem with Route53 though. If you get a DDoS attack using it, it's quite plausible that you would be charged for resources used in the DDoS attack. A recent Vice article discussed this: [http://motherboard.vice.com/read/inside-the-unending- cyber-s...](http://motherboard.vice.com/read/inside-the-unending-cyber-siege- of-hong-kong) DDoS is a nasty problem. We've received a DDoS attack that shut the entire site down for days. We can't use Cloudflare because they don't support wildcard domains without their very expensive plan. I've also heard stories from people using Cloudflare that have still not been able to resolve DDoS issues (I'm not knocking Cloudflare, they're a great company that does a really good job fighting this very hard problem, but sometimes even they have trouble with it). I'll be completely honest and say that I have no idea how to solve this problem. It's really, really, really hard. Switching to different service providers won't get you very far against the monster DDoS attacks that some people can execute. ~~~ stevekemp If you're going to go the Amazon route then you absolutely need to keep an eye on billing, and set up alerts so that any DDoS which caused a spike in your costs would be caught as soon as possible. ~~~ Intermernet I was burnt by this in the first 48 hours of using Amazon DNS. Very unlucky I guess... I'm amazed they still bill for DDOS traffic, or even traffic from black-listed IPs. It seems many of their competitors don't. ------ kator > A new customer signed up for our service and brought in multiple domains > that were already facing a DDoS attack. The customer had already tried at > least 2 other providers before DNSimple. Once the domains were delegated to > us, we began receiving the traffic from the DDoS. I'm curious did they know this in advance or discovered it after the fact? I often wonder about business models where the core expense is "unlimited and free". The reality is there is nothing unlimited or free for the service provider. It seems with a business model like this you open yourself to people abusing your service either by accident or by choice. Imagine poor Mr. Customer here who most likely was having horrible problems thinking to themselves "These guys can do it and for free, if I go to X service they'll cost me a lot of money". I'm a big believer in business models that incentivize both parties properly. I'm sure in general this service provider is arbitraging the 99.9% of domains that barely need any services. That said it only takes a couple of "opps" customers to drive your operational costs through the roof. ~~~ aeden Anthony from DNSimple here. We discovered it after the fact, via a tip from other DNS providers. 
~~~ rpug As someone who has been down this road many times before - I can't stress this enough: DDoS mitigation solutions don't solve the problem of an app-specific layer7 attack and it is important to do some testing of how well your mitigation service responds (and that it isn't a silver bullet.) Additionally, you need to make sure your team has tested and proven procedures for engaging the service, respond to attacks, etc. Services like NimbusDDoS (www.nimbusddos.com) are good because you can do some real scenario testing and make sure your team and infrastructure is prepared. There are other services out there too that I am less familiar with, but either way really good stuff to do. ------ stephenr The solution here is one for customers, not providers. Manage your DNS at one location on "master" (potentially a "private" server with IP restricted access and zone transfer ACLs). Setup 2+ accounts with "DNS providers" that support incoming zone transfers - that is, they can operate as "slave" DNS servers, pulling records automatically from your "master" (once access rules are set of course) and returning results directly to clients making DNS queries. Most "Secondary DNS" packages are < $50 year, so use a few, and don't worry about individual DNS networks being burnt to the ground. ~~~ jhealy It seems like inbound and outbound zone transfers aren't offered by a number of providers (like AWS). Do you know of a list of DNS providers that support either option? ~~~ mike-cardwell I used to use these two services together do this: https://puck.nether.net/dns https://acc.rollernet.us/ They're both free to sign up, provide free secondary DNS, zone transfers and fully support IPv6. I only stopped using them because I wanted to run my own DNS service. ------ abalone So who do you think the "well-known third-party service that provides external DDoS protection using reverse DNS proxies" is they're going to use now? CloudFlare? ~~~ crystaln Hopefully not. CloudFlare is remarkably unreliable for a service that claims to improve uptime. ~~~ ad_hominem [citation needed] Last I checked CloudFlare routinely handles[1] 10Gbps to 65Gbps attacks, and has successfully handled attacks as large as 300Gbps and 400Gbps. According to this report DNSSimple crumbled under 25Gbps. [1]: [https://support.cloudflare.com/hc/en- us/articles/200170216-H...](https://support.cloudflare.com/hc/en- us/articles/200170216-How-large-of-a-DDoS-attack-can-CloudFlare-handle-) ~~~ etcet Their last significant outage was only 2 months ago: [https://blog.cloudflare.com/route-leak-incident-on- october-2...](https://blog.cloudflare.com/route-leak-incident-on- october-2-2014/) ~~~ xxdesmus As the blog post outlines, the outage was related to an upstream network provider leaking routes. Note exactly something we can prevent for them. ------ cm2187 Out of curiosity, what are the follow ups of an attack like that? The perpetrators are probably using their own servers or compromised clients or servers. Would DNS Simple follow up on this with the abuse/complaint dept of the ISP of the attackers? Are ISP typically responsive to abuse and complaints? If they are not is there any way to black list blocks of IPs assigned to ISP who do not care about being the source of DDoS attacks? Investing in anti DDoS devices is important but even more important is for the perpetrators to face the consequences of their acts (or anyone who lets his machine being used by pirates - terminating or suspending their contract would be a fair response). 
~~~ iancarroll I was looking at [http://map.ipviking.com](http://map.ipviking.com) earlier and it was apparent it was a botnet, most likely innocent home users with a virus. ~~~ brownbat It'd be nice if IPs involved in botnet DDoS's could go into a public registry, then get a banner from Google saying, "Hey, you might have a virus, someone reported you to this list." Abuse would be tricky, you might be able to limit it by letting only a few DDoS mitigation providers populate the list. ~~~ Xylakant A lot of ISPs for example in Germany reuse IP addresses and force a reconnect every 24 hours. I don't think showing me banners because the previous "owner" of the IP had a virus is going to improve the situation. Other people share a network behind a NATed IP which is also a problem. They'd all receive a banner, check their computer and a test would come up negative. ~~~ cm2187 Google wouldn't know but the ISP would know who was behind a particular IP at a specific time. They are the ones who should police their network when there are abuses. ~~~ Xylakant The original proposal was that google delivers the ads. So google would have to contact my ISP who would then have to return whether or not I was using any of the given "spammy" IPs at the time that they were spammy - or my ISP would have to deliver the banner. No thanks. ------ milos_cohagen What was the overall makeup of the attack traffic? For example, 50% tcp syn, etc.
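A minimal sketch of the hidden-master / secondary-DNS arrangement discussed in this thread, assuming the dnspython library and placeholder host and zone names (this is not DNSimple's or any particular provider's actual setup):

    # Sketch: pull a zone from the hidden master the way a secondary provider would.
    # Requires dnspython; the master IP and zone name below are placeholders.
    import dns.query
    import dns.zone

    MASTER_IP = "192.0.2.10"   # hidden master, reachable only from the secondaries
    ZONE_NAME = "example.com"

    def pull_zone(master_ip, zone_name):
        """Perform a full zone transfer (AXFR), as a secondary DNS server would."""
        return dns.zone.from_xfr(dns.query.xfr(master_ip, zone_name, timeout=10))

    if __name__ == "__main__":
        zone = pull_zone(MASTER_IP, ZONE_NAME)
        for name, ttl, rdata in zone.iterate_rdatas():
            print(name, ttl, rdata)

Running a check like this from each secondary provider's network before delegating the NS records is a cheap way to confirm the master's allow-transfer ACLs are correct.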
{ "pile_set_name": "HackerNews" }
The freelance software developer’s limbo: from freelancer to agency - peppesilletti https://medium.com/@peppesilletti/the-freelance-software-developers-limbo-from-freelancer-to-agency-761c91848f53 ====== peppesilletti There are many blog posts, communities, online courses, and gurus talking about how to get started on the freelancing path, but not so many about transitioning from being a solo gig worker to a fully-fledged web agency. This is also true for freelance software developers. In this series’ articles, I’d like to talk about this intermediate stage, the “limbo” where many freelancers like me are wandering around looking forward to establishing a successful agency.
{ "pile_set_name": "HackerNews" }
Config manager based on Git for your $HOME - albertzeyer https://github.com/RichiH/vcsh ====== rofrol And this is my imho simpler approach: [https://github.com/rofrol/.configs](https://github.com/rofrol/.configs) Store all configs in ~/.configs, and make symlinks to ~/ with install.sh. There is also push_configs.sh to rsync configs to some host and users.
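The symlink approach rofrol describes is small enough to sketch in full; the layout of ~/.configs and the skip list below are assumptions for illustration, not the contents of that repository:

    # Sketch of an install.sh-style step in Python: symlink everything kept in
    # ~/.configs into $HOME. Directory layout and skip list are assumed.
    import os
    from pathlib import Path

    CONFIG_DIR = Path.home() / ".configs"
    SKIP = {".git", "install.sh", "push_configs.sh", "README.md"}

    def link_configs():
        for src in sorted(CONFIG_DIR.iterdir()):
            if src.name in SKIP:
                continue
            dest = Path.home() / src.name
            if dest.is_symlink() or dest.exists():
                print("skipping", dest, "(already present)")
                continue
            os.symlink(src, dest)
            print(dest, "->", src)

    if __name__ == "__main__":
        link_configs()

vcsh takes a different route to the same problem: multiple Git repositories that all use $HOME as their work tree, so no symlinks are needed.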
{ "pile_set_name": "HackerNews" }
Ask HN: How do I find an angel investor for my startup? - sktrdie Really I don't know anything about investors, company logistics, economics, marketing. I'm just a developer that created a useful product (I hope).<p>Can Y Combinator help me with this? ====== badmash69 As Tony Montana ( Scarface) said: In this country, you gotta make the money first. Then when you get the money, you get the power. Then when you get the power, then you get the women. In a start up context ,it means that first get some traction -- customer registrations or signups or revenues then you get your brand recognition -- people will start talking about you or your product then you get your investors. This is what I am doing with my bootstrap. (and much apologies for quoting a violent mobster) ------ dmlevi Everyone can have great ideas but its all about execution and the team. Through my experience when speaking with Angel investors and other people with money, they want to see some traction. They want users and to prove that it works and that people will use your baby. You will need to build a revenue model on projections based off your current traction. Hope this helps. ~~~ sktrdie Ok. I hoped it would be as easy as showing the investor a "good" product :) ~~~ dmlevi It totally depends on the investor. Some investors might see your passion and have tons of money and throw you a bone. Others want the facts and proof. My advice is to cover yourself so you don't seem naive or incompetent. ------ veyron Can you describe / show your product? ~~~ sktrdie Sure, it's a Windows application meant to automate installation of software on multiple workstations on your network, remotely and silently. So it's basically like a package manager for Windows, but instead of working for your local Windows machine, it works for all the Windows computers on your network. Think of agencies with lots of workstations that need to keep software up-to- date on each and every machine... this would normally require an IT guy going through each and every computer and manually start the installer... uDeployer does all this automatically. More info here: <http://udeployer.com/> So I guess the idea has possibilites, but would like to get connected into this entire world of investment. ~~~ hnbd Big agencies with many Windows workstations already use Microsoft SMS or System Center (I believe that's what its called now) to deploy software over the network. How do you differentiate? ~~~ sktrdie I offer latest, up-to-date, stable, 3rd party packages. With SMS or System Center you have to create the packages yourself and/or have a system in place to fetch the latest packages. ~~~ veyron customers with large numbers of machines dont necessarily care about having latest software. They care about stability. They don't just upgrade willy- nilly. Upgrading 7-zip [first entry on your screenshot] isn't compulsory. If you do upgrade, you take a risk that there will be no adverse effect. ~~~ sktrdie Actually I tend to differ with that statement. Latest software is always more secure and less buggy. I would personally prefer having my workstations with all the latest software. ~~~ polyfractal Veyron is right. While newer software _may_ be less buggy, it isn't always. New features may break old features, or break other pieces of software. Older software is usually vetted and proven functional in the particular corporate environment where it's deployed, doesn't conflict with other software, etc. 
My friends in corporate world have told me how it takes X many months for upgrades to be rolled out because they must be tested in sandboxes first to make sure nothing explodes unexpectedly. This apparently applies to all software, from OS updates down to individual program updates. ~~~ sktrdie Ok well, but you're not going to keep the version around forever... you will update at some point in time, and uDeployer can serve you when you need to.
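For readers wondering what the automated, silent install step a tool like this handles actually looks like, here is a rough sketch; it assumes MSI packages already reachable from the workstation and leaves the remote-execution mechanism (an agent, WinRM, psexec) out of scope. It is not uDeployer's actual code:

    # Sketch: run a Windows Installer package with no UI and no reboot.
    # The UNC path is a placeholder; remote execution is not shown here.
    import subprocess

    def silent_install(msi_path):
        cmd = ["msiexec", "/i", msi_path, "/qn", "/norestart"]
        result = subprocess.run(cmd)
        # 0 = success; 3010 = success but a reboot is required
        return result.returncode in (0, 3010)

    if __name__ == "__main__":
        ok = silent_install(r"\\fileserver\packages\7zip-x64.msi")
        print("installed" if ok else "install failed")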
{ "pile_set_name": "HackerNews" }
Impossible Figures Library - franklin_p_dyer https://im-possible.info/english/library/index.html ====== matsemann I like these. Some of them are even non-obviously impossible. They may pass at first glance, but then one sees why it shouldn't work. I recently posted my figures [0] here. It's also a kind of impossible figures, however they are actually printable in real life but still tricks the mind. [0]: [https://github.com/Matsemann/impossible- objects](https://github.com/Matsemann/impossible-objects) ------ mmazing It cracks me up that I'm getting tons of ads for 3D printing on a website for impossible objects. ~~~ 082349872349872 Then I'll suggest [https://www.kleinbottle.com](https://www.kleinbottle.com) "Glass Klein Bottles for sale - inquire within" ~~~ eternalban You could be the future of advertising.. ------ onion2k I'm struggling to understand why the illustration at the bottom left of [https://im-possible.info/english/library/grey/grey4.html](https://im- possible.info/english/library/grey/grey4.html) is impossible. If you don't assume the two horizontal bars are parallel, and the top one is farther back than the bottom one, it make seems to make sense. Eg it has a side profile of; ---- ---------------- ---- ~~~ coding_lobster I guess it's because the bottom and top bar look like their edges are actually touching which should be impossible with the middle bar going between them without it being squished. ------ 082349872349872 I wonder what the text equivalent would be. Are there sentences which are locally consistent[1] but globally impossible? (the question is on my mind because earlier on HN I'd had a nice convo about a folk song embedded in a protest song in which the quoted verse worked nearly as well as polish under a functor as in the original ukrainian. [https://news.ycombinator.com/item?id=24101696](https://news.ycombinator.com/item?id=24101696) ) [1] on a longer scale than "colourless green ideas sleep furiously", of course. ~~~ theemathas Garden-path sentences. Or comparative illusions. [https://en.wikipedia.org/wiki/Garden- path_sentence](https://en.wikipedia.org/wiki/Garden-path_sentence) [https://en.wikipedia.org/wiki/Comparative_illusion](https://en.wikipedia.org/wiki/Comparative_illusion) ~~~ 082349872349872 ขอบคุณ! The garden-path admits of a consistent, yet initially unpredicted reading, while the comparative illusion is exactly what I'd been groping for: locally cromulent but globally uffish.
{ "pile_set_name": "HackerNews" }
Bare Metal Rust 2: Retarget your compiler so interrupts are not evil - dbaupp http://www.randomhacks.net/2015/11/11/bare-metal-rust-custom-target-kernel-space/ ====== jscheel I've been playing with Rust for os dev and emulation for a little bit. It's great to have others who are significantly more well-versed in this field sharing their knowledge with those of us who are struggling through it. ~~~ easytiger With that memory model its not surprising. On the other hand, i really really would encourage any new developers to read the rust docs before learning c++ etc. It is amazing to have a language that puts its mem model first. The docs tackle the very basics really well. The k&r c book is amazing but did a poor job of dealing with the true memory model because that was common knowledge at time of writing. Its short because of assumptions the best java (top 0.01%) developers are the only ones I encounter often who understand the x86 + javas basterdised memory model The rest just take it for granted. Even c++ developers (I'm one). ~~~ Manishearth I actually doubt Rust puts the memory _model_ first, but it provides a path through which you effectively don't have to worry about it at all. Which is wonderful and good enough, really. IIRC Rust has a similarly confusing underlying memory model. However, due to a lack of direct shared mutable state being available, you don't get to see this often. In fact, one might call Rust's model more confusing than C++ since you have `noalias` everywhere which can enable more aggressive optimizations. But it's mostly irrelevant since you don't deal with it unless you're writing unsafe code. On the "encourage new devs to read the Rust docs" front I agree, though. We've had tons of people saying that they code better C++ after learning Rust. Also I've heard of companies wanting to start programming in C++/Rust-y things with a majority of python/ruby/js/etc devs use Rust because "Rust teaches a lot of things to the programmers that they no longer have to". Something like that. ~~~ easytiger Agreed. What i meant was they start talking about the memory model at the very beginning of their online book, more than most languages bother with. [https://doc.rust-lang.org/book/the-stack-and-the-heap.html](https://doc.rust- lang.org/book/the-stack-and-the-heap.html) ~~~ steveklabnik Yeah, this is because we want people who may not have a systems backround to be able to use Rust. That's just general info on the concept of stack vs heap, it's not Rust-specific. ------ br1 Can you just not use the first 128 bytes of the stack on a interrupt? ~~~ DSMan195276 No - Part of how interrupts work is that the CPU itself pushes the values of certain registers onto the stack (As well as an error code in the case of an exception). The CPU has to store those values _somewhere_ so that when the interrupt is finished it can return to where the CPU was running when the interrupt happened, and the stack is the obvious location to do so. You can't tell the CPU to do anything different in this case, so you're forced to simply make sure your stack pointer always points to the top of the stack so it doesn't get clobbered. At the end of the day, it's not really that big of a deal - Allowing the redzone only really allows you to avoid two `sub` and `add` instructions on the stack-pointer. It's a nice idea, but losing it isn't that big of a deal at the end of the day. 
The majority of functions can't take advantage of the red zone anyway, because any function that calls another function has to make sure its stack-pointer is correct before calling it. ~~~ br1 Understood. Thanks. ------ steveklabnik There's been a lot of really neat stuff focusing on beginners in the Rust OSDev space lately. Glad to see even more posts about it! ------ viraptor I was wondering, would the monolithic/micro-kernel discussion be different today due to userspace services being able to use all the fancy cpu extensions, or would that still disappear compared to the cost of ring switching? Now that filesystems are more complex databases, maybe extensions could help with fast indexing/checksumming.
{ "pile_set_name": "HackerNews" }
Show HN: I made an Android app that gives you an extra phone number to give out - dustball https://market.android.com/details?id=net.tyx.extraphone ====== dustball I built this using the Twilio API, which I really enjoy using. The app is a little expensive; I can lower the price later if I move to a different solution, for example my own asterisk server. Also looking to use in-app purchasing and/or new subscription model as appropriate for each platform (Android/IOS). ~~~ jat850 That's a very neat looking concept, and congratulations on getting it out there. I don't see any link between a dollar price and the purchase of credits, however - could you add or address that? (It seems you get 360 credits for 8.95 but doesn't indicate how much it costs to buy extra credits beyond the initial ones) ~~~ dustball Ah, thank you. Yes, additional credits will be able to be purchased at the same rate. (I'm actually just waiting on Android 2.3, which includes in-app purchasing, which would be perfect for purchasing additional credits.) ------ bitskits ...Or you could use Google Voice in the same way for free. In fact, you can selectively block from GV (send certain callers to "number is no longer in service"), which makes it better than a throw away, IMO. ~~~ dustball Actually, I use Google Voice as my main phone number. I don't even want to give _that_ out -- so same exact problem for me. <shrug> Selective blocking is only a partial solution; your number is still "out there". Well, that is why I built the app, anyway =) Something neat about just being able to "buy" an extra number from the market. ~~~ manvsmachine As someone who also uses GV as a "real" number, I love the option to flush your number and get a new one. It's like the voice equivalent of "cleaning house" on social networking sites. ------ vijayr Tried to access android marketplace using chrome and this is what I got :) <http://i.imgur.com/ZpqWg.png> ~~~ hucker Chrome 11 on OSX gave me this: <http://i.imgur.com/nUyaP.png> ------ magicseth Fantastic. It's really hard to come up with a good pricing scheme. I've found that simple is better than cheap. If you can figure out a flat rate, or some other scheme that doesn't make me have to do math, I'm much likelier to join. ------ SwellJoe Cool idea. It also has other possible uses outside of giving it to potential stalkers. Adding basic analytics would make it useful for A/B testing offers on TV or in print media, for instance. ------ Groxx Price per credit? Very nice idea, and the page is a great sell. Definitely need a more useful developer website landing page, though. ------ wibblenut I like it. A reviewer says, "shouting numbers in a crowded bar isn't a particularly fun sport". Exactly (<http://www.youtube.com/watch?v=m50xrDcj0fc>). I'm a big fan of the .tel TLD for this, and other reasons: publish contact information dynamically, it's fast, reliable, gives you fine grained privacy controls, etc. etc. I don't understand why this hasn't really caught on yet. The implications for the telco market would be huge if more people used DNS to its full potential. ~~~ r00fus Hasn't caught on because it isn't simple. I hardly think dealing with registrars is an activity people relish. Startup Idea? ~~~ wibblenut That's a very good point, although it's changing; specialist registrars/resellers, directory publishers, and VOIP companies are all showing interest now. 
But I was thinking more about early adopters - the type of people who hang out on HN and register domains in their sleep :) ------ leot Wouldn't I get phone calls from the disgruntled last person who had the phone number? I suppose even then at least it would be temporary ... ~~~ dustball Numbers go out of service for 6 months before being reused, IIRC. The carrier may even lengthen the time if the number is actively getting calls during the hold time. ------ babo Brilliant idea, the image on your page describing it very well! ------ moe Do you have a QR code for this app? Market search doesn't find it for me (from the phone) and all QR codes I find on the web give me a 404. ------ iPhoneJunkie Tiny market, isn't it? How many geeky guy Android users are giving out their numbers at bars? ------ msquared Are you in North Jersey? Recognize the 201 in the screenshot :) ------ antihero Does it work in the UK perchance? ~~~ markszcz I dont know if this one does but what I did was get a SkypeOut number and used Skype to call out when I was in Barcelona to call my friends in the US. So at least you can call people in the US cheap =\ ([http://www.skype.com/intl/en- us/features/allfeatures/call-ph...](http://www.skype.com/intl/en- us/features/allfeatures/call-phones-and-mobiles/)) ------ T_S_ Great idea. ------ T_S_ Perfect for celebs. Know any? ~~~ wibblenut I heard Michael Jackson used to change his number on a biweekly basis. Would be quite interesting to see a survey on this group. See my other comment for a neat solution ;)
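Since the author mentions the app is built on the Twilio API, here is a hedged sketch of the number-provisioning step using the twilio-python client; the credentials, area code, and webhook URL are placeholders, and the real app's call handling is not shown:

    # Sketch: buy an extra phone number through Twilio's REST API.
    # Account SID/token, area code and voice_url below are placeholder values.
    from twilio.rest import Client

    client = Client("ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", "your_auth_token")

    def buy_extra_number(area_code, webhook_url):
        candidates = client.available_phone_numbers("US").local.list(
            area_code=area_code, limit=1
        )
        bought = client.incoming_phone_numbers.create(
            phone_number=candidates[0].phone_number,
            voice_url=webhook_url,  # TwiML served here decides how calls get forwarded
        )
        return bought.phone_number

    print(buy_extra_number("201", "https://example.com/voice"))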
{ "pile_set_name": "HackerNews" }
Not Hotdog App - rcamp https://itunes.apple.com/us/app/not-hotdog/id1212457521?mt=8 ====== timanglade Ha didn't expect this to end up here. If anyone is interested, I'm working on a blogpost explaining how we built the app in detail… It uses embedded TensorFlow on device (better availability, better speed, better privacy, $0 cost), with a custom neural net inspired by last month's MobileNets paper, built & trained with Keras. It was loads of fun to build :) ~~~ bitmapbrother Is there a reason you didn't release on Android? The app was even demoed on a Pixel. ~~~ pkulak It was written in Objective C? ~~~ timanglade It's actually written in React Native with a fair bit of C++ (TensorFlow), and some Objective-C++ to glue the two. One cool thing we added on top of React Native was a hack to let us inject new versions of our deep learning model on the fly without going through an App Store review. If you thought injecting JavaScript to change the behavior of your app was cool, you need to try injecting neural nets, it's quite a feeling :D ~~~ tomduncalf This looks great, haha! Shame I can't access it from the UK. It would be really interesting to read more about your thoughts on working with RN and C++ and perhaps how you did some of it. I'm currently doing the same (but with a C++ audio engine rather than image processing stuff) and I think it's an incredibly powerful combination - but I do feel like I'm making up some interop patterns as I go and there might be better ways, so would love to hear how other people use it! Broadly, I've created a "repository" singleton that stores a reference to both the React Native module instance (which gets set when I call its setup method from JS) and the C++ main class instance (which gets set when it starts up), so they can get a handle on each other (I bet there are better ways to do this, but I'm new to C++/ObjC and couldn't work out a better way to get a reference to the RN module). I'm then using RCT_EXPORT_METHOD to provide hooks for JS to call into C++ via an ObjC bridge (in an RCT_EXPORT_MODULE class), and using the event emitter to communicate back to JS (so the C++ can get the RN module instance from the singleton and call methods which emit events). I've not done anything that really pushes the bridge performance to a point where I've seen any noticeable latency/slow down caused by the interop - have you had any issues here? Like I say, I'm finding a really cool way to build apps that need the power of native code but still with the ease of RN as the GUI and some logic, and I actually quite like the separation it enforces with the communication boundary. ~~~ timanglade Sounds like you're further ahead than I was with the React Native part! Not Hotdog is very simple so I just wrote a simple Native module around my TensorFlow code and let the chips fall where they may performance-wise. The snap/analyze/display sequence is slow enough that I don't need to worry about fps or anything like that. As much as I enjoyed using RN for this app, I would probably move to native code if I needed to be able to tune performance. ~~~ 1zee Can you explain to a noob how you wrote the Native module around TensorFlow? My main area of focus is in python, but I feel hindered when I think I'm ready to start developing for mobile apps. I'm looking into RN, but still not sure how that plays with TF and other python modules. ~~~ timanglade It was honestly just maybe 10 lines of code, but I was very confused about it before I got it done. 
The message passing is a bit counterintuitive at first. I'll try to share example code in my blogpost! ~~~ 1zee awesome, what's your blog? ------ alexcnwy For anyone keen to learn more about object detection (and deep learning in general), I just finished working through an excellent free MOOC taught by Jeremy Howard (former chief scientist at Kaggle) - you basically learn how to fine-tune a convolutional neural network with your own data (e.g. hotdog vs not hotdog) in lesson 2! [http://course.fast.ai/lessons/lesson2.html](http://course.fast.ai/lessons/lesson2.html) ~~~ timanglade I can't recommend that course enough! I attended it in person and got a lot out of it. Jeremy & Rachel were also enormously kind & helpful outside of class. ~~~ alexcnwy Ah so jealous! How was part 2? ------ davb I always find it strange when apps like this are made unavailable in certain countries (in this case, the UK). Video, I understand. From a consumer perspective it's crappy, but I do understand that licensing agreements for video are generally geographically restricted. But an app that goes along with a TV show? I can't see Sky (the UK network broadcasting Silicon Valley and most other HBO shows right now) distributing Not Hotdog. It would be fun to play with it while the novelty is still fresh. ~~~ an_account My guess is they don't want to put in the lawyer work to figure out if they can. I imagine HBO has to heavily vet things like this, unlike small startups. ------ malanj Craziest thing - it works! Just detected a hotdog (off a photo). Machine learning has really come far, that this can be done for a joke app is really cool. ------ avaer Hm, is this an officially licensed HBO marketing stunt? Or does SeeFood Technologies Inc. actually exist as a corporation? If not, I'm curious what the legal ramifications are of doing business as a corporation that does not exist. ~~~ throwaway76543 The term you're looking for is "Fictitious Business Name," or "Trade Name." Generally, the legal ramifications are a requirement to register the name and pay a small fee. In Santa Clara it costs $40 to register a fictitious business name with the county and the registration is good for five years: [https://www.sccgov.org/sites/rec/Fictitious%20Business%20Nam...](https://www.sccgov.org/sites/rec/Fictitious%20Business%20Names/Pages/Fictitious- Business-Name-Filings.aspx) It's really not a big deal. ~~~ avaer Thank you, that's a pleasant surprise! Cheap and painless enough that it makes me want to try my hand at an ARG with fake corporations. ------ DannyBee I'm still sad they weren't allowed to release the fully functional bro app that got created. ~~~ joshu I am a consultant for the show. I actually saw it! ~~~ DannyBee Yeah, same here :) It was actually pretty well done. ------ tripzilch Hail Eris! This app may be very useful to Discordians, because of the Original Snub and the Five Commandmends of the Pentabarf: [http://www.principiadiscordia.com/book/11.php](http://www.principiadiscordia.com/book/11.php) ------ elmigranto US only, welcome to the "global system of interconnected computer networks". Hope Richard's P2P-internet will work better :) ~~~ timanglade Yup sorry about that, the app is available only in the US (& Canada) due to some legal restrictions we couldn't avoid. FWIW I also worked about on Richard's New Internet concept for this season so I definitely hear ya ;) ~~~ ominous Can you please enlighten us as to what kind of legal restrictions apply here? 
It is always interesting to see how these «little» legal details get in the way of running software. Congrats for getting the app out! ~~~ 0x0 Could it be that the app is using HTTPS? iTunesConnect has all these crazy questions with very little guidance about encryption and export compliance if you say you do use HTTPS. ~~~ ascorbic Guys, we have HTTPS in the rest of the world now! But seriously, the iTunes connect question specifically excludes stuff like HTTPS using the regular libraries, which is just as well as otherwise pretty much every app would be affected. Legal issue is probably boring IP rights stuff. ~~~ 0x0 It's not that easy. Even the first question in iTunesConnect explicitly states you must answer "yes" even if you just make use of the built-in HTTPS libraries in the OS. If you start digging into the various guidelines you open a can of worms of recursive cross-references between documents and sections. Nowhere have I seen a statement that says "if you just use os HTTPS, you don't have to do anything". At the very least you may have to consider if you have to submit various annual self-classification reports. For an app like this I could easily see a serious company deciding to skip the hassle and CYA, instead of potentially taking on a huge legal risk. Would you, as a regular worker-bee developer, be OK with personally signing off and accepting a legal risk on behalf of a large company without involving expensive lawyers to evaluate the validity of your opinion on this legalese? Would that be a responsible action to take? ------ gregdoesit Is there a reason this is not released globally, but only in the US? I am not on the US App Store, so cannot see this app. ------ IgorPartola I had an idea for an app that would tell you if what you are about to eat is helthful or not. Basically all it has to do is determine the ratio of green to brown color. Obviously it would not really work (purple carrots, spaghetti squash), but would be a nice novelty thing nonetheless. ------ FilterSweep I wonder if it's able to tell the difference between hot dogs and well-shaven, tanned legs. ------ dozeone Make available in Norway stores please! I want to be able to know what I am eating... ------ geekme Will it be released world wide ~~~ anderscarling I second this, world wide release would be nice. It's a shame how many "locale neutral" apps end up walled in on the US app store. Is there any specific reason why this happens, just seems to me like you get worse reach and no benefits? Especially for a marketing stunt like this. ~~~ knd775 Crypto export restrictions, I believe ------ goatherders My faith in the internet is restored. Well done. ------ ziikutv DANM! I wanted to start learning NN and write an app after watching the Silicon Valley episode. You beat me to it. Congrats. ------ jgtrosh Why wasn't it called "Hotdog or Not" ? to avoid copyright infringement ? ~~~ lalalalagrr They own the copyright. ------ bitmapbrother Jian Yang was demoing the app on a Pixel. Looks like Apple negotiated another exclusivity deal ;) ------ ruleabidinguser Thought a hamburger was a hotdog btw ------ tyingq I hear UploadVR loves it and is considering an acquihire. ------ daveloyall This is related to a TV show. Maybe this comment thread could use a link to a video clip of the show. Maybe this comment thread could use a link to a video clip of the app in use, since it doesn't work on most devices.
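For anyone curious what "a custom neural net inspired by last month's MobileNets paper, built & trained with Keras" can look like in its simplest form, here is a generic fine-tuning sketch — not the app's actual model; the data layout, input size, and training settings are assumptions:

    # Sketch: binary hotdog / not-hotdog classifier by fine-tuning MobileNet in Keras.
    # Generic recipe only; paths, input size and hyperparameters are assumed.
    from tensorflow.keras.applications import MobileNet
    from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
    from tensorflow.keras.models import Model
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    base = MobileNet(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    x = GlobalAveragePooling2D()(base.output)
    out = Dense(1, activation="sigmoid")(x)   # probability that the image is a hotdog
    model = Model(inputs=base.input, outputs=out)

    for layer in base.layers:   # freeze the pretrained backbone for the first pass
        layer.trainable = False

    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    train = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
        "data/train", target_size=(224, 224), batch_size=32, class_mode="binary"
    )
    model.fit(train, epochs=5)

A trained Keras model can then be exported for the on-device TensorFlow runtime, which is the deployment route the author describes.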
{ "pile_set_name": "HackerNews" }
New Hack: Tired of getting kicked off News.YC? - apgwoz Is noprocrast dragging you down? Or the maxvisit clock staring you in the face? Well, then I've got the solution for you.
1. Wait minaway minutes after your maxvisit has timed out.
2. Reload the front page
3. In new tabs, open each and every link on the homepage
4. Read til you become exhausted and forget about those silly arrows. By now, your maxvisit has expired, and you'll have to wait minaway in order to press the arrows, so you'll do it when you loop back to step 1.
Luckily news.yc isn't high volume enough that you have to do this more than once or twice a day.
Enjoy!
(oh, and I really do like this new feature... so does my employer) ====== hollerith I applaud. Now if only redhotpawn.com/blitz/blitzchess.php would add that feature. ------ juanpablo Your employer force you to set noprocast on? ~~~ apgwoz No. But theoretically my employer is happier that I'm not distracted by random posts.
{ "pile_set_name": "HackerNews" }
Big Tech needs to act on concerns over ‘surveillance capitalism’ - mancerayder https://www.ft.com/content/37a0cb82-23c6-11e9-8ce6-5db4543da632 ====== mancerayder Apologies for the paywall. This is a letter to the FT from John Chen, CEO of Blackberry. I can't post it in whole, but the gist is: _The inevitable implications of a data-driven economy are right in front of us and we now stand before a moral, ethical and public policy crossroads. Recent events, where mass privacy breaches have occurred, have raised public awareness of the pitfalls of big data and the elevation of profit over privacy by some corporate actors. As a consequence, public authorities are demanding more comprehensive answers from Big Tech, and a healthy policy discussion has finally begun._ and: _The onus is on businesses to protect the data they manage, not exploit it. Every person should own their data. It should be yours, and yours only._
{ "pile_set_name": "HackerNews" }
Ask HN: my website doesn't work in Opera (it works in Firefox, Chrome, Explorer) - Vejita00 www.winteriscomming.com (just test domain name, I know it's wrong)
Don't know what to do to make it work with Opera. Any help? Thanks.
It's a theme from themeforest, and it worked until I made some changes, but I can't remember what changes exactly.
EDIT: wow, just downloaded Opera at my workplace, and it works. Don't know what's the problem with Opera at my home (latest version) ====== TobbenTM Seems to work OK in Opera here. ~~~ Vejita00 Thanks Tobben. Seems that something is wrong with my Opera at home. I will reinstall it.
{ "pile_set_name": "HackerNews" }
Could “Is Dead” Please Die? - api http://adamierymenko.com/is-dead-should-die/ ====== goatandsheep People can say any phrase that catches our eyes and we'll believe it. Tech news is learning from Buzzfeed and people are taking it seriously. Also, I'm not too impressed with the watch. I'm sure there could be some Spritz ([http://www.spritzinc.com/](http://www.spritzinc.com/)) type browser that will come up. ------ greenyoda _" When something really is dead, like COBOL or OS/2, nobody talks about it."_ COBOL isn't anywhere close to dead. According to Wikipedia, the language is still evolving, with the latest standard dated 2014. And it's apparently object-oriented now.[1] That seems to imply that there's still new code being written in COBOL (you wouldn't need new language features just to support legacy code). "In 2006 and 2012, Computerworld surveys found that over 60% of organizations used COBOL (more than C++ and Visual Basic .NET) and that for half of those, COBOL was used for the majority of their internal software."[2] [1] [https://en.wikipedia.org/wiki/COBOL](https://en.wikipedia.org/wiki/COBOL) [2] [https://en.wikipedia.org/wiki/COBOL#Legacy](https://en.wikipedia.org/wiki/COBOL#Legacy)
{ "pile_set_name": "HackerNews" }
So You Want to Make Games - szafranek http://szafranek.net/blog/2017/05/04/so-you-want-to-make-games/ ====== madshiva Mobile industrie are not the same than game for PC, they are really focused more on money than the game itself, they are for dumb users (sorry I don't have other word for, and it's not even casual) making game like this is more commercial than a passion for gaming. There's plenty exemple of sucessful game that have only few developper and are better than AAA. I personnaly have started to make game because I like it to try other approach of programming and I want try to do things that other don't do more or don't focus. Like choice, cooperation, rewards on cooperation, etc. Just focus on doing things that YOU would play, that make fun and people will talk about it, pay for it (I'm not even close to this but finish something always give reward.) ~~~ ungzd Mobile games are focused on exploiting addiction on people vulnerable to them. There are no interesting gameplay process, no nice graphics, no art. Traditional games vs mobile is the same as journalism vs clickbait. Mobile games are in the same league as downloaders, browser toolbars, doorway pages. ~~~ swsieber I would say that the top games have good gameplay process, nice graphics and good art. I've played some where I've thought - "oh, this game would be so much better if it wasn't stuck in a IAP addiction mode". I never get very far in these games due to the artificial IAP barriers. ~~~ MrMember I've pretty much given up on mobile gaming. People just aren't willing to pay money for a quality game. These days if I play a game on my phone or tablet it's usually a board game (one of the few genres that's mostly free from exploitative IAPs) or an emulator. ------ JabavuAdams One thing I would strongly recommend _against_ is going to a private game- development school. These tend to market very aggressively and be _very_ expensive. The education provided is hit-or-miss, largely dependent on the specific high-turnover instructors you get. In my province, you can pay > $25k for an 18 month program. At the end of that program, only the top handful of programming students will actually know how to code vs. copying and pasting. If there's any way you can get into a college or university CS or engineering program, do that instead. Four years may seem like a long time, but if you include the time spent looking for a job after your 18 month program, it could easily be 4 years before you get your first reasonably paying game gig. Also, you can do a four-year engineering degree for about 8-10k a year in Ontario. Co-op can help defray those costs further and make you more attractive for that first full-time job. Universities all have game-development clubs. Also, these are the people you're competing with for that first job. I would much rather hire a CS grad from University of Toronto, or University of Waterloo than someone who barely knows C# from a game school. The flip-side is that there are exceptions. It's possible to find those diamonds-in-the-rough who weren't able or willing to go to university. ------ animal531 Focusing only on for example the Apple app store in deciding whether to go into making games or not is not a great idea, since it's by far the most congested (as the article also shows). But for the moment other markets also exist, such as desktop where one can still be very successful (as long as you can hit a certain level of word of mouth). 
~~~ supercoder What evidence do you have that your game will make any more on the desktop than it would on the App Store ? (Given the same amount of word of mouth) ~~~ LoSboccacc Top paid app on the app-store show games that are of significant age (minecraft, plague inc) or with significant name recognition (monopoli, risk, etc) hinting at a stagnant market for independent, paid for games. Now, evidence is a hard word, but I've the hunch that unless your game is an addictive style freemium incremental, all other genres are as of today more suited to desktop, because of user disinterest on paying a mobile game upfront. ~~~ animal531 Pretty much this. It's far more difficult to gain recognition and get word of mouth going on the app store (due to volume), combined with the lower selling point as compared to desktop (for example $1 vs $15). People don't want to pay up front for games and the race to the bottom on prices have really hurt the little guy. All graphs I've seen of sales on the app store show an inordinate amount of sales for the top sellers, whereas the tiers in the middle and below don't make anything significant at all. Then go look at steamspy etc stats, even games 2/3rd's down the lists for a year's releases would have made a few thousand $ (as compared to app store where they might have been sitting on tens of $). Either way, learning to code via writing a game is something I recommend for anyone, anywhere. ~~~ clarry And then I guess mobile gamers don't feel so excited and invested in their $1 break time distraction that they would talk about it on chats, streams, forums, or with friends. Heck I don't think I've ever seriously heard people discuss a mobile game, whereas discussions or passing mentions of PC & console games (both old and new) happen all the time on various media. ~~~ lfowles The only two examples of that I can think of are Puzzle&Dragons and Clash of Clans. I suspect it definitely contributes to them being so large. ------ drops This video will always stay relevant: [https://www.youtube.com/watch?v=lGar7KC6Wiw](https://www.youtube.com/watch?v=lGar7KC6Wiw)
{ "pile_set_name": "HackerNews" }
Show HN: Lightweight vacuum packed foam camping mattress - alexlajeunesse https://www.kickstarter.com/projects/campcomprest/comprest-the-unstoppable-camping-bed-that-charges ====== alexlajeunesse Hey HN, a guy I went to university with followed through with his idea to make a better alternative to the current camping mattress options. In case you don't want to go to the kickstarter link (unfortunately it's the only product page he has so far) here's a brief rundown:
- Compressor is 7.5 x 3.25 x 5.5 in, 2.6 lbs.
- Foam mattress
- The vacuum packing takes 2-3 minutes to complete.
I'd love to hear what you all think about it. It looks like he's trying to raise money for injection moulding to produce a bunch now. I've never worked with manufacturing personally but from reading blog posts it seems that this is a difficult part of getting a physical product going.
{ "pile_set_name": "HackerNews" }
Ask HN: Would you be interested in being provided daily option plan entries - marketgod I currently utilize a system to trade for myself. It has impressive returns and is fairly simple to enter plans.
With a developer skill set you would be able to enter the trades automatically and sell whenever you decide. Generally at 100% you want to take profits and keep a contract or two extra to catch the run.
The system I use provides entry points similar to this:
[PLAN] If BIDU breaks over 264.25 the Jun 15 210.00 Calls can be used. [ 600k on a 3-minute chart is required. ]
In the above case an API call can be written with your brokerage, i.e. Interactive Brokers, to monitor the 3-minute volume, and if it hits 600k while crossing 264.25, purchase; otherwise, wait for the price to increase 25-50 cents to enter.
Another entry would be:
[PLAN] If BIDU breaks over 264.25 the Jun 15 210.00 Calls can be used. [ Break over 264.25 ]
If the stock price goes over 264.25, buy the calls.
Would people be interested in a system which provides plans but doesn't get you into them automatically? I currently provide consulting privately, however the people are at work and miss many trades, which lowers the effectiveness of the strategy. ====== icedchai I'd be interested. I have my brokerage web site open most of the day at work. ~~~ marketgod Follow me, see profile for details.
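As a concrete illustration of the automation described above, here is a rough polling loop against a deliberately hypothetical brokerage client — the `Broker` object, its methods, and the polling interval are stand-ins, not a real brokerage API:

    # Sketch of the entry rule above: buy the calls when price breaks the trigger
    # and the 3-minute volume confirms. Broker and its methods are hypothetical.
    import time

    TRIGGER_PRICE = 264.25      # "break over" level from the plan
    MIN_3MIN_VOLUME = 600_000   # volume confirmation from the plan

    def watch_and_enter(broker, symbol="BIDU", option="BIDU Jun15 210 C", contracts=2):
        while True:
            bar = broker.latest_bar(symbol, interval="3min")          # hypothetical call
            if bar.close > TRIGGER_PRICE and bar.volume >= MIN_3MIN_VOLUME:
                return broker.buy_option(option, quantity=contracts)  # hypothetical call
            time.sleep(10)  # polling; a real system would subscribe to streaming bars

The exit side (taking profits around +100% and leaving a contract or two to run) would be a similar loop watching the option's own quote.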
{ "pile_set_name": "HackerNews" }
Ask HN: What are the features users want in a location based app? - vskr ====== cdvonstinkpot Automatic tagging, don't make me have to initiate interaction with the interface to make the app's primary function do its thing. ------ AznHisoka the ability to see where their friends are. Nothing more.
{ "pile_set_name": "HackerNews" }
Bash Vi Command Line Editing Mode - timmorgan http://www.catonmat.net/blog/bash-vi-editing-mode-cheat-sheet/ ====== glymor It's readline that has the vi mode set editing-mode vi in .inputrc will enable it everywhere. ~~~ james2vegas luckily not everything on the command line uses readline, else there'd be way more gpl licensed command line tools. vi mode has been in nsh and pdksh, independent of gnu readline. It is also part of the specification for sh at opengroup.org: [http://www.opengroup.org/onlinepubs/000095399/utilities/sh.h...](http://www.opengroup.org/onlinepubs/000095399/utilities/sh.html) Interesting note about emacs mode though: In early proposals, the KornShell-derived emacs mode of command line editing was included, even though the emacs editor itself was not. The community of emacs proponents was adamant that the full emacs editor not be standardized because they were concerned that an attempt to standardize this very powerful environment would encourage vendors to ship strictly conforming versions lacking the extensibility required by the community. The author of the original emacs program also expressed his desire to omit the program. Furthermore, there were a number of historical systems that did not include emacs, or included it without supporting it, but there were very few that did not include and support vi. The shell emacs command line editing mode was finally omitted because it became apparent that the KornShell version and the editor being distributed with the GNU system had diverged in some respects. The author of emacs requested that the POSIX emacs mode either be deleted or have a significant number of unspecified conditions. Although the KornShell author agreed to consider changes to bring the shell into alignment, the standard developers decided to defer specification at that time. At the time, it was assumed that convergence on an acceptable definition would occur for a subsequent draft, but that has not happened, and there appears to be no impetus to do so. In any case, implementations are free to offer additional command line editing modes based on the exact models of editors their users are most comfortable with. ------ bittersweet This works in ZSH as well, I also didn't know about it so this is definitely going to save me a lot of keystrokes. Although I already had some of the regular shortcuts memorized (like ctrl-a goes to start of the line) being able to use one set of keybindings is great! ~~~ sundarurfriend And in Zsh, you could even have it show the current Vi mode (Insert or Command) in the prompt. I don't have my .zshrc handy, I'll try to get the command once I get the .zshrc. ~~~ nuclear_eclipse Wow, that's been my biggest trouble with vi mode for years, and I've been using Zsh for a long time and didn't know it could do that. I would be eternally grateful if you could post the solution here... ~~~ sundarurfriend I'm at work after a long vacation, so got access to my .zshrc. Here's the relevant part: #It starts in insert mode export VIMODE=INS function zle-keymap-select { VIMODE="${${KEYMAP/vicmd/CMD}/(main|viins)/INS}" zle reset-prompt } zle -N zle-keymap-select bindkey -v I use Oh My Zsh, but as far as I know this should work without it too. Wish HN had a notify-on-reply system like Reddit's orangered. Would you see this? Would you not? Would I miss eternal gratefulness from a fellow HN citizen? Oh, how I wish I knew the answer! ~~~ nuclear_eclipse Maybe you won't even see this ;) , but it unfortunately doesn't seem to work on my vanilla Zsh. 
Perhaps I'll look into OMZ a bit and see if things work out that way. Thanks anyways :) ------ timmorgan I've been using Ubuntu and bash for several years now, and never knew until today that Bash had a Vi mode. I'm ashamed. ------ nuclear_eclipse My biggest complaint about readline's vi mode is that there is no visual indicator of what mode you are in, and it seems there are quite a few actions that unexpectedly take you out of insert mode as compared to using Vim, and without some way of seeing this, it's an exercise in frustration. I've been using Vim as my primary editor/IDE for three years now, and I still can't get used to vi mode on the command line, and keep going back to emacs mode because it functions exactly like you would expect it to. ------ imurray I use vim as my editor but never got on with "vi mode" in bash/readline or zsh/zle. However, when you have a complicated command line you can use vim (or your favorite $EDITOR) on it. In bash type Ctrl-x-e In zsh I press Ctrl-z, which does what I want because I have this in my .zshrc: setopt hist_ignore_space # trick so that history doesn't get polluted function edithist() { local tmp=${TMPPREFIX}${$}hist read -Erz >| "$tmp" "$EDITOR" "$tmp" print -Rz - "$(<$tmp)" rm "$tmp" } bindkey -e '\M-q' push-input # replaces push-line in 3.0.x bindkey -e -s '\C-z' '\M-q edithist\n' The zsh version leaves you editing the command line after exiting the editor. The bash version executes the command after exiting the editor. ~~~ james2vegas And if you don't have a problem with vi mode, use ESC to enter command mode and enter v to edit the command line in ${EDITOR} ------ dylanz I've known about Vi mode for a long time, but have never been able to make the switch. It's a brain bender! ~~~ rg3 Something similar happened to me. I'm a happy Vim user and have been for years, but I've always used bash in emacs mode. However, some months ago I started to work in an environment full of Solaris 8 machines with ksh as their default shell in vi mode for some reason. It's configured that way for every machine and we have to constantly log in and out of them to do our jobs, and it's always been that way so I simply didn't ask for it to change. After all I was also the new guy. After a few weeks I got so tired of trying to use emacs commands in vi mode shells while at work, and vi commands while in emacs mode at home that I changed to vi mode at home. And I've been happy since then. It's not that hard. You get used to it quickly, especially if you use vi or Vim a lot like I do. ------ sophacles One really really annoying thing about this: pressing return in "normal" or "insert" modes executes the command. That behaviour just doesn't feel right. ~~~ sundarurfriend Why? What would you expect it to do? ~~~ sophacles Well, Enter in normal mode in vi puts the cursor on the next line, at the first non-whitespace character. I would like enter in "normal mode bash" to do the same. I would also like to scroll up (j) and see the previous command as i left it (i.e. if modified, show them). ------ albemuth Is there an analog for \e on the postgres console? That makes more sense to me than using vi editing all the time ~~~ gredman fc
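On the complaint above about readline's vi mode giving no visual indicator of the current mode: newer readline releases can do this natively, which gets bash users roughly what the zsh zle-keymap-select trick provides. A minimal sketch of the relevant ~/.inputrc settings, assuming readline 6.3 or later for show-mode-in-prompt and 7.0 or later for the custom mode strings (older versions simply won't honor them):

    # ~/.inputrc: applies to bash and anything else linked against GNU readline
    set editing-mode vi
    # prefix the prompt with a marker for the current vi mode (readline 6.3+)
    set show-mode-in-prompt on
    # optional custom markers instead of the defaults (readline 7.0+)
    set vi-ins-mode-string "(ins) "
    set vi-cmd-mode-string "(cmd) "

In a running bash you can reload the file with bind -f ~/.inputrc, or just open a new shell.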
{ "pile_set_name": "HackerNews" }
Richest 1% now owns more of US wealth than at any time in past 50 years - eevilspock https://www.washingtonpost.com/news/wonk/wp/2017/12/06/the-richest-1-percent-now-owns-more-of-the-countrys-wealth-than-at-any-time-in-the-past-50-years/ ====== vfulco Trickle down capitalism at its finest. ------ danschumann If you took all their wealth and gave it to the poorest 1%, by the end of the decade, they'd probably have it all back. ------ ajroas So, what?... ------ eevilspock Yeah, I know that we've seen reports like this before. But if the SV sphere wants to stop being part of the problem and start working on a solution, it is critically important it first stops perpetuating the meritocracy myth and looks at reality. ~~~ goldensnit Can you expand on what you mean? SV seems to represent more of a meritocracy than most industries or areas? ~~~ eevilspock I don't have time to write the treatise this deserves, but here are some things to consider: 1\. More does not mean sufficent. 2\. Does Zuckerberg have more merit than all the people who invented the Internet in the first place? I'm pretty sure the net worth of _all_ those people don't add up to 0.1% of Zuck's ($74 billion today). 3\. Google was better than Infoseek. It won on the merits. But Google wins and owns markets now not on the merits, but on its accumulate power, and the moats it has built. Including political moats and advantages (It puts a lot of money into lobbying). The Matthew Effect (the rich get richer) is the antithesis of meritocracy. 4\. Do VCs merit the cut they take? Smacks more of rentier capitalism than meritocracy to me. 5\. Bill Gates supposedly won the OS wars on the merits (many in SV will strongly disagree with that, myself included). But now he uses his wealth (i.e. power) to push his ideas and preferences on education and healthcare. Did Common Core get selected on the merits? 6\. In the spirit of going back to first principles, what exactly it meritorious? Look at all the things that SV is rewarded for versus the things that this fucked up world really needs. Why is Instagram worth so much? Does coding work on any of YCombinator projects, for example, have more merit than teaching children in our totally neglected schools? Why do teachers get paid so little? Is our society valuing things properly _on the merits_? "The best minds of my generation are thinking about how to make people click ads. That sucks." – Jeff Hammerbacher, fmr. Manager of Facebook Data Team, founder of Cloudera. If we define "merit" as able to figure out how to capture eyeballs and data about those eyeballs in order to sell them to advertisers, that's a pretty meritless definition of merit. 7\. Advertising itself is antithetical to meritocracy. Products should win on the merits, not on which one has the best marketing (much less which one has the most money for marketing), not on who is the best used car saleman. Silicon Valley not only uses advertising, a lot of what it does is _funded_ by advertising (undermining things like privacy while they're at it). 8\. Likewise, producing and making money off of products that taps into our psychological weaknesses, our propensity for addiction, our desperate need for social validation, is not meritorious, it's despicable. See Farmville, Facebook, and many of SV's greatest "successes". 9\. Here's another first principles question: What does it actually mean to be a meritocracy? Is it "the best idea, solution or person for the issue/problem/job get chosen" or does it also mean "get the most money". 
Because we have a economic system that is built on the idea that self-interest and greed are so inevitable, so fundamental that we should build an economic system around these givens, "gets the most money" is our answer. I don't agree that someone born with a higher IQ merits a greater share of the world's wealth or resources, just as I don't agree that if 10 people get shipwrecked on a desert island with no edible food on land, the best fisher of the 10 gets to eat the most much less be king. They each get the greater share by leveraging power. That's not merit. Note also the situational arbitrariness: in SV the higher IQ gets more money and power, and in the shipwreck the better fisher does. Okay, maybe I should go write that treatise. ~~~ wahern Does Zuckerberg have more merit than all the people who invented the Internet in the first place? I'm pretty sure the net worth of all those people don't add up to 0.1% of Zuck's ($74 billion today). How to judge merit is a separate issue. Whatever the value of all the other contributions, it's Facebook that attracted the most money in the marketplace --one of the most common ways to judge merit in our society. When people discuss the [lack of] meritocracy, usually the issue is whether, how, and to what extent people are rewarded for their individual actions as opposed to being rewarded for their status or associations. Zuckerberg may have profited by standing on the shoulders of giants, but to the extent that the environment was meritocratic those were shoulders anybody else could have stood upon had they chosen, not shoulders that preferred Zuckerberg over someone else equally situated. Of course, not everybody is equally situated, but that has nothing to do with the merit of Facebook over TCP/IP.
{ "pile_set_name": "HackerNews" }
Emails prove traffic shutdown was political payback from office of NJ governor - ck2 http://www.northjersey.com/news/christie_kelly_bridge_lane_closures_emails.html ====== ck2 What's interesting about this is apparently you can get anything back from Google gmail via a warrant. Also, without the press, this would have been completely buried. Note how it was done to another mayor and no-one believed him until now.
{ "pile_set_name": "HackerNews" }
Ask HN: Review my startup, Snapherd.com - endtwist So I've just launched Snapherd, a mobile photo game. Basically, every 48 hours a new catchword or catchphrase is given, and the goal is to take a picture of what you think best represents the subject matter.<p>The idea is simple, but I'd love to hear what the Hacker News community thinks! ====== petervandijck I like it too. Nice design. Nice concept. I think you need to convince 10 friends to snap lots of pictures to get things rolling, because right now it looks pretty empty, and nobody likes an empty community site. ~~~ endtwist It should start filling up quite a bit more today...the word had just rolled over last night (from "debate" to "absurd") and my friends have not submitted new images quite yet. I do very much appreciate the input, though! ------ wensing <http://www.snapherd.com> For convenience. ------ calambrac Why 48 hours? Would cycling catchwords in shorter increments do a better job of keeping people engaged with the site? What's the smallest period of time you can find that still yields decent photos? ------ kenver Really good idea and I like the site. You probably know, but when I tried to find it with a search engine there was nothing there. You should probably do something about that ------ pxlpshr We tried to do this last year when the "Safari SDK" was released, but the side project died... I still think it's a great idea and you've executed it nicely. There's another iPhone application that does something similar called Scavenge built by the hosting company A Small Orange, however they currently do not have the website component as far as I can tell. <http://www.apptism.com/apps/scavenge> ------ vaksel did I see a sign for your site during last night's debate? Or was that something else? ~~~ endtwist That was my sign during last night's pre- and post-debate, on MSNBC. It got me a few new users, but not quite as many as I'd have hoped. ------ walesmd I like it - design is nice. What is the incentive for winning (other than community)? I think this is a prime opportunity to purchase (or get one donated) an iPhone or the T-Mobile G1 and give one away. ------ tomsaffell nice idea. maybe you could try to tie the word of the day to current affairs in some way? as for building traffic, maybe you could find a way for people to vote (or even submit) w/o needing to login. you might need to bring the login back ultimately, once it's popular - but do you need it now? final thought - the home page runs off the bottom of the screen (at 1280 x 1024) by a good few hundred pixels. And the bottom is where all the real content is. Maybe reshuffle and/or shrink? cheers tom saffell ------ migpwr I wish I didn't have to register to vote for a picture... i was about to vote for deep fried candy bars on "absurd" ------ mattjung Simple, but appealing idea, nicely done. I'll keep an eye on it. How many users do you have already? ------ jsmcgd I enjoyed the HL reference. I reckon all it needs now is some users and some content. Well done. ------ Tapthat where.com did this a couple years back... It got some response but you definitely need to find a way to get buzzzz ------ joshu Today's word is "mesothelioma" ------ jcapote openid support would be nice, but I love the idea. ------ mwinters58 business model? grow traffic and sell ads? ~~~ kenver Offer prizes/incentives to people who manage to incorporate a second "sponsor" type word. ------ alaskamiller cute design. kind of an addicting concept. should make this into an iphone app. 
need to seed more, I only see two pics/words. ~~~ tialys Wow... an iPhone (or Android) app could be great for this! Start the app, snap a pic and send it in with details about location and everything. It would certainly put the site in front of a lot more people as well.
{ "pile_set_name": "HackerNews" }
See Star Wars a Day Early - lenkendall http://thenextweb.com/shareables/2015/12/14/do-or-do-not-there-is-no-try/ ====== DerekL I'm in the SF Bay Area, and many theaters are showing it on the 17th starting at 7PM.
{ "pile_set_name": "HackerNews" }
Show HN: Flight search to max. accr. mileage at best price (for frequent flyers) - alexjawad http://www.bunainternational.com/demo-0.1.html ====== alexjawad The demo page is primarily for potential users to provide feedback on the features before building, and it's a bit sloppy but should illustrate what it's all about. Feedback is very welcome!
{ "pile_set_name": "HackerNews" }
Why OpenHeatMap is banned from Github - joshfraser http://petewarden.com/2013/09/27/why-openheatmap-is-banned-from-github/ ====== alexholehouse The weird/upsetting thing is that CTO would go to the producer of a software his or her company use for _free_ and despite having active communication lines open go behind the developer's back to have it removed from the very website they got it from. I believe that the technical term here would be "dick move". ~~~ iends The world is full of people who are quick to jump to conclusions and refuse to give you the benefit of doubt. I used to use a self hosted blog aggregation software (Gregarius) and put it up at www.mypersonalsite.com/blogs that was unlisted and unlinked to. It got 3 uniques/month for about 12 months. Then one day a company owner (a regular from the Joel on Software's forums -- a technical person) wrote a blog post linking to my site, calling me a spam blog, and posted my personal information. He sent my host a C&D, and also blocked my work IP (most of the local IBM office) from reading his blog. From what I gather, he looked at his refer logs and saw I clicked through to his blog 2-3 times, went to the link and freaked out that I was displaying his RSS feeds. I emailed him to try and clear it up, and was quite polite, but he kept being an asshole, accusing me of stealing his content and breaking the law. He threatened me with further action, and was condescending because I was 17. ("You should password protect your 'blog reader' as a learning exercise, then blog about it so you'll have content of your own and you won't have steal my content"). He never reached out to contact me. He just sent the C&D, wrote a nasty blog about me, and didn't even try. Once he jumped to conclusions, it was too late. ~~~ euroclydon Some people are like that. It's frustrating. But it's a good lesson that if you're wrong, you're wrong, nobody owes you a conversation about it and similar people in high stakes venues will deliver swift, harsh, uncompromising justice that can have devestating consequences. ~~~ ballard "Great minds discuss ideas. Average minds discuss events. Small minds discuss people." Eleanor Roosevelt The only thing to add is great minds are more generous and less concerned with violent retribution to compensate for a lack of control in their own frustrated lives. Others have a 4 letter word for it.... cool. ------ lm741 That's funny. The Github page ([https://github.com/petewarden/openheatmap](https://github.com/petewarden/openheatmap)) says it was disabled due to "excessive use of resources, in violation of our Terms of Service." I also didn't find anything here: [https://github.com/github/dmca](https://github.com/github/dmca) ------ xpaulbettsx GitHubber here, we'll follow up on this with Pete. We think one of our replies was missed along the line. Thanks for letting us know. ~~~ catch23 hopefully there is a documented process so that one doesn't have to make a blog article to post on HN to get this stuff resolved. ~~~ unreal37 It doesn't sound like the OP tried very hard to contact them. He sent them an email, didn't hear back, and then decided to write a blog post about it. I mean, if he cared that much about github, he would send them an email back asking what was up. Or 5. ~~~ twistedpair There are always things like real people and phone numbers. You know, find some GH staffers via G+, LinkedIn, Fb or your own network of friends and work some connections. 
Pete does not say the lengths he went to, but email is admittedly one of the most passive mediums. ~~~ mineo You really should not have to find people working at GitHub on other parts of the internet for something like dealing with copyright issues if you're not the copyright owner, because (I suspect) copyright owners will have no trouble getting GitHub to (at least) listen to their complaints.

------ fiatmoney Why not BitBucket then? Github is nice, but it's not the end-all of hosting solutions. ~~~ zonkey Yes. I use BitBucket because it allows free private repositories. No problems at all.

------ kurotek The copyright claim seems unsupportable in the first place. As far as I understand, a simple list of things that fit in a category does not meet the minimum originality requirement. See: [http://en.m.wikipedia.org/wiki/Feist_v._Rural](http://en.m.wikipedia.org/wiki/Feist_v._Rural) It also seems like it would be trivially easy to compile a similar list from LinkedIn. ~~~ chrismcb Data isn't copyrightable. The format is, but the data itself isn't. Yes, it would be nice to sanitize a list. And yes it would be nice to not actually use real names with real addresses. But no copyright violation here.

------ alsobrsp Sadly, therein lies the problem of using a third party: you are at the mercy of their legal department. The facts don't always matter.

------ jonchang Another unfortunate example of the necessities of contributor agreements. ~~~ ansible _Another unfortunate example of the necessities of contributor agreements._ Uh, no. This is an example of needing to take care, and fully understand what is going into each commit. The OP got a little sloppy (which happens), and is now paying the price for it. Hopefully it can all get resolved soon and everyone can get back to work. ~~~ jthol Whoever gave him the file might also share some of the blame. ~~~ PeterisP There is a big difference between submitting an example that provokes a bug, and publishing that example. If an audio player crashes when opening a file with the latest Miley Cyrus hit, then that specific file is useful to reproduce the bug, but it doesn't mean that it can be redistributed further. The same goes for any personally identifiable information - if you have obtained a list of people's names/emails/addresses, it doesn't automatically mean that it's okay for you to publish that list.

------ bougiefever Another example of someone abusing copyright. This person is using copyright as a means to shut down someone else's work, even though this is clearly not a copyright issue. Something needs to be done about bogus copyright claims that are really about censoring someone else. There should be penalties for issuing bad copyright claims. The current system is simply not working.

------ joeevans Why doesn't GitHub make it easier to research or attach an open source license to code? ~~~ dbaupp When you create a new repo it allows you to automatically pick a license (and even initialise a default .gitignore file). ~~~ joeevans Ah, ok... I didn't know they had a license picker. Cool!
{ "pile_set_name": "HackerNews" }
Many Students Around the World Can’t Read or Add, World Bank Says - stablemap https://blogs.wsj.com/economics/2017/09/26/many-students-around-the-world-cant-read-or-add-world-bank-says/ ====== moretai So is the world just a top that's about to stop spinning? I mean we are probably the most educated group of humans in history, yet it still feels like we're just playing make believe and we aren't doing shit with our jobs. Besides people building buildings, catching criminals, and putting out fires, are any of us actually contributing anything to the world or are we just laundering money around from one person to the next? It feels like we're just really good marketers is all. ------ downrightmike Neither can my HR department.
{ "pile_set_name": "HackerNews" }
APOD: LIGO detects gravity waves... - AliCollins http://apod.nasa.gov/apod/astropix.html ====== AliCollins Currently a "Placeholder APOD" until the LIGO Press Conference at 11AM (ET)...we're waiting!! ------ AliCollins ...and now there is an interesting image showing the signals from the twin LIGO detectors - fantastic!!
{ "pile_set_name": "HackerNews" }
2 Steps to Becoming a Great Developer - jmonegro http://theadmin.org/articles/2010/04/16/two-steps-to-becoming-a-great-developer/ ====== dinde I can definitely appreciate the fears listed in step one. I am about four years into my career and have recently begun realizing that I will not surpass what I consider mediocrity without taking steps beyond my 9-5 job. Now that I have begun looking seriously at my path, it is amazing to look back at the fears that have been holding me back. I have been afraid to start my own side projects, out of fear that I won't be able to contend with what other people can do. I had to have the very obvious realization that I will never get better if I don't try, and that everyone has to start exactly where they are. Since then it is like a light has gone off, and I have become much more appreciative of those who try, rather than hold back and criticize. Thanks for posting this. ~~~ bbuffone "I have been afraid to start my own side projects, out of fear that I won't be able to contend with what other people can do" Why do you care? Taking someone else's code and rewriting it is a great way to learn. First, you don't need your own idea. Second, there is already a "viable" solution you can learn from, and third, you can get to the meat of being a great coder: "How can I make XYZ better, simpler, more performant..." Once you learn those skills you can go off and create your own projects. Look at all the things people love today; they are simpler, better architected, more performant versions of existing solutions. Apache -> nginx, MySQL -> ... If you are afraid to fail or worry about what others think, you won't be a great coder.

------ AndrejM These blog posts are kind of like those "Learn this language in 24 hours" books. They're not bad per se, but it's as if becoming a great developer is some kind of <insert-number-here>-step process. Becoming great is, in my opinion, not a final destination. You can be great from day #1 by pushing yourself to learn more, every day in your life as a developer. ~~~ blaix That is essentially what this article is saying. It just gives a couple of concrete examples of how to do this. I think the title of the article does it a disservice.

------ aditya Wow. What has Giles started?! This "I'll-teach-you-to-be-a-great-developer-for-$$$" trend is scary...

------ 10ren #5 is true: fear makes you tired.

------ d0m Is Eric Davis a great developer? ~~~ ryanhuff Maybe not (don't know him), but teaching is an entirely different skill set. Perhaps he's not a great developer, but is a great teacher? ~~~ ohashi I can't emphasize this enough. It applies across all disciplines too. Great teachers don't have to be the best in their field; most are not. However, in my limited experience, those who _think_ they are the best in their field are more often than not awful teachers. In those rare cases where they are both, it's something magical. I can only think of 3-4 teachers that I have ever had that fit that bill.
{ "pile_set_name": "HackerNews" }
Ask HN: anyone willing to contribute ideas for pricing for our startup? - andrewstuart Hi folks - we're coming out of beta soon and need to come up with pricing. trouble is I'm not sure how to price it. Is anyone willing to contribute ideas on pricing this service? Any input would be a help and much appreciated.<p>The service is an REST based API that converts documents to PDF and other formats, for example extracting text from documents.<p>The site is at http://www.pdfalchemy.com<p>I'm thinking that we should have subscription levels like:<p>$0 (free) per month to convert up to 100 documents per month<p>$x per month to convert up to 500 documents per month<p>$x per month to convert up to 1,500 documents per month<p>$x per month to convert up to 3,000 documents per month<p>$x per month to convert up to 10,000 documents per month<p>$x per month to convert up to more than 10,000 documents per month<p>I'd love some input from the HN community on what people think the prices per level should be, and also some input on the numbers of documents per month.<p>your input valued. thanks as ====== exline I have a project for a client that has a need for exactly this tool, but the pricing per month is a sticking issue. The problem is that the conversion to PDF is not tied to revenue for the project. And because it would be an unknown cost (they don't know how many word docs they will be getting), the client would not like it. That said it could in theory be up to 5-10K documents a month in a year from now, but right now it is probably only 100-200 a month. Currently we are using an open source tool and having decent results. Not great, but acceptable so far. The second option is to require their users to provide only PDF, which is not ideal, but also acceptable. It seems like this is a commodity product, so figure out what the cost is per document (or per gig of data.) Then add in some profit and that is your price. You can have larger profit on smaller plans and less profit on the larger plans. To do this, I would charge by data not document, since you can determine costs by data, not document. ------ carbocation Are there other value-adds besides # of documents? In the abstract, I'd like to see less granular pricing over the # of documents, instead charging more for value-adding benefits.
{ "pile_set_name": "HackerNews" }
Show HN: Assistant to help you fight social media addiction and do smth useful - pogorsky https://douseful.com/ ====== pogorsky Hi, I'm Eduard, and over the last few years I've been enrolled on many online courses to strengthen my research skills, broaden my career prospects, and to simply learn something new. Some of these courses were free, and others I paid to take part in. However, I haven't completed all of them. Often, I tried to find excuses, telling myself that “this week I'm too busy with my work”, or “this week I’ve a lot on socially, so I don't have the time to submit an assignment…”. This led me to the idea that I needed something that could help me fight procrastination. I needed regular pushes, or gentle advice from some as yet unknown source. As I tried to explore how a tool could be developed to solve this problem, I attended the summer school on behaviour change at University College London. The puzzle has developed. Why not use behaviour change techniques applied effectively to change behaviour in healthcare settings and help others to learn something new instead of wasting their time online.
{ "pile_set_name": "HackerNews" }
Maya Angelou Dies at 86 - tieistoowhite http://www.nytimes.com/2014/05/29/arts/maya-angelou-lyrical-witness-of-the-jim-crow-south-dies-at-86.html ====== silverbax88 I was fortunate enough to have Maya living within a couple of miles of me, and was able to meet her a couple of times. I admired her, and she was an inspiration to me. Her words and actions resonated with me, and I always felt she had veracity of emotion that I could only aspire to. I too, know why the caged bird sings, but I learned it from you, Maya. ------ goatforce5 Here's a recent 14 minute interview she did. As ever, she speaks beautifully and is engaging to watch: [https://www.youtube.com/watch?v=i1T9CEjjRzE](https://www.youtube.com/watch?v=i1T9CEjjRzE) In the video she mentions having "already been paid for" (paraphrasing - don't have time to rewatch video at the moment). She uses a similar phrase in her Clinton poem and other interviews. As I understand it, it's her concept that all the people who went before you to make your life better (moving countries to seek a better life, fighting slavery, fighting wars, getting the right to vote or whatever) paid a price long ago to make your life better. The only way to pay that debt back is to make life better for those yet to come. ~~~ js2 Lovely interview. Asked about her frailty: "What I really want to do is be a representative of my race, of the human race. I have a chance to show how kind we can be. How intelligent and generous we can be. I have a chance to teach and to love and to laugh and I know that when I finish doing what I was sent here to do, I will be called home. And I will go home without any fear [or] trepidations." ~~~ goatforce5 I think she says "trepidation... some. Wondering what's going to happen..." She sort of changed thought midstream. [https://www.youtube.com/watch?v=i1T9CEjjRzE#t=11m53s](https://www.youtube.com/watch?v=i1T9CEjjRzE#t=11m53s) I liked her incredulity when asked how she resolved her religious beliefs and views on homosexuality: [https://www.youtube.com/watch?v=i1T9CEjjRzE#t=9m53s](https://www.youtube.com/watch?v=i1T9CEjjRzE#t=9m53s) ...but just go watch the whole thing. ------ packetslave You may write me down in history With your bitter, twisted lies, You may tread me in the very dirt But still, like dust, I'll rise. ------ jonahx "I've learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel." ------ akilism RIP "I have a certain way of being in this world, and I shall not, I shall not be moved." ------ akavi >(pronounced AHN-zhe-lo) /ˈæn.dʒə.loʊ/ in IPA, if that confused anyone else. (I misinterpreted it as representing /ˈɑːn.ʒə.loʊ/.) ~~~ freehunter That's interesting. I learned the pronunciation from The Simpsons episode, where they apparently pronounced it wrong (I know it wasn't actually Maya who did the voice). ------ aniijbod How many of us deserve to be described as consistently: 1\. beautifully spoken? 2\. profound in thought? 3\. joyful in outlook? and yet despite possessing these outstanding qualities, never allowing those who do not seem to, to feel anything but encouraged to try to follow her example. ------ calebm She is no longer a caged bird. It's really odd timing: I had never heard of Maya until yesterday, when I discovered her beautiful poetry. I even looked her up on Wikipedia to she if she was still alive... ------ rrrx3 She was a real inspiration to so many. A brilliant person that will be sorely missed. 
------ Duhveed I saw her speak once in college. She was a brilliant and engaging speaker. ------ davidtanner She was an amazing woman. ------ CoachRufus87 RIP
{ "pile_set_name": "HackerNews" }
Share Your Location Easily - crjHome https://itunes.apple.com/gb/app/mycords/id803293602?mt=8 ====== crjHome If anyone wants a free download then please tweet @conrjac for a coupon code for the App store.
{ "pile_set_name": "HackerNews" }
Street pastors calm down drunken aggro after closing time - pbowyer http://new.spectator.co.uk/2015/12/what-do-you-do-when-theres-drunken-aggro-after-closing-time-send-in-the-street-pastors/ ====== hanoz > Street pastors calm down drunken aggros after closing time At the risk of opening a Lego-built can of worms, the word aggro should not be pluralised to aggros, as it refers to the activity, not its practitioners. ~~~ herbig It's a British colloquialism. ------ dkokelley Very interesting point about the difficulty police have in being a "friendly, neutral presence". Is it possible to replicate the "street pastors'" success through police presence, or does the uniform elicit a response that antagonizes potential troublemakers? ~~~ ludamad I think a uniform is fine, but they have to really be a certain kind of people who see you doing something illegal and be friendly. Otherwise people will treat them with the same suspicion. ~~~ throwaway049 I believe dkokelley meant could the police achieve the same result or does their _police_ uniform cause a response among the drinking public that the street pastor uniform does not. In my experience working in the ambulance service, there is a threshold of aggro above which the police struggle to de- escalate (but can still overpower). I can't say if street pastors can do any better as I haven't worked alongside them.
{ "pile_set_name": "HackerNews" }
The Privacy Scandal That Should Be Bigger Than Cambridge Analytica - sqdbps https://slate.com/technology/2018/05/the-locationsmart-scandal-is-bigger-than-cambridge-analytica-heres-why-no-one-is-talking-about-it.html ====== WisNorCan This is far beyond Telcos. Lots of apps are selling location data to data intermediaries (e.g. GroundTruth and FourSquare) without user awareness or consent.
{ "pile_set_name": "HackerNews" }
First OpenSocial Application Hacked Within 45 Minutes - nickb http://www.techcrunch.com/2007/11/02/first-opensocial-application-hacked-within-45-minutes/ ====== hello_moto Was this Google's fault or the 3rd party's fault?
{ "pile_set_name": "HackerNews" }
New York, America’s unhappiest city - __Joker http://nypost.com/2014/07/22/new-york-americas-unhappiest-city/ ====== __Joker Article based on this study [http://www.hks.harvard.edu/inequality/Seminar/Papers/Glaeser...](http://www.hks.harvard.edu/inequality/Seminar/Papers/Glaeser14.pdf)
{ "pile_set_name": "HackerNews" }
Ask HN: How does instagress get permission - wootez Instagram requires users to have a proper business case in order to use their API to like&#x2F;follow people. Instagram says you shouldn&#x27;t use this to automate likes and follows. However, instagress seems to sell automated likes and follows - how does this happen? Is instagress secretly a facebook company? ====== MarkCole Very unlikely that it's a facebook company, the money they're making is nothing at the scale of facebook. It's more likely they don't have access to the official API, they've just reverse engineered the instagram iphone/android/web app and are using that to like/follow people.
{ "pile_set_name": "HackerNews" }
Graphene Doubles Up on Quantum Dots’ Promise in Quantum Computing - gsmethells http://spectrum.ieee.org/nanoclast/semiconductors/materials/quantum-dots-made-from-graphene-help-realize-their-promise-for-quantum-computing ====== xbmcuser I wish someone would do a chart of all the things graphene can be used for that have been posted on Hacker News in the last 5 years, with actual products that use it or are going to use it. ~~~ CoryG89 Here is the list: [ ] No really though, there are a lot of people researching it. I could find very little on the market actually using it. [http://www.physics.manchester.ac.uk/our-research/research-im...](http://www.physics.manchester.ac.uk/our-research/research-impact/graphene/) Graphene looks great on paper and in the lab. It seems making the economics work at scale may be a bit more difficult. ~~~ paulwal [http://www.hobbyking.com/hobbyking/store/__2001__85__Batteri...](http://www.hobbyking.com/hobbyking/store/__2001__85__Batteries_Accessories-Turnigy_Graphene.html) [http://www.rcgroups.com/forums/showthread.php?t=2592234](http://www.rcgroups.com/forums/showthread.php?t=2592234) [https://www.reddit.com/r/Multicopter/comments/43z8d0/thought...](https://www.reddit.com/r/Multicopter/comments/43z8d0/thoughts_on_the_new_turnigy_graphene_batteries/)
{ "pile_set_name": "HackerNews" }
Startup Madness - guglanisam http://sameerg.wordpress.com/2008/12/13/startup-madness/ ====== bootload _"... Each time we met a new person, we were constantly thinking of how this person can help our venture, . Everywhere we went, we explored if there was something there that could benefit our startup. Frankly we were classical ‘opportunity hounds” and quite shamelessly so ..."_ I've seen this so many times and it tends to work well in a boom. I'm curious how well does this idea work in the current crash? ~~~ guglanisam I would think importance of this increases in the bust / crash times as money is scarce / precious, one has to use innovative / free ways to get things done. In fact it happened with me most when madhouse was running with very low cash.
{ "pile_set_name": "HackerNews" }
Some business schools are shutting down their on-campus MBA programs - hhs https://www.forbes.com/sites/poetsandquants/2019/05/26/why-business-schools-are-shutting-down-their-mba-programs/ ====== mcmoose75 I went to one of the top-tier (top 3) MBA programs in the US. The way I've described that value of the top MBA programs is in 3 (roughly equally-weighted) categories: 1) The Education: Getting to discuss business concepts with a group of smart, successful folks for 6-9 hours a day is very helpful. Most of the content, I could get from books/ online articles at a fraction of the price, but spending the time in a focused way is actually helpful. 2) The Network: In terms of network, this isn't some vague amorphous thing at Stanford GSB, HBS, or Penn- it's knowing several hundred other folks who are going to be very successful in their careers. This can be useful in sales, fundraising, finding a next job, etc. and DOES have a tangible dollar value. 3) The Rubber Stamp: Having your resume say "Stanford GSB" or "Harvard Business School" does have value to future employers or customers- it de-risks you as you've been validated by a famous institution, and helps THEM to associate with these famous brands. For the top 3 programs, even though they're VERY expensive (approx $250k in tuition + living expenses, plus at least another $250k in foregone income for most incoming students, for $500k in actual cost+opportunity cost), they're almost certainly worth it. For middle-/ lower-tier MBA programs, 2) and 3) above aren't NEARLY as valuable, or perhaps don't even exist- is your "network" from Podunk State School MBA actually valuable? For these, the only real value is the actual education, and most students would be better served with an online program or just actually picking up some books/ case studies/ reading online articles. ~~~ Scoundreller > This can be useful in sales, fundraising, finding a next job, etc. and DOES > have a tangible dollar value. I believe you, but I haven’t seen any school websites actually advertise an a dollar value on these human synergies. If anyone would, you’d think it would be them. ~~~ lotsofpulp That would be very gauche, and would net them negative PR from the overall public due to laying bare the “it’s not what you know, it’s who know” mantra for no reason. The people looking to attend these schools already know the implied value. ~~~ Scoundreller Sounds more like intangible value then. ~~~ 0xDEFC0DE You can probably establish a floor value but getting a good average/median is going to be difficult because the top end is basically unlimited dollars for yourself. Taking a guess, a degree from a school listed should easily get a $100k salary in any big city at minimum unless you've tanked your reputation very publicly (like paying for your acceptance with bribes). Most people who go to ivy league aren't really concerned about their safety nets though. ------ amb23 US MBA programs need to start following the European model: Year-long to 16 month programs tailored to a slightly older demographic (late 20s to 30s) who actually need a degree to move a rung higher in their career. The shorter program length will decrease the costs for students (both in terms of tuition & opportunity costs) and--based on how much travel my friends currently pursing MBAs do during the academic year--is unlikely to hurt academic outcomes. There are a few MBA ROI calculators online, and for my career I wouldn't see an ROI until ~20-25 years down the line. 
(For context, I'd probably gain a ~30k salary increase if I were to pursue an MBA.) And that's with consistent salary growth with no sabbaticals or career changes (and any subsequent loss in earning potential) I might want to pursue. I would love to use a year long program to gain some needed financial modeling, HR, and operational skills while taking the time to pursue a business idea, but the traditional programs are not structured to accommodate that. And besides, you don't actually need an MBA if you work in the tech industry until you're in a senior position--VP or C-suite--so there's no direct need for the degree itself until ~10-15 years in the future. ~~~ jayalpha "US MBA programs need to start following the European model" No they don't. I, as a dual citizen, think US programs are by far superior. "And besides, you don't actually need an MBA if you work in the tech industry" "How to value your start-up? Add 1 Million for every engineer, subtract 500k for every MBA" Guy Kawasaki ~~~ rchaud The Guy Kawasaki quote is absurd when you consider that US startups bleed cash all the way up to and often well past their IPO. You best believe that the banks who are out there selling a fantastical vision of a startup's future profitability have MBAs in their team. They're the ones making the engineers' equity worth something. ~~~ ska It's not so absurd - it doesn't imply you shouldn't have any MBA's, just that the ratio to engineers should be very small at this stage. Which isn't wrong in a tech startup. Also, the banks don't really create value here, they preserve it if they can and in the best case help people not piss it away. All for a handsome percentage, but c'est la vie. ------ rahimnathwani The two existing comments are from people who are considering an MBA at some point, so perhaps these thoughts are helpful: \- Doing a full time MBA at a top school (or any school) is a large commitment of time and money. If your aim is to start a business, then it's worth considering what you could achieve with the same amount of time+money invested in getting that business started. If your aim is to get a promotion then, again, it's worth considering the opportunity cost. \- Some of the things you'll learn during the MBA can easily be learned on your own. But 90% of us usually only study the stuff we find most interesting or most immediately relevant. An MBA is a good mechanism to force some breadth. \- The people who get most out of an MBA (from what I've saw at my school, and other people I know) are those who have a clear goal, and have done some research beforehand about how an MBA from their chosen school will help them to get there. Those people also spent a significant portion of their time researching jobs, and trying to meet people to get leads. They didn't just focus on the classes until near graduation, and then apply via standard processes. They were focused on their goal (e.g. 'M&A Associate position at a top-tier investment bank') from the first day, until they got their job offer. \- There are plenty of people who pursue a full-time MBA as a 'break' from work, or because they want to learn a set of skills they can apply in their existing or future business, or because their employer (e.g. investment bank, or top-tier consulting firm) expects it and pays for it, or even just for fun. These are all acceptable reasons. But pursuing an MBA because you think it's a magic solution to $current_career_problem probably won't end well[0]. 
[0] This last part might be incorrect if you end up at a top school. Like, maybe if you do an MBA at Harvard you won't even need to apply for jobs, and people will come knocking on your door. I don't know. But I suspect the intersection of people who (i) get into HBS, and (ii) have unrealistic expectations of what a piece of paper can do for their career, is pretty small. ~~~ dahart > If your aim is to start a business, then it's worth considering what you > could achieve with the same amount of time+money invested in getting that > business started. I’m not sure how to even begin evaluating opportunity cost. I started a business, and had a successful exit, and I’m still considering the MBA for the next time. The reasons include: as an engineering founder I undervalued marketing dramatically and don’t know how to do it well, and I had a hard time speaking the language of investors. I want to add skills to my quiver that I found missing when I needed them. How would you suggest thinking about opportunity cost, especially if you lack experience in one or both options? In order to make starting a business more valuable than school, I feel like you have to either be much luckier or know what you’re doing and execute perfectly, which is incredibly hard to do the first time. The things I spent time on as a first time founder are things I think I would have learned not to do in business school. It worked out for me only because I got lucky, but I did burn a few years and a lot of money. After having chosen to start a business instead of school, I feel like it might have been more efficient to go to school. ~~~ rahimnathwani "I want to add skills to my quiver that I found missing when I needed them." This is a really good reason. What are those skills? Would attending business school help you build all/some of them? "I’m not sure how to even begin evaluating opportunity cost." If I were to give you $100k, on the condition that you spend the next 18-24 months doing only things that would help you build those skills, what would you do with that time? Start a business? Take smart people out for dinner? Get a job at a company you admire, and spend the cash on making your life more comfortable? Sign up for a degree program? ~~~ dahart Hmmm. If the goal of the $100k was to build the skills, I would include at least some school. Taking smart people to dinner and working in successful teams would also be on the list. The only thing I’d actively avoid is using that cash for comfort. But, building those skills may be far away from both knowing what skills I need, and from executing, right? I guess that’s part of my question... the answer to how to build those skills doesn’t necessarily have a lot of bearing on whether it would be better to start a business or go to school. Aside from what I might guess, how could I actually and meaningfully evaluate whether going to school is a more valuable use of time than just launching into another business? Maybe looking at rates of MBA founder success vs others is a start...? ------ vikramkr Most of the value in an MBA is from the face to face interaction and networking - the classes in business school are helpful sure but nothing you can't learn yourself in a book. 
If these schools weren't able to offer much in the way of networking and development of soft skills in the first place, then frankly going online makes sense because you don't lose anything, but i think that's a factor specific to these business schools, not the very top bschools that have no shortage of funding/extensive networks etc. ~~~ povertyworld Is it possible some corporations just have MBA requirements for advancement? Then it doesn't matter where you got the degree from just that you have it. I know this is the case in education, and has lead to the proliferation of online education grad degrees. Since you just need the piece of paper, most careerists optimize for the easiest online "mail order" type degree possible. They don't, however, optimize for cheapest, since tax payers bankroll most of these cheesy degrees through tuition reimbursement. ~~~ delfinom MBA used to be an requirement for an engineer to jump into manager roles. But nowadays it depends on the company. ------ rchaud You can see from the list of names in the article why those schools are dropping on-campus programs. Some of them are prestigious institutions, but not for business: \- University of Iowa \- Wake Forest University \- Thunderbird School of Global Management \- Virginia Tech \- Simmons College There aren't many people out there thinking, sure, I'll go into six-figure debt so my LinkedIn can say I have a Simmons College MBA. Simply put, if your on-campus MBA program isn't likely to result in a six-figure job right out of the gate, it's not worth the cost. And those six-figure jobs only come about if big, rich companies (banks, big 4 consulting/audit, big tech) come on campus to hire MBAs. If yours is not a Top 15 MBA program, you may be better off career-wise by staying at your current job and pursuing professional certifications related to your field. ~~~ askafriend I’d take it even further. If it’s not a top 5 program then you’d really really have to think about if it’s worth doing at all. ~~~ joker3 Top five is a little too strict, but top twenty, yeah, maybe. ------ bitL IMO online MBA from a respected school is a perfect thing for entrepreneurs that need flexibility and can't just take 2-3 years off. It's also awesome for digital nomads that can literally earn degree while lying on a Hawaiian beach or traveling around the world. And if the class is massive like at UIUC (2000 people?), networking effect gets a huge multiplier, globally, comparing to much smaller classes at M7 (and FAANG is quite strongly represented there, like with Georgia Tech's OMS CS). The only problem I see is prestige, so I'll keep an eye on iMBA to see how they wrestle with rankings. ------ _sword Interestingly, institutions such as Simmons (highlighted in this article) are partnering with online education platform vendors such as 2u to bring their campus online. Using 2u as an example, the schools offload all of the marketing for candidates, managing the online platform, and more to 2u in exchange for a revenue share for tuition for online students. Schools are in turn responsible for the course content including the professors, and for their admissions department to accept or deny applications provided by 2u. The net impact for the school can be incremental revenues by addressing students that weren't accessible previously, and that these incremental revenues come with a high margin for the school, which can help fund other areas of the school. Fascinating business model in my view. 
------ miohtama Online MBA would be only for career signalling? Is the other side of the coin, networking, present in any form? Has anyone taken online MBA and can comment the benefits? ~~~ Ibethewalrus Interested as well about the benefits/downsides of an online MBA by a prestigious vs regular school ------ baron816 Is there a lot of value in getting a non-specialized MBA if you're working in tech (or want to work in tech)? ------ SQL2219 There are 16 references in the comments about "top" programs. I am curious as to what top actually means, can I sort by the top column in a spreadsheet? Is it a code word for cost? ~~~ filmgirlcw It means Harvard, MIT, Stanford, University of Chicago, Penn — most generally. But there are different MBA programs. There’s the traditional MBA, the part- time MBA (which is a great choice for people who already work), and the executive MBA. ------ RickJWagner Anything that adds prestige and profitibility to online education is ok with me. Traditional college education is a racket. I'm hoping online is the way of the (near) future. ------ dahart Tl;dr on-campus programs are in decline while online programs are growing. Seems like the title is a tad misleading, the programs aren’t shutting down, they’re going online. But isn’t a lot of the value of an MBA in the face time and networking you get during the program? (Or is that a myth or only something that happens at Harvard?) Every once in a while I think about going back and getting an MBA, speculating it might help with my next startup, but I’ve always imagined going to campus. Are online tools and online degrees improving in the areas of having class discussions and making friends and asking questions and general people time? ~~~ lotsofpulp It’s not a myth, top MBA programs are top because they give you access to an exclusive network. Personally, I wouldn’t value an MBA much without that aspect.
{ "pile_set_name": "HackerNews" }
The future of photography and Unsplash - dstein64 https://medium.com/unsplash-unfiltered/the-future-of-photography-and-unsplash-811f114aab7a ====== pgeorgep Unsplash powers nearly all of the stock photography I see these days. It's crazy to see that photographers are able to get more impressions on their photos here than on any other platform (i.e. Instagram and the New York Times).
{ "pile_set_name": "HackerNews" }
Golang only 2x ruby at net/http level and same as ruby at web framework level? - gankgu https://gist.github.com/gankkank/3a59513ea81cb5ec5e33 ====== nostrademons TechEmpower has it at about 3x Ruby at net/http level, but close to 10x on BeeGo vs. Sinatra. [https://www.techempower.com/benchmarks/#section=data-r9&hw=p...](https://www.techempower.com/benchmarks/#section=data-r9&hw=peak&test=json) (Interestingly, JRuby is actually within about 10% of Go.) This doesn't surprise me all that much: the guts of Ruby's HTTP parsing & network handling are generally done in C. It's only when you layer all the code in Rails through the default Ruby interpreter that it gets slow.

------ coldtea What does this "hello world" measure? It doesn't measure Ruby's speed, that's for sure. The IO is C, and the HTTP parsing and network operations are also C in Ruby. Also, the frameworks you used are minimal (for both Ruby and Go), so their overhead is negligible as well. Again, you're mostly measuring some C calls vs Go calls. So, a more accurate title would be: "Golang only 2x C at net/http and same as C at web framework level". Now, try a full-blown Rails service or a Sinatra endpoint that DOES some processing, not just prints something, and compare it with the same thing in Go.

------ gankgu But in the real world, people will be attracted by posts like "Iron.io Blog: How We Went from 30 Servers to 2: Go", and will like to think they can use it to do things faster and more easily! Also, for start-ups, it's important to choose a language with a certain level of performance up front rather than having to rewrite all the code later.

------ smt88 Cross-language benchmarks are nonsense. Hardware can be scaled. Time cannot. Use the platform that saves you the most time (now and when you're in "maintenance" mode) and worry about performance later.
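For readers who haven't clicked through to the gist: it isn't reproduced here, but the Go side of this kind of hello-world comparison is typically nothing more than the sketch below (the handler body and port are illustrative, not taken from the linked benchmark). Nearly all of the time goes to HTTP parsing and socket I/O inside net/http, which is coldtea's point: the user code barely exercises the language at all.

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    // hello writes a fixed response; any measured difference comes from the
    // server plumbing around it, not from this one line of user code.
    func hello(w http.ResponseWriter, r *http.Request) {
        fmt.Fprint(w, "hello world")
    }

    func main() {
        http.HandleFunc("/", hello)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

Swap the body of hello for real work (template rendering, JSON encoding, database calls) and the gap between runtimes starts to show, which is why the full-blown Rails comparison suggested above is the more interesting one.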
{ "pile_set_name": "HackerNews" }
Careful, there's an app which will delete all your tweets - FluidDjango http://technolog.msnbc.msn.com/_news/2011/12/27/9737518-careful-theres-an-app-which-will-delete-all-your-tweets ====== rtjggfj The point of the application is to delete all your tweets, it explains that it's not undo-able, and it requires confirmation before it does it. Relax.
{ "pile_set_name": "HackerNews" }
LShift is terrified by asynchronous libraries performance - bandris http://www.lshift.net/blog/2008/12/15/asynchronous-libraries-performance ====== tptacek Couple things. First, there isn't enough context in the LShift article to judge whether Marek has a valid point. He's conceded up front that both Libev and libevent show logN growth for adding new fd's to the loop, and seems to be arguing that it could be brought down to constant time. So? He hasn't presented evidence that the overhead he's talking about is significant in the real world. This is compute vs. I/O in an I/O-bound environment.

Next, he also seems to be taking his cues from the second set of benchmarks on the Libev page, in which every socket has a random timeout. This is a worst-case scenario for both Libev and libevent. Contrary to both the Libev page and Marek's analysis, setting timeouts on individual I/O events is not the only way to implement "idle timeouts" (which many protocols don't even have); it is in fact one of the poorer approaches.

Finally, he's taken two very specific benchmarks and generalized them to the whole design approach. Libevent is considered "high-performance" because async I/O is a high-performance design --- especially compared to demand-threading --- and libevent was the first stable, cross-platform implementation of it. It's also "high-performance" because it implements the high-performance kernel interfaces, like epoll. But the internals of libevent, while reasonable, are not (at least last time I checked) particularly optimized. You can get around most of these problems by building on top of libevent; for instance, at Arbor Networks, we ditched libevent's (verbose, clumsy) timer interface for a better data structure, and simply kept libevent notified of the next closest timeout.

"Terrified" seems like a very silly word to use here, but I'm always eager to get schooled. ~~~ signa11 > Contrary to both the Libev page and Marek's analysis, setting timeouts on > individual I/O events is not the only way to implement "idle timeouts" just a quick question: what would you consider a superior approach to idle timeouts? thanks for your insights! ~~~ tptacek Use another data structure to track connections by liveness, which you can measure by timestamping on each RX event. Relax the requirement that a connection idle timeout of 60 seconds must fire in exactly 60,000 milliseconds; 60, 61, 65, nobody cares.

------ huhtenberg _The problem is that currently asynchronous libraries often use binary heap as a representation of internal priority queue._ This is a grossly ignorant statement. I have written my share of event loops (including epoll and IOCP based) and choosing a heap for timer management is an _obviously_ dumb thing to do if you care about performance. Also, removing objects from the event loop _before_ delivering a callback is even more dumb, for exactly the reasons he listed in the post.
{ "pile_set_name": "HackerNews" }
A new class of attack via LinkedIn Skills - rjurney https://twitter.com/rjurney/status/567455739245895681 ====== positr0n Sorry to be that guy, ([http://xkcd.com/1053/](http://xkcd.com/1053/) and all), but this isn't new. It was discovered soon after the skills feature came out. Here's a buzzfeed article from 2013:[http://www.buzzfeed.com/charliewarzel/heres-how-to- endorsmen...](http://www.buzzfeed.com/charliewarzel/heres-how-to-endorsment- bomb-your-friends-on-linkedin) Congrats on your independent discovery though :-) ------ rjurney I did it all for the lulz.
{ "pile_set_name": "HackerNews" }
Postmortem: Azure DevOps (VSTS) Outage of 4 Sep 2018 - wallflower https://blogs.msdn.microsoft.com/vsoservice/?p=17485 ====== a2tech Long story short they give no indication as to why their data center cooling systems were unable to handle voltage changes caused by the storm and their systems are not designed for speedy restoration into another region. ~~~ chrisbolt Seems like there's more information in the preliminary RCA on [https://azure.microsoft.com/en- us/status/history/](https://azure.microsoft.com/en-us/status/history/) ~~~ bpicolo > Initially, the datacenter was able to maintain its operational temperatures > through a load dependent thermal buffer that was designed within the cooling > system. However, once this thermal buffer was depleted the datacenter > temperature exceeded safe operational thresholds, and an automated shutdown > of devices was initiated ~~~ souterrain Total failure of a data center's cooling apparatus seems to be a very rare occurrence to me, perhaps limited to simultaneous failure of utility and genset power (example: electrical switchgear and fuel pumps underwater due to flooding). Anyone have any data around how frequently such a failure occurs? ~~~ beh9540 I had the same thought. The only thing I could come up with is that it wasn't a failure of power supply, but that a surge took down enough cooling systems that they couldn't maintain temperature. A lot of DC's I've seen are N+1 with cooling (or even 2N), but they all run at the same time and are the same units. Or the control system went down, and they weren't able to get it back up and running, although I would think they would have redundancy in that case. ------ outworlder Ok, it's understandable, freak events happen. > The primary solution we are pursuing to improve handling datacenter failures > is Availability Zones, and we are exploring the feasibility of asynchronous > replication. This I do not understand. I was also amazed when I saw that Azure AZs are not available on all regions. In AWS, the bare minimum is 2 AZs (except for one odd region). Same thing for Google Cloud. ~~~ scarface74 From what I understand and I can't find the reference anywhere, each region has at least three availability zones. Some regions only have two user selectable AZ's. For instance, S3 promises that it is replicated between 3 AZ's in a region. That guaranteed is available in regions that only have two publicly available AZ's. ------ romaniv At the end of the day, Azure and AWS are monocultures with considerable amount of centralization and interdependency within their services. Their scale undermines the original purpose behind the Internet. It bothers me that increasing number of large companies dump their own data centers to jump into The Cloud. Thia means future outages (which will undoubtedly happen) will have wider and wider impact on end users. For example, if your email is hosted on AWS and it goes down, you loose access to your email. No big deal. However, if your email, VOIP and IM/chat go down at the same time, you may loose all ability to communicate electronically. This can be a very big deal in certain situations. ~~~ otterley The original purpose behind the Internet was to build a robust layer-3 network based on packet switching technology. The designers weren't focused on the application layer. 
Source: [https://www.internetsociety.org/internet/history- internet/br...](https://www.internetsociety.org/internet/history- internet/brief-history-internet/) Separately, I think we have enough history of working with the cloud at this point to demonstrate that major providers' availability is on par with, or better than, the availability of the typical small entity. Sure, the impact is potentially wider spread (although this can be mitigated with a cellular architecture, which first-class providers do employ), but there's a perverse advantage that when outages occur, they tend to get fixed a lot faster because the complaint volume is much higher. ~~~ romaniv _> when outages occur, they tend to get fixed a lot faster because the complaint volume is much higher._ On the other hand, they can be much harder to fix, because the sheer scale of failures and complexity of the infrastructure. There is a higher probability of complex systemic issues, as demonstrated by this very outage. There are plenty of smaller providers that beat Azure VMs in uptime. Plus, smaller websites/services can employ much simpler failure mitigation strategies. ~~~ otterley The "complex systemic issue" here is that Azure is only now rolling out availability zones, and the product in question hasn't yet been able to take advantage of them to mitigate a serious DC fault caused by an Act of God. The necessity of low-latency-but-decoupled-physical-plant AZs is well known in the art by now, and these issues will no doubt be addressed as Azure matures. Remember, they're 5 years behind AWS. ~~~ romaniv _> The "complex systemic issue" here is that Azure is only now rolling out availability zones,_ Availability zones are a _mitigation_. The issues is the sequence of events and dependencies described in the postmortem. The description has six paragraphs. ~~~ otterley I'm not precisely sure what you're referring to. Can you cite the precise problem discussed in the postmortem, and how, specifically, you think it could have been better designed? And how could your perfect model, whatever that is, survive a similar catastrophic DC failure without availability zones? ------ swebs That's an extremely roundabout way of saying there was a lightning strike and they had inadequate surge protection. ------ em0ney Really not the worst post mortem I've ever seen ------ sungju1203 just use AWS. simple. ~~~ Bhilai Comments like this are counter productive for the discussion. Competition is always good and some of us like that AWS has competition in the form of Azure and GCP. AWS has had its own share of outages so its not perfect either. ------ byte1918 > VSTS (now called Azure DevOps) Not again. ~~~ herbderb What's the point of even mentioning that when they just use the old name for the entirety of the article anyway ~~~ skrebbel The point is that the headline should've been "Azure DevOps Outage…" but they're afraid that other outlets will take that over as "Azure Outage..." and they don't want a headline like that making the rounds. So they use the old name for bad news and the new name for good news. TBH I'd do the same. ~~~ freeone3000 But is _is_ an Azure outage. It took out a DC. VSTS is one of the services affected, but other services were also affected. ------ indemnity Is this what we have to look forward to as GitHub will be forced onto Azure? ~~~ manigandham No service is perfect and github has had plenty of outages. They will only become more reliable with the resources of MS/Azure at their disposal. 
~~~ tumetab1 Having worked at a big Azure customer, I wouldn't say resources equal stability. The reality is more resources + quality engineering + failure testing. As this case shows, Azure isn't spending a lot on failure testing. Also, having been a big Azure customer, I can tell you that things look better than they are. ~~~ eropple Small Azure customer (five figures a month), but I can co-sign all of this. Azure looks shiny from the outside, but we've had way, way more problems, from uptime to bad APIs to _awful_ language SDKs to bad user interfaces to licensing hell, than I've _ever_ had on AWS or GCP. It's so bad that I am currently weighing whether or not to advocate for a migration off of it, at nontrivial expense, because I cannot pretend to provide reliable services for our customers.
{ "pile_set_name": "HackerNews" }
Ask HN: I Just Got A Used MacBook Pro. What To Install? - tronium I just got a late 2011, 15-inch MacBook Pro from my older brother. I enjoy programming&#x2F;developing a lot, so what apps should I get&#x2F;install on the new system for developing and productivity? ====== jevinskie Divvy lets you easily resize windows to a grid pattern. There may be similar free utilities but I found that it was worth the $14. [https://mizage.com/divvy/](https://mizage.com/divvy/) If you are interested in binary objects/executables, check out MachOView. Think of it as an excellent GUI version of nm/readelf (for MachO, obviously) with search. [https://github.com/gdbinit/MachOView](https://github.com/gdbinit/MachOView) ~~~ tsm People at work use SizeUp, which is free and beautiful: [https://www.irradiatedsoftware.com/sizeup/](https://www.irradiatedsoftware.com/sizeup/) I use Slate, which is free and powerful: [https://github.com/jigish/slate](https://github.com/jigish/slate) ~~~ kovrik +1 for Slate ------ karangoeluw Some apps I use every day: \-------------------------- Homebrew - [http://brew.sh/](http://brew.sh/) Growl - [http://growl.info/](http://growl.info/) Alfred - [http://www.alfredapp.com/](http://www.alfredapp.com/) Sublime Text - [https://www.sublimetext.com](https://www.sublimetext.com) Transmission - [http://www.transmissionbt.com/](http://www.transmissionbt.com/) Transmit - [http://panic.com/transmit/](http://panic.com/transmit/) Evernote - [http://www.evernote.com/](http://www.evernote.com/) BetterTouchTool - [http://blog.boastr.net/?page_id=1722](http://blog.boastr.net/?page_id=1722) Dash - [http://kapeli.com/dash](http://kapeli.com/dash) F.lux- [https://justgetflux.com/](https://justgetflux.com/) ~~~ arzugula I thought Growl was kind of dead since Apple introduced Notification Center? ------ vincentbarr These are my 'must-haves', or very close to it, and most of them are free or offer a free version. Alfred 2 (search and a lot more) aText (text expansion) Adium (Chat) Adapter (audio/video filetype conversion) Caffeine (prevent display from dimming or sleeping) Chrome (browser) Colloquy (IRC client) Dash (documentation and snippet browser) Dashlane (password management) Doubleplane (window resizing) Dropbox (cloud storage) Evernote (notes, bulky) Firefox (browser) F.lux (smart display brightness) Handbrake (video transcoder) Hazel (file/folder automation) iTerm (terminal replacement) Jumpcut (store and recall clipboard history) MailMate (email) Mou (markdown editor with live preview) Readkit (RSS reader) Screenmailer (free, easy screencast creation and sharing) Simplenote (notes, lean) Skype (calls) Spark (hotkey) Sublime Text Editor 3 (text editor) TicToc (time tracking) VLC (media player) ------ hansy For the people who use Alfred ([http://www.alfredapp.com/](http://www.alfredapp.com/)), I'm curious to know Alfred's advantages over the native Spotlight (which IMO works fairly well) or other similar apps like Found ([https://www.foundapp.com/](https://www.foundapp.com/)) or Quicksilver ([http://qsapp.com/download.php](http://qsapp.com/download.php))? Oh and to add my two cents to the OP's question: HyperDock ([http://hyperdock.bahoom.com/](http://hyperdock.bahoom.com/)): Windows 7 functionality to preview individual windows ------ Croaky [https://github.com/thoughtbot/laptop](https://github.com/thoughtbot/laptop) will install Homebrew, Tmux, Silver Searcher, Postgres, Redis, a few programming languages, and other items. 
[https://github.com/thoughtbot/dotfiles](https://github.com/thoughtbot/dotfiles) sets up a bunch of slick aliases and plugins for Vim and ZSH to make development productive. ------ shawnreilly Lately I've become a fan of isolating multiple environments. This way I can run different IDE environments on the same machine without conflicts or dependency problems. There are quite a few ways you could do this, ranging from entire VM's (something like virtualbox), to VM containers (something like docker), to language specific isolated environments (something like virtualenv for python or rvm for ruby), to prebuilt environments (something like bitnami). Each one has different pro's and con's (too heavy, too complex, etc) but the general idea is the same; Having the ability to build multiple isolated environments makes it easier for me to maintain those environments. It also gives me the flexibility to test different environment variables with some sort of fallback if something goes wrong. So it's something I would recommend, but YMMV. Another recommendation I would make (not software, but still a must IMO) is to install an SSD and max out the RAM. Feels like a whole new machine! Good luck and have fun. ------ ken_laun You have a good older brother. I recommend these apps. <Developing> iTerm2 Firefox Sublime Text Cyberduck Xcode Gimp(Image) Skitch(Image) <Productivity> Evernote Dropbox Alfred Memory Clean 1Password ------ jmagnusson Alfred App is an absolute essential in my book (especially custom web searches) [http://www.sequelpro.com/](http://www.sequelpro.com/) Sublime Text. Makes u feel like a magician. [http://www.sublimetext.com/](http://www.sublimetext.com/) Sequel Pro. Best db manager out there. Wish they just supported more than MySQL. [http://www.sequelpro.com/](http://www.sequelpro.com/) iTerm2. The built in terminal in OS X kind of sucks. [http://www.iterm2.com/](http://www.iterm2.com/) Homebrew. The missing package manager for OS X. [http://brew.sh/](http://brew.sh/) ~~~ surreal ( Typo, Sequel Pro's URL has been put for Alfred App. For convenience it's [http://www.alfredapp.com](http://www.alfredapp.com) ) ------ marmarlade Some great submissions here already (second/third the usual suspects Divvy, Alfred, VLC, Sublime Text et al.) Depending on what you use for productivity, you might find a Pomodoro Timer useful. There are loads, and I quite like this one: [https://itunes.apple.com/gb/app/pomodoro-timer-focus-on- your...](https://itunes.apple.com/gb/app/pomodoro-timer-focus-on- your/id872515009?mt=12) (or just use the Chrome app [http://tomato- timer.com/](http://tomato-timer.com/)) And for writing creatively, I can highly recommend OmmWriter. [http://www.ommwriter.com/](http://www.ommwriter.com/) ------ rgawdzik The other recommendations are awesome. For me, I like using a lot of desktop window management, however the Mission Control transitions are too slow for me, with the fact that are bulky and uncustomizable. There is TotalSpaces2, which basically is similar to Ubuntu/etc spaces, but you can customize the transitions, hotkeys, locations, etc. Even though I don't have any transitions (so my switching is instant), you can have cube transitions, etc, very similar to Gnome. Downside to the program: $18, with a trial. If you miss proper desktop management, do it. Combined with Spectacle (A tiling window manager), I have functionality similar to XMonad, so I can use my mac effectively. ------ BillyParadise Lets see... 
I recently went Mac for the first time, and what do I have on there? For "serious" work-related things, I have Sublime Edit and MacPorts. That's everything. I picked up Omnigraffle but it's just not all that useful to me with a small screen. I'll look at using it again when I replace my desktop with a mac (or when I get an external monitor for the MBA) Oh, and I have MSDN access, so I put Office on there. But honestly, I never use it. (Disclaimer, I'm an old school "only have 1 page of apps on my iPhone" kind of guy) ------ celias SourceTree from Altassian git and mercurial client, free [http://www.sourcetreeapp.com](http://www.sourcetreeapp.com) CodeRunner from Nikolai Krill for easily running/testing code snippets in any language, $9.99 on the App Store [http://krillapps.com/coderunner/](http://krillapps.com/coderunner/) ------ british_geek Guardian Angel is pretty cool, it locks your Mac when you walk away so you don't need a password to lock / unlock it. Definitely worth checking out for $4 - [https://itunes.apple.com/us/app/guardian- angel/id657241260?m...](https://itunes.apple.com/us/app/guardian- angel/id657241260?mt=12) ------ collyw Windows ~~~ adamconroy Ditto ------ xauronx I'm a fan of Sip, it lets you grab colors off the screen and generates code for you. ------ itazula Notational Velocity ------ 2close4comfort Quicksilver
{ "pile_set_name": "HackerNews" }
Ask HN: Industry job for 1+ year or try directly for grad school? - cybernoodles Curious about what your thoughts would be on the pros and cons of the two options and whether or not going into industry for a bit would hurt or improve someone's chances. My real passion is research. I've been told working at one of the top tech companies could only help me get into my desired program (CS, possibly ML/AI). ====== elmarschraml Go work in industry for a bit before you go to grad school. Why? You already know academia, but not industry. There is a good chance that you will like working in industry, and after a year or two will think that going back to school would have been a waste of time. And if not, you can always go to grad school later. Even if you are already set on going to grad school, having worked in industry will give you a much better idea of what really matters to users and applications in the real world and will probably give you good ideas about what direction to take your research. And on the practical side, you can make more money in an industry job, so you can save up some money, and spend the time in grad school not having to worry about finances. ------ narwally I'm going through that same decision right now. I'm currently doing an internship at a startup that is allowing me to learn machine learning and data analysis in a production setting. Like you, I eventually want to do research, but I think grad school is just one way to get there. I'm going to spend another year or so in industry, see if I can make it into doing the kind of research I'm interested in, and if not I'll head back to school. I see grad school as taking a massive pay cut over 2-6 years in order to further my education. However, if I can get a similar education while being paid at market value, then there's no competition. ~~~ cybernoodles How could you get that same education while being paid market value? Don't these companies require a post-grad degree in order to be considered for research positions? ------ codegeek You can always go back to school. But nothing beats real industry experience, especially if you can work at one of the top tech companies. So I would say go get a job, work for a year or 2 and then decide if you still want to go back to school to continue research etc. You will have a much better idea. On the other hand, if you feel that your chances of getting into one of the top tech companies are not that great right now but you can get into a top grad school program, then go for the grad school. You will connect with top companies there. ~~~ cybernoodles Sorry. Forgot to mention, I have an offer from one of them. Would it be a bad idea to try to negotiate the "non-negotiable" starting salary since the stocks and signing bonus wouldn't be much use to me? (I need to be there for probably 4 or 5 years for all the bonus and stock stuff to be disbursed. I'd say about 25% of it would be disbursed before I went off to grad school without needing to be repaid.) ------ forward_number Having a year of experience may help you to define your research interests with more pragmatic goals in mind. A complaint that I frequently hear from friends who have gone into academic research is that there are very few people in the world who can understand the area where they do research. Presumably, if one can focus one's research on the area that is of interest to the real world, one can find many more people to talk to, get motivated, etc.
{ "pile_set_name": "HackerNews" }
The humble USB cable is part of an electrical revolution - dmmalam http://www.economist.com/news/international/21588104-humble-usb-cable-part-electrical-revolution-it-will-make-power-supplies ====== kabdib It's going to be interesting, from a security standpoint. One of the original cracks on the PS3 was via the USB stack. I'd not go plugging my computers or phones into jacks that I don't necessarily control. I can see a market for buffering devices that allow power through, and perhaps do power negotiation for you, but that do not allow data traffic. I believe these devices already exist. though I don't know how sophisticated they are. ~~~ x0x0 (I'm not an engineer). I a similar article; iirc android phones could be owned by plugging them into a hostile usb connection. I wonder, though, if you couldn't produce a simple adapter that just drops some of the pins? Maybe I don't know enough about usb. ~~~ jdietrich You can buy just such an adapter from the link below; Version 2 is in the works, which will include a microprocessor to allow for full-power charging. [http://int3.cc/products/usbcondoms](http://int3.cc/products/usbcondoms) ~~~ x0x0 thank you jdietrich ------ Brakenshire This is interesting as a contribution to the solar net metering debate. If you have a local low-voltage DC network, perhaps backed up by a UPS, you can charge the UPS battery using solar, and that means that solar electricity generated on-site automatically displaces grid electricity at the retail rate. That means that solar would only need to compete with the retail price of conventional electricity, and not the generation cost. And solar cost is already at or near parity with retail price in many parts of the world: This is 2010: [http://reneweconomy.com.au/2012/solar-pv-its-cheaper-than- yo...](http://reneweconomy.com.au/2012/solar-pv-its-cheaper-than-you- think-58689/bnef-12) And this is 2025: [http://reneweconomy.com.au/2012/solar-pv-its-cheaper-than- yo...](http://reneweconomy.com.au/2012/solar-pv-its-cheaper-than-you- think-58689/bnef-2025) (Countries above the isobar have lower solar costs than the price of electricity from the grid) ~~~ furyg3 Of course you also have generation costs, though (The solar panels, the UPS). ------ ams6110 One problem with low-voltage DC is that according to Watt, to achieve equivalent power (watts) at a low voltage, current is higher. And heat is proportionate to current squared. So you can't send a lot of power at low voltage because you lose a lot to heat. That means low-voltage runs have to be kept fairly short, or very low power, limiting their usefulness. ~~~ GammaDelta Funnily enough USB's inability to work over longer cable lengths works in its favour here. Generally we don't rely on anything working over 3 metres. Wikipedia says "the maximum power supported is up to 60 W at 20 V, 36 W at 12 V and 10 W at 5 V" [1]. For a typical 3 metre 20 gauge USB cable 10 W power delivery will cause a voltage drop from 5 to 4.6 V. This is within the +0.25/-0.55 specified for USB 3 [2]. Since nobody can rely on the 5 V from USB to be exactly the charging voltage they need, there will probably be switchmode regulators in-line anyway. Device manufacturers will just have to spec these up a bit to accept higher input voltage if they want more than 10 W. 
[1] [http://en.wikipedia.org/wiki/USB_Power_Delivery_Specificatio...](http://en.wikipedia.org/wiki/USB_Power_Delivery_Specification#PD) [2] [http://en.wikipedia.org/wiki/USB](http://en.wikipedia.org/wiki/USB) ~~~ michaelt Yeah, it's fine for USB cables. The difficult bit would be if you're planning on cabling an entire office building from one low voltage DC source, like the solar panels the article mentions. 60 watts at 20 V is 3 A, and if you want to supply a hundred ports with that you're going to need copper cables the size of your finger. ~~~ tanzam75 A standard is currently being developed for DC distribution in datacenters and commercial buildings, at 380 volts. One of the proposals for residential DC is to supply 380 volts and 24 volts. The higher voltage would be for things like space heaters and hairdryers, while the lower voltage would be for electronics.
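GammaDelta's 4.6 V figure above checks out if you assume roughly 33 milliohms per metre for 20 AWG copper (a standard handbook value, and the one assumption here) and remember that the current crosses both conductors of the cable. A quick TypeScript back-of-the-envelope:

    // Voltage drop on a 3 m, 20 AWG USB cable delivering 10 W at 5 V.
    const ohmsPerMetre = 0.033;              // ~20 AWG copper (assumed value)
    const cableLength = 3;                   // metres
    const conductorLength = 2 * cableLength; // out on VBUS, back on ground

    const watts = 10;
    const sourceVolts = 5;
    const amps = watts / sourceVolts;                   // 2 A
    const drop = amps * conductorLength * ohmsPerMetre; // ~0.4 V

    console.log(`${amps} A, drop ${drop.toFixed(2)} V, ${(sourceVolts - drop).toFixed(2)} V at the device`);
    // -> 2 A, drop 0.40 V, 4.60 V at the device

which is why anything much beyond 10 W has to come with a higher bus voltage rather than with more current at 5 V.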
------ _Adam If anyone is interested in the actual technical details, check out the docs here: [http://www.usb.org/developers/docs/](http://www.usb.org/developers/docs/) There's a 37.7MB zip and the USB PD specification (328 pages) is packaged within. In order to ensure shit doesn't melt, they'll have detectable cables for >5V and >1.5A operation. I'm reading the spec to find out how they plan to do cable detection. IC based, or electrical connections, or maybe something else? ------ blocke Network engineers in campus and enterprise environments have been building a DC network overlay for years in the form of Power over Ethernet. All of those VOIP phones, access points, and security cameras all need DC power with UPS backup and the network closet has become where that power is provided. On our campus it's reaching the point where every switch we'll be buying will soon be PoE. I imagine many places are far ahead of us on this. What is the max cable length of USB Pd? ------ fab13n 100W at 5V means 20A. We usually recommend 4 A/m2 max for copper conductor sections, so it would require two 5mm2 wires in the cable here. It would feel more like a rod than a cable IMO. Anyone got an idea how they plan to address this? Higher voltages? ~~~ kyzyl Well they're pretty limited in what they can do. Most things you'd plug a USB cable into are not thing you'd want to plug higher voltages into, so that would require device-side voltage conversion which is wasteful in energy, space and complexity. I suspect that the 100W figure is actually just a pulsed maximum specification. The thing is that current ratings for wires are actually specified as a max. continuous current for a given temperature rise in the conductor, per unit length. So blowing 3-4x the current through a conductor for a very short time (think of flashing a bulb or moving a servo) is not a big deal. You just get a transient heat rise. Also, most applications simply don't require 20A. Even microwaves and kettles stay below the 15A residential fuses. (Although they do come close. I once had a shitty basement suite with an underrated fuse. If I ran my toaster and my kettle at the same time the breaker would flip!) ~~~ nickff Almost all your USB-powered devices have voltage converters with varying inefficiencies. It should also be noted that a switched mode voltage converter can have well over 90% efficiency, even with large changes in voltage. You should also remember that those microwaves and kettles are getting up to 15A @ 120VRMS continuously, which works out to 1800W. You can verify the actual power output of a kettle by timing how long it takes to boil a liter of water, and calculate power from this time and the specific heat capacity of water. ~~~ kyzyl There are some practical limits to voltage conversion if you want to keep that high efficiency. Probably most important is that your switching frequency shouldn't be as high as it is in most small devices (because high freq. allows you to use smaller components). In any case, as dfox mentions, most internal voltage level conversions won't be switched, because it adds complexity. They will be some form of linear regulation s.t. they can move between logic levels. That's different than moving from whatever high voltage is on your 150W USB line into a level that won't fry CMOS circuitry. There's a reason that the wall-->DC plug conversion usually happens in a brick on your power cable. 
Switched mode will be used as sparingly as possible, such as when you also need AC signals rectified, if you need both buck and boost depending on a battery or something, or if you need to be able to modify the control loop dynamically. > those microwaves and kettles are getting up to 15A @ 120VRMS continuously, > which works out to 1800W Well that's kind of my point. Even at 1.8kW those devices don't need to draw 20A continuous (or even pulsed, because of the fuse). Basically no matter what you're doing, the copper losses are roughly fixed by the hardware. What you can control are heat dissipation and current levels, and it's a lot more fun to play with Ohm's law than try to fight against thermodynamics. ------ zhte415 In 2006 China demanded all mobile telephone manufactures to standardise on USB connections for charging and data transfer. South Korea did so a year earlier, requiring 'standardized charging' without explicitly stating USB. [1] I'm curious if or how this requirement had any impact on charger standardisation. The Chinese market combined with economies of scale for common production models could have outweighed any cost benefits a market for chargers could have brought. [1] [http://news.softpedia.com/news/Chinese-Government-Demands- US...](http://news.softpedia.com/news/Chinese-Government-Demands-USB-Access- Mobile-Phone-Chargers-43092.shtml) ------ pkulak Anyone know how this standard actually works? I think right now Android phones tend to short the data lines and Apple uses some system of voltages to communicate that it's high power, which means that the other device usually gets stuck pulling 0.2 amps. How does this new standard tell the device it can supply 100 watts? And does this mean that iOS and Android will be stuck on 0.2 amps? Can you plug a legacy device on at all? ~~~ em3rgent0rdr [http://www.usb.org/developers/powerdelivery/](http://www.usb.org/developers/powerdelivery/) USB has both power and data lines. This new standard can use data lines to negotiate power delivery, I believe. I'd expect will still be backwards- compatible of course. I would think the only allowable voltage would be 5V, as is currently. ~~~ srinivasanv 100W at 5V would be a 20A current, which is sort of high. [http://www.usb.org/developers/powerdelivery/PD_1.0_Introduct...](http://www.usb.org/developers/powerdelivery/PD_1.0_Introduction.pdf) Page 9. There will be different voltage levels: 5V, 12V, and 20V. ~~~ em3rgent0rdr yup, I had read that and corrected exactly same time as you :) ------ simcop2387 With the new USB Power Delivery spec this can finally make sense to really start thinking about. Being able to power things that are "non-trivial" as far as power goes would make this go a long way. Imagine your sound system being powered off of a very clean DC power source, would be an audiophile's dream. 20V and 100W would make for a lot of nice options for powering things. ~~~ quesera > 20V and 100W would make for a lot of nice options for powering things. Requires a pair of 17AWG stranded wires for a 2m run, allowing 3% cable loss. For comparison, cat6 is 23 or 24AWG stranded, and US residential power wiring is typically 12 or 14AWG solid core. Smaller numbers are bigger, less flexible wires. I guess it won't work with microUSB contacts, or at least not at full power. ------ perlpimp On wikipedia I can count at least 6 different types of connectors for different voltages/current, sounds like a deal breaker to me. 
I'd rather have devices adopt Power over Ethernet, which requires just one cable and gives many more opportunities for a networked future of devices. In fact: [http://www.commercialintegrator.com/guide/product/details/po...](http://www.commercialintegrator.com/guide/product/details/poe_to_usb_chargers_it_chrg_p2u_it_wpchrg_p2u) [http://www.creativeplanetnetwork.com/the_wire/2012/06/14/fsr...](http://www.creativeplanetnetwork.com/the_wire/2012/06/14/fsr-powers-up-for-infocomm-2012-with-new-poeusb-charger-for-ipad/) So I think RJ45 is a better standard to rely on, in that it has the option to carry networked data and intelligently carry voltage to charge USB-powered devices. You can already get RJ45 hubs on the cheap too, ones with 8 ports and such. ~~~ kalleboo We'd need a "micro-RJ45" to get any adoption among portable devices such as phones or tablets. And some solution to those terrible plastic clips. ------ skreech Disregarding the flamboyant visions of an electrical revolution, just having a flippable physical interface (and hopefully not in three different sizes, two of which always get mixed up) would be a huge enough leap forward for everyday life. ------ legulere They forgot to mention the most important thing about the system from Moixa: It's variable voltage and devices generate the needed stable voltages themselves. ------ imahboob love USB.. now I don't have to go looking for a compatible charger or connector whenever I lose mine
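Running the numbers behind the wire-gauge back-and-forth above (fab13n, srinivasanv, quesera): current falls linearly as the negotiated voltage rises, which is the entire point of the 12 V and 20 V profiles. A rough TypeScript sketch using fab13n's ~4 A/mm^2 current-density rule of thumb (his comment says A/m2, but the 5 mm^2 arithmetic there only works out in mm^2); the 10/36/60 W figures are the per-voltage maxima quoted from Wikipedia earlier in the thread:

    // Current and rough copper cross-section per conductor for the USB PD
    // voltage profiles discussed upthread.
    const profiles = [
      { volts: 5, watts: 10 },
      { volts: 12, watts: 36 },
      { volts: 20, watts: 60 },
      { volts: 20, watts: 100 },
    ];
    const ampsPerMm2 = 4; // conservative rule-of-thumb current density

    for (const { volts, watts } of profiles) {
      const amps = watts / volts;
      const mm2 = amps / ampsPerMm2;
      console.log(`${watts} W @ ${volts} V -> ${amps.toFixed(1)} A, ~${mm2.toFixed(2)} mm^2 per conductor`);
    }
    // 100 W at 5 V would instead be 20 A, i.e. ~5 mm^2 per conductor
    // (fab13n's "rod"), while 100 W at 20 V is 5 A and fits ordinary cable.

The spec sidesteps the rod by simply not offering high power at 5 V; the big wattages only come with the high-voltage profiles.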
{ "pile_set_name": "HackerNews" }
10 Most Recommended JavaScript Scene Articles of 2015 - ericelliott https://medium.com/javascript-scene/10-most-recommended-javascript-scene-articles-of-2015-292be655d6cc ====== honua The article is written by Eric Elliott, and literally every one of the ten articles he recommends was written by himself. It reminds me of the plastic surgeon in the show Workaholics who said "I'm widely considered by myself to be the best plastic surgeon in Rancho Cucamonga"
{ "pile_set_name": "HackerNews" }
Ask HN: Anything better than Tableau for data viz, dashboards? - datavizq I am in charge of implementing a web-accessible dashboard to visualise some data we are collecting on behalf of a client. I am currently planning to use Tableau. However, my (admittedly very limited) exposure to the platform has left me heavily underwhelmed. Tableau dashboards appear expensive, slow, ugly, and completely lacking in statistical tools. Can anyone suggest a platform that can improve on any/all of these issues? I am experienced with Postgres, JS, Ruby and Objective-C, and also with Stata and R, so I'm not afraid of some coding. But I am looking for something considerably quicker than coding the thing from scratch. ====== click170 As someone who has multiple years of experience maintaining Tableau, it is laughably Ops-unfriendly. Want to change the email address that Tableau sends reports to? Requires a restart of Tableau. Want to update the SSL certs used in Tableau? That's a restart. Want to upgrade Tableau to a new version? Get ready to uninstall and reinstall the new version. Of all of the servers and services that I manage, Tableau is my least favorite. However, apparently it's incredibly good at what it does. I say apparently because I maintain it but I don't use it in day-to-day operations. ------ learnyearn If you want to leverage R, there is a web application framework called Shiny that lets you build interactive apps from R analyses: [http://shiny.rstudio.com/gallery/](http://shiny.rstudio.com/gallery/) ------ gerpsh If you're using more common visualizations (e.g. bar chart, line chart, scatter plot, etc) there's an excellent JS library called C3 ([http://c3js.org/](http://c3js.org/)) that wraps charts implemented in D3 with a super-simple API. I'm a huge fan. ~~~ colordrops I did a lot of test plots with several D3-based charting libs, and found weirdness with C3, such as performance degradation over time, and oddities like including the label for the data as the first entry in the array. NVD3 seemed to be the most mature and sensible of all the libs I tried. [http://nvd3.org/](http://nvd3.org/) ------ kposehn I can't believe I'm going to say this, but Tableau can easily be the right solution. So, if you're dealing with large datasets stored in multiple systems (like Excel + MySQL + others), Tableau can be a boon. The ability to use many different sources of data, create calculated fields that merge/modify other fields, and then operate against them? Quite nice. I am especially happy with how I can create larger visualizations that work across different disparate datasets from many sources. However, it does have quite a few issues in terms of UX, usability, etc., but so far I've liked it. Your mileage may vary :) ------ bmh100 If you want something with a lot of batteries included, extensible through JS/HTML5, and fast, look to QlikView [1]. Message me (address in profile) if you want someone to show you around the platform. [1]: [http://www.qlik.com](http://www.qlik.com) ~~~ spaceactuary In my (admittedly limited) experience, you'll probably run into some of the same issues with QlikView being "expensive, ugly, and completely lacking in statistical tools". ~~~ bmh100 I have deep experience in the QlikView (QV) platform, so I can address some of the points in your experience: > expensive QV is not free, that's for sure. You'll be spending tens of thousands of dollars for the one-time license fee, as well as 20% yearly maintenance.
On the other hand, you'll be saving thousands of hours of engineering effort by not reinventing the hundreds of wheels already in the platform. Don't succumb to "not invented here" syndrome. Check out a few dashboards I designed in just a day total [1], or the vendors demos of Twitter data [2], HealthData.gov data [3], or Salesforce.com data [4]. > ugly If you are a first time user, the default visualizations are ugly, no doubt. But for someone with design skill, the visualizations can be made quite beautiful. For wanting more tools, just add your favorite JS library and HTML to make a custom visualization. > completely lacking in statistical tools Fortunately, QV can link with R, allowing you to all the advanced capabilities you need. Need something more specific? Throw a microservice REST API on top of your desired application, and load that in through a GET request. There are Hadoop connectors also built in. One thing that people often don't realize when comparing Tableau and QV, is that QV is a platform, as opposed to Tableau being just a visualization tool. QV includes ETL, task scheduling, and an in-memory analytics database. [1]: [https://imgur.com/a/3Tzni](https://imgur.com/a/3Tzni) [2]: [http://us-d.demo.qlik.com/detail.aspx?appName=Social%20Media...](http://us-d.demo.qlik.com/detail.aspx?appName=Social%20Media%20Buzz.qvw) [3]: [http://us-d.demo.qlik.com/detail.aspx?appName=Epidemiology%2...](http://us-d.demo.qlik.com/detail.aspx?appName=Epidemiology%20-Tycho.qvw) [4]: [http://us-d.demo.qlik.com/detail.aspx?appName=Salesforce.qvw](http://us-d.demo.qlik.com/detail.aspx?appName=Salesforce.qvw) ~~~ chris_wot I have to agree with this assessment. Qlikview does ETL very well. ------ hobbe80 I like periscope.io myself, although I haven't done an in-depth comparison between the current options. ------ travisoliphant You might take a look at Bokeh ([http://bokeh.pydata.org](http://bokeh.pydata.org)) and either the PyData stack or R (Bokeh can be used from R as well: [https://github.com/bokeh/rbokeh](https://github.com/bokeh/rbokeh)). Bokeh inside a Jupyter notebook with widgets and/or emerging "Bokeh Apps" is a powerful application stack. Anaconda is a single download that can help you get started with all the tools (including R): [http://continuum.io/downloads](http://continuum.io/downloads) It still requires some coding but it is very powerful. There are a lot of examples in the Bokeh gallery and in examples directory: [https://github.com/bokeh/bokeh/tree/master/examples](https://github.com/bokeh/bokeh/tree/master/examples) . There are several devs on Bokeh mailing list eager to help and the company behind Bokeh ([http://continuum.io](http://continuum.io)) can provide more significant help if you need it. ------ jkaykin I quite like BIME ([http://www.bimeanalytics.com/](http://www.bimeanalytics.com/)) ------ Afton Disclaimer: I work at Tableau, but I don't claim to represent them, and I'm not in tech support or sales. Two things that are likely: If you think the statistical tooling is limited you may not be aware that Tableau offers R integration built in. So if you're comfortable in R you can probably build what you want. [https://www.tableau.com/new- features/r-integration](https://www.tableau.com/new-features/r-integration) If you find it slow, it may be something that tech support can help out with (changing config settings, or reworking your dashboards to be more performant). 
You should email whoever manages your account, or hit up [https://www.tableau.com/support/request](https://www.tableau.com/support/request) and include your contact info. You can also email me (email in profile) and I'll get back to you from my work account with the right contacts. ------ mclemme I've used Dashing quite a bit, for relatively simple data, demo here: [http://dashingdemo.herokuapp.com/sample](http://dashingdemo.herokuapp.com/sample) official page: [http://dashing.io/](http://dashing.io/) ------ tixocloud It really depends on what your use case is. There are still many unknowns related to implementing a web-accessible dashboard to make a decision. What does your client intend to do with the dashboard? How much interactivity do they want in place? Is the data real-time? What sort of advanced statistical analysis does your client want to run? Having answers to those might help guide you toward or away from Tableau. As an everyday user of Tableau with a solid technical background, there are some things that Tableau does well and there are some that it doesn't. I love Tableau because it allows me to join many different data sources together quickly so I can analyze the data. I can easily drag-drop and visualize my data in many different dimensions. That said, sometimes the analysis is basic and it runs slower when there's a huge dataset. ------ buu700 For Cyph, I initially looked into Tableau and various analytics platforms like Mixpanel, then ultimately realised that Google Analytics (which we were already using) had an events API that worked fine for our needs. See: [https://developers.google.com/analytics/devguides/collection...](https://developers.google.com/analytics/devguides/collection/analyticsjs/events) And this is what our dashboard looks like: [http://i.imgur.com/F8g8Hxq.png](http://i.imgur.com/F8g8Hxq.png) It's fairly basic when it comes to visualisations, but thought I'd throw it out there in case it's helpful. ------ jboggan I don't know about built in statistical tools beyond the basic aggregates available in SQL, but we (Fullscreen) have been using Chartio and are really enjoying it. I'd say it does great for 95% of the dashboards and visualizations we need (custom d3.js for the rest) and it plays well with our data sources. Particularly coming from Redshift data I found Chartio a lot snappier than equivalent charts in Tableau, especially for large data sets. You can either use their UI to make charts, or write pure SQL, or my preferred method of making most of the functionality in the UI and tweaking the SQL for the last few details if you need something bespoke. ~~~ JPKab Chartio looks awesome, but I'm unable to get any pricing information from their site. Care to elaborate on the cost for a small shop to leverage this for a few dozen users? ------ ryanatallah Argo ([https://argo.io](https://argo.io)) is a web-based tool that enables fast, natural-language based question asking and visualization of data. It's designed to be used by a non-technical user, so you can share dashboards and visualizations with people, and they can ask their own questions. Under the hood, Argo uses advanced search processors to turn natural language queries into SQL, optimized for visualization. You can request a demo on their website: [https://argo.io](https://argo.io) Disclaimer: I'm a co-founder and CTO of Argo ------ ecyrb Kibana / ElasticSearch? It's limited, but pretty and interactive, and gets you a bunch with very limited up-front work. 
I'm sure you can find some better demos, but here's one: [http://parlement.letemps.ch/](http://parlement.letemps.ch/) HUE is a similar but different alternative. The "search" tab has some great demos, but appears to be down atm: [http://demo.gethue.com/](http://demo.gethue.com/) ------ akg_67 Though Tableau is incredibly expensive, I haven't yet found anything better that Tableau for web dashboard and visualization. Tableau is strictly a visualization tool and not statistical analysis tool. Tableau expects you to perform all the calculation on the back-end and send it the final data for visualization. I have also used Shiny, D3, Flot, Highchart, Chartio, Spotfire, Bime, FusionCharts,Qlickview and nothing comes close to Tableau. ------ gt565k I'd suggest you use a JS library like HighCharts or D3JS (or both). All you need to do is format your JSON on the back-end in the correct format and throw it into the chart's configuration. HighCharts has an amazing API, documentation, and examples. [http://www.highcharts.com/](http://www.highcharts.com/) [http://d3js.org/](http://d3js.org/) ~~~ panorama What's your opinion on Highcharts usage longterm? I find it's great to get something up and running, but I've found myself hitting limitations, especially when it comes to custom design. But it's possible I may just be using it ineffectively. I always assumed that one day I should port my company's charts over to D3 if we wanted to be _serious_ about our data visualization (which makes up a big part of our site). In other words, is it like Bootstrap in that it's a useful starting tool, but doesn't really scale well if you want to have full design control in the longrun? Do you happen to know any notable sites using Highcharts? Thanks in advance. ~~~ gt565k You can style almost anything on the chart with CSS and HTML I believe. Custom tooltip templates, etc The API docs are just fabulous, with examples for everything :) [http://api.highcharts.com/highcharts](http://api.highcharts.com/highcharts) ------ MrApathy QlikView is similar to Tableau, though more powerful and with a steeper learning curve. But if Tableau is too expensive, likely that QlikView is, too. Another option is Looker, a relatively new product that relies more heavily on existing transaction/DW infrastructure. Dashboards are not ugly. You can also look at d3, though by comparison development time will be much slower than the other two I've named. ~~~ chris_wot If you've got SQL experience it's actually very easy to learn Qlikview scripting language. I recently finished a gig where the CEO retrenched all the IT staff and didn't bother to have any of the Qlikview dashboard processes updated. I had to pick up Qlikview in a few days, and had it all worked out pretty fully within 2 weeks. Learning "set-analysis" took me few days to get up to speed. It's honestly not that hard to understand. The difficulty as always is putting together a sane data model. ------ chahex You might want to try out Tibco Spotfire; it is similar to Tableau and Qlik. They do provide some statistical tools (I personally don't know much about it). And they do have a WebPlayer to view the dashboards online. See demos here: [http://spotfire.tibco.com/demos](http://spotfire.tibco.com/demos) ~~~ doctaj I use Spotfire regularly, and it's a pretty good tool. I highly suggest getting professional training, though, because our team didn't and it just took FOREVER for everyone to get in the swing of things. 
It's really good for allowing your end users to explore data - it just requires a lot of development time to make really usable (ie: to make it more than "just a dashboard"). It has been built with R in mind from day 1. They have their own "Tibco Enterprise Runtime for R (TERR)" which I don't get to play around with much, but it's an obvious place to start for advanced predictive stuff, machine learning, and general data manipulation. Using the "RinR" package, you can pretty much do anything that R can. My only absolute HATE with it is the LACK OF FREE SUPPORT/Community. Even though they have a nice "tibbr" (Facebook for businesses, basically) especially for Spotfire support, it's all behind a login wall, so it's not indexed by Google at all and it's not very searchable in my experience. In my opinion, this is a fatal mistake with their entire solution. Forums are amazing. Forum posts stay around forever. Very rarely do you want "the newest" forum post. You usually want a SPECIFIC forum post - making tibbr an awful user experience for support. Additionally, their OLD forum/community IS indexed by Google, so you'll end up at dead-ends (404s with the exact information you want, conveniently highlighted in Google just before the answer is presented). Only a few people have blogged about it in the past, and even those are usually old versions. Also, basically no one talks about it on StackExchange. So, I just find it really hard to find answers to specific questions - like you might naturally do when programming to get a problem fixed quickly. That said, it's super flexible and might be worth a look. I have very limited experience with Tableau and Power BI, but those lacked some of the convenience features I was used to when I used them. Personally, I wish I were forced to just program all my data visualizations in R or Python, haha. ------ jastr VQL is a gui for really quick analysis and plotting. It’s mostly for non- technical people, but we’ve had data science teams that use it to explore their data before breaking out IPython or Tableau. I’m the founder of VQL. Nothing on our site yet, but happy to send a demo/instance. My email is jstrauss (then an @ sign) getvql.com ------ civilian One candidate is [http://redash.io/](http://redash.io/) My team has a backlog task to set it up. We use bigquery to hold onto our data, and redash can work with that.. It's open source, so it's free (except for one server) and looks good. ~~~ jhorman We use redash for some basic reporting. It is nice, and development is active. ------ asdfprou My favourite right now is Looker. Dashboards are beautiful and ad hoc query creation is dead simple and on point. You will save yourself many "oh could you re-run this data but split it out by device type?" moments because your clients should be able to do it themselves extremely easily. ------ thorin Jasperserver with jasper reports has dashboards and is much improved now with visualize js. The dashboarding is only in the free version i think and not sure how the cost compares to tableau. It's much cheaper than business objects or oracle bi publisher etc though. ------ q2 [https://en.wikipedia.org/wiki/FusionCharts](https://en.wikipedia.org/wiki/FusionCharts) EDIT: This seems to be heavily used in industry as well as in Federal IT dashboard...etc. So it appears to be a good choice. ------ dedalus Interana ([http://www.interana.com](http://www.interana.com)) which is a YC S12 company sounds like your best bet. Whats your scale? how many events per day/month etc? 
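To make gerpsh's C3 recommendation from earlier in the thread concrete: a whole chart is a single configuration object passed to c3.generate, and the quirk colordrops flags (the series label travelling as the first entry of each data array) is exactly the `columns` format. A minimal sketch; the element id, series names and numbers are invented, and it assumes the c3 package (which pulls in its D3 dependency) is installed:

    import * as c3 from 'c3';

    // One declarative config object per chart; no manual D3 plumbing.
    const chart = c3.generate({
      bindto: '#kpi-chart',                // hypothetical container element
      data: {
        // The first entry of each array is the series label, not a data
        // point -- the oddity colordrops mentions above.
        columns: [
          ['signups', 120, 180, 240, 310],
          ['churn', 12, 9, 15, 11],
        ],
        types: { churn: 'bar' },           // per-series chart type
      },
      axis: { y: { label: 'count' } },
    });

    // Refreshing the dashboard later reuses the same shape:
    chart.load({ columns: [['signups', 130, 190, 260, 335]] });

The same data could be fed to NVD3 or raw D3, but this is roughly all the code C3 asks for.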
------ dreaminvm D3.js or even Google Charts will get the job done for most visualizations. ------ bra-ket Pentaho, Saiku, or plain D3.js on the front end with your own middle layer ------ apurvadave A new product you could take a look at is [http://www.jut.io](http://www.jut.io). It's in beta / free for anyone. It's a streaming analytics development environment, and uses d3 for visualization. It ingests both events and metrics. It's based on a high-level dataflow processing language that allows you to process your data flexibly (moving window analytics, anomaly detection, general statistical processing). you can build interactive apps & dashboards and control which facets users can manipulate. aaand here's the disclosure - I work at Jut and run customer success. ------ whatok Demoed Tableau at work a few years back and it was lacking in real-time visualizations. Is that still the case? If so, anyone have any recommendations? ------ slake I'm a user. I haven't found something as good. ------ drewrv Clicdata is user friendly and affordable. [http://www.clicdata.com/](http://www.clicdata.com/) ------ alison985 Looker. Looker is the best. It's an upfront monetary investment, and the language you most need to know is SQL, but I love it dearly. ------ mstkrft [http://www.bimeanalytics.com/](http://www.bimeanalytics.com/) ------ marianoguerra take a look at [https://event-fabric.com/](https://event-fabric.com/) we didn't officially launched the saas version but I can set up an account for you to try it for free of course.
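buu700's Google Analytics route above is worth spelling out, because the "dashboard" ends up being nothing more than event hits plus GA's own reporting UI. With the standard analytics.js snippet already on the page, recording a custom metric is a single call to the documented event API (the category/action/label/value used here are invented):

    // analytics.js is loaded by the usual GA snippet and exposes a global `ga`.
    declare function ga(...args: unknown[]): void;

    // Fire a custom event; it shows up under Behavior -> Events in the GA UI.
    function trackSignup(plan: string, seats: number): void {
      ga('send', 'event', 'billing', 'signup', plan, seats);
    }

    // e.g. from a checkout handler somewhere in the app (hypothetical):
    trackSignup('team-annual', 5);

It obviously won't replace Tableau for statistical work, but for simple counters the setup cost is hard to beat.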
{ "pile_set_name": "HackerNews" }
Where can I hire designers? - sidwyn I need a new icon and UI for my iPhone app - Definition. Been looking through oDesk but somehow it doesn't really appeal to me. Browsing through Forrst and Dribbble is driving me insane too. Anyone have any good contacts experienced in iOS design? ====== lachyg <http://www.dribbble.com/> is the best source of designers. Browse through it, searching keywords, etc, and then click on the profiles of 10-15 that interest you. Contact them. Easy! If not, shoot me an email and I'll hook you up with a designer (emails in profile). I've connected about 20-30 HN'ers with designers. ------ limedaring Not sure if you'll find specifically iOS design, but <http://sortfolio.com> is another service for finding designers. ------ solost My contact information is in my profile, contact me if you want to discuss your specific needs. ------ rfugger Try <http://99designs.com/> ? ~~~ djb_hackernews whoa! $1000 for a 2 page mockup? ------ paulsingh I've been using brandstack.com for my side projects. ------ taitems I'm interested, maybe we should chat? taitbrown@gmail.com ------ niico Drop me a line. My email is in my profile
{ "pile_set_name": "HackerNews" }
Ask HN: Is there a name for the 'web shortcut' services? (tinyurl, tinypaste, cli.gs, etc) - AlexeyMK Other than 'web shortcuts', is there an umbrella category name for tinyurl and the resulting offshoot services? This feels like an interesting niche blog to start. [Full disclosure: I created look.fo and str8.to] ====== pwoods I think they are referred to by their services. Like tinyurl is a tinyurl. But if you wanted to coin plink I'll support it! Only 1,499,999,999 internet users to go ~~~ AlexeyMK Yes; but what would you call the industry as a whole, do you think? ------ AlexeyMK (just found this) lifehacker is calling them "url shrinkers" - <http://str8.to/best-url-shrinkers>. ------ ram1024 call em plinks <-- cause it's cute short for hop-links maybe? ~~~ ph0rque how about urlets (not sure how one would pronounce that) ~~~ ram1024 ooh i like that one too!
{ "pile_set_name": "HackerNews" }
Ubuntu 14.04 LTS (Trusty Tahr) Final Beta released - jgillich https://lists.ubuntu.com/archives/ubuntu-announce/2014-March/000181.html ====== ghotli Most of the articles and chatter I can find talks about the changes in 14.04 from a desktop perspective. I only run ubuntu on servers and vms. Does anyone know where I can find a good changelog as to what's changed between 12.04 LTS server and 14.04 LTS server? ~~~ hnriot most of the reasons for running ubuntu are desktop oriented. CentOS is probably a better bet for servers. ~~~ cies I will not voluntarily use "yum" on a server in a million years. Debian has always been my favorite, and currently we use Ubuntu on servers as the OS-packages-as-shipped are more up-to-date; which turns out to be quite important in web-dev-land. Frankly I don't know what CentOS/Redhat does better than Ubuntu nowadays, apart from selling enterprise stuff like JBoss :) ~~~ jgillich What's wrong with yum? I never had any issues with it on Fedora and CentOS and actually prefer it over apt* (mainly due to its speed). ------ scanr I'm using 12.04 LTS quite extensively. For folk in a similar situation, how long are you thinking of waiting before switching to 14.04 LTS? ~~~ negativity Ubuntu's user interface upgrades as of 11.10 (the "unity" interface) sucked so bad that I refuse to use Ubuntu beyond version 10.04 on desktops/laptops. 11.10 was when I switched over to Mint and never looked back, and it seems that doing so was a wise move, given the Amazon adware/spamware/spyware that Canonical saw fit to include in more recent versions. [http://www.markshuttleworth.com/archives/1182](http://www.markshuttleworth.com/archives/1182) Even though Mint includes proprietary binaries (like Flash and Audio/Video codecs), which may or may not contain opaque questionable material, at least the third party non-open-source software is something that (arguably) improves the distribution and actually serves a purpose for me, as the end user. Mint has changed over time too, though, and now I'm thinking about moving to a personally customized Debian image, as a hobbyist project. Hopefully it won't prove to be too demanding to pull off. ~~~ JeremyMorgan I too was pushed to Mint after 11.10 but I'm sick of Mint now too. Not only do upgrades break it frequently, but I find small 'glitches' that end up taking too much time to track down. Stuff like Wifi dropping, icons in my taskbar disappearing, random browser crashes etc. What I did was decide to sit down and do an Arch install. Yes it takes time, and yes you have to know what you're doing. But I invested the time up front, and now it runs very reliably, and faster on the same machine. I say if you want to GSD the best thing to do is set up something like Debian, Arch or Gentoo and invest the time setting it up so you can use it without problems later. I don't know about you but I have better things to do than screw with an OS all the time, I have real work to do. These "harder" distros are great for that. ~~~ vivin I am using Mint 16 now; first time using it. Previously I was using Ubuntu. I had to switch to Mint because we got W540's at work which have a lot of new hardware that is not currently supported (well). I wasn't able to get Ubuntu working on it properly, but I have been impressed with Mint so far. The only issue right now is that it doesn't see my nvidia card and so I have to use the integrated chip instead of the discrete one. I'm hoping that Mint 17 fixes the issue. 
How hard is it to get a custom system up and running from Arch? I haven't done anything like that in a few years although I've had a lot of experience with setting up custom FreeBSD systems. Is it more or less like that? What I'm reaching for is something that "just works" and that I can work on reliably instead of having to fix obscure problems all the time. I figure once I set something up that works, I can simply create an image of it to use later. ~~~ JeremyMorgan Arch isn't that hard to set up, it's still easier than Gentoo. You just have to set it up from an explicit point of view. You must know every detail of what you want, and each item up, rather than a "10 clicks and I have an OS now" type of setup. It helps to have a good knowledge of Linux to do it, because you know where things should go and where to look if there is a problem, but it doesnt' require you to become a kernel hacker just to get it to a prompt. ------ JanezStupar I would use this magnificent milestone to raise my hand and ask... Nvidia, where are my native Linux Optimus drivers? ~~~ vanderZwan I dunno, but as someone on a laptop with an intel HD 4000... who can I thank for this _ridiculous_ increase in performance? Default Ubuntu has gone from laggy to rivalling Lubuntu in responsiveness. ~~~ JanezStupar For me Unity stopped working somewhere between 13.04 and 13.10... (Asus UX32VD). I am going to upgrade to 14.04 in a couple of months and see what happens. ~~~ mdeslaur FYI, I'm currently running 14.04 on an UX32VD. I was previously running 13.10, and that worked great too. ------ listic What improvements in touch device and HiDPI support have made it to 14.04? I'm going to use Ubuntu on Microsoft Surface Pro 2, because even though full convergence for Ubuntu is delayed, I think of all Linux distros it is in the best position to run on such devices. Some enthusiasts have made 13.10 work on Surface Pro 2, surely it can only get better from there? [http://ubuntuforums.org/showthread.php?t=2183946](http://ubuntuforums.org/showthread.php?t=2183946) ~~~ matb33 I'm also eagerly awaiting HiDPI support (I have one of those QHD screens on a 15" display, 3200x1800!). There is apparently some support for it in the GNOME version (I've read it was not perfect though). Don't recall where I read this but it was rumored HiDPI may come to 14.10. Probably no more than a rumor... But at least a version number to look at helps me cope :) ~~~ owaislone 14.04 has very good HiDPI support per monitor unlike Gnome. I've been using 14.04 for months and am loving it on a retina display for the last few weeks. The shell scales perfectly, GTK3 apps scale as well. Firefox has the `layout.css.devPixelsPerPx` setting in about:config that you can change to 2 or 4 to make it scale properly. Chrome doesn't yet support HiDPI screen but setting the default zoom level to 200% does the trick. ~~~ matb33 Upgrading now just to try this! (Now to get past "symbol 'grub_term_highlight_color' not found" on boot... I should have waited for the weekend to update to a beta release!) ------ Yuioup Is there a short summary of changes since 13.04? The blueprints list is quite extensive. 
~~~ jgillich [http://www.omgubuntu.co.uk/2014/03/ubuntu-14-04-beta- release...](http://www.omgubuntu.co.uk/2014/03/ubuntu-14-04-beta-released) ------ ausjke I already moved all my servers to Debian, have not decided when I should do the same on the Desktop side yet, I don't really care about games/MIR/smart- UI-decision-made-for-me/one-GUI-does-all-screens etc, all I need is vim and a browser, with newer tested packages installed underneath for development. ~~~ thinkmassive I was using Arch as my laptop OS for a couple years until just recently switching to Ubuntu Server 14.04. Since I use i3 it still seems like the same environment. The reason I switched is because I'm developing solely on Ubuntu Server 12.04, and it's less work to get things working once instead of twice. Is there any advantage to using Debian over Ubuntu for servers and/or development? ~~~ ausjke I feel Ubuntu is moving towards more to the mobile arena which _could_ impact the desktop/server quality especially for the long run. It could also be fighting a battle that is too big with its limited resource(i.e. stretched too thin). I switched to Debian as a precaution. ------ butchlugrod Just installed it into a VM. No issues, very slick. The UI scaling stuff is a neat addition. Haven't tried the Server edition yet, but I imagine I'll start deploying that in six months or so. Precise Pangolin has been my bread and butter for servers. But why does it still have a "Floppy Disk" icon in the launcher? This is 2014 right? I feel like that is even more absurd than using a floppy disk icon for save buttons in documents. My desktops and laptops don't even have optical drives anymore, much less floppies. ~~~ Shorel > But why does it still have a "Floppy Disk" icon in the launcher? Because the BIOS of the VM reports a Floppy Disk even if you add no Floppy Disk to the list of hardware installed. That's a bug for the VM BIOS, and a feature for Ubuntu. You can disable it if you want: [http://imgur.com/anGmzpj](http://imgur.com/anGmzpj) ------ sesm Very excited to see Ubuntu Studio getting LTS support. ------ keithpeter Desktop oriented end user here: Does anyone else use the mnemonic shortcuts, e.g. ALT-F-A for Save As... and in LibreOffice Alt-I-O-F to pop a mathematical formula into a document? Broken completely in 13.04 and 13.10 and somewhat broken in 14.04 (Alt-F opens File menu but any attempt at a second note in the chord opens a different top level menu). Otherwise sensible changes, menus on window bars makes sense on larger monitors and shrinking sidebar very nice on a 1280 by 800 screen. Very snappy from live image on a Core Duo 2 laptop with Intel graphics (Thinkpad X200s). Good for demonstration of Linux! ~~~ kleiba Not exactly, but ALT+F is M-f in Emacs which is by default bound to forward- word, i.e., lets you jump over a word. When you run Emacs inside a terminal window, I find it quite annoying that the Menu bar gets activated when you use that shortcut. ------ username42 I am very happy that Ubuntu GNOME is present. This means no unity for the next 3 years ;-) ~~~ adwf At the risk of starting a flamewar, is Gnome 3/Shell really any better? I've never felt quite so unproductive as when I use Gnome nowadays. ~~~ daivd I have never quite understood all these desktop emotions. I run Kubuntu, but almost never interact with any desktop features. What is it in your workflow that requires you to interact with Gnome/Unity? (just curious) ~~~ cturner alt+tab. Used to work perfectly. Now broken. 
I've returned to debian stable, which comes with a gnome-2 legacy option that works out of the box just as well as it did a decade ago. ~~~ vertex-four Go to [0], install, configure to either "all windows" or "all windows in current workspace", and alt+tab probably works like you want again. It's not terribly difficult. [0] [https://extensions.gnome.org/extension/15/alternatetab/](https://extensions.gnome.org/extension/15/alternatetab/) ~~~ rglullis Or, you know... just keep using an old, stable, dependable version. But you can't do that with GNOME since they are such rabid fans of the CADT model. ------ ciupicri Is it to me or this release hasn't switched to systemd from upstart? ~~~ kijin AFAIK, the systemd decision was finalized too late to make it into 14.04. Last-minute changes are not a good idea for an LTS release that emphasizes stability. This also means that Canonical gets to keep supporting their darling (Upstart) for another five years ;) ~~~ Already__Taken It does mean at least that a systemd LTS ubuntu will be completely bulletproof with so many years of work from all other distros in it.... ------ dexcs fyi: "final release expected on April 17th, 2014." ~~~ Kudos Also, upgrading to the final release will happen automatically with your usual `apt-get upgrade`. Edit: to clarify, I mean upgrading from this release to the final. ~~~ richardwhiuk Are you sure that's true? Normally it requires do release upgrade? Normal upgrades also require apt-get dist-upgrade to upgrade the kernel as there's a new package. ~~~ gnur Upgrading from beta to release is apt-get upgrade. Update from 13.10 (or 12.04) to 14.04 is do-release-upgrade. ------ ycombasks Any idea if it will be possible to keep the menu in the title bar from being hidden? It looks like you have to mouse-over to show it, which could be a pain if I'm trying to access it often. I'd like to see where to take my mouse without guessing. ~~~ smithzvk I just watched a video overview of the changes. It seems like you must still mouse over to reveal the menus, even with the new option to put menus in the title bar of the window instead of at the top. ~~~ ycombasks That's a bit annoying. Hopefully someone will release a patch or something that keeps it from hiding. ~~~ smithzvk I think this is a reminder that different interfaces work for different people. I think that the Unity interface, with its hidden menu bars, application menus, scroll bars, etc. is almost perfect and a clear step in the right direction (with the huge exception that it crashes frequently, hopefully this will be fixed for me in the 14.04 release). The idea that I would ever access a drop down menu by mouse is in some way ridiculous and signifies a failure of the application's user interface. Again, that is what I think and it is obvious that other people think drastically different things. ~~~ ycombasks How else would you access it? I often access the toolbar in Sublime Text, for example, so I'd like to keep it visible. ~~~ smithzvk Typically via holding alt and pressing short cut keys. That is the old way to do it. Unity introduced a new way to do it where you use the alt key and a text box comes up which allows you to search the options by keyword. It could be done better in Unity, but I still like it better than searching though the hierarchy that mostly never made sense to me. The real answer is that things like Vim and Emacs long ago came up with interfaces that don't require toolbars/menubars. 
They were added a long time ago, but I am amongst the people that turn them off and don't use them even though they are available. For many of us, mousing is less efficient than well tuned muscle memory. ~~~ ycombasks Perhaps so. I tried Ubuntu with Unity a little while ago and didn't like it much--I'm used to minimizing to the taskbar and seeing the names of the programs. I settled on Linux Mint as it offers a clean and "classical" way of doing things. I'll consider trying out Ubuntu again when 14.04 LTS comes out this month. Maybe I'll get used to it, who knows. ~~~ smithzvk To me Unity just made sense and if not for it crashing entirely too often and incurring a performance hit due to all the eye candy, I wouldn't look further. To me the nice thing about GNU/Linux on the desktop is that each person can have the interface they want. This has secondary benefits where if you have your interface which is significantly different from my preferred interface, it forces application developers that really care about supporting their users to develop high quality abstraction layers that support both. The same goes for software packaging, driver support, general compatibility of proprietary software, etc. So, by all means, keep using Mint, it is in my best interest if you do (and yours, and everybody elses). ------ davidgerard Toshiba Portege R830-13C, Intel HD Graphics 3000. Minecraft, running in Xubuntu 14.04 on openjdk 7.51, is having graphics problems that it didn't in 13.10. (No, I haven't filed a bug yet, yes I should ...) Anyone else running Minecraft on Intel HD Graphics 3000? In 14.04 or otherwise. ------ hit8run Is Python3 now the new default Python? ~~~ rlpb Define "default". Both Python 2 and Python 3 are installed by default (it's a goal to not have Python 2 installed by default, but Python 3 is already there). If you want /usr/bin/python to be replaced, this is unlikely to ever happen[0], but what difference does that make? [0] [http://legacy.python.org/dev/peps/pep-0394/](http://legacy.python.org/dev/peps/pep-0394/) ------ herokusaki Based on your experience is it stable enough to upgrade to already? Previously some final betas were. ~~~ pizza234 I've found the Ubuntu [Desktop] stability to be in constant decline over the time, even experiencing bugs in the installers in the latest versions. A few notes: \- the "stability" I refer to is always minor errors \- I remove lots of packages every time I install it, although I've experienced system error notifications even when I didn't uninstall anything All in all, I'd say that there is a lack of polishing, at the low-level, more than lack of stability. There is no excuse for having the installation fail, though, and it happened a number of times. To reply the question directly, I've used betas a few times, and they worked as much as the final version for me. I wouldn't do it now though - in the past, for my usage, some types of changes were very significant; today, I get very little in upgrading. ~~~ chrismonsanto > I've found the Ubuntu [Desktop] stability to be in constant decline over the > time I wish I never upgraded to 13.10. Sometimes drag-maximizing my window can crash my entire system. And compiz leaks memory like a sieve, sometimes I will wake up to find compiz using ~5.5gb of memory and the system will be unusable. Gotta restart! When I first upgraded, I thought "oh, it's always like this at the start, they'll fix it." And here we are at the next version and it still hasn't been fixed. 
Probably jumping ship (to another Linux distro) once my next work deadline passes. ~~~ kator Or maybe dig in and contribute a fix? ~~~ chrismonsanto Is this a serious suggestion? I can't even reliably reproduce it. It just happens randomly. The problem could be unity. It could be in compiz. It could be in the nvidia drivers, which I don't even have the source to. Who knows what in that mess causes my system to lock up. Regardless, I do not have enough loyalty to Ubuntu to do this kind of work. There are a number of open source projects that I am involved with, and if I spend time on this (likely to be fruitless) endeavor, I end up with less time to spend on projects I care about. My post is purely to vent, and to serve as a warning for those looking to try Ubuntu Desktop. My personal opinion is to try something else. I am. ~~~ popey For anyone who _does_ have the time and inclination to debug this there are some comprehensive docs on the wiki [https://wiki.ubuntu.com/X/Debugging](https://wiki.ubuntu.com/X/Debugging) \- I've used these steps to get backtraces before so devs can get stuff fixed. ------ csense I hope they fix the part where Unity is crashy, slow, and has a really sucky user interface. I mean, seriously, Unity is a piece of garbage. You have to look up some magical key combination to do something as simple as launching multiple instances of an application. It's as bad as the Mac [1] -- designed to cater to users who aren't smart enough to understand the concept of "multiple instances of an application." In order to launch something, you have to search for it -- WTF? I don't want to search for my application, I know what application I want to run! Let me run it dammit! That whole "no menu you can browse and discover what's installed on your system" is a huge barrier, especially to new users to the Linux ecosystem, because how are you supposed to know the default email client is called Evolution, or your spreadsheet is "LibreOffice Calc"? If there's a comprehensive, categorized menu of all installed applications, you can look through it to, you know, _browse_ what's on your system. Unity is supposed to be good for noobs. I probably have way more understanding of computer fundamentals than anyone who fits in the "noob" category so I probably have a better chance of figuring out what the UI's _trying_ to do, I probably have way more tolerance for crappy, clunky UI's than most noobs [2], and every time I've tried Unity I've usually uninstalled it within a day. If Unity sucks too much for _me_ to handle, the only noobs I can see sticking with it are those who've never used a computer before and have no idea that it's possible to do better. [1] Sorry if this isn't up to date. I don't use Macs much; the last time I used a Mac was sometime around 2004. [2] I played a lot of DOS games in the 1990's. Enough said. ------ cies Any other netrunner-os lovers here? Its an Ubuntu derivate with KDE by default, but with much more sane defaults and loads of goodness preinstalled (ad-blockers, YT-downloaders, Steam, codecs, etc.) I love it! (but it's released several months after Ubuntu is released) ------ Shorel I tested it in a VM. It is like a low latency version of what the previous version is. Extremely responsive. Now I need about 6 ppas to add support for Trusty before I can upgrade my main system to it. Including TrueCrypt. ------ anonbanker been using 14.04 for about 2 months now. 
if you're reliant on open drivers (Radeon, Nouveau), and use HDMI output, understand that Kernels 3.13.x and 3.14.x are going to be miserable for you, and you'll be limited to sub-720p resolutions due to regressions in the drivers. 3D has much improved, but the lack of 1080p makes this a dealbreaker for me right now. ------ nitishdhar good to see Ubuntu GNOME has been kept ------ onmydesk who? ------ horaceho I love LTS! ------ retube Things I would like: 1) drop the ridiculous Unity desktop. maybe I am old and out of touch, but that was _horrific_. 2) Dual boot just works with UEFI/Windows 8.1 3) Supports audio and graphics hardware in common brands like Acer I have 9 installed on a samsung since 2009 and it's been an absolute pleasure, just worked. Trying to get 12 or 13 onto a modern Acer however... was a nightmare. V disappointing. I still don't have a functioning soundcard and the battery indicator only starts 1 in 10 boots. ~~~ jfoster Why would you use Ubuntu if you think Unity is ridiculous? That's like getting a Mac and saying that Apple should drop OS X. It's very clear that Unity isn't for everyone, but I don't understand why people who know it's not for them seem to still want to use Ubuntu. ~~~ unicornporn True dat. If unity does not appeal, there are *buntu options. IMO, Xubuntu is the way forward. ~~~ zanny Xubuntu doesn't have the flashy mass market appeal. For power users it's great due to its low footprint, but if I was trying to sell Linux I'd show them Cinnamon or Zorin or KDE, and _maybe_ Gnome, because they all have eye candy and the first three work a lot like Windows (albeit KDE is a lot more powerful than the rest in that regard and would probably scare newbies off).
{ "pile_set_name": "HackerNews" }
Ask HN: How does Twitter implement following and followers between users? - benkovy
I have always been curious as to how twitter implemented this in their user model. How do they keep track of who the user is following and who is following the user?

====== tedmiston
In 2010, Twitter was using a graph database, FlockDB, instead of a relational database to implement their social graph where relationships like following and followers are represented as edges between nodes that represent people. It's since been deprecated, and I'm not sure what type of database they're using today.

Anyway, they have a nice write-up blog post [1] and FlockDB is open source [2].

[1]: [https://blog.twitter.com/engineering/en_us/a/2010/introducing-flockdb.html](https://blog.twitter.com/engineering/en_us/a/2010/introducing-flockdb.html)

[2]: [https://github.com/twitter-archive/flockdb](https://github.com/twitter-archive/flockdb)

------ bsvalley

    --- USER TABLE ---
    | USER_ID                          |
    | USERNAME                         |
    | NB_FOLLOWERS                     |
    | TWEETS -> List of TWEET_ID's     |
    | FOLLOWING -> List of USER_ID's   |
    | FOLLOWED_BY -> List of USER_ID's |
    etc.

NB_FOLLOWERS, FOLLOWING and FOLLOWED_BY are updated when a USER follows/unfollows another USER.

------ miguelrochefort
User1 follows User2

User1 follows User3

User3 follows User1

User3 follows User5

...
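To make the edge-table idea above concrete, here is a minimal sketch of a follower/following store as a single "follows" edge table, done in SQLite from Python. It only illustrates the general relational approach described in the comments; the table and column names are invented for the example, and it is not Twitter's actual schema or how FlockDB works.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE follows (
            follower_id INTEGER NOT NULL,  -- the user who clicks "follow"
            followee_id INTEGER NOT NULL,  -- the user being followed
            PRIMARY KEY (follower_id, followee_id)
        )
    """)

    def follow(follower_id, followee_id):
        # INSERT OR IGNORE makes a repeated follow a no-op
        conn.execute("INSERT OR IGNORE INTO follows VALUES (?, ?)",
                     (follower_id, followee_id))

    def followers_of(user_id):
        # everyone whose edge points at user_id
        rows = conn.execute(
            "SELECT follower_id FROM follows WHERE followee_id = ?", (user_id,))
        return [r[0] for r in rows]

    def following_of(user_id):
        # everyone user_id points at
        rows = conn.execute(
            "SELECT followee_id FROM follows WHERE follower_id = ?", (user_id,))
        return [r[0] for r in rows]

    follow(1, 2)  # User1 follows User2
    follow(1, 3)  # User1 follows User3
    follow(3, 1)  # User3 follows User1
    print(followers_of(1))  # [3]
    print(following_of(1))  # [2, 3]

A count like NB_FOLLOWERS in the table above would just be a cached aggregate over this edge table; at Twitter's scale the same edges have to be sharded and denormalized, which is the problem FlockDB was built to solve.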
{ "pile_set_name": "HackerNews" }
Oculus Rift co-founder killed by a car speeding from police. - bamfunkified
http://3dgeeks.com/news_story/oculus_rift_co_founder_killed_by_a_speeding_car_ina_police_chase.html

====== ColinWright
<https://news.ycombinator.com/item?id=5802474>
{ "pile_set_name": "HackerNews" }
Ask HN: Facebook hack - jeffmould
So a few hours ago I looked at my FB on my laptop. At the top of my profile was a message that my account was memorialized. Doing some digging, it seems anyone can submit a request to FB to have one of their friend's profiles memorialized. It appears that when that happens, whoever is listed as the legacy contact can take control of some things on the person's page. There is no proof required, you simply fill out a quick form with the profile and date of death.

https://www.facebook.com/help/contact/234739086860192

Anyone ever heard of this, and is there something going on with FB? It seems like there is significant potential for abuse here, since you don't have to prove the person is dead.

====== exolymph
Appears to be a widespread bug: [http://www.businessinsider.com/facebook-death-bug-tells-peop...](http://www.businessinsider.com/facebook-death-bug-tells-people-they-died-2016-11)

On the one hand, this is hilarious. On the other hand, I hope no one gets confused and thinks that someone is actually deceased.

~~~ dwiechert
Also reported here - [http://www.theverge.com/2016/11/11/13602824/facebook-just-ki...](http://www.theverge.com/2016/11/11/13602824/facebook-just-killed-everyone)

~~~ exolymph
Casey Newton cracks me up.

------ ganeshkrishnan
Press F to pay respect
{ "pile_set_name": "HackerNews" }
Myself (interactive programming demo) - IA21 https://codepen.io/jakealbaugh/full/PwLXXP/ ====== King-Aaron That was extremely cool, and would make for a useful little demo for front end development students (and maybe also for people working in the field, grinding their teeth looking for something to engage themselves with!)
{ "pile_set_name": "HackerNews" }
The Recursive Women in Tech Issue - DoreenMichele
http://gistofthegemini.blogspot.com/2017/12/the-recursive-women-in-tech-issue.html

====== lkrubner
I think partly people want an explanation for why women's participation in tech has declined so much since the 1980s. Consider:

----------------------

_Suppose there was overwhelming evidence that 95% of women were terrible at technology and 5% of women were awesome at technology. There are roughly 7 billion people on the planet, roughly 3.5 billion women, roughly 1.5 billion women who work outside the house for a wage. In this scenario, where only 5% of women love technology, there are 75 million working women who are awesome at technology. According to the Bureau of Labor Statistics, the USA had 1,256,200 software developers in 2016. The BLS also tracks some other minor categories, such as Web Developer, which have about 150,000 jobs. Lump all the sub-categories together, and let's say there are 2 million such jobs in the USA. Let's be wildly generous and double the number for the EU, and triple it for Asia. That gives 12 million software developer jobs in all of the advanced and developing economies. So even with exaggerated assumptions about women's inherent weakness in technology, we still end up with a scenario where every single programming job in the world can be filled by a woman who will be awesome at the job. There is no need for men, at all, in the tech industry._

[http://www.smashcompany.com/business/business-productivity-h...](http://www.smashcompany.com/business/business-productivity-has-been-undermined-by-the-hubris-and-power-grabbing-of-elite-computer-programmers)
{ "pile_set_name": "HackerNews" }
“This is my Merrill Lynch portfolio – 1.83% expenses, underperformed S&P 500” - keywonc
http://hellomoney.co/portfolio/347

====== philrea
The problem with the argument "how can an 'average' investor expect to compete against the big dedicated investment houses" is that in most respects the big guys are average at best. Speaking as a former aspiring fund manager turned software developer, most of the tools at these managers' disposal are simplistic and completely invalid, yet in most cases the managers don't quite understand them anyways.

Bottom line: it's a boys' club of MBAs whose curriculum still says the Sharpe ratio is important and holds their students responsible for, at the very most, being able to calculate probabilities from a normal distribution using a table of values.

Solution: invest in a lot of risky small companies. You will lose most of the time, but those few winners more than make up for the loss. Spread across enough bets, statistics says it's very unlikely you lose. Contrast that with your family's 401k split in two a few years back while invested in the "blue chips".

------ nodesocket
I've had pretty good luck in the market. Owned a REIT ([http://www.investopedia.com/terms/r/reit.asp](http://www.investopedia.com/terms/r/reit.asp)) during the housing boom, owned AAPL during their run up, and recently TSLA. When I don't have a company I like I've owned QQQ (PowerShares Nasdaq-100), AGG (iShares bond), and various ETFs. I'm young so I can take risk, but honestly you can manage your own investments without the fees.

------ iamahnsihyo
Pretty good portfolio. If you think 1.83% expenses is fine, you have to take on more risk.

------ junheek
1.83%? That's highway robbery!

~~~ Afforess
Maybe not: [http://www.reddit.com/r/investing/comments/28ys30/this_is_my...](http://www.reddit.com/r/investing/comments/28ys30/this_is_my_portfolio_managed_by_merrill_lynch_it/cifsa0p)

~~~ outside1234
There's always a winner but many many more losers for active portfolios in general. The research shows that the passive index tracking approach very convincingly beats actively managed portfolios net of taxes and fees.

~~~ Bsharp
Approve! I don't understand why Average Joe thinks he can beat investment firms long term with staff dedicated to tracking each and every tradeable asset. Who do you think you're trading with?

If you want to gamble on a company or a market shift then fine, but for the most part that's just what it is - gambling.

~~~ 7Figures2Commas
The Average Joe shouldn't be "trading." But the individual trader absolutely has certain advantages over hedge funds and large institutions. One of the biggest is that the average individual trader can move the needle without establishing large positions. This often makes it possible for the individual trader to take positions in securities that larger players couldn't invest in even if they wanted to (and no, I'm not talking Pink Sheets issues).

There are of course lots of reasons many individual traders are not successful. Lack of discipline and poor money management are far bigger contributors to individual trader failure than lack of dedicated staff and institutional tools.
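On the expense-ratio point specifically, a quick back-of-the-envelope sketch shows why the "highway robbery" comment isn't just rhetoric. The starting balance, the 7% gross return, the 30-year horizon, and the 0.05% index-fund fee below are all illustrative assumptions, not figures from the linked portfolio:

    def final_value(principal, gross_return, annual_fee, years):
        # net growth each year = assumed gross market return minus the fee
        value = principal
        for _ in range(years):
            value *= 1 + gross_return - annual_fee
        return value

    start = 100_000  # hypothetical starting balance
    gross = 0.07     # assumed gross annual return
    years = 30

    managed = final_value(start, gross, 0.0183, years)  # 1.83% all-in fees
    indexed = final_value(start, gross, 0.0005, years)  # ~0.05% index fund

    print(f"managed: ${managed:,.0f}")
    print(f"indexed: ${indexed:,.0f}")
    print(f"fee drag: ${indexed - managed:,.0f}")

Even before comparing stock picks against the S&P 500, a roughly 1.8% annual fee compounds into a large fraction of the final balance over a few decades, which is the gist of the thread's complaint.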
{ "pile_set_name": "HackerNews" }
API Keys on GitHub - fabulist
I wanted to add something a little more dangerous to this recent meme. A lot of the time, people bake credentials into apps and then accidentally commit them. Especially database credentials and API keys.

A naive approach for hunting API keys gets a lot of false positives; things like api_key = "<VALID KEY>". But if we put in some characters you'd be likely to find in an API key, we get a much better ratio.

https://github.com/search?q=api_key+%3D+%22z9&type=Code&ref=searchresults

Repeating the search with different values can yield a lot of keys.

Another method is to go for fewer keys, but more valuable ones. This has an awful signal/noise ratio, but the keys you find are pure gold to a bad guy.

https://github.com/search?q=amazon+api+key+%3D+%22g&type=Code&ref=searchresults

I expect most of these keys are redacted by now, but this has led to real compromise in the past. This story was on HN a while back:

http://vertis.io/2013/12/16/unauthorised-litecoin-mining.html

====== techaddict009
Seems like you found a gold mine for hackers!

~~~ fabulist
They've been abusing it for a long time, and GitHub has taken steps to remediate the situation.

There are also other avenues, such as PasteBin. I've seen a bunch of people post, say, router configurations to PasteBin to share them with tech support, including passwords encrypted with Cisco's broken "password 7" scheme. RaiderSec built a bot that finds them automatically: twitter.com/dumpmon
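The same heuristic, anchoring on characters that real keys tend to contain rather than matching any quoted string, also works when scanning your own code before pushing. A rough illustrative sketch in Python follows; the regex, the entropy threshold, and the sample strings are all assumptions made up for the example, not a vetted secret scanner:

    import math
    import re
    from collections import Counter

    ASSIGNMENT = re.compile(
        r"""(?i)(api_key|apikey|secret|token)\s*[=:]\s*['"]([^'"]{16,})['"]""")

    def shannon_entropy(s):
        counts = Counter(s)
        return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

    def candidate_keys(text, min_entropy=4.0):
        # Placeholders like "<YOUR API KEY HERE>" score low on character
        # entropy; random-looking keys usually score noticeably higher.
        for match in ASSIGNMENT.finditer(text):
            name, value = match.groups()
            if shannon_entropy(value) >= min_entropy:
                yield name, value

    sample = '''
    api_key = "<YOUR API KEY HERE>"
    api_key = "z9Qw7xLmN3pKd8RvTb2YhJ4s"
    '''
    for name, value in candidate_keys(sample):
        print(name, value)  # only the random-looking second value shows up

Filtering on length and character entropy is exactly why the "z9" trick in the post cuts down placeholder noise: documentation snippets rarely contain long, high-entropy quoted strings.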
{ "pile_set_name": "HackerNews" }
Show HN: Botlist – An App Store for Bots - iisbum https://botlist.co ====== bentossell Hey! I'm one of the makers here Appreciate any thoughts anyone else. WE are all ears :) Mobile responsive isnt in place yet so apologies for that. Trying to get bots/AI from different platforms in one central, 3rd party place - not owned by Facebook/Slack/Kik etc. We are looking to update more information - probably giving the bot makers the ability to take control of their page, add more info and images so hopefully that will help. Has been a time-consuming manual process so far. Happy to hear feedback! But yes, just a bare bones MVP for now ~~~ joshbaptiste Saw an error and a PHP call stack and relative directories, should disable that from being visible on a production webpage. ~~~ iisbum Fixing that right now. ------ asimuvPR Visited the site on mobile and could not find a little explanation about what the goal of the site is. Maybe a short introduction might work. This seems interesting but it's not fully clear what it aims to do now and in the future. :) ~~~ bentossell Gotcha! Will look into adding that :) ------ jedberg Interesting business model. It's free to submit, but if you want to show up "quickly" you have to pay $50. Even though it hurts me as a bot owner, I think that's really clever! ------ an4rchy This is a pretty neat idea. I know it's probably an MVP but a description of what the bot does instead of forcing the user to click through to the bot website. ~~~ ajpgrealish There are some very short descriptions of what each bot does but it's not enough to help me decide if it is what I need. I searched for a JIRA bot to connect to slack but clicking the bot link took me to the install page rather than more info. ~~~ bentossell yeah we want to add more info on each bot detail page, then give the option for direct 'install' or landing page. ------ iisbum Very happy to launch our MVP for Botlist. We hope someday it will become a fully fledged store, but for now we're working towards building a complete directory of bots available on every platform. ~~~ karimdag I've seen this on Product Hunt but couldn't comment so now that I have the chance to, Here's what I want to say: \- first, awesome idea. This would make things way easier. \- second, say that I have built a bot, then what ? Would it follow the same philosophy as the regular App Store? (Meaning you upload then people download or buy) \- Third, you should add a FAQ and a roadmap so that people can help! (Even with just an idea) ------ chillydawg Are these bots just for slack? What do you consider to be a "bot"? ~~~ spinlock I was thinking the same thing. I'll admit that I'm out of the loop when it comes to slack but I've heard they don't support IRC. Seems too bad to me as IRC would make a great platform for all bots rather than just for one platform. ~~~ tyrust Slack has an IRC gateway - [https://get.slack.help/hc/en- us/articles/201727913-Connectin...](https://get.slack.help/hc/en- us/articles/201727913-Connecting-to-Slack-over-IRC-and-XMPP) ------ dreeves Beautiful collection! I'd love to convince you to reject from the collection any smarmbot emails -- [http://blog.beeminder.com/smarmbot](http://blog.beeminder.com/smarmbot) \-- which I define as emails that pretend to be sent personally ("I noticed that you recently..."). I realize a lot of hackers (I've argued with @patio11 about this) don't see the problem with those. Maybe because it's inconceivable to us nerds how anyone could be deceived by them. 
But some people are and I think the best of both worlds can be achieved by either saying "we" instead of "I" \-- as in "all of us, including the program that sends these emails!" \-- or just appending something like "This email is obviously automated but you can reply to it and it will go straight to me personally!". ------ sourcd Nice design & +1 for the groundwork. How will you protect your hard work from someone who just wants to scrape and clone it ? There's also botpages.com discussed 3 days ago here [https://news.ycombinator.com/item?id=11456651](https://news.ycombinator.com/item?id=11456651) ------ harry_botter Saw botpages.com on Product Hunt last week. It looks like they're free to submit. ------ toyg 502 bad gateway. _It 's dead, Jim._ ~~~ eb0la I think we're going to need a bot to tell us when _something_ that shows on HN is back online :-) ------ yeukhon My first foremost suggestion: try not to ask for password as a new site. I am not saying I'd only trust the big players (they were once a startup or a little project to begin with), but I would enjoy more if I can reuse Google/Twitter/FB or whatever as an option. Next, any plan for validating bot's security and privacy? Rules for submission (must be open source etc?) Fake feedback is always a tough war in real monetize app store. ------ amflare My #1 question is where is the About Page? Even if it's just a small synopsis, I'd like to know what this is, and what it is meant to do. ------ arcameron Improvement: If I navigate to [https://botlist.co/bots/filter?platform=5](https://botlist.co/bots/filter?platform=5), then I should be able to see what each bot claims to do, without needing to click in to the #show page ~~~ bentossell yeh we are currently trying to figure out if taglines can be included in a way that doesnt make the site look too busy/messy. possibly a 'quick-view' ------ tomc1985 Ugh. Now we're gonna be hearing about chatbots for the next 5 years :( ------ fiatjaf I want to sell my bot which currently people are using for free, where can I do if not in an App Store? ------ tomc1985 Also can we call these something else? Bot is too overloaded a term. "Chatbot" is more accurate. ------ drewry I'm getting "Whoops, looks like something went wrong." when trying to register. ~~~ bentossell hmm we've had a couple of instances so I will look into it! Sorry about that. ~~~ drewry Looks like it's working now, thanks! ~~~ strictnein Getting this now: 502 Bad Gateway nginx/1.8.0 ~~~ bentossell should be back up in next 10 mins! Had overwhelming response today, sorry ~~~ prdonahue May want to throw CloudFlare in front. (Disclaimer: I work there.) ------ findjashua getting the following error: [http://imgur.com/Duxzl6W](http://imgur.com/Duxzl6W) ~~~ iisbum We didn't quite get zero downtime deploys worked out for our MVP :) If you refresh thing should be working now. Thanks! ~~~ findjashua now i'm getting 502s :-( ------ swalsh You should resubmit this as Apply HN: ------ woodruffw Any plans for an IRC category? ------ studentrunnr yikes - link doesn't work!
{ "pile_set_name": "HackerNews" }
Recommended Space Books for Kids, 2019 - sohkamyung https://www.planetary.org/blogs/emily-lakdawalla/space-books-kids.html ====== mncharity Sigh. So, a common misconception in astronomy education, is that the Sun itself is yellow. It's common even among astronomy graduate students at, err, a first-tier institution for both astronomy research, and astronomy _education_ research. Indeed, you can roughly guess who has/hasn't done the 'common misconceptions in astronomy education' class by asking them "[A five- year old asks...] What color is the Sun?" So it's utterly without surprise, but a sad commentary on the state of science education, that I see... (via image and video search) _ABCs of Space_ and _8 Little Planets_ have yellow or orange Suns. _Pop-up Peekaboo! Space_ has no Sun, but yellow stars. I had hope for _Twinkle Twinkle Little Star: I Know Exactly What You Are_ which used white stars when explaining twinkling... before it explicitly said the Sun was "yellow", next to something the same-ish orange color as the "Red" dwarf a few pages back, and one page before a yellow Sun. _Moon’s First Friends_ , and _Planet Hunting_ , and... ah well, I'll stop there. The handling of scale is... never mind. To be fair to astronomy graduate students, among first-tier non-astronomy physical-sciences graduate students, a common response is some variant of "it doesn't have a color; it's lots of different colors; it's rainbow colored" \- misunderstanding color, rather than a classification scheme. If some country ever aligns its science education content incentives with the creation of robust understanding... it's kind of hard to imagine where they might end up. But that's not us. ------ mncharity "Launch Ladies" was a kickstarter project.[1] [1] [https://www.kickstarter.com/projects/jameyerickson/launch- la...](https://www.kickstarter.com/projects/jameyerickson/launch-ladies-a- childrens-book-about-the-women-of)
{ "pile_set_name": "HackerNews" }
Ask HN: What's the most secure communications method today? - JamesAdir
Assuming that I can exchange keys of some sort (physical, digital) with the other contact.

====== adrianN
What's your threat model? If you don't care about metadata and you can exchange keys, use some form of symmetric encryption. If you care about metadata then things are a lot more complicated. But if you can assume that you have a secure channel for exchanging keys, you can just use that channel for communication.

~~~ Retric
There are latency issues. Suppose you met up once to exchange CDs full of random data 20 years ago. Now you can exchange messages with newspaper ads, secure in the knowledge nobody can eavesdrop, even though the key exchange happened 20 years ago.

Granted, this is limited to low bandwidth text, but you can leverage this for key exchange if you happen to trust some other form of crypto.

~~~ Tinyyy
Of course, given that the CDs are kept securely or promptly destroyed upon use.

------ anikain
One time pad. Without the original key, the message could literally be anything. There's no way to analyze the text at all

~~~ eeZah7Ux
Not again!

Your OTP is extremely sensitive to the quality of randomness and requires a lot of it - which makes things very difficult.

It provides no authenticity or integrity, or at least proof of tampering. It does not protect from message reordering and capture+retransmission.

It obviously leaks metadata in real-world usage: sender, receiver, msg length, time of message.

~~~ falcolas
All of these problems are present with any pure encryption method. That's why authentication hashes, message ids, high quality randomness sources, and so forth exist. OTP can use these just as well as any other encryption method.

~~~ irundebian
What do you mean by "pure encryption method"? No one (no smart people) uses AES purely, but of course in some mode of operation such as GCM which provides integrity.

------ hnarn
If you want it to not only be the most secure, where the answer in my opinion is symmetric encryption, but also the easiest to use, I would say using the app "Signal" on a smartphone. As long as you're able to meet afk and you can verify the safety number between the two phones, you should be good to go. Disappearing messages add another layer of security.

I'm sure there are ways that are more secure in terms of encryption strength and opsec, but in the real world most people you want to communicate with aren't savvy enough for most "truly secure" setups to be realistic.

~~~ zulln
What I dislike the most about Signal is the need for a phone number though. Yes, I understand I could register with a temporary phone number but that still is not good enough.

~~~ hnarn
Surely most smartphones will have a phone number anyway, no? It doesn't have to be connected to you personally and it's a good way to keep illegitimate Signal registrations down.

------ Fox8
Signal, Wire, Privus SecurLine, Riot (Matrix), Threema are good candidates for audio and text secure communications. Some leak metadata, some have countermeasures like not requiring one's own phone number (SecurLine and Threema) or using a fixed bitrate for audio calls (Signal, Wire, SecurLine).
If you want privacy and trust, choose a solution where you can audit the source code and that is verified by a third-party auditor.

------ Tepix
Does a one time pad qualify for what you defined as "key exchange"? One time pads are proven to be perfectly safe as long as they are used correctly (read the first paragraph of the wikipedia page at [https://en.wikipedia.org/wiki/One-time_pad](https://en.wikipedia.org/wiki/One-time_pad) )

~~~ heinrich5991
If you want to avoid metadata, you need more than one-time pads.

------ kobeya
For what purpose? There are many trade-offs to consider. Do you want repudiation? Do you need group messaging? Synchronous or asynchronous? Etc.

------ perlgeek
Define secure.

Is leakage of meta data (who communicated with whom, when, and what size of data was exchanged) relevant? Or just the content?

Is reliability of delivery part of "secure"?

------ mr51m0n
Threema? [https://threema.ch/en](https://threema.ch/en)

------ uoaei
The process of exchanging keys will theoretically leak metadata unless you already have an established secure line. In which case you will not need to open a new one, defeating the purpose.

Anyway, the most secure method of communication would be to leave all electronic devices somewhere far away and ideally locked in a solid metal box, then meeting in person somewhere where surveillance is hard or impossible. In the ideal case this will obfuscate all metadata including sender and receiver, unless someone happens to see you travelling to or away from the meeting place.

------ lmm
For letter-like communications GPG is fully open-source, has gone through the fire of decades of use, and if you believe the Snowden leaks then even the NSA can't break it. If you're serious about security use it via something like Tails - keep the thing you boot from on you at all times, and never let plaintext leave your securely-booted system.

For OTR-style messages I'd find a fully open-source messenger that uses an Axolotl-like protocol - i.e. OMEMO (Conversations/ChatSecure) or Riot/Matrix

~~~ codewritinfool
The problem with GPG and the like is that the assumption is made that the end platforms are secure (where the message is generated or read). They are not.

~~~ lmm
That's true for almost all cryptosystems. I mentioned Tails which is pretty much the state of the art as far as securing the endpoint goes.

------ therealmarv
[https://vuvuzela.io/](https://vuvuzela.io/) - Private messaging system that hides metadata

------ Frenchgeek
Face to face in a SCIF?

~~~ anotheryou
I'd say going for a walk without phones in a noisy place is good. Any fixed facility can easily be bugged. Cameras are harder to hide, so handing over some folded paper that you wrote on in private will be even easier. Just burn after reading :)

~~~ crottypeter
A noisy place might not give the protection it appears to... [https://en.wikipedia.org/wiki/Microphone_array](https://en.wikipedia.org/wiki/Microphone_array)

~~~ anotheryou
Yes, the movement part is more important, and to go somewhere you are not expected.

These arrays are indeed scary: [https://youtu.be/bgz7Cx-qSFw?t=3](https://youtu.be/bgz7Cx-qSFw?t=3)

------ nottorp
One time pads (with good randomness) delivered by armed couriers.
Sorry for the non-technical answer ;)

------ koehr
This really depends on who you are and from what you are hiding:

1) Communication between non-targeted (unimportant) individuals hiding information from:

1.1) other individuals or non-governmental institutions

1.2) governments or GOs

2) Communication with targeted (important) individuals hiding information from:

2.1) other individuals or non-governmental institutions that target you

2.2) governments or GOs

The first one is the easier one, as expected:

1.1) Individuals want to secretly share information without someone else noticing. "Someone else" can be another person, family, friends, a teacher, colleagues or their boss. The important thing here is that the person to hide the information from doesn't target you. This makes it VERY easy, because the person doesn't necessarily expect any secrets to be exchanged. Simple chat apps will do here. Telegram and others support self-destructing messages.

1.2) Individuals want to secretly share information without being (potentially) tracked by the government. They are part of the grey mass of "normal citizens". As long as you or your partner are not actively watched by a government, things can still be relatively easy. Standard apps (eg WhatsApp, Telegram) might even be enough. Mass surveillance might be a problem though (in China, Iran or the US for example), so to be on the safe side, better use non-standard software that is decentralised and uses hard encryption and something like Off-The-Record messaging. Good and mature candidates would be of course XMPP (aka Jabber) with OMEMO or the newer Matrix protocol.

2.1) Individuals want to hide information from someone who knows or suspects that they do it:

As soon as you or your communication partner is targeted, things get a lot harder. Now not only the information itself needs to be encrypted (good old rubber hose decryption works against the best encryption methods). Other individuals usually don't have sophisticated surveillance methods, so it should still be relatively easy. What is important is that meta-information (who communicated with whom at what time, etc) needs to be secret, too. As soon as the one who suspects you of secretly sharing information knows that you did, they will ask questions. Better they don't have anything at hand to do so. Plausible deniability is the keyword. Off-the-record messaging provides this, but it is of no use if you keep the chat logs or are seen. Even the contact in your phone could be suspicious enough. Better use a dedicated system, or memorise the contact information and only use it without saving it. Never ever communicate while the watching person could see it.

2.2) Governments or governmental organisations watch you:

Now this is the hard part. Hiding from a government that watches you and/or your communication is REALLY HARD. Don't be fooled by advertised end-to-end encryption and public lawsuits of companies trying to defend their users' privacy. You have no idea what GOs are capable of, which is why you need to implement countermeasures even against unknown attack vectors. The best you can do is to hide your communication traces by following at least the following rules:

* Never use something that leaves a trace of personal information. Use pre-registered sim-cards or internet cafes in different cities. Always use public proxies, TOR, everything.

* Use asynchronous communication: Leave an encrypted blob somewhere in the void of the internet without any receiver.
The receiver needs to be potentially everyone, but of course nobody except the receiver can be able to read the message.

* Use disposable keys. Hide signatures, but never forget to use them! A cryptographically secure signature is the only way for the receiver to be sure that it is really your message and not something intercepted or faked. But the signature needs to be hidden inside the unreadable crypto-blob.

Phew… that was a long one. But I hope it gives you and the interested reader some insights. Some links to kick off the research:

[https://en.wikipedia.org/wiki/Off-the-Record_Messaging](https://en.wikipedia.org/wiki/Off-the-Record_Messaging)

[https://en.wikipedia.org/wiki/OMEMO](https://en.wikipedia.org/wiki/OMEMO)

[https://staltz.com/an-off-grid-social-network.html](https://staltz.com/an-off-grid-social-network.html)

[https://en.wikipedia.org/wiki/Matrix_(communication_protocol)](https://en.wikipedia.org/wiki/Matrix_(communication_protocol))

[https://en.wikipedia.org/wiki/Public-key_cryptography](https://en.wikipedia.org/wiki/Public-key_cryptography)

------ toanant
Consider using [https://keybase.io](https://keybase.io), they have recently added a chat feature to their app as well.

~~~ dsacco
Keybase Chat doesn't feature forward secrecy.

------ irundebian
Is there something like a state-of-the-art one time pad implementation which provides integrity and other security properties which are lacking with pure OTP?

------ 1ba9115454
Secure messaging on a blockchain. Due to the fact you no longer have to trust a 3rd party.

If you choose the Bitcoin blockchain then you can send your encrypted data and no-one will know who decrypted it due to the P2P nature of the network. Every node receives every message.

Example. AES encrypt your message with a key you both know. Add it to the message field of a bitcoin transaction. The person at the other end decrypts any transaction they find with a message until they find one which does decrypt with that key.

For more security you can hide the message in the content of the transaction, i.e. the public keys you pay to.

------ jwalton
> no-one will know who decrypted it

> Every node receives every message.

Neither one of these strikes me as a desirable characteristic of a secure messaging system. I can see the advantage in a third party not being able to tell who received a message, but in a perfect world, as the sender I'd like to know that only my intended recipient received it, which is the opposite of what's going to happen with the block chain.

Also, to encrypt with AES you need to pick a key length, and since your message is going to everyone in the world, you need to pick a key that is impractical to break not just with today's technology, but with all future technology for as long as your secret remains relevant. If AES is ever broken, then all bets are off.

If you have a chance to exchange keys of unlimited length ahead of time, then you could use a one time pad and message over the block chain. This would be secure, but then if you have an OTP, almost any message channel you pick is going to be secure. Someone else in this conversation recommended using newspaper ads.

~~~ 1ba9115454
> no-one will know who decrypted it

This is useful when you want to send a message but you don't want anyone to know who you were sending to. It's a form of obfuscation.

------ miguelrochefort
The risk with communication is not to have your messages read by the wrong people.
The risk with communication is to not have your message read by the right people.

The risk with communication is for your message not to properly reflect your true intent.

1. We need to stop obsessing about privacy.

2. We need to fight censorship.

3. We need to improve our semantic model.

~~~ icebraining
These are contradictory statements, because privacy is an essential protection against censorship. You're never completely free to speak if you can't speak anonymously, and you can't speak anonymously if you don't have privacy.

As the SCOTUS wrote in _McIntyre v. Ohio Elections Commission_, “Anonymity is a shield from the tyranny of the majority.”

------ twobyfour
A face to face conversation in the woods, perhaps?

------ vgb2k11
> Assuming that I can exchange keys of some sort (physical, digital) with the other contact.

Each contact has an identical table of data (pure-random, 1 terabyte, ASCII 256 or choose your own encoding); this is your "Key of some sort". Messages sent between contacts are encoded character-by-character as offsets from the start of the table. No offset can be used more than once. After offset 1099511627776 (for a 1 terabyte file) has been used for encoding, a new key file is generated and exchanged.

Example: the table contains a terabyte of random data such as "ahx Ui D 7gu3a7NrdMr 9y&S )iM AAt 8'9s 98m..e kj j uhbd f..."

1,5,6,9,12,15,18,20,23,25,30,33,35,36,39,41 = hi garry it's me

~~~ y7
If you're gonna go through the trouble of exchanging 1TB of one time key, use a standard one time pad. This method is either insecure (when offsets are not strictly ascending), or unnecessarily wasteful.

~~~ vgb2k11
After searching the definition of one-time-pad, I'm pretty sure this post is redundant and shall be deleted (in T-minus 2 minutes).

[edit] No delete option. Mod please delete.
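For anyone skimming the one-time-pad branch of this thread, here is the "standard one time pad" y7 mentions, as a minimal Python sketch: XOR the message with pad bytes and never reuse any pad byte. It only illustrates the mechanics; as pointed out above, a bare OTP gives confidentiality only, with no authenticity, integrity, or metadata protection, and its security rests entirely on the pad being truly random, pre-exchanged, and used once.

    import secrets

    def otp_encrypt(message: bytes, pad: bytes) -> bytes:
        # The pad must be at least as long as the message, truly random,
        # kept secret, and never reused -- those conditions are the whole point.
        if len(pad) < len(message):
            raise ValueError("pad is shorter than the message")
        return bytes(m ^ p for m, p in zip(message, pad))

    # XOR is its own inverse, so decryption is the same operation
    otp_decrypt = otp_encrypt

    message = b"hi garry it's me"
    pad = secrets.token_bytes(len(message))  # stand-in for the pre-exchanged key material

    ciphertext = otp_encrypt(message, pad)
    print(ciphertext.hex())
    print(otp_decrypt(ciphertext, pad))  # b"hi garry it's me"

In a real exchange the pad would come from the physically exchanged medium (the CDs or the 1 TB table discussed above), with each party keeping track of which bytes have already been consumed so that nothing is ever reused.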
{ "pile_set_name": "HackerNews" }
µTorrent 1.0 for Mac released - drewr
http://www.utorrent.com/downloads/mac

====== Naga
This is a bit late to the boat. Not sure about on Windows, but on OS X, Transmission is a great torrent client.

~~~ maximilian
I don't have any evidence to back it up, but I think uTorrent has better network code and produces faster downloads. I use it despite its slight ugliness compared to Transmission.

------ surki
hmm, Linux?

After trying many clients, I have settled for rTorrent (libtorrent.rakshasa.no) which is quite lean and terminal based (so that I can wrap it in a Screen session)

It supports OS X as well.

~~~ pjscott
The uTorrent people say they're working on a Linux version, but haven't given details. Until then, it works well under Wine. Or there are a lot of other options, too.

------ erenemre
oh who needs a torrent client after <http://put.io> ?
{ "pile_set_name": "HackerNews" }
Ask HN: Is online work a winner-takes-it-all market? Elance vs. odesk - cked Hello hacker news community,<p>I am a frequent user of marketplaces like Elance and odesk where I offer my services as a .NET developer. I have thought a lot about the nature of that market and how many platforms we will have as the market matures over the next 5 years. The main question I would like to answer is whether that market is a winner-takes-it-all market and what its characteristics are. <p>For example Facebook is a winner in my opinion. It might get replaced but I do not see multiple social networks co-existing due to strong network effects and switching costs.<p>For online work and the current platforms available it is not so clear for me. I do not see a big advantage for a developer like me if I signed up with a platform having 15000 or 100000 projects. For a contractor it's kinda the same. A platform with 15000 or 100000 developers does not concern me. I do not see a strong network effect. Switching costs aren't so high either. <p>I would expect to have maybe 4-5 big players worldwide. I don't see 50+ platforms co-existing either. It would be too hard to maintain for a human, from my point of view.<p>However, I have observed something interesting in the data Elance and odesk disclose for the market in the USA and Canada. Over the past two years odesk seems to show a monthly growth rate which is way higher than the growth of Elance if you look at the number of projects and the number of service providers. I am not sure if odesk managed to steal market share from Elance since both companies are growing. Anyway, if I look at the exponential growth (oconomy at odesk) it feels like the reason for that growth can't be only operational excellence, which brings me back to market effects/dynamics.<p>If you have any more insights I would love to discuss them with you because I am personally interested in the topic. I think in the future online work will play a big role in how humans work. <p>To sum up, will there be one player in that market which takes it all?<p>Is it just a money game? By money game I mean: will the player with the most funds win? <p>At the end I want to apologize for my English. I am not a native speaker, which stopped me from posting earlier, but I still hope my points are clear and we will have a good discussion. I am happy to answer any questions. ====== teyc One of the problems I heard is that people are concerned about what work actually gets done at outsourcing sites. Odesk's monitoring enables customers to know that work is actually being completed in a distributed environment, while elance doesn't do that. If that is the case, then it explains ODesk's growth. ~~~ cked This is true, but elance offers monitoring software as well. Both companies show steady growth but odesk has been way more successful in the last two years. On top of that, elance started the market entry in European countries. However, odesk seems to grow twice as fast as elance. Is the outsourcing market in the USA and Canada still growing that fast? It seems that odesk is stealing from elance AND is drawing on network effects or a different market effect, which could be an indicator that it is a winner-takes-it-all market
{ "pile_set_name": "HackerNews" }
Functional programming with python - Anon84 http://united-coders.com/christian-harms/functional-programming-with-python ====== hedgehog Newer versions of Python also have generator expressions which look similar to list comprehensions but with parentheses instead of brackets. They mostly can be used for similar purposes but they are lazy, that is they only generate results as they are requested. ~~~ graywh The keyword 'yield' comes in handy for making your own generators. ------ GeneralMaximus Off-topic: I recommend picking up _Expert Python Programming_ by Tarek Ziadé if you want to get a feel for what is 'Pythonic'. ------ sharjeel Guido wanted to kick out map and filter in Python 3000 but there was a community out there using these two, God knows for what reasons. As long as the lambda is limited to single expressions, map and filter are pretty much useless; and lambda is going to stay like that due to the way Python handles scoping using indentation. ~~~ evgen > As long as the lambda is limited to single expressions, map and filter are > pretty much useless Not really. It is not too difficult to actually define a named function and the code usually ends up being easier to read/understand, so the lack of multi-line lambdas is not a serious problem. Additionally, now that map and filter return iterators they are a bit more flexible than they used to be. OTOH, if you are using map and filter in your python code you should probably take a close look at what you are attempting to accomplish and see if a generator expression might do the job better. ------ graywh Should we tell the author that 'reduce' was removed in Python 3? ~~~ jrp Ironically, map and filter are still available, when they seem easily duplicable with comprehensions: map(f,lst) = [f(x) for x in lst] filter(f,lst) = [x for x in lst if f(x)] But I don't see a way to translate reduce, except for perhaps the particular case of sum. reduce(f,lst) = ??? ~~~ jacobolus Write a for loop. Much of the time, the code ends up clearer. See Guido's explanation at: <http://www.artima.com/weblogs/viewpost.jsp?thread=98196> ~~~ jrp Thank you. I picked up Python from assorted online tutorials; maybe it's time for me to look at the book mentioned elsewhere in this thread to improve my style. ------ bcl Nice little introduction! I wish I had read something like that when I first started with Python.
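To make the reduce point above concrete: in Python 3 reduce was moved to the functools module rather than dropped entirely, and the alternative Guido recommends in the linked post is an explicit accumulator loop. A minimal sketch (the product example is just a stand-in for an arbitrary binary function):

```python
from functools import reduce  # moved out of the builtins in Python 3
import operator

numbers = [1, 2, 3, 4, 5]

# map/filter rewritten as comprehensions, as in the comment above
squares = [x * x for x in numbers]
evens = [x for x in numbers if x % 2 == 0]

# reduce with an explicit initial value...
product = reduce(operator.mul, numbers, 1)

# ...and the equivalent accumulator loop
acc = 1
for x in numbers:
    acc = acc * x

assert product == acc == 120
```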
{ "pile_set_name": "HackerNews" }
Steve Bannon's far-right Europe operation undermined by election laws - orf ====== _Schizotypy missing link ?
{ "pile_set_name": "HackerNews" }
Cockroaches deliver kicks to avoid being turned into “zombies” - YeGoblynQueenne https://arstechnica.com/science/2018/11/karate-kicking-cockroaches-can-fight-off-zombifying-jewel-wasps/ ====== symplee The video[1] in the article of the kick defense is pretty crazy. [1] [https://www.youtube.com/watch?v=Rt8XoT2-qwQ](https://www.youtube.com/watch?v=Rt8XoT2-qwQ) edit: remove all but the reference to the article's video
{ "pile_set_name": "HackerNews" }
Ask HN: Does taking a severance package make you less marketable? - DrWumbo Is it a bad idea to take a severance package from a job that I was already planning on leaving? I work as a developer at a Fortune 50 company that is currently doing &quot;restructuring&quot; and is offering severance packages to anyone who wants them. The package is enticing despite my position (unlike the Business Analyst) not being in danger. ====== ChuckMcM Absolutely not, _take the package_. Use the extra runway to recharge your batteries and destress and re-focus. You'll come back stronger. ~~~ DrWumbo Thanks for responding :) I took the package, and am very excited to move on!
{ "pile_set_name": "HackerNews" }
One Minute Reads for Writers – 30 Posts in 30 Days - monkeymagick https://medium.com/1-one-infinity/one-minute-reads-for-writers-30-posts-in-30-days-6b781fb9cf22 ====== CtrlAltEngage "These are all friend links — because I love you — so you will be able to read them whether you’re a member of Medium or not." Promptly got told off by Medium for not having an account and that I'd hit my quota of "Member Previews" ~~~ monkeymagick No idea why that happened. All links are friend links. This should be the friend link for the post: [https://medium.com/1-one-infinity/one-minute-reads- for-write...](https://medium.com/1-one-infinity/one-minute-reads-for- writers-30-posts- in-30-days-6b781fb9cf22?source=friends_link&sk=168034b96f94610f532a2f7786e9152c)
{ "pile_set_name": "HackerNews" }
Lego NXT Mindstorm Bot controlled through IRC using perl - bsdpunkblog http://bsdpunk.blogspot.com/2009/02/spike-irc-nxt-mindstorm-video-using.html ====== jacquesm This reminds me of the beginnings of the webcam project, I had a silicon graphics box with a camera attached to it pointing at a 'mobile' with a bunch of paper birds, a fan and a light. You could control the fan and the light via the internet, a simple cgi program would change the state of two output lines, a couple of relays and it was done. For months I would receive messages that would roughly fall into two categories, those that thought that it was really neat and those that thought it was fake :) Now, as for this robot, imagine two of them in an arena!
{ "pile_set_name": "HackerNews" }
Demigod: So much for piracy - jemmons http://forums.demigodthegame.com/349758 ====== jemmons The money quote: "When the focus of energy is put on customers rather than fighting pirates, you end up with more sales."
{ "pile_set_name": "HackerNews" }
When the boss is wrong: How speaking out can save lives - schrofer http://www.bbc.com/news/world-europe-33667046 ====== thaumasiotes Title is unrelated to article. Did you mean to submit [http://www.bbc.com/news/health-33544778](http://www.bbc.com/news/health-33544778) ?
{ "pile_set_name": "HackerNews" }
Let me Duck Duck Go that for you - arst http://lmddgtfy.com ====== antirez I have been using DDG as a replacement for Google for the last two months. I'm impressed. Whether it's better than google for certain types of usage or not, I'll let other users decide (but it _is_ better, for my usage). But what is truly impressive is how this guy built a search engine that works in a way that is comparable to Google for the end user, with limited resources. ~~~ Volt I find that Google gives me better results when it matters. So although DDG has been my primary search engine, there are times when I'll enter my query into Google and get something that better satisfies my information need as either the first or second result, whereas I might have to look further down the list on DDG. _That said_, I do love DDG, and it's still my primary search engine after 2 months as well. I switched based on the privacy policy (no personally identifiable information retained), ability to use HTTPS, and the nice infinite scrolling implementation, and unless these things change I'll certainly continue to use it. The combination of these, plus the generally impressive quality of the search results, makes it really great. Kinda solidifies the notion that you can compete on more than just the content of the search results list. ~~~ Sirupsen It's easy to double check on Google with DDG, if you search for: !google hacker news It'll take you to Google's results on Hacker News. ~~~ epi0Bauqu It's even easier now. You can use the !g shortcut, and there are shortcuts for most google services, e.g. !gf !gn etc. ------ epi0Bauqu Stuff like this keeps me super-motivated. Thx Mike for making it! ~~~ lappie I think ddg is a great initiative and like how you are going about it. I have been following ddg quite a bit and also on the reddit ads/postings. With a slightly long term view and clear focus I think it has a decent chance to compete with the big guys. Having said that, there are two things I noticed which I don't completely agree with. 1\. The interface - the fonts, layout on the first page somehow is very unreadable. I am not sure what the motivation behind this is, but imho even a copy of the exact google serp fonts/layout should have been good enough. The only way ddg will stand out from Google is the quality of data on your serp. There is very little incremental innovation possible from the layout that Google already has for listing a list of links and text, and it also doesn't seem to be your focus anyway. 2\. I think the auto extension of search results is a bad interface choice. A pagination interface provides a clear anchoring and also for most people the first 10 results are more than enough - anything more and the information becomes too overwhelming. Again I don't think this provides any more value than the Google interface and is bad from a cognitive overload viewpoint. Just my two cents. Best of luck on ddg. ~~~ epi0Bauqu Thx for the detailed constructive criticism. On 1, I disagree with the premise that you can't improve UI. It's a subjective thing but I strongly believe a non-negligible % of people would prefer a different UI. That being said, wrt fonts and sizes, I wonder if you like any arrangement after tweaking the settings? <http://duckduckgo.com/settings.html> On 2, I get way more positive feedback on this than negative, perhaps 10/1, though I do agree there are issues around the edges. ~~~ axod I know I've said this before, but the infinite scroll always catches me out.
I often brush on the trackpad to get to the bottom of the page. If there isn't a bottom of the page, it just freaks me out. It makes me think something's broken. Maybe I'm just an outlier though ;) ------ mattmaroon Or they could just change their name to something people would take seriously. Just a thought. ~~~ roc ... because the search engine market has so long been dominated by serious- sounding firms? ~~~ Legion My first reaction was to agree with your point. But I wonder if there are varying levels of non-seriousness. Names like Google and Yahoo definitely aren't serious, but they also have a sort of generic quality. Not in a bad sense, I just mean they don't evoke a specific silly image in my head - at least not by themselves (when I think Yahoo, I see a big purple !, but that's by marketing design). Duck Duck Go, though, doesn't have that same nebulous quality. There's something more concrete there, and more specific imagery that comes to mind. And I think that's why someone might react that way to the name and not to the names Google or Yahoo or Bing. It's less vague, and decidedly more Saturday morning cartoon. Is that bad? I don't know, it doesn't necessarily bother me. But I understand why some people view it differently than the search engine names already out there. ~~~ mattmaroon That's my exact thought. Google and Yahoo! come off as genuinely irreverent and fun (but, importantly, not childish). It's like something you'd expect a couple 20-something hackers to name their product. Duck Duck Go comes off as the 50 yr old guy who yells "'Fo Shizzle!" and holds his hand out for a fist bump. It's something you'd expect from an older guy trying to pretend to be a 20something hacker. It thus comes off as insincere. ~~~ epi0Bauqu Empirically, people seem to love or hate the name, i.e. it generates an emotional response. When I talk to "normals" this love/hate ratio is very high. I understand though that you've taken issue with the name right from the beginning. This is not the first time you've shared this viewpoint :) ~~~ mattmaroon Ha, yeah wouldn't surprise me. I don't remember it but definitely believe you. I think what New Coke proved though is that haters have an unduly large influence. New Coke kicked both original Coke and Pepsi's asses in blind taste tests. It's possibly bad to have something that has a love to hate ratio, even if a large one, rather than something people respect but don't care much about one way or the other. ------ blueben Is DDG engaging in some sort of grassroots campaign to make sure they get a link on HN at least a few times a month? I get it. DDG is out there. They're an alternative to Google. Great. How about some articles with merit rather than yet-another-link to the search engine front page and a bunch of people gushing about it? ~~~ epi0Bauqu Nope. I didn't make this and I didn't know the person who did. Nor did I submit it. ~~~ blueben Take pride in the fact that people love your work so much that they will spam their friends to share it. :) ------ arst [http://lmddgtfy.com/?q=let+me+google+this+for+you&v=](http://lmddgtfy.com/?q=let+me+google+this+for+you&v=) ------ heyitsnick [http://duckduckgo.com/?q=lmddgtfy&v=](http://duckduckgo.com/?q=lmddgtfy&v=) ------ resdirector Quick question: what types of searches are better done through DDG than Google? I've tried a few searches, but haven't found DDG better than Google...is DDG superior only for certain types of searches? 
~~~ epi0Bauqu <http://duckduckgo.com/about.html> Of course it is subjective, but it really should be better across a wide swatch of searches. I think it is most clear though on what is X searches. The information view (from the home page) goes even further on these type of searches and grabs topic summaries in real time. Other areas where I think we do noticeably better on average are with names and long un-quoted searches (5+ words). Of course you can find counter- examples everywhere... What I always suggest to people is to give it a week as your primary search engine. If you (or anyone) do/does, I'd really appreciate you getting back to me with your feedback. ~~~ resdirector Very initial impression: * I like the zero-click info. * Is disambiguation redundant? Can't you just infer from history and location? Or is this against your privacy policy? The single most frustrating thing I find about search engines is iterating a non-trivial search. It doesn't seem like DDG has an edge against Google here. I long to see a search engine that makes it easy to send a question off to Quora, Vark etc. ~~~ epi0Bauqu How would that work exactly in your mind? We already have a feature to send your search to hundreds of other sites, <http://duckduckgo.com/bang.html> Is it as simple as redirecting you to those sites, or do you mean manage the workflow, email you the results, etc.? ~~~ resdirector (I like the bang feature: many of my searches are of the form "wiki william henry harrison". By the way, I think the search results for <http://duckduckgo.com/?q=william+henry+harrison> are out of order) My _ideal_ search-engine would be a cross between a traditional search-engine (machine) and a Q&A site (humans). If you were taking a lot of iterations to find your answer, you could simply expand the text-area to allow you to write out a human question. So, short of DDG becoming also a Q&A site....yeah, dunno :P. ------ blahedo In the early days of Google, back while it was still beta, I remember having to go to my backup search engine—remember Altavista? :)—for queries where I just needed a boatload of sites, or where I had a complex boolean thing, or where I needed to search for a whole phrase. Of course, back in the 90s Google knocked out my reasons to backoff to Altavista, one by one, adding boolean queries, and phrases, and of course adding a boondle of data. So I was eventually able to stop using Altavista. For the last few months, DDG is my primary and I love it. I still use Google for: * a few queries that DDG can't find anything for * to find out what _other_ people will see when they "google it" * YouTube and maps. YouTube will be hard to ditch because that's where the content is, although I suspect I can wean from maps if I actually try. ------ asimjalis I tried <http://lmddgtfy.com/foo> and I got this message: Unhandled Exception An unhandled exception was thrown by the application. ~~~ simonk It looks like the URL should be [http://lmddgtfy.com/?q=foo&v=](http://lmddgtfy.com/?q=foo&v=) Although, yes a lot easier to remember your way. ~~~ asimjalis True. Still an unhandled exception does not sound good. ------ natep Sharing the link is really hard (at least for me, maybe I'm doing it wrong?). When I type something and click search, it takes me straight to the page I'd want to send someone, and I can't really select the url (because of whatever's moving the pointer?). So, do I just have to figure out the url scheme and write it out myself? lmgtfy just gives you the url to send someone. 
~~~ arst It doesn't actually move your cursor, just an image of a cursor. You should be able to copy & paste the URL out of your address bar as it is animating; you're right that just showing the URL after the user clicks 'Search' might be more user friendly, though. ------ varjag [http://lmddgtfy.com/?q=%D0%B0%D0%B6+%D0%B4%D0%B2%D0%B0+%D1%8...](http://lmddgtfy.com/?q=%D0%B0%D0%B6+%D0%B4%D0%B2%D0%B0+%D1%80%D0%B0%D0%B7%D0%B0&v=i) and [http://lmddgtfy.com/?q=%E4%BD%A0%E5%A5%BD+&v=i](http://lmddgtfy.com/?q=%E4%BD%A0%E5%A5%BD+&v=i) throw an unhandled exception. Seems like non-Latin character sets give it bad vibe. ------ stcredzero I think of DuckDuckGo as a "budget airline" sort of search engine. It seems to be best at handling the "low hanging fruit" or the common case, but it does so with better presentation, more convenience, and no bells and whistles. By contrast, Wolfram Alpha is more like an arctic bush pilot. Or maybe that's Cuil? ~~~ jacquesm > Or maybe that's Cuil? I think you mean 'cpedia'. ~~~ stcredzero You are correct. ------ gnoupi Oh, great, one more condescending way to answer people when you are too lazy to provide a real answer but want to be """funny""". Yay. Seriously, if you are concerned that someone didn't make an appropriate search on Internet before asking for information, there are nicer ways to suggest that than that kind of sites. ~~~ cookiecaper LMGTFY is just a tool -- it can be used in good humor or malice. This site, likewise, but it has the added benefit of promoting DuckDuckGo and producing brand exposure, and is therefore less likely to be taken as a slight; with LMGTFY, everyone knows what Google is, so they don't take that as an informative thing; with LMDDGTFY, few people know what DDG is, so they may just assume that the linker is humorously and conveniently promoting a new search engine out of personal preference. ~~~ gnoupi I see the interest in promoting DDG, but the "let me take your hand and do that for you" is and will always be a bit condescending. You can tell about a great search engine without that kind of approach, in my opinion. Because if your intention is good, this way is hardly the best, I think. Most will use it like others used lmgtfy, or a more strict "RTFM", years ago: as a "search yourself noob" end of discussion. ~~~ eli I kinda disagree. There's a difference between saying "search yourself" and "search for yourself using these keywords" And depending on context, some people really do need to be shown that they can type keywords into the search box like so in order to get results like these. ------ RyanMcGreal Returns a server error when the DuckDuckGo page loads: \---- Server Error The following error occurred: [code=CACHE_FILL_OPEN_FILE] An internal error prevented the object from being sent to the client and cached. Try again later. Please contact the administrator. \ \---- [http://duckduckgo.com/?q=test&v=](http://duckduckgo.com/?q=test&v=) ------ vaksel Someone should make one of those blind experiments where you pull in the search results from yahoo, bing, google and DDG, so people can do a blind test to see which search engine is better ~~~ eru Already happened. People seemed to have preferred the bing search results, and the Google-looking page. Perhaps somebody can even find the reference for this? 
~~~ nostrademons Nope, they preferred the Google results: [http://techcrunch.com/2009/08/08/which-search-engine-do- you-...](http://techcrunch.com/2009/08/08/which-search-engine-do-you-choose- in-the-blind-test/) ~~~ eru I probably saw an earlier tally: [http://www.istartedsomething.com/20090607/bing-vs-google- vs-...](http://www.istartedsomething.com/20090607/bing-vs-google-vs-yahoo- blind-search-engine-test/) ------ senko Crashes on non-ascii input. Example (query is "rašić" in utf-8): [http://lmddgtfy.com/?q=ra%C5%A1i%C4%87&v=](http://lmddgtfy.com/?q=ra%C5%A1i%C4%87&v=) ------ helwr Linkedin profiles with 500+ connections are still not at the top when searching for people names, some junk pages somehow get higher rank ~~~ epi0Bauqu This isn't true across the board, e.g. <http://duckduckgo.com/?q=gabriel+weinberg> In any case though, I'm happy to fix if you give me some specific examples to work with. ~~~ helwr try "geva perry" for example, he has like 5 thousand connections and on google his linkedin profile is #3 from the top ~~~ epi0Bauqu Thx. ------ asimjalis I am curious. Is DDG running their own web crawlers or are they a front-end to search results from some other engine? ~~~ epi0Bauqu Both. ------ motters I Duck Duck Went a couple of months ago, and have been going ever since. ------ shimonamit I wonder how they plan to monetize this... ~~~ epi0Bauqu It's fun. Why does everything have to be monetized? ~~~ ippisl If you're not really devoted to monetizing it , you might be able to compete with google on ad quality. google's really into getting maximum money , so many times the ads suck. You, on the other hand , could offer the most helpfull ads.i'm thinking of something like showing the disruptive companies for a service your looking for. for example , let's say you're looking for a divorce lawyer. one option to save a lot of money is using an expert system software to generate the required papers. most people don't know about this option, probably because real lawyers outbid them in the ad market. So knowing about the expert system option is very useful. Of course , you don't have to show only disruptive ads , but an ad combination that offer the best knowledge for the user. Now , if there's a way to make this scalable across many product types, you have a very powerful feature, one that the big search engines would find very hard to imitate, because it would hurt their profits.
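Two practical issues come up repeatedly above: figuring out the lmddgtfy URL scheme by hand in order to share a link, and the unhandled exceptions reported for non-ASCII queries. Building the link with a proper URL encoder addresses the first and at least ensures the query reaches the site as valid percent-encoded UTF-8. A minimal sketch; the `q` and `v` parameter names are simply copied from the URLs quoted in the thread, and whether the server handles every encoded value correctly is not something this sketch can guarantee.

```python
from urllib.parse import urlencode

def lmddgtfy_url(query: str, base: str = "http://lmddgtfy.com/") -> str:
    """Build a shareable lmddgtfy link with a percent-encoded query string."""
    return base + "?" + urlencode({"q": query, "v": ""})

print(lmddgtfy_url("let me google this for you"))
# http://lmddgtfy.com/?q=let+me+google+this+for+you&v=
print(lmddgtfy_url("rašić"))  # non-ASCII becomes UTF-8 percent escapes
# http://lmddgtfy.com/?q=ra%C5%A1i%C4%87&v=
```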
{ "pile_set_name": "HackerNews" }
Oracle linking to it-ebooks.info - ishanr In Oracle&#x27;s New to Java Programming page (http:&#x2F;&#x2F;www.oracle.com&#x2F;technetwork&#x2F;topics&#x2F;newtojava&#x2F;downloads&#x2F;index.html), the recommended books section (bottom of the page) contain a link to Head First Java on it-ebooks: http:&#x2F;&#x2F;it-ebooks.info&#x2F;book&#x2F;255&#x2F;<p>Are those books legal? ====== sixQuarks I don't know what to think of that site. How has it not been shut down? personally, I've used that site to browse through books before buying them online. In my case, it is beneficial for the publisher to have their books on that site, but I'd imagine most people don't use it that way. ------ mindcrash No. Support Kathy and buy it instead. [http://shop.oreilly.com/product/9780596009205.do](http://shop.oreilly.com/product/9780596009205.do)
{ "pile_set_name": "HackerNews" }
Ask HN: What are you building? - northfoxz2018 What are you guys building? What are devs excited about nowadays? ====== rijoja Onscreen keyboard for writing math. § - toggle visibility 1,2,3,4, q,w,e,r, a,s,d,f, z,x,c,v - Select block recursively So for example: 1,z => π q,4 => \int_{a}^{b} e,1 => newline Currently it's basically a latex editor, even though there will be mathml which will feature inline editing and some other niceties. Based on MathJax. Would like to use this opportunity to complain about the chrome gang dropping MathML support, which would make everything so much easier. Not mobile friendly, so if you're on the phone check out the youtube video: [https://htmlpreview.github.io/?https://github.com/richard-ja...](https://htmlpreview.github.io/?https://github.com/richard-jansson/roosevelt/blob/master/index.html) [https://www.youtube.com/watch?v=l1v4L1rxsaQ](https://www.youtube.com/watch?v=l1v4L1rxsaQ) ------ bsvalley A house. I'm literally building a new house (high-level supervision, I'm not part of the actual labor of course). It is far more exciting than any software I've ever built in the past. Especially when you've been coding since forever. I know this message is a little out of context here but I'd highly encourage any devs to build something completely unrelated to software at least once in your life. It brings new perspective and expands your reach. While building that house I come up with at least one new software idea every week. Best way to understand real life problems that can be solved with automation and software. That was my 2 cents :/ ~~~ billconan maybe blog about this project, the cost, the procedure and documents needed from the government? what kind of land is suitable for building? ~~~ bsvalley I thought about it, it's a great idea and could help a lot of other people out there. ------ ecesena Solo, an open source security key: [https://github.com/soloKeysSec/solo/](https://github.com/soloKeysSec/solo/) In terms of dev/sw, the most exciting things are 1) adding support for openpgp/ssh in addition to FIDO, 2) rewriting the firmware in rust, 3) porting the firmware to other architectures. ------ pplonski86 I'm working on an automatic machine learning platform. It is already working as a SaaS ([https://mljar.com](https://mljar.com)). I would like to go open source with it and add the ability to read data from different sources. ------ billconan I'm building a blog platform that allows you to write executable programs, like a mix of medium and jupyter.
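A rough illustration of the chord-to-LaTeX idea in the first comment above: a two-key chord is looked up in a table and the corresponding LaTeX snippet is emitted. The table below only mirrors the examples actually given in the comment ('1,z' for π, 'q,4' for an integral, 'e,1' for a newline); everything else about the real project's layout is unknown here and purely assumed.

```python
# Hypothetical chord table mirroring the examples given in the comment above.
CHORDS = {
    ("1", "z"): r"\pi",
    ("q", "4"): r"\int_{a}^{b}",
    ("e", "1"): "\n",  # newline
}

def emit(block_key: str, item_key: str) -> str:
    """Return the LaTeX snippet for a two-key chord, or the keys themselves."""
    return CHORDS.get((block_key, item_key), block_key + item_key)

if __name__ == "__main__":
    print(emit("q", "4") + r" f(x)\,dx")  # -> \int_{a}^{b} f(x)\,dx
    print(repr(emit("e", "1")))           # -> '\n'
```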
{ "pile_set_name": "HackerNews" }
$50K bounty for practical robocall-killing technology. - jamesbritt http://robocall.challenge.gov/ ====== astangl I dispute their contention that an "ideal" solution would not block political or charity robocalls. Ideally we close these loopholes in the No-Call List, so these all are illegal. It seems to me a lot of the problem results from allowing the caller ID information to be spoofed. Any serious attempt to fix this problem would seem to involve tracking down real numbers, defeating the spoofing. Most satisfying (and effective!) thing I have ever done to eliminate a repeated scam call (to lower credit card interest rates, never admitting who they're with, except some vague reference to imply they're associated with the credit card companies) is to string the guy along, when I am "going to get my credit card", setting the phone down and going about my other business, until it's clear he finally hung up. Then he called back, and I said "when I got back to the phone you weren't there!", and repeated the game a bunch of times over the day, with the guy getting more & more exasperated. Funnily enough, I never get those calls anymore... ~~~ rhizome Caller ID spoofing is a true misfeature. The ability should be removed, or consumers should have out-of-band access to the real number in order to be able to _at least_ blacklist it. <http://www.catb.org/jargon/html/M/misfeature.html> ~~~ dangrossman Phone numbers are to the telephone network like IP addresses are to the internet. Caller ID is to phone numbers as DNS is to IPs. I don't think getting rid of caller ID would really help anything, and you can't fix the caller ID to specific numbers, as phone numbers are as transient as IPs -- they can terminate to an IP phone in Pakistan one day, and to a Twilio gateway used by some other company's apps the next day. Blacklisting the number can be both ineffective and harmful. Letting the caller set the caller ID is the only way someone calling you from Comcast about your bill can have Comcast show up on the ID. Most large companies like that don't own the numbers they call from, or the call centers -- they outsource both inbound and outbound phone support and sales. Typically to multiple phone center companies at the same time, who all have to call "as" Comcast, and ramp up or scale down with more or less phone numbers as needed. They'll use autodialers too, with real people rather than recordings, to minimize the delay between one outbound call ending and there being another person for that now-available rep to talk to. ~~~ mleonhard The ability to set caller-id is important. I was surprised by how easy it is. With the free X-Lite SIP softphone and a flowroute.com account, I can set an arbitrary caller-id number and place a call to anyone. This is very useful, as it lets me place cheap VoIP calls "from" my mobile phone number. It could also be used to get into voicemail and other systems that trust caller-id. ~~~ Shivetya Businesses can detect your billing number, the ANI. Having been involved in a system where employees logged in and out of work via the telephone it was very important that we could prove where they were. We had many people attempt to spoof it which never worked. So the information is there. However it is worth a lot of money to the phone company and they sometimes resell that information to others who repackage it. They also in turn don't always give you this information even when you pay for caller id which is similar but not the same. 
Originators can block paid caller id, I have never seen a case where you can block ANI subs ~~~ rhizome I was under the impression that ANI was forced on WATS lines, but that it didn't necessarily exist for residential, shall I shift my understanding? I think this could actually be a good lever, putting the problem purely into the policy domain. ------ btilly All that I want is this. Right after I get a call I don't want, have another number that I can call. If I call that number, I'm telling the government, "My last call was an unwanted robocall." Trace that call to its source (as best as that can be determined). If that source has generated a lot of calls recently, and is not on a white list, it is blocked. Any attempts from that number to make a phone call go to a recorded message saying that it is blocked, with instructions for how to get unblocked. Any phone number that gets blocked several times in a week is permanently blocked. ~~~ gshubert17 After I get an unwanted robocall, I want to dial "*RC" for "RoboCall". I get a credit on my next phone bill for 25 cents. The phone company charges the originator 50 cents. Now the phone company has an incentive to track all robocalls. And I have a little compensation for my time. ~~~ greenyoda If the phone company is going to charge the originator, they'd have to verify that the call you reported was actually an illegal RoboCall. Some dishonest people might report people whom they know but don't want to hear from as RoboCalls. Others might erroneously report organizations that are legally allowed to call them, like companies with which they had a business relationship or political candidates. Verifying that each claim was valid would probably cost the phone company much more than 50 cents. ~~~ cabalamat > If the phone company is going to charge the originator, they'd have to > verify that the call you reported was actually an illegal RoboCall If I'm getting unwanted calls, I don't really care whether the government thinks they are legal, I just want them stopped. How about this solution: if the caller ID is on a whitelist, it goes straight through. If not, the caller gest asked a question (which should filter out robots). If there are determined human unwanted callers, a second line of defence would be to ask them to key in a 4 digit code (and I'd only give the solution to people who wanted to call me). ~~~ Firehed Google voice lets you do something along those lines. I personally found it incredibly tedious and quickly turned it off. I think the grandparents idea of *RC is spot on. Verification can be done through volume; i.e. a one-off may do nothing but repeated reports indicate something is up. Just like reporting spam in email. Along those lines, some sort of charge-to-call system may work. Like calling collect in reverse, but the receiving party can decide to not charge the fee if its someone they know (or flip it, hitting # within 30 seconds of an inbound call will capture the fee and disconnect) ------ bstpierre I don't just want robocalls killed. I don't want calls from politicians, charities, pollsters or any other exempt organizations either. I don't want calls from the debt collectors trying to reach the person who used to have my number. For me, and people like me, a telco-based solution won't work because they have to adhere to the regulations that have these giant exemptions. In volume, you could make a device for landlines for probably <$50. Connect the device to the primary incoming line. Connect phone(s) to the device. 
User dials #4321 (some non-secret activation code, printed in instructions and sticker on device) from house phone. Follows prompts to record (a) his name, (b) names of other individuals at the house, (c) one or more bogus names. May also follow prompt to enable a bypass code. May also follow prompts to add CID numbers to whitelist (see below; this is for DESIRABLE robocalls, e.g. from the school district in case of emergency or school cancellation). User hangs up; device is programmed. Incoming call, 2 rings, CID/CNAM captured (FWIW), house phones do not ring. Device answers: "Calls may be recorded. Press 1 for Bogus John, 2 for Real Alice, 3 for Real Bob, 4 for Bogus Carol". Caller presses 1/4, "Please leave a message after the tone", tone plays, incoming voice goes to /dev/null for 10s, call is dropped. Caller presses 2/3, house phones ring, stored CID/CNAM is provided. If the incoming caller uses the bypass code, the call goes straight through. Bonus: distinctive ring for Alice vs Bob. Bonus: after an annoying human caller "leaks" through, user can hang up, pick back up, and dial #5432 [some other non-secret access code]. Incoming CID put on block list. Calls from blocked numbers are unanswered (will go to VM if user subscribes to VM from telco). Bonus: similar to blacklist, user can dial #2222 (for "whitelist to Alice") or #3333 ("whitelist to Bob") to whitelist a just-received call. Whitelisted calls immediately go through. DR means that I don't have to check CID to know it's my MIL calling for wife. Numbers can be whitelisted during programming (see above) because desirable robocalls (e.g. kids' school) will otherwise never get through and can't get #2222 treatment. Bonus: pressing ## (or some other code) during a call starts a recording. Saved as <cid>-<date/time>.wav to removable flash or USB on the device. Bonus: insert flash/USB, dial #9876 from house phone. Device upgrades itself from the flash. ------ crb3 We use an answering machine to screen calls. I put SIT tones (the tones usually followed by a network message such as "we're sorry but the call cannot be completed as dialed" -- google SIT.WAV) at the beginning of our outgoing message. We don't pick up until the message ends. We get a lot of 'ghosts', calls dropped before the message is done -- those were automated calls. We get callers which are partway through delivering a canned spiel at that point because their delivery system triggered on the tones as if 'your-turn' beep -- those were automated calls intended to be left as recorded messages. It's not exactly what the contest is about, but it does provide some personal relief on a landline. ------ iloveyouocean So AT&T has the technology to bill each subscriber down to the bit of data used, but they can't detect when an entity is making 10s of thousands of calls . . . . ? ~~~ DanBC 10s of thousands of calls isn't necessarily illegal, nor unwanted. Phone companies will probably claim to be just a pipe for data, and that they cannot interfere with that data, and that regulation is for other people. They'll stop you if you're damaging their network. Cynically I'd say that a company making tens of thousands of calls is worth more to the telco than me, making very few calls. (I doubt that's actually the reason.) ------ tezza Help me out here as a UK person: What sort of Robocalls are there ? Here in the UK there are variants. 
1) Pause to hear you pick up, then they connect you to a human salesperson 2) Full blown automated call 3) Human on the other end, but how did they get your number? I have a solution, but can't enter as I'm outside the US :( ~~~ tinco Still you choose to keep your solution to yourself? :) ~~~ tezza I figured I'll wait until the competition is over. When I see the announcement of the winner, it may be better than my implementation and I will congratulate them. If not, then I'll post mine and see. ------ ww520 There should be a Kickstarter project for this. Lots of people would pitch in. I'm sick of these daily robocalls. ------ bediger4000 Please robocall-kill "Ann from Account Services". I must get an average of 4 calls a week from that scratchy-voiced hag. ------ BryanB55 I feel like robot calls used to be much more common. I think I only get maybe 2-3 a year now. I think most recently they were from DirecTV and GNC. I tend to give out my Google Voice phone number to businesses and non-personal contacts so I can block them if they sell my number or start robo calling it. Although I've only had to block maybe 1 or 2 numbers on google voice in the last few years. I wish the iphone had a way to create a black list and block callers. I'm not sure why they've never implemented this. I know it can be done by jailbreaking but it seems like it should be part of the os. ~~~ dredmorbius On Android: Mr Number and other call screening apps exist. I use this, though the app has been getting a lot more snoopy/annoying of late. ------ ww520 The penalty should not just be on the illegal robocalling telemarketers, but should also be on the businesses contracting the telemarketers. Cut the funding off from the sources. ------ swampthing They should let people pledge donations to increase the bounty. ------ dredmorbius An endpoint-based fee-collection system. "To complete this call, a payment of $NOMINAL_AMOUNT is required. This may be refunded at the discretion of the caller." In actuality, you'd whitelist numbers not required to make payment, and/or clear other numbers at the end of your billing cycle if desired. Payment options would be provided. The call would not ring through until authorized or paid. This would increase the costs of phone spam markedly. Survey organizations would have a bit of a problem. Oh well. ~~~ dredmorbius Erm: discretion of the callee. ------ joebeeson Would it be possible to use the same technology as SSL with phones? Have the telco, who presumably knows the endpoint of the call, either apply an SSL certificate (or equivalent) to the call so that the receiver can confirm its validity? Or, alternatively, much like how websites currently operate, the person making the call would have to attach their certificate which the receiver could check against CA(s). This would be nice because if certain CAs had rules where they wouldn't sign up certain numbers (telemarketers, politicians), you simply wouldn't use that CA to validate calls. ------ elastigirl I definitely know where you're coming from. I get calls like that a lot. With all the consumer complaints these nuisance calls have created, I don't understand WHY these companies still operate! Well, yeah, there's that thing they call the freedom to "advertise" but what the ____?? They're already trespassing on our freedom to privacy! I don't know anybody who'd disagree, but if these companies continue this unethical business practice, I would surely be happy to see them shut down!!!!
------ arohner Can anyone explain why this is hard, technically? ~~~ anonymous1019 Sure. If you plan on implementing this as some sort of end-user device that would be hooked up to a phone handset or a software "app" you would install on a smart phone, then all you've got to work with is caller ID. Caller ID can be blocked by the caller (e.g., by dialing *67 first) and spoofed, including the purported outgoing number. In fact, VoIP systems like Skype have made spoofing caller ID and now even ANI, a toll network analog of caller ID, trivial. So even if you keep some sort of constantly updated database of numbers used by robocallers, you are still relying on the robocallers 1) not blocking outgoing caller ID and 2) not spoofing the numbers of legitimate users, resulting in them getting blacklisted. ~~~ hollerith >VoIP systems like Skype have made spoofing caller ID and now even ANI . . . trivial. Is there any way for hardware connected to an ordinary phone line to distinguish between an incoming call from a VoIP system versus an incoming call from an ordinary phone line? ------ DanBC You pass a law saying that all robocalls must comply with ROBOCALL_STANDARD. You include a regulator in that law. The regulator is responsible for updating the standard as needed, for taking reports from people who receive a robocall, and for imposing sanctions on companies who i) do the robocalling and ii) ask other companies to do the robocalling for them. Sanctions include fines for the companies; potential prison time for the directors of those companies (obviously this would need to go through court) and 'blocking of numbers by telecom companies' (not sure how realistic that is). The regulator has an "opt out" list. Everyone with a phone who doesn't want to receive calls registers on that list. New numbers are added by default. (They can maintain an "opt in" list, so people who want to receive junk calls can.) Then the regulator sets up a website. This site contains a simple report form; the opt-in and opt-out lists; links to the current standard; links to the law; links to previous adjudications. If CompanyX uses a robocall company in a different country you can still go after CompanyX. Not sure what you'd do if both CompanyX and the robocall company are overseas with no US presence. ------ mxfh Hey, Shazam, there's an almost "free" prize waiting for you. Just make an app that hooks into your calls on demand and records & forwards suspected robocalls to match them against validated malicious ones. Someone else might figure out the telco backtrace part with timestamps and so on. ------ elastigirl And guess what? All you trespassers out there, be aware that I am reporting your phone numbers to Callercenter.com every time you call. Just so you know, in case you start wondering why your calls seem to be blocked. You want publicity by harassing me? I give you just that. Negative publicity! ------ sahaj I believe Google Voice has already solved this problem. Just as with email, click report spam and the whole user base benefits. I suppose Google could share that phone number list with other providers. ------ forgotAgain This just reeks of the FTC abdicating its responsibilities to enforce the existing laws. Show me the budget the FTC spends on prosecuting violators of the law and maybe I'll change my mind. ------ stickyku What I think would be cool is to be able to forward the call to a smart enough bot that wastes like 5 minutes of their time every call.
This will surely kill their spammy business model ~~~ bstpierre Robocalls aren't necessarily interactive. You can't waste their time. ------ icewater Why does a company need at least 10 employees to compete in the Federal Trade Commission Technology Achievement Award? ~~~ hollerith You're reading it wrong: if the winning team has fewer than 10 employees, the team gets $50,000. 10 or more, the team gets no cash, just bragging rights. ------ Andaith Just ban robo-calls. Make it illegal. ~~~ civilian They're already illegal. This is an enforcement problem. ~~~ xulescu And why isn't this a problem in Europe? E.g. Germany? I'm not getting any unsolicited phone calls anymore - robot or not (this used to be a problem 15 years ago, but not isn't anymore). ~~~ Sander_Marechal There are national do-not-call registries. Companies are required to check those before calling. If they don't then they get fined, which usually starts at around 25,000 euro. This is for The Netherlands, but I believe it's similar throughout Europe. ~~~ kbuck The U.S. also has a do-not-call registry that you get fined for violating (donotcall.gov). The problem is that they don't know who to fine, because it's really easy to spoof caller ID and the businesses aren't dumb enough to identify themselves. ~~~ tripzilch How can those businesses try to sell you something without identifying themselves? Also, there must be some weird political or legal reason why they can't (or won't) get the identity from the phone companies. It can't be a technical reason, because they're already capable of tapping everything, and the phone companies are already in full cooperation with that, even developing and providing specific technical interfaces for law enforcement. And, maybe someone can tell me if this is actually possible (as opposed to a "CSI" type exaggeration): In many police series you see them requesting full cell-phone logs of all incoming and outgoing calls to a certain phone in the past few weeks or so. In case that's realistic, I certainly hope that it can't be foiled by something as simple as spoofing the caller ID? Because, you know, that'd make it really easy to frame someone. ~~~ kbuck I haven't seen or heard one of these calls actually play out, but they might not even be trying to sell something - they could just be scammers out to steal credit card information. (I get one all the time that's a prerecorded message from "Rachel from Cardholder Services") I'm sure telephone companies could _technically_ stop them if they really wanted to, but telephone companies make a profit from these people. What incentive do they have to stop them? Same with text message spam. If they tracked it (which they surely collect enough money per message to do), they could easily notice one number sending a hundred spam texts and stop it before it sends tens of thousands of them. They don't, though, because each of these messages means anywhere from another 5 to 25 cents in their pocket. Most people don't even contest getting charged for receiving spam texts, because who's going to argue over a quarter? The biggest issue seems to be that all of this data is ephemeral - even if they had a "more powerful" caller ID (which I believe 911 dispatchers do), you would have to catch them in the act and personally have access to check where the other end of the call is terminating, and you'd have to do it before they hung up. For IP calls, I think it's unlikely they would even be able to fully trace it. 
~~~ tripzilch > What incentive do they have to stop them? That it's illegal? (the ones that are) ~~~ kbuck It's not illegal for the telephone company to not try stopping them. ------ ruby_on_rails "Contestant further represents, warrants, and agrees that any use of the Submission by the Sponsor, Administrator, and/or Judges (or any of their respective partners, subsidiaries, and affiliates) as authorized by these Official Rules, shall not: a. infringe upon, misappropriate or otherwise violate any intellectual property right or proprietary right including, without limitation, any statutory or common law trademark, copyright or patent, nor any privacy rights, nor any other rights of any person or entity; b. constitute or result in any misappropriation or other violation of any person’s publicity rights or right of privacy." (<http://robocall.challenge.gov/rules>) I find this clause rather disturbing. I think I know what they meant to say, but they instead wrote something so overly general that, if enforced, it effectively makes this competition un-winnable. Maybe someone else can weigh in on this.
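Several of the proposals in this thread (the whitelist-plus-challenge idea, the programmable screening box, the report-the-last-caller shortcut) share the same core decision logic, independent of how calls actually reach the device. A minimal sketch of that logic only; the rules, codes and numbers below are made up for illustration, no real telephony API is involved, and, as the thread points out, anything keyed on caller ID is only as trustworthy as caller ID itself.

```python
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class Screener:
    whitelist: Set[str] = field(default_factory=set)  # always ring through
    blacklist: Set[str] = field(default_factory=set)  # never ring through
    bypass_code: str = "1234"                          # shared with trusted callers

    def decide(self, caller_id: Optional[str], entered_code: Optional[str] = None) -> str:
        """Return 'ring', 'reject', or 'challenge' for an incoming call."""
        if caller_id in self.whitelist:
            return "ring"
        if caller_id is None or caller_id in self.blacklist:
            return "reject"          # withheld ID or previously reported number
        if entered_code == self.bypass_code:
            return "ring"
        return "challenge"           # e.g. "press 2 for Alice" / send to voicemail

    def report_unwanted(self, caller_id: str) -> None:
        """The 'dial a code right after an unwanted call' idea: blacklist it."""
        self.blacklist.add(caller_id)
        self.whitelist.discard(caller_id)

if __name__ == "__main__":
    s = Screener(whitelist={"+15551230001"})
    print(s.decide("+15551230001"))  # ring
    print(s.decide("+15559990000"))  # challenge
    s.report_unwanted("+15559990000")
    print(s.decide("+15559990000"))  # reject
```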
{ "pile_set_name": "HackerNews" }
Drag and Drop for AngularJS - codef0rmer http://codef0rmer.github.com/angular-dragdrop/#/ ====== tanepiper It's good, but again adds another dependency with jQueryUI - I wish people would stop adding heavyweight dependancies to this stuff.
{ "pile_set_name": "HackerNews" }
NIH investigating if U.S. scientists are sharing ideas with foreign governments - SiempreViernes http://www.sciencemag.org/news/2018/08/nih-investigating-whether-us-scientists-are-sharing-ideas-foreign-governments ====== java-man scientists do science?? outrageous!
{ "pile_set_name": "HackerNews" }
Everyone’s AirPods Will Die - Ice_cream_suit https://www.washingtonpost.com/technology/2019/10/08/everyones-airpods-will-die-weve-got-trick-replacing-them/ ====== mft_ Interesting example to consider. I can usually mostly understand Apple (and others) making batteries unremovable, as it allows them to pack the maximum battery into the minimum space (and sometimes a awkwardly-shaped space which wouldn't support removability anyway). However, in the AirPods example, the 'stalk' doesn't appear to hold anything except a battery, charging contacts, and an antenna for bluetooth reception. This being the case, it should have been possible to design the 'stalk' as a self-contained unit which would then have been easy to swap out; it would have needed maybe a millimetre extra to incorporate contacts to transfer power and the antenna signal.
{ "pile_set_name": "HackerNews" }
Google shifted $23 bln to tax haven Bermuda in 2017 - bitcharmer https://www.reuters.com/article/google-taxes-netherlands/google-shifted-23-bln-to-tax-haven-bermuda-in-2017-filing-idUSL8N1Z3403 ====== tareqak Submitted yesterday ([https://news.ycombinator.com/item?id=18828050](https://news.ycombinator.com/item?id=18828050)), and the day before ([https://news.ycombinator.com/item?id=18816984](https://news.ycombinator.com/item?id=18816984)). ------ GreeniFi A lot of people - myself included - feel pretty uncomfortable when we read these stories. But we have to remember that (1) directors are under a fiduciary duty to maximize return to shareholders, (2) use of tax havens is legal. The 2 facts together means directors can be sued by shareholders if they _don’t_ use tax havens. So the route to change, would be to legally end use of tax havens - through the ballot box. I would support that, it doesn’t seem fair. But I’d also point out that one line of thinking is that when the EU started getting real about stopping corporate tax havens, the oligarch class got together and fermented Brexit. Damned if you do, damned if you don’t. We’ve got some big problems! ~~~ sokoloff Directors do not have a fiduciary responsibility to maximize returns to shareholders. They do have a fiduciary responsibility to act in the best interests of the company (and therefore the shareholders), but not a specific duty to maximize after-tax profits. They can be sued for anything, just like anyone, but there is a large body of corporate case law supporting the use of business judgment to achieve goals other than short-term maximization and typically have E&O coverage as well to provide some coverage for nuisance lawsuits and cases other than personal misconduct or conflicts of interest. So, sue away, but you almost surely won’t win if the board pursued a lawful strategy other than the one you as a shareholder prefered. ~~~ GreeniFi Can you differentiate acting in the best interest of the company to maximaing return to shareholders, when that is the very raison d'être of a company? ~~~ sokoloff [In most states] Companies may be formed for any lawful purpose; maximizing returns to shareholders does not need to be the primary purpose (nor necessarily a purpose at all, the extreme example of which are charities). IOW, it _could be_ the raison d'être of a particular company, but is not necessarily so. Read the Hobby Lobby case as a start. Quote from Justice Alito in that opinion: While it is certainly true that a central objective of for-profit corporations is to make money, modern corporate law does not require for-profit corporations to pursue profit at the expense of everything else, and many do not do so. . . If for-profit corporations may pursue such worthy objectives, there is no apparent reason why they may not further religious objectives as well. ~~~ GreeniFi Tell that to the shareholders of Google! We’re not talking about a company set up to manage a water feature in a town square here!
{ "pile_set_name": "HackerNews" }
Show HN: Applications are open for TinySeed's 2020 batch - einarvollset TinySeed is a remote, year-long accelerator for independent software businesses.<p>We just opened up our applications for our second batch: https:&#x2F;&#x2F;tinyseed.com<p>We focus on SaaS and “non-unicorns” (companies that don’t aspire to grow at all costs to reach a $1B valuation). In addition to investment, we are a tight-knit community providing advice, support, a deep network of founders, and valuable connections to world-class mentors.<p>Applications will be open for the month of November and will close at midnight, November 29th. We’ll be reviewing applications through December and making decisions in Winter 2020, with the next batch starting Spring 2020.<p>Let us know if you have any questions; either here or email einar@tinyseed.com. Hope to see your application! ====== limedaring Hey, I’m the program manager at TinySeed, I’ll be hanging here through the day to answer questions as well. Ask us anything!
{ "pile_set_name": "HackerNews" }
Every October, Write a Lisp Interpreter In C - blackhole https://twitter.com/#!/haikoschol/status/128741308985643008 ====== demallien I would have thought that writing a C compiler in Lisp would be more interesting... Every time I read a book on compiler design in C, the book passes an inordinate amount of time setting up the infrastructure for creating an AST, yet surely this would be a very natural thing in Lisp? ------ geekytenny Do you think they would want us to spend our 'Octobers' reinventing things they built or for us to take their works in computing to greater heights?
{ "pile_set_name": "HackerNews" }
Finding Ideas For Your Next Project - br0ke http://nathanbarry.com/finding-ideas-project/ ====== amix I think a better approach is to solve your own problems. This way you don't have to waste time cold calling people and hoping they are sincere with you with their feedback. You are the user and the creator... It's a very powerful position to be in. I would recommend reading Paul Graham's excellent post on "startup ideas" (which could easily translate to "project ideas"): "The way to get startup ideas is not to try to think of startup ideas. It's to look for problems, preferably problems you have yourself." <http://paulgraham.com/startupideas.html> ~~~ yitchelle Solving your problem is a good way to find ideas. You know the problem well and the context the problem is in. However, if your work life, social life or life in general is not too varied, finding a problem to solve can be a problem in itself. Not to be disrespectful - and some people live quite contented lives like these - but with a simple, straightforward, suburban lifestyle it can be hard to find painful problems. So to increase your chances of finding a problem to solve, getting out there to experience different aspects of life and talking to other folks are two sure ways of finding problems to solve. The side effect is that you will be a more rounded character and will, most probably, enjoy life much more. Basically, this is what Nathan is saying. ~~~ amix The problem is that people could be insincere, non-rational or ignorant. I doubt they will know what their problems are, what they are willing to pay or what the solution could be. There's a great quote by Henry Ford: "If I had asked people what they wanted, they would have said faster horses." For Nathan his pain point could be something related to writing and publishing books. I think this would be a much better problem domain since he is a user and I am sure he has some problems regarding this process that could be improved. ~~~ yitchelle Great quote! It stresses the point that a great solution to a problem could come from a different domain. ------ lukethomas "I'm doing some research into software used by ________. Just curious, is there any software you've been looking for?..." I think you would be surprised at how many people get confused by "software." I've received answers like "Safari" to this same exact question. Just a thought, but instead of focusing on providing another piece of software (and phrasing it that way), look to provide business value. I know that's your end result, but I would present it that way from the first interaction. Also, when you've picked the market/client base you want, I highly recommend seeing if you can visit the business for a day or two (if you can find time.) There's nothing better than immersing yourself...it will lead to better insight. ~~~ ErrantX Reading through, Nathan wasn't expecting any realistic answers to this question. It was just an ice breaker to establish contact for the follow-up call. As to your last paragraph; this is solid advice. I've picked up a few software ideas by viewing businesses "in action" and observing the pain points. ------ sharkweek I stole Drew Houston's idea for ideas by carrying a little notebook around to write down things that annoy me; it has led to the beginning of several little projects. ~~~ alok-g How do you handle cases when annoyed by lousy implementations of features of existing products? In other words, if the idea resulting from the frustration looks like "feature, not a product". 
Doing it right now involves not just doing that particular aspect right but also the remaining 95% of the product. ------ nicholassmith I'd say asking them as well, "What do you think you'd pay for it?" could be useful to establish if it's genuinely worth going after it, but it's a really awkward question to fit in and it's a _really_ awkward question to answer as well. ~~~ nathanbarry I like to judge the awkwardness of their response. It helps you learn how serious they are. If they really struggle to find a price they probably don't think their problem is that painful. You can also focus on how much a problem costs them in wasted time or lost revenue. ~~~ nicholassmith That's a pretty awesome kicker to the question, I'd never thought of it in quite those terms. ------ rsobers Another option: do what someone else is already doing. For instance, bug tracking and project management apps have proven demand. Sometimes you can steal enough market share by doing things better or by making subtle variations. ------ gfodor There is more than one way to skin a cat. Customer development based approaches like this are effective at birthing certain types of products. But lets not fall into the trap that there is a one-size-fits-all approach to innovation. Necessity is always the mother of invention, but often people can't conceive of what their life is missing or what parts of life could be more enjoyable or less painful. ------ eranation HN traffic took it down? all I get is "Error establishing a database connection" ~~~ nathanbarry Sorry, should be back now. ~~~ ekurutepe It seems like it went down again. I'm getting the DB error as well. ------ ThomPete Ideas, execution, problems, audience.... They are all part of the same thing. You can choose to be the gold digger or the merchant. It doesn't matter. One is filled with risk but great fortune. The other is more secure but the prospects of making it big are much less. ~~~ bjelkeman-again Yeah. I always think it is funny when people are talking about ideas as if they are really worth much. I have new business ideas regularly. Ideas are cheap. Execution and access to markets, is where the idea hits the road. And the likelihood that the idea and execution will make you insanely rich is so small that I have removed it from my equations these days. Just keep executing though. ------ wacheena I dig this overall process (and transparency), but I'm bummed that ultimately it came down to "I have an idea." It's an idea that has some customer validation, but the value of this series of blog posts for folks was to prove a process for creating a web application that didn't start with "an idea." ~~~ nathanbarry I didn't start with the idea. It came out of a conversation with a friend last week. My goal for the post series is to create a web app and be transparent about the process. So finding an idea and validating it matches that goal perfectly. ~~~ muellerwolfram true, but the idea coming out of the idea finding process that you describe in your post would have been slightly more awesome. ...but only slightly, and i'm still eager to read follow-ups on this experiment! ------ ebertx There are a couple of sites that help for finding or at least verifying viable markets: 1) U.S. Bureau of Labor Statistics (www.bls.gov) 2) and FreeLunch.com - <http://www.economy.com/freelunch/default.asp> ------ wasd I really like articles like this (on how to "find" ideas) and have seen a few on HN. Does anyone have more articles like this book marked? I've seen PG's essay. 
~~~ amitklein Here are a few: \- <http://paulgraham.com/startupideas.html> \- <http://cdixon.org/2010/03/14/developing-new-startup-ideas/> \- <http://www.wired.com/magazine/2011/08/st_qareis/> These are tangentially related: \- [http://blog.eladgil.com/2012/02/entrepreneurial- turbulence.h...](http://blog.eladgil.com/2012/02/entrepreneurial- turbulence.html) \- <http://500hats.com/niche-to-win> I have more general "starting a startup" links here: <http://bitly.com/bundles/o_7ki5mkvgf8/1>
{ "pile_set_name": "HackerNews" }
The Problem With 'Above Average Programmers' - gacba http://www.lessonsoffailure.com/developers/problem-above-average-programmers/ ====== hga " _Being an expert means you know it all about your subject._ Unfortunately, it also means you’re going to get lazy. _It means you’re going to eventually rest on your laurels and sit around thinking you’re better than everyone else instead of actually working to get there. Your expertise will become a liability because you stop trying to learn. Maybe not today, but soon enough._ " (I look at my bookshelves, recent Amazon.com purchasing history and piles of books I'm in the process of reading or have queued up.) Uh, right. " _So what’s the number one thing you can do to be the best programmer out there?_ Start by considering yourself below average." But that's stupid if it's not true. "Know thyself" is one of the cardinal rules in this game. If I thought myself below average in this field, I'd spend my time in another where I _know_ I'm above average (e.g. chemistry) and I wouldn't try to tackle some hard problems I'm looking at, including a few I don't think it's likely I'll be able to contribute to. ------ gte910h The author has confused experts with people who don't continually learn. ------ pixelbath An interesting article, but it seems like the author is drawing the incorrect conclusions from the Dunning-Kruger effect (which I found to be a MUCH more fascinating and informative read).
{ "pile_set_name": "HackerNews" }
VS2008 Stepping into framework source code - tarunkotia http://referencesource.microsoft.com/serversetup.aspx Configuring Microsoft Reference Source Server ====== tarunkotia Since January 2008, Microsoft has enabled a public symbol server containing source code for most of the .NET Framework libraries. This means you can step into the source code for System.Web.dll and various other core assemblies, which is extremely useful when you have an obscure problem and not even Google can help. This contains more information than the disassembly you might get from Reflector: you get the original source code, with comments.
{ "pile_set_name": "HackerNews" }
Tesla First Quarter 2018 Update [pdf] - kgwgk http://files.shareholder.com/downloads/ABEA-4CW8X0/6239644551x0x979026/44C49236-1FC2-4FD9-80B1-495ED74E4194/TSLA_Update_Letter_2018-1Q.pdf ====== magicbuzz Model 3 has a larger market share in the US now than the BMW 3 series? That seems to be what the graph for April says... ~~~ secabeen The 3 series is pretty late in its product cycle: Sixth generation (F30/F31/F34; 2011–present) I would expect a surge in 3-series sales when they next release a new generation.
{ "pile_set_name": "HackerNews" }
Apple Exploring New Glass Panel MacBook Keyboards - kmano8 https://www.macrumors.com/2019/02/04/apple-exploring-new-glass-panel-keyboards/ ====== hsbaut76 Sorry Apple, but I won't buy another MacBook Pro in the foreseeable future. I won't accept your dogma anymore. I encourage other developers to use Linux. Many distros have come a long way; personally, I think Manjaro is great.
{ "pile_set_name": "HackerNews" }
Origins and migration of Soccer's elite – data visualization and app - antonmc https://developer.ibm.com/bluemix/2016/06/03/origins-of-soccer-superstars/ ====== antonmc A blog post and link to an interactive data science app that plots the origins of Copa America players and the paths that led most to Europe.
{ "pile_set_name": "HackerNews" }
Help with Yahoo Pipes Error - jdavid http://jdavid.net/?p=93 I am getting an error on the response while using OSDE (Open Social Developer Environment): HTTP ERROR 500 Problem accessing /gadgets/makeRequest. Reason: host parameter is null. I think Yahoo Pipes is rejecting the request because it's coming from a local server. Can anyone confirm this? I did not see anything at pipes.yahoo.com that would confirm or deny this, either in their documentation or their forums. Has anyone used Yahoo Pipes in an OpenSocial/Facebook context? ====== powdahound Why not post on stackoverflow.com?
{ "pile_set_name": "HackerNews" }
Ask HN: Why don't you hire Indian freelance developers? - googlycooly ====== anandnair One thing I've felt is that there are a lot of freelance developers in India, both good and bad ones. Most of them say "Yes" to everything but only a few of them have the capability to do what we need. When I posted a simple task on Upwork recently (related to AWS server configuration), I got 100s of requests, mostly from Indians. Now the problems are 1) I'm looking for a freelancer, not an agency. Most of them are pitching on behalf of their own agency, and we will never know who is actually doing the work. (They might even outsource it) [This is the worst part] 2) Some of them won't even read the work description properly, and so we need to spend a lot of time filtering the requests. I got confused and skeptical about their capabilities because of all this, and finally hired a freelancer from Europe. But I've seen amazingly talented freelancers from India as well. It's just that filtering through 100s of requests is painful. ~~~ smartis2812 We had the same experience at my last company. And the final result was very disappointing. Also we found exact parts of the received code on StackOverflow. ------ PaulHoule From the U.S., timezone is a big concern. When I meet with people in India it is always around 8am or 8pm, it is kinda fun the first few times but it gets old fast. There are many people in CONUS, Canada and South America who are easy to work with in terms of timezone. Particularly there are many people who can do data science and other fancy work in Argentina, I have even had good experiences with freshers from Brazil and Colombia. ~~~ googlycooly +1 for the timezone issue. ------ manyxcxi Primarily, it’s because I don’t know many. Most attempts at networking come from body shop style companies in very spammy ways. Second to that is the time zone issue. I’m on the US West coast, I’m an early riser, but it’s still rough. ------ Porthos9K I'm an American, and I'd rather hire my fellow Americans. They have a much more sensible work ethic, and are more efficient; they put in a solid day's work in eight hours or less and then get the hell out of the office. If they leave at 5pm, then I don't have to stay all night to babysit them. ------ catacombs Build American. Hire American. I'm not one to outsource work to people overseas to save a buck. That, to me, is extremely problematic.
{ "pile_set_name": "HackerNews" }
ICloud: The Mother of All Halos - jsherry http://allthingsd.com/20110608/icloud-the-mother-of-all-halos/ ====== stanleydrew It would seem that this guy has never actually used more than one Android device. Cause Android has done this from the very beginning, syncing data and apps through your Google account. Which is free. Indeed it has been awesome. When I turned on the Samsung galaxy 10.1 I got at I/O this year, it already knew my wpa key. That was just the latest in a series of small touches that have gone completely unnoticed by those living deep within the apple ecosystem. ~~~ saturdaysaint As an iOS user, I understand that Android's always been cloud-centric, but I'm curious (and a bit skeptical) if this has really entailed everything that Apple just unveiled: Does Android automatically sync photos (both with the cloud and the PC)? Does stock Android do a full backup (including application data) to the cloud? When I synced my iPhone 4 to iTunes for the first time, I instantly got voice notes, text files and PDF's I've had since my first iPhone. I've always wondered if an Android user picking up a new phone could expect the equivalent. Is the Google Docs experience (ie Google's answer to the "document syncing" problem) better on Android than it is on iOS devices? I've wanted to use Google Docs exclusively for years, but editing is practically unusable on an iPhone. I'm shocked at how slow they've moved to make documents mobile. ~~~ blinkingled Photos : Multiple solutions exist to do n-way photo and video sync although it is highly debatable how useful anything other than Mobile to Cloud and Cloud to Mobile is which Android does out of box. Picasa photos and videos automatically appear in gallery and there is no limit to the number of photos. <http://developer.android.com/guide/topics/data/backup.html> \- Backup can include anything including app data and it doesn't have to go to Google's cloud - the backup client component is designed to be customizable to send your backup data wherever you please. Best part of all! Google Docs - Again multiple solutions exist. Documents to Go full edition offers seamless Google Docs support with decent editing functions. There exist Word/Excel add-ins that save to Google Docs from desktop. ------ blinkingled Quote - " Who wants to go back to emailing documents to yourself, or firing up Dropbox to move media from one device to another, when iCloud will–if it works properly–obviate the need for both by enabling change-on-one-device, update- to-all computing that’s ostensibly effortless and invisible?" " Add to that a price point of free and a software-driven ecosystem like the one Apple’s developed and, well, that’s an offer not easily refused. Not easily duplicated, either–particularly for more fragmented platforms like Android" Hmm. Never used Android before or just paying it back Mr. Writer? Apple patted him in the back and John is just doing his best in response. WSJ is Apple's guerilla marketing arm. ------ jsherry It just seems to me that companies like SugarSync (and maybe Dropbox - I don't use it) have been doing this for years. I'm confident that Apple will improve some of the media streaming experience, but their real genius is in marketing these products b/c this has existed for some time. ~~~ danieldk iCloud is not (just) a hard disk in the cloud (as Steve Jobs said during the keynote). More defining is the API that makes it seamless for applications (and consequently users) to sync data to the cloud. 
In that sense, it's not just another Dropbox. In a year, the average iOS app will automatically sync data across devices using iCloud, while there is no simple knob you can switch to store it in Dropbox instead. ~~~ jsherry I will have to see the full implementation of iCloud to understand the difference. Again, can't speak for Dropbox even though they're the market leader. But SugarSync has this syncing functionality already and has had it for at least 2 years. You can sync to the cloud and access files from the cloud OR you can actually choose files/folders to sync across hardware. For example, if I say that I want to sync my music folder across my Macbook Air and my Thinkpad, any time I make an update on one machine, it flows through to the other machine's hard drive. Similar with documents. If I choose to sync a Word document across machines and I edit it and save on one machine, in seconds it updates on the other machine should I open it there. It's all seamless - happens in the background without me having to push it from machine to machine. Re: the API point, SugarSync has had one since 2010: <http://www.sugarsync.com/developer>. Haven't developed with it at all nor have I used an app that uses it, so can't speak for its flexibility. P.s. I swear I'm not a SugarSync employee - just a very satisfied user since 2008 ;-) ~~~ danieldk _You can sync to the cloud and access files from the cloud OR you can actually choose files/folders to sync across hardware._ There is a huge usability difference between putting a file in some folder, and having it synced, and automatically syncing all relevant data in an application. This is easy to underestimate for us technical people, but it is very difficult to explain my mother that she has to put files in, say, Dropbox on her iPod Touch to be able to access it on her iPad. Do something on the iPod Touch, have it available nearly instantly on the iPad in the same application, she understands. ~~~ jsherry Good point indeed. The functionality is already there, but Apple will surely simplify things and cause widespread adoption. ------ ThomPete It's not that others haven't done it before. It's that Apple hasn't done it until now. That is the big thing. Not the technology in itself. ------ cph1 It's true; iCloud has the potential to be a massive lock-in mechanism, ensuring that people will continue to buy Apple devices for years on end because buying anything else will mean manually moving your data out of iCloud - which probably won't be easy for the average user. ~~~ peterb To move your data you will need to sync to a mac and then manually move your data. Copying from a iOS device is problematic, but that is true today. ~~~ Timothee That would work for some stuff, but I'm not sure what would happen to the data stored by third-party apps. iTunes has a section to retrieve documents that some apps create, as long as the apps themselves are built to take advantage of that. However, if an app doesn't provide either that or some kind of export, I'm not sure if you'd get easy access to that data. And actually that might create an incentive for some apps not to add the functionality. It makes sense for Office-type apps to provide a PDF, or Office-compatible format, but I could imagine very specific apps that can lock you in since you don't get to (easily) reverse-engineer and convert their data because you don't get as easy of an access as you get on a desktop. 
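(To make danieldk's point above concrete - that the interesting part is sync exposed through the app's own data API rather than through user-visible files - here is a minimal, dependency-free sketch. It is illustrative only: the trait and type names are invented for this example, and this is not Apple's or anyone else's actual API.)

    // Illustrative only: a toy version of "sync as part of the app's data API".
    // The app writes through a storage trait; whether the bytes end up on a
    // local disk, in iCloud, or in Dropbox is a backend detail the user never
    // has to manage by dragging files around.
    use std::collections::HashMap;

    trait CloudKV {
        fn put(&mut self, key: &str, value: &str);
        fn get(&self, key: &str) -> Option<String>;
    }

    // Stand-in backend; a real app would talk to a sync service here.
    #[derive(Default)]
    struct FakeCloud {
        data: HashMap<String, String>,
    }

    impl CloudKV for FakeCloud {
        fn put(&mut self, key: &str, value: &str) {
            self.data.insert(key.to_string(), value.to_string());
        }
        fn get(&self, key: &str) -> Option<String> {
            self.data.get(key).cloned()
        }
    }

    struct NoteStore<C: CloudKV> {
        backend: C,
    }

    impl<C: CloudKV> NoteStore<C> {
        fn save_note(&mut self, id: &str, text: &str) {
            // The "sync" happens inside the save call: no folders, no manual upload.
            self.backend.put(id, text);
        }
        fn load_note(&self, id: &str) -> Option<String> {
            self.backend.get(id)
        }
    }

    fn main() {
        let mut store = NoteStore { backend: FakeCloud::default() };
        store.save_note("note-1", "hello from device A");
        println!("{:?}", store.load_note("note-1"));
    }

The shape is the whole point: the app saves through one call and the backend decides where the bytes live. It is also why the lock-in concern above is real - if an app never exposes files, getting your data out depends entirely on whatever export that app chooses to offer.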
------ adaml_623 Does anybody else think that the EU is going to jump on Apple eventually the same way they jumped on Microsoft and browsers and make Apple provide a way for users to select which storage provider they want to use? ~~~ a2tech They might try-but why would Apple let them? They can just take their ball and go home. I couldn't find any hard numbers but a few articles I found were claiming that European sales make up less than 11% of total Apple sales. ~~~ pavlov That makes sense in a world where losing billions of dollars of revenue is preferable to adding an API. The EU has a population of over 500 million. Apple's presence in Europe involves much more than selling computers and gadgets. If they actually were to retreat because of some squabble with the EU Commission, they'd also be abandoning dozens of telecom operators with whom they have iPhone deals, tens of thousands of developers who make things for the iOS and Mac platform, and millions of consumers who buy digital content from iTunes... Leaving all that on the table in this extremely competitive market would be nothing short of madness.
{ "pile_set_name": "HackerNews" }
Russian Scientist Claims Signs of Life Spotted on Venus - daegloe http://news.yahoo.com/russian-scientist-claims-signs-life-spotted-venus-070321311.html ====== uncoder0 “Let’s boldly suggest that the objects’ morphological features would allow us to say that they are living.” I would have liked a more in-depth explanation and some photos indicating the morphological features. This is the only photo I could dig up: <http://i.imgur.com/dfv0m.jpg> ~~~ daegloe I couldn't dig up any additional details re: the photos in question. Some are claiming the crab-like object in the photo linked to above is a fractured piece of the probe's protective shield. The fully intact probe can be seen here pre-launch: <http://www.myspacemuseum.com/v_venera13i_24.jpg> ------ kia While he really seems to be a well-known scientist, the photo of a "scorpion" [1] is not the best confirmation of his theories. [1] - [http://www.mk.ru/science/article/2012/01/20/662678-na-venere...](http://www.mk.ru/science/article/2012/01/20/662678-na-venere-nashli-skorpiona.html) ------ PaulHoule Is this the same scientist who thinks the U.S. shot down the last Russian Mars probe? ------ TheCoreh Hmm, are the photos the article mentions available to the general public? ------ bendangelo There has always been life on Venus under the clouds. Search for Valiant Thor.
{ "pile_set_name": "HackerNews" }
Ask HN: Does rust have a future? - zabana ====== foldr I mean, probably? Significant chunks of Rust code are already making it into Firefox. I will say that from my relatively limited experience of Rust, it's a language to use if you're really really sure that you can't use GC. Any time you want to use a graph-like data structure (which can quite often, in some applications) you have to do a significant amount of thinking that you just don't have to do in a language with GC. I don't mean that as a criticism of the language. Rust makes automatic non-GC memory management about as easy and flexible as it could be. But it's still a significant cognitive overhead. ~~~ steveklabnik You don't have to do that thinking if you use a library which implements one for you, which is one reason why we made libraries so easy to use. ~~~ foldr It's true in any language that you don't have to think about implementing a data structure if there's a library that already implements it. I think that clearly misses the point of what I was saying, though. Quite often you do have to implement data structures yourself. Here's a concrete example: [https://en.wikipedia.org/wiki/Doubly_connected_edge_list](https://en.wikipedia.org/wiki/Doubly_connected_edge_list) This is a data structure that you're quite likely going to have to implement yourself. It is, obviously, possible to implement a DCEL in Rust. However, to do so it is necessary to make a number of decisions (unsafe pointers? indices instead of pointers?) that you don't have to make in, say, Go or Java, where you can just use ordinary pointers/references. Again, that is not a criticism of Rust. It's just an observation about its design tradeoffs.
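(A minimal sketch of the "indices instead of pointers" approach foldr mentions, for anyone who has not seen it: instead of nodes holding references to one another, everything lives in a Vec and nodes refer to each other by index. This is not a full DCEL - the names and the provisional `next` wiring are just illustrative.)

    // "Indices instead of pointers" in safe Rust: a cyclic, graph-like structure
    // without fighting the borrow checker, because links are plain integers
    // into a Vec rather than references.

    #[derive(Clone, Copy, Debug, PartialEq, Eq)]
    struct HalfEdgeId(usize); // index into Mesh::half_edges

    #[derive(Debug)]
    struct HalfEdge {
        origin: usize,    // index of the vertex this half-edge starts at
        twin: HalfEdgeId, // the opposite half-edge
        next: HalfEdgeId, // next half-edge around the face
    }

    #[derive(Debug, Default)]
    struct Mesh {
        vertices: Vec<(f64, f64)>,
        half_edges: Vec<HalfEdge>,
    }

    impl Mesh {
        fn add_vertex(&mut self, x: f64, y: f64) -> usize {
            self.vertices.push((x, y));
            self.vertices.len() - 1
        }

        // Adds a pair of twin half-edges between two vertices.
        fn add_edge(&mut self, a: usize, b: usize) -> (HalfEdgeId, HalfEdgeId) {
            let ab = HalfEdgeId(self.half_edges.len());
            let ba = HalfEdgeId(self.half_edges.len() + 1);
            // `next` provisionally points at the twin; a real DCEL would patch
            // these links up when faces are built.
            self.half_edges.push(HalfEdge { origin: a, twin: ba, next: ba });
            self.half_edges.push(HalfEdge { origin: b, twin: ab, next: ab });
            (ab, ba)
        }

        // Follows `next` links, collecting vertex indices. Cycles are fine:
        // we only ever copy small index values, never hold long-lived borrows.
        fn walk_face(&self, start: HalfEdgeId, max_steps: usize) -> Vec<usize> {
            let mut out = Vec::new();
            let mut cur = start;
            for _ in 0..max_steps {
                let he = &self.half_edges[cur.0];
                out.push(he.origin);
                cur = he.next;
                if cur == start {
                    break;
                }
            }
            out
        }
    }

    fn main() {
        let mut mesh = Mesh::default();
        let a = mesh.add_vertex(0.0, 0.0);
        let b = mesh.add_vertex(1.0, 0.0);
        let (ab, _ba) = mesh.add_edge(a, b);
        println!("{:?}", mesh.walk_face(ab, 10)); // prints [0, 1]
    }

The trade-off is that an index behaves like an unchecked weak reference: the borrow checker is happy because you only copy integers, but a stale index is a logic bug it will not catch, and deletions usually mean tombstones or generational indices. Crates such as petgraph are built on the same idea (NodeIndex/EdgeIndex handles into internal vectors), which is roughly what steveklabnik's "use a library" suggestion comes down to in practice.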
{ "pile_set_name": "HackerNews" }
Call me maybe: MongoDB - iand http://aphyr.com/posts/284-call-me-maybe-mongodb ====== nasalgoat Frankly I was amazed that majority lost as few writes as it did. If you need atomic writes, MongoDB is not the place for you. ------ dccoolgai The only thing that surprises me about this is that people are still nominally surprised by it.
{ "pile_set_name": "HackerNews" }
A VNC client for your geeky character terminals (VT/xterm/etc) - howardg https://github.com/HouzuoGuo/headmore ====== brudgers If it meets the guidelines, this might make a good 'Show HN'. Guidelines: [https://news.ycombinator.com/showhn.html](https://news.ycombinator.com/showhn.html) ------ t0mst0n I miss a VNC client for your geeky character terminals on the Amiga 500 or C64 :D Only then is it truly geeky.
{ "pile_set_name": "HackerNews" }
Ask HN: How long before you leave? - nzk1 If you recently started a new job but you are not enjoying it - how long would you stick with it until you leave? (For the sake of discussion - you have enough money to survive a year without working.) ====== ndhoa Frankly, having a job one does not enjoy is the _vast majority_ case. So you should think of being able to change that as a gamble where the odds are generally against you. My opinion is: you should not think of your current job as something terrible you must leave asap. Make some token efforts to change the working environment if you have a say with management, or try to fit in for a few months. Trying to change jobs is good, but you should have a solid backup plan first because the odds are against you; you are more likely to fail than to succeed going by general statistics. So you need to plan for failure, and you should not go "all in" even if you have a year of savings to spare. It's really possible to have a shitty job but a good and meaningful life. Jobs don't control you; I am the captain of my soul. However, especially while we're young, there is no reason not to muster a good effort to get a good job. Basically, if you have a shitty job you can still do quality things with your life - there is much to one's life outside a job - unless you are at a slavery-level shitty job. Our normal day jobs leave us plenty of room to define our lives in other ways - how we treat people, who we get to know, what we do in our spare time, what we see in life and our surroundings. Quality of life is more dependent on self than on external parameters. Regardless of your peers and your product at work, you can believe in causes like FOSS and fight for them in your spare time, you can get involved in charity and community work, you can read and think and define your way of life, you can get to know people and treat them in different ways. The sources of happiness come down to, variously, helping other people, being part of something greater than yourself, feeling collective purpose, and other things in that ballpark - and if we believe that, then all of those are possible in almost any kind of environment. Background: recently finished paying my university debt after 3.5 years. Started a new job 3 months ago, tried everything I said above; it didn't work out, but I made my efforts to change the working environment itself. The text above is distilled from all the conversations I had with my friends. ~~~ wikwocket While I'm all for making the most of your position, and not defining yourself by your job, I have to disagree that most jobs are bad and that you should just serve your time and go on with your life after 5pm. A good career is energizing, a bad job placement can be totally demoralizing, and _everyone_ deserves to find a career that vitalizes them. Work/life balance is hard to achieve with a great job, let alone if your job is soul-crushing, mind-numbing, impossible, badly managed, poorly located, or in an industry you're incompatible with. So I always advise people to look around if they're unhappy with their position. Even in a down market, there are possibilities out there, and you will never know until you look. And with due diligence, research, networking, etc, you can have some decent assurance about whether a potential new job is a good match for you. Now of course there are situations where it may be better to stay where you are for a while, to get experience/build your resume/pay off loans/etc. 
But life is too short to work in a place that makes you unhappy, and family is far too important to have a job that robs you of energy which you could devote to them. ------ singular I'd give it at least 3 months and really try to assess whether the discomfort was due to the job being a sucky situation or me being challenging/experiencing natural discomfort after a big change, i.e. getting the job, or even potentially due to some outside factor. I'd also definitely talk things over with friends to get some outside perspective. After that, if I felt the same way, if I had enough money to survive a year without working I'd leave immediately, take time off, then prep for interviews and go for a better job. However this is very situation dependent, I am a single man with no (serious) responsibilities, ymmv. ------ mnbvcxza I'd keep looking in my spare time. Why quit before you have something else lined up? I'd have to evaluate why it was bad, how bad it was, and how well I could mitigate the problem(s) when determining if I wanted to look for another job. It would have to be fairly bad for me not to give it at least a month - more than just me thinking it wasn't quite as cool as I thought it would be. I don't see how it would hurt your record if you found another job and left within a month - you could just leave that one job off your resume. ------ sfronczak I once began looking for a job about a week after I started a new one. I wasn't completely sold on the company when I accepted the offer - I had some misgivings about the culture - and later wished I had just declined. After a week I knew my gut was right and I needed to get out. Of course it was more difficult to find a new job since I had just started one, but I was honest in my interviews and after 90 days I left for something else. Trust your instincts. ------ mzarate06 I've found about 6 months to be the cut-off point. I never set that time frame intentionally, and most of my jobs/contracts last much longer, if not for their full term and beyond. However, in bad situations, 6 months happened to be the longest I was able to stand all the negativity. In one case I left after about 3 months, only b/c I knew right away that my place wasn't in that particular environment, or with that particular team. If you're asking due to relevant circumstances, what don't you like about the job, and how long have you been there? ------ jason_wang Another data point to consider: recruiter fees. If you were hired through a recruiter, the company that hired you will pay 15% to 25% of your annual salary in fees. In most cases, if you leave within 30 to 60 days, the recruiter will find your replacement for free. If you leave 1 day after the guarantee period, then the company that hired you has to pay the full fee. Moral of the story, once you know you want to leave, talk to your manager. Be a nice guy. ~~~ mnbvcxza > Moral of the story, once you know you want to leave, talk to your manager. > Be a nice guy. Before making sure you have something else lined up first? ~~~ jason_wang I suppose every company or team is different. But almost everyone on my team told me they are moving on about the same time they started looking. This is beneficial both ways: * A smooth transition can be made. They get to wrap up their last project at the company and have enough time to do a proper knowledge transfer * We get a head start on hiring a replacement. Typically the person leaving gets to help out during the interview process as well. 
* I get to help the person leaving on picking out the right next opportunity and, more often than not, help the person negotiate his next job offer. ------ jmspring In the last case where I wasn't happy, it took me about 3mo to convince myself it wasn't worth it, and about a month to find something I liked. My tolerance for putting up with a crappy job at the time was pretty high. Even with the scenario you put forth, it would have probably taken 1/2 to 3/4 of the time. These days, I'd probably be gone within a month of such a realization. Time to transition and move on. ------ runawaybottle I bounced in my third month at a job that I really misjudged. Cut your losses and move on; the market is good enough right now. ------ OafTobark Personally, ASAP, or at the latest when I find a replacement job, if I was in your position - if that's what you're looking for. ------ penguinlinux You don't specify the real reason why you dislike this job. ~~~ muruke Agreed, it really depends on why I don't like the job. But I generally would try to stick around for a few months more. Although if I had enough money to live for a year, why did I take the job? :) ~~~ greenlakejake Please note that potential employers don't like seeing resume gaps >6 months. ~~~ sejje Never been a problem for me, and I have lots of resume gaps like this. ~~~ mnbvcxza What interesting things are you doing during those gaps?
{ "pile_set_name": "HackerNews" }
The Quest for the Ultimate Vacuum Tube - sohkamyung http://spectrum.ieee.org/semiconductors/devices/the-quest-for-the-ultimate-vacuum-tube ====== imperialdrive mind getting blown!
{ "pile_set_name": "HackerNews" }
Ask HN: Is your company sticking to on-premise servers? Why? - aspyct I've been managing servers for quite some time now. At home, on prem, in the cloud... The more I learn, the more I believe cloud is the only competitive solution today, even for sensitive industries like banking or medical. I honestly fail to see any good reason not to use the cloud anymore, at least for business. Cost-wise, security-wise, whatever-wise. What's a good reason to stick to on-prem today for new projects? To be clear, this is not some troll question. I'm curious: am I missing something? ====== AgentK20 Like many others have pointed out: Cost. I'm the CTO of a moderately sized gaming community, Hypixel Minecraft, who operates about 700 rented dedicated machines to service 70k-100k concurrent players. We push about 4PB/mo in egress bandwidth, something along the lines of 32gbps 95th-percentile. The big cloud providers have repeatedly quoted us _an order of magnitude_ more than our entire fleet's cost....JUST in bandwidth costs. Even if we bring our own ISPs and cross-connect to just use cloud's compute capacity, they still charge stupid high costs to egress to our carriers. Even if bandwidth were completely free, at any timescale above 1-2 years purchasing your own hardware, LTO-ing, or even just renting will be cheaper. Cloud is great if your workload is variable and erratic and you're unable to reasonably commit to year+ terms, or if your team is so small that you don't have the resources to manage infrastructure yourself, but at a team size of >10 your sysadmins running on bare metal will pay their own salaries in cloud savings. ~~~ mmmBacon A few years ago I was trying to start a company and get it off the ground. We had to make decisions on our tech stack and whether we were going to use AWS and build around their infra. Our business was very data heavy and required transferring large datasets from outside to our databases. Even in our early prototypes, we realized that we couldn’t scale cost-effectively on AWS. I figured out that we could colocate and rent racks, install HW, hire people to maintain, etc... for way less than we could use the cloud for. I was shocked at the difference. I remember saying to my cofounder why does anyone use AWS, you can do this on your own way cheaper. Later I worked at a FAANG and remember when Snap filed their S1 when they were going public they disclosed that they were paying Google $5B and we were totally shocked at the cost compared to our own spend on significantly larger infra. I think people don’t realize this is doable and it’s great to hear stories like yours showing the possibilities. ~~~ RachelF Dropbox did the same thing a few years back - moved everything from Amazon S3 to their own storage. My guess is they did it for cost reasons. ~~~ llarsson That S3 is eventually consistent with object updates (HTTP PUT) might also screw up things for a company whose core value is synchronized storage. ~~~ dfsegoat I don't mean to sound daft, just clarifying my own understanding, but isn't Dropbox eventually consistent (as a system)? ~~~ llarsson Oh, sure, but when they think they have written something to S3 and got a successful HTTP response back from the API, perhaps they want to be able to tell clients to go fetch the new data from the bucket. But those clients may not get the new data then, due to eventual consistency. ~~~ lozenge S3 is immediately consistent for new objects unless the service received a GET on the object before it was created. 
It's easy to use this to make an immediately consistent system. ~~~ ozkatz S3 ListObjects calls are eventually consistent (i.e. list-after-put). EMRFS [1] and S3Guard [2] mitigate this for data processing use cases. [1] - [https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-f...](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr- fs.html) [2] - [https://blog.cloudera.com/introducing-s3guard-s3-consistency...](https://blog.cloudera.com/introducing-s3guard-s3-consistency- for-apache-hadoop/) ------ tgamblin I work in Livermore Computing at LLNL. We manage upwards of 30 different compute clusters (many listed here: [https://hpc.llnl.gov/hardware/platforms](https://hpc.llnl.gov/hardware/platforms)). You can read about the machine slated to hit the floor in 2022/2023 here: [https://www.llnl.gov/news/llnl-and-hpe-partner-amd-el- capita...](https://www.llnl.gov/news/llnl-and-hpe-partner-amd-el-capitan- projected-worlds-fastest-supercomputer). All the machines are highly utilized, and they have fast Infiniband/OmniPath networks that you simply cannot get in the cloud. For our workloads on "commodity" x86_64/no-GPU clusters, we pay 1/3 or less the cost of what you'd pay for equivalent cloud nodes, and for the really high end systems like Sierra, with NVIDIA GPUs and Power9's, we pay far less than that over the life of the machine. The way machines are procured here is different from what smaller shops might be used to. For example, the El Capitan machine mentioned above was procured via the CORAL-2 collaboration with 2 other national labs (ANL and ORNL). We write a 100+ page statement of work describing what the machine must do, and we release a set of benchmarks characterizing our workload. Vendors submit proposals for how they could meet our requirements, along with performance numbers and test results for the benchmarks. Then we pick the best proposal. We do something similar with LANL and SNL for the so-called commodity clusters (see [https://hpc.llnl.gov/cts-2-rfi](https://hpc.llnl.gov/cts-2-rfi) for the latest one). As part of these processes, we learn a lot about what vendors are planning to offer 5 years out, so we're not picking off the shelf stuff -- we're getting large volumes of the latest hardware. In addition to the cost savings from running on-prem, it's our job to stay on the bleeding edge, and I'm not sure how we would do that without working with vendors through these procurements and running our own systems. ~~~ zozbot234 > All the machines are highly utilized, and they have fast Infiniband/OmniPath > networks that you simply cannot get in the cloud. It's weird that these networking technologies are not used more in "plain" datacentre settings, since networking latency and throughput has to be a significant challenge to scaling up non-trivial workloads and achieving true datacentre-scale computing. We hear a lot about how to "scale out", but that's only really feasible for relatively simple workloads where you just seek to do away with the whole issue of keeping different nodes in sync on a real-time basis, and accept the resulting compromises. In many cases, that's just not going to be enough. ~~~ alexpotato There are a lot of people from the National Lab super computer world who end up in High Frequency Trading for just the reason you describe. Specifically, how do you optimize a large cluster of computers to operate at the lowest possible latency. For the National Labs, those computers could be in the lab or with other labs around the world. 
For the HFT folks, the machines could be in an exchange or spread across multiple exchanges around the world. Source: I used to be head of Global Latency Monitoring for a HFT. ~~~ j88439h84 I'm curious why you moved to LLNL from HFT? ~~~ eyegor Money is a safe guess. Research pay scales aren't even close to private sector, especially not finance. ~~~ dwohnitmok It sounds like the opposite direction happened here. ------ centimeter We are a 1000-2000 person company and we have probably on the order of $100M of servers and data centers and whatnot, and I think we spend about 2/3rds of that every year on power/maintenance/rent/upgrades/etc. We don't generally trust cloud providers to meet our requirements for: * uptime (network and machine - both because we are good at reliability [and we're willing to spend extra on it] and because we have lots of fancy redundant infrastructure that we can't rely on from cloud companies) * latency (this is a big one) * security, to some degree * if something crazy is happening, that's when we need hardware, and that's when hardware is hard to get. Consider how Azure was running out of space during the last few months. It would have cost us an insane amount of money if we couldn't grow our data centers during Corona! We probably have at least 20-30% free hot capacity in our datacenters, so we can grow quickly. We also have a number of machines with specs that would be hard to get e.g. on AWS. We have some machines on external cloud services, but probably less than 1% of our deployed boxes. We move a _lot_ of bandwidth internally (tens of terabytes a day at least, hundreds some days), and I'm not sure we could do that cheaply on AWS (maybe you could). We do use <insert big cloud provider> for backup, but that's the only thing we've thought it was economical to really use them for. ~~~ H8crilA Hundreds of terabytes a day is really not that much, depends on what latency can you accept. I often run computations over datasets that are petabytes in size, just for my own needs. A big data move would be at least tens of petabytes or more like hundreds, or thousands. Also surprised about latency, latency from what to what? Big cloud providers have excellent globally spanning networks. Long distance networking is crazy expensive, though, compared to the peanuts it costs to transfer data within a data center. Reliability - again, not sure I buy it. Reliability is "solved" at low levels (such as data storage), most failures occur directly at service levels, regardless of whether you have the service in house or in the cloud. The rest of your points make sense. ~~~ centimeter > Hundreds of terabytes a day is really not that much How much would it cost to move this across boxes in EC2? I actually don't know, that's not a rhetorical question. A lot of our servers have 10-40gbit links that we saturate for minutes/hours at a time, which I suspect would be expensive without the kind of topology optimization we do in our datacenters. > Also surprised about latency We've spent a surprising amount of money reducing latency :) We're not a high frequency trading firm or anything, but an extra 1ms (say) between datacenters is generally bad for us and measurably reduces performance of some systems. > Reliability is "solved" at low levels To whatever extent this may be true, it's certainly not true for cloud providers. One obvious example is that EC2 has "scheduled maintenance events" where they force you to reboot your box. 
This would cost us a lot of money (mostly in dev time, to work around it). Also, multi-second network dropouts in big cloud datacenters are not uncommon (in my limited experience), but that would be really bad for us. We have millisecond-scale failover with 2x or 3x redundancy on important systems. ~~~ tstrimple > How much would it cost to move this across boxes in EC2? Nothing. You generally only pay for data going out of cloud providers. Not data going in or data being transferred within the same region. > One obvious example is that EC2 has "scheduled maintenance events" where > they force you to reboot your box. This would cost us a lot of money (mostly > in dev time, to work around it). You're not going to have a successful cloud experience unless you build your applications in a cloud suitable way. This means not all legacy applications are a good fit for the public cloud. Most companies really embracing the cloud are mitigating those risks by distributing workloads across multiple instances so you don't care if any one needs to be restarted, especially within a planned window. > Also, multi-second network dropouts in big cloud datacenters are not > uncommon (in my limited experience), but that would be really bad for us. We > have millisecond-scale failover with 2x or 3x redundancy on important > systems. Are these inter-region network dropouts or between the internet and the cloud data center? You're not going to be relying on a public internet connection to the cloud for critical workloads. All that being said, there are plenty of workloads which I don't think fit well in the cloud operating model. You may very well have one of them. ~~~ iampims You pay for cross-AZ Traffic in AWS, and that adds up really fast. ~~~ Wintereise Yep. Got bitten HARD by this recently, $1.5k inter-az transfer charges that we never saw coming. Our fault, I suppose -- but multi-az is prohibitively expensive if you need to run anything data heavy distributed. ~~~ resonator I'm working on reducing a $50K per month bill for Inter-AZ traffic at the moment. > but multi-az is prohibitively expensive if you need to run anything data > heavy distributed. If you communicate between your AZs via ALBs, multi-az is effectively free. Our bill is so high because within our Kubernetes cluster, our mesh isn't locality aware; it randomly routes to any available pod. 2/3rds of our traffic crosses AZs. ------ horsawlarway I'm slowly coming to the complete opposite opinion you seem to have. I've worked almost entirely for companies that run services in various cloud infrastructures - Azure/Heroku/Aws/GCP/Other. I recently started a tiny 1 man dev shop in my spare time. Given my experience with cloud services it seemed like a no brainer to throw something up in the cloud and run with it. Except after a few months I realized I'm in an industry that's not going to see drastic and unplanned demand (I'm not selling ads, and I don't need to drive eyeballs to my site to generate revenue). So while in theory the scaling aspect of the cloud sounds nice, the reality was simple - I was overpaying for EVERYTHING. I reduced costs by nearly 90% by throwing several of my old personal machines at the problem and hosting things myself. So long story short - Cost. I'm happy to exchange some scaling and some uptime in favor of cutting costs. Backups are still offsite, so if my place burns I'm just out on uptime. The product supports offline, so while no one is thrilled if I lose power, my customers can still use the product. 
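(For a sense of scale on these inter-AZ numbers, a rough back-of-envelope: AWS has commonly charged about $0.01/GB in each direction for traffic between AZs, i.e. roughly $0.02/GB per cross-AZ transfer - treat that rate as an assumption, since exact pricing varies by region and over time. At that rate, a ~$1.5k surprise corresponds to something like 75 TB crossing AZs, and a $50K/month bill to roughly 2.5 PB/month, which is easy to reach once a non-locality-aware mesh is sending two-thirds of a busy cluster's traffic across zones.)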
Basically - cost, Cost, COST. I have sunk costs in old hardware, it's dumb to rent an asset I already own. There might well be a point when I scale into a point where the cloud makes sense. That day is not today. ~~~ tjbiddle What's the time trade-off? I've been drawing out my plans lately for a hobby project, all 100% on AWS. Being able to spin up my entire infrastructure with Terraform, build out images with Packer, setup rules for off-site backups, ensure everything is secure to the level I want it, etc. - It takes me next to no time at all. I can't imagine buying hardware, ensuring my home is setup with proper Internet, configuring everything here, and then still needing off-site backups anyway. Now, keep in mind - I'm definitely coming in from a Millennial point of view. My entire career was built on cloud. I've never touched hardware apart from building a computer back when I was 15 or something. I understand virtual. But being able to build up and tear down an entire setup, having it completely self-restore in minutes. Can't beat that. Napkin math has me at ~$50/mo: Full VPC, private/public isolated subnets, secure NACLs and security groups, infinitely extendable block storage and flat-file storage, near-instant backups with syncing to a different continent, 5 servers, DNS configurations, etc. All depends what you're doing too - of course. But for me, just the trade-off of working with what I know and not needing to leave my cafe of choice, still not breaking the bank - and if I do, having instant tear down and restore. Bam. ~~~ tigerstripe What kind of setup did you have for 5 servers at $50/mo on AWS? Interested to know - our EC2 instances that are about 1/4 as powerful as a laptop cost $60+/mo ~~~ tjbiddle Certainly nothing powerful :-) I can get away with t3.nano and t3.micro for what I'm doing at the moment. But the beauty of cloud, is that I can scale up when I eventually need it. 5x t3.nano will be ~$25/mo 5x t3.micro will be ~$50/mo All of my AMIs are EBS optimized and require a minimum of 8GB for the root drive (Although they only use ~1.6GB. Not bothering to hack around this to save a buck.) So that'll be 40GB EBS block storage. Plus I want ~20GB spread across 3 of the machines. So EBS should be ~$6/mo. I only need the volumes of those last 3, the others are good to go with their base AMI or user-data init script. So I only need snapshot backups of ~20GB. Being priced incrementally and having minimal changes, I'll only be charged ~$1/mo for that + off-site another $1/mo So, currently experimenting with the t3.nano - Cost is ~$36/mo. One of these servers will be used as a personal VPN, and I expect ~75GB/mo coming from my laptop. So bandwidth charges at $9/mo. Total $45/mo - For what I have planned now, at least. ~~~ tarasmatsyk That's exactly the reason I gave up on AWS, I need an accountant to do the math every month :D Now I rent a 4GB Linux box for 5$/m with no Dockers or whatsoever and happy that it just works ~~~ Gravyness I also hate this complexity with a passion. I love cloud, but pricing can be a real nightmare. I don't use AWS specifically but when I needed to know the price of some cloud service or group of services I spin up the service (or services) in a brand new project and let it run for 24 hours under similar working environment to see the impact, then after checking the results (the breakdown of each service's price in that day) I just close the project entirely, no left overs. 
So I tend to successfully avoid these strange, terribly organized, cloud- specific, service-specific calculator where I can easily forget one aspect of the service that might cost a lot of money absolutely randomly. Obviously it is a bad strategy if things are expected to reach $200/month and/or you do 'price evaluation' frequently, but otherwise it is stupid easy. I barely spent $50 each year doing this (small company and sporadic system changes) But the best part is that the final daily price of your system is as precise as it can possibly be and that is worth something. ------ reacharavindh University research group here. Simply, _cost_ Our compute servers crunch numbers and data at > 80% util. Our servers are optimized for the work we have. They run 24/7 picking jobs from queue. Cloud burst is often irrelevant here. They deal with Terabytes or even Petabytes of moving data. I’d cry paying for bandwidth costs if charged €/GB. Sysadmin(yours truly) would be needed even if it were to be run in the cloud. We run our machines beyond 4 years if they are still good at purpose. We control the infra and data. So, a little more peace and self-reliance. No surprise bills because some bot pounded on a S3 dataset. Our heavy users are connected to the machines at a single hop :-) No need to go across WAN for work. ~~~ dathinab In germany it's a pretty common think for universities to have some servers for themself. 1\. Their use case is kinda different. The servers mostly run heavy CS research related stuff. E.g. they might have heavy CPU load and heavy traffic between they servers but they have less often heavy traffic to the "normal internet" (if they have heavy traffic to the outside it's normally to other research institutes which not seldom have dedicated wire connections). 2\. They might run target specific optimized CPU or GPU heavy compute tasks going on for weeks at a time. This is really expansive in the cloud which is mostly focused in thinks like web services. 3\. When they don't run such tasks in the research groups they want to allow their juniors to run their research tasks "for free". Which wouldn't work with a payment model as done in the cloud. 4\. They don't want to relay on some external company. Also I'm not sure are there even (affordable) cloud systems with compatible spec? (like with 4+TB of _RAM_ , I'm not kidding this is a requirement for some kind of tasks or they will take way to long and requires additional complexity by using special data structures which support partial offline data _in the right way_ , which can be very costly in dev time)?? ~~~ RockIslandLine It's not just CS. The computational chemistry and materials science crystallography folks can have jobs that run for days or weeks too. ~~~ veddox I'm at a center for computational biology - our genomics guys have been known to use 90% of our university's HPC capacity ;-) My own work (ecological modelling) is not as heavy, but when I run a full experiment, that takes a 32 core machine about two weeks to complete. 
------ Groxx Meta-comment: cost: [https://news.ycombinator.com/item?id=23098576](https://news.ycombinator.com/item?id=23098576) cost: [https://news.ycombinator.com/item?id=23097812](https://news.ycombinator.com/item?id=23097812) cost: [https://news.ycombinator.com/item?id=23098658](https://news.ycombinator.com/item?id=23098658) abilities / guarantees: [https://news.ycombinator.com/item?id=23097213](https://news.ycombinator.com/item?id=23097213) cost: [https://news.ycombinator.com/item?id=23090325](https://news.ycombinator.com/item?id=23090325) cost: [https://news.ycombinator.com/item?id=23097737](https://news.ycombinator.com/item?id=23097737) threat model: [https://news.ycombinator.com/item?id=23098612](https://news.ycombinator.com/item?id=23098612) cost: [https://news.ycombinator.com/item?id=23097896](https://news.ycombinator.com/item?id=23097896) cost: [https://news.ycombinator.com/item?id=23098297](https://news.ycombinator.com/item?id=23098297) cost: [https://news.ycombinator.com/item?id=23097215](https://news.ycombinator.com/item?id=23097215) That's just the in-order top comments I'm seeing right now. (please do read and upvote them / others too, they're widely varying in their details and are interesting) The answer's the same as it has always been. Cloud is more expensive, unless you're small enough to not pay for a sysadmin, or need to swing between truly extreme scale differences. And a few exceptions for other reasons. ~~~ cortesoft There is also another answer... I work for a CDN, so we can't really use the cloud when in many ways we ARE the cloud. Although we do often make jokes about "what if we just move the CDN to AWS?" ~~~ mcny It is a pity East Dakota won’t make jokes like these. Can you imagine cloudflare running on aws? What happens when someone tries to denial of service them while on aws? On a different note, Netflix still runs “on the cloud”, right? I mean what does it really mean? Dropbox can still have most of its stuff on aws and do the expensive part on premises if cost is a concern? The truly bizarre stuff happens at hybrid cloud. ~~~ ddorian43 Netflix runs it's own bandwidth/cdn. Sometimes it actually has a pop/box INSIDE your ISP [https://openconnect.netflix.com/](https://openconnect.netflix.com/). ~~~ mcny My understanding is the website, the user services such as authentication, heart beat (not sure what is the proper technical term but the thing that says where I am in a particular episode). That and internal apps like project tracker not to mention dev/test. At least in my imagination. At my work, I'm not even worth throwing an SSD at my work computer. My manager is powerless to help as the company has some kind of deal to only buy from HP? No idea what kind of glue procurement is sniffing at this company... ------ bcrosby95 We have around 20 servers in a colo center down the street. At this number of servers we can still host websites that have millions of users (but not tens of millions). They are not exotic servers either. In fact by now they are, on average, around 11 years old. And costed anywhere from 2k to 8k at the time of purchase. Some are as old as 19 years. Hell, when we bought some of them - with 32GB of memory each - AWS had no concept of "high memory" instances and you had to completely pay out your ass for a 32GB server, despite ram being fairly cheap at the time. We have no dedicated hardware person. Between myself and the CTO, we average maybe a day per month thinking about or managing the hardware. 
If we need something special setup that we have no experience in, we have a person we know that we contract, and he walks us through how and why he set it up as he did. We've used him twice in the last 13 years. The last time one of us had to visit the colocation center was months ago. The last time one of us had to go there in an emergency was years ago. It's a 5 minute drive from each of our homes. So, why exactly should we use the cloud? We have servers we already paid for. We rent 3 cabinets - I don't recall the exact cost, but I think its around $1k per month. We spend practically no time managing them. In our time being hosted in a colo center - the past 19 years - we've had a total of 3 outages that were the fault of our colo center. They all lasted on the order of minutes. ~~~ dahfizz I think people who have no experience managing servers dramatically overestimate how much time it takes to manage servers. Depending on your team, it can definitely be easier to manage your own hardware than to manage your cloud infrastructure. ~~~ wooly_bully In my experience, it's not the time required but that a lot of development teams don't have a sysadmin or ops skillset. ~~~ jcrawfordor I live in a software engineering world professionally but my background is in traditional "neckbeard" Linux system administration. This ends up making me "DevOps" but honestly a lot of what I've ended up doing in my career is basic sysadmin for organizations that get a remarkably long ways before realizing they need it - things like telephony and video surveillance become really unreasonably expensive when you end up relying on a cloud service because you don't have the skillset to manage them in-house. This is purely my opinion, but I think that 1) there is a strange shortage of IT professionals (people who are _not_ software engineers but instead understand _systems_ ) in much of the industry today, and 2) a lot of tech companies, even those that are currently well functioning, might be able to save a lot of money if they hired someone with a conventional IT background. This is a little self-serving of course, but it really does astound me when I see the bills that some companies are paying cloud services to do something that is traditionally done in-house by an IT department. And not everything can readily be outsourced to some "aaS" provider, so on top of that you end up with things like software companies with multi-million budgets running an office network that consists of a consumer WiFi router someone picked up at Fry's - not realizing that they are losing a lot of time to dealing with how poorly that ends up working. I think part of the problem rests in academia - at least in my area a lot of universities seem to have really backed off on IT programs in favor of CS. I went through an undergraduate program that involved project management, decision analysis, and finance courses because these were considered by the college (I would say accurately) critical skills for the IT field. But that program had an incredible two students and was widely considered inferior to the CS program with hundreds. Another part of the problem though seems to rest in industry. The salary differential between "DevOps Engineer" and "IT Analyst" is incredible when in practice they end up doing mostly the same thing in a lot of small orgs. So I end up walking sort of an odd line of "I have a long background in IaC but I also know about conference room equipment." 
And I'm not saying that everything with a Cisco/Tandberg badge isn't overpriced, but Zoom rooms can end up costing just as much and seem to be less reliable - not surprising for a platform which, by practical necessity of the lack of IT support in many orgs, is built on the Silicon Valley time-tested architecture of "five apple consumer products taped together."
~~~ chillfox From my experience, large enterprises sabotage the effectiveness of internal IT with bureaucracy and politics in a misguided attempt to eliminate all possibility of mistakes being made. It's usually done with the "let's pretend it is ITIL" process. Let me give two examples where, if I had been the client, I would absolutely have sprinted for the cloud if I could, or at the very least started talking it up as much better.
1) System outage, time to fix 5 hours and 3 minutes. The 5 hours was me sitting in front of my computer with screens open showing the problem and waiting for various managers/decision-makers to fly by and take a look as they were ping-ponging around the office panicking about what would be impacted by the fix. Everything that was going to get impacted was already impacted by the system not working, and I had to explain that to them multiple times. Towards the end of the day, I eventually got the go-ahead to do the 3 minutes of work to fix the system. This system being down had prevented another team from doing any work for the entire afternoon.
2) Two full days of politics and paperwork to get approval to do 30 minutes of work, all while the client was impatiently asking "is it done yet" every few hours.
------ burnte Yes. Why? Cost, availability, flexibility, bandwidth. For a lot of companies, on-prem servers are the best solution for efficiency and cost. One great example: we were paying $45k/yr for a hosted MS Dynamics GP solution. For $26k we brought it in house with only a $4k/yr maintenance fee. We bought a rackmount Dell, put VMware on it, and have an app VM and a DB VM. My team can handle basic maintenance. In the past 11 months we haven't had to touch that server once. We have an automated backup that pulls VMs out daily and sends them off to Backblaze. Even if we need to call our GP partner for some specialized problem, it's not $45k/yr in consulting costs. We had a bunch of Azure servers for Active Directory and a few other things. When I came in 2 years ago I set up new on-prem DC VMs and killed our absurd Azure monthly bill; we were saving money by month three. A meteor could take out Atlanta and the DCs at our satellite offices would handle the load just fine until we restored from backups, and we'd STILL save money. We've had MORE uptime and reliability since then too. If I have a server go down, we have staff to get on it immediately, no toll free number to dial, no web chat to a level 1 person in India, etc. Our EMR is hosted, because that's big enough that I want to pay someone to be in control of it, and someone to blame. However, there have been many times where I'm frustrated with how they handle problems, and jumping from one EMR to another is not easy. And in the end they're all bad anyway. Sometimes I DO wish we were self hosted. The Cloud is just someone else's computer. If they're running those machines more cheaply than you are, they're cutting out some cost. The question is, do you need what they're cutting?
~~~ jedberg
> The Cloud is just someone else's computer. If they're running those machines more cheaply than you are, they're cutting out some cost.
> The question is, do you need what they're cutting?
They're cutting overhead and getting better deals on hardware than you could ever get. Their efficiency is their profit margin.
~~~ Slartie
> Their efficiency is their profit margin.
Last time I checked, AWS had a profit margin in the 40%-50% ballpark. Sorry, but the semiconductor industry doesn't operate with any kind of markup that would allow such profit margins from "getting better deals on hardware". The only one able to make that kind of profit used to be Intel on high-end server CPUs, and even they are now pressured by AMD and custom ARM silicon options. Anything else needed for a server, RAM or flash chips or whatever, is usually selling on thin single-digit margins. Cloud provider profit margin is perfectly logical and explainable through lock-in effects keeping their customers paying big markups to stay in AWS infrastructure. Be it software that was built against AWS proprietary services, be it having the necessary engineering skills to manage AWS infrastructure in the team but lacking the skills to manage on-prem hardware, be it the enterprise sales teams of cloud operators schmoozing CTOs of big corporations and making them jump on a "going into cloud" strategy as some kind of magic bullet to future-proof their corporations' IT, be it the psychological effect that makes "using the cloud" apparently a mandatory thing to be "cool" in today's Silicon Valley culture, and therefore by extension the whole world's IT engineering culture. The most ironic part of all is this weird effect that drives people to rationalize these things, writing comments like yours, because nobody likes to admit they've painted themselves into a corner of lock-in effects. And of course there's the irony of this all being history repeating itself: anyone still remember when IBM dominated the IT industry?
~~~ krageon
> that would allow such profit margins
The percentages don't _quite_ hit that amount of discount, but they are much, much higher than (I at least) expected.
------ throwaway6845 Mostly, headspace. If I run my own server, I just need to apply my existing Ubuntu sysadmin knowledge. If I use AWS, I have to learn a whole load of AWS-specific domain knowledge, starting with their utterly baffling product names. My time is more valuable than that. Also, sheer cost. Literally everyone I know in my particular part of the industry uses Hetzner boxes. For what I do, it's orders of magnitude cheaper than AWS.
~~~ HatchedLake721 That's how you get old, when your time is more valuable than a massive shift in technology.
~~~ henriquez Nah, we already did mainframes in the 1970s. Renting CPU time only makes sense if you don't need CPU time or you like wasting money.
------ shockinglytrue Try running any service with an average egress exceeding 10 Mbit/s, then tell me cloud still makes sense. By the time you reach 1 Gbit/s the very idea of it is enough to elicit a primitive biological defensive response. We don't do on-prem but we do make heavy use of colo. The thought of cloud growth and DC space consolidation some day pushing out traditional flat-rate providers absolutely terrifies me. At some point those cloud premiums will trickle down through the supply chain, and eventually it could become hard to find reasonably priced colo space because the big guys with huge cash-flush pockets are buying up any available space with a significant premium attached.
I don't know if this is ever likely, but growth of cloud could conceivably put pressure on available physical colo space. Similar deal with Internet peering. There may be a critical point after which cloud vendors, through their sheer size, will be able to change how these agreements are structured for everyone.
~~~ jedberg Netflix runs on the cloud and does 30% of all internet traffic. That being said, 99% of that traffic is served from servers in colos now, but 10 years ago it was all served from CDN providers like Akamai, which is just a specialized cloud.
~~~ toomuchtodo This is kind of a big caveat (“Netflix is in the cloud but almost none of the work is done there”), and something I have to mention to non-tech decision makers when they say “but Netflix!”. I even have a slide for presentations just for this (“You Are Not Netflix”).
~~~ jedberg 99% of the work is done on the cloud. What comes off of those colo servers is literally just bits streaming from disk to network. There is no transformation or anything. No authentication, no user accounts, no database. Nothing. Just static files served efficiently.
~~~ enneff
> What comes off of those colo servers is literally just bits streaming from disk to network
So... the core of their business?
~~~ jedberg Not at all. It was so “not core” that it was outsourced. The core of their business is recommendations, encoding, and authentication. All of those are done 100% on the cloud.
~~~ sidibe Sure, but probably 99% of that >30% of all internet traffic is outside of the cloud.
~~~ anshumania Anyone know the bill Netflix has for running on the cloud?
------ catlas3r Why stay on premise? Cost. On-prem is roughly on par in an average case, in my experience, but we've got many cases where we've optimized against hardware configurations that are significantly cheaper to create on-prem. And sunk costs are real. It's much easier to get approval for instances that don't add to the bottom line. But for that matter, we try to get our on-prem to close to 100% utilization, which keeps costs well below cloud. If I've got bursty loads, those can go to the cloud. Lock-in. I don't trust any of the big cloud providers not to jack my rates up. I don't trust my engineers not to make use of proprietary APIs that get me stuck there. Related to cost, but also its own issue: data transfer. Both latency and throughput. Yeah, it's buzzwordy, but the edge is a thing. I have many clients where getting processing in the same location where the data is being generated saves ungodly amounts of money in bandwidth, or where it wouldn't even be feasible to transfer the data off-site. Financial sector clients also tend to appreciate shaving off milliseconds. Also, regulatory compliance. And, let's be honest, corporate and actual politics. Inertia. Trust. Risk. Interoperability with existing systems. Few decisions about where to stick your compute and storage are trivial; few times is one answer always right. But there are many, many factors to consider, and they may not be the obvious ones that make the decision for you.
------ strags Cost and Latency. My team and I run the servers for a number of very big videogames. For a high-CPU workload, if you look around at static on-prem hosting and actually do some real performance benchmarking, you will find that cloud machines - though convenient - generally cost at least 2x as much per unit performance.
Not only that, but cloud will absolutely gouge you on egress bandwidth - leading to a cost multiplier that's closer to 4x, depending on the balance between compute and outbound bandwidth. That's not to say we don't use the cloud - in fact we use it extensively. Since you have to pay for static capacity 24/7 - even when your regional players are asleep and the machines are idle, there are some gains to be had by using the right blend of static/elastic - don't plan to cover peaks with 100% static - and spin up the elastic machines when your static capacity is fully consumed. This holds true for anything that results in more usage - a busy weekend, an in-game event, a new piece of downloadable content, etc... It's also a great way to deal with not knowing exactly how many players are going to show up on day 1. Regarding latency, we have machines in many smaller datacenters around the world. We can generally get players far closer to one of our machines than to AWS/GCP/Azure, resulting in better in-game ping, which is super important to us. This will change over time as more and more cloud DCs spring up, but for now we're pretty happy with the blend. ------ hprotagonist AI compute is so much cheaper on-prem that it's not even in question. And there are clients that demand it. And researchers, in general, like to do totally wacky things, and it's often easier/cheaper to let us if you have physical access. ~~~ sdan +1 on this. Get a nice server with some GPUs and you'd save a lot more than paying the super expensive costs on cloud. ------ jabroni_salad I'm in rural iowa and you really can't bank on a solid internet connection. One of my clients decided to iaas-ify all their servers and it works great except when it is windy out. They're on fixed wireless and the remote mast has some sway to it. 3-4 times a year they get struck by lightning and have a total work stoppage until all their outside gear can get replaced. Even their VDI is remote so all the thin clients just disconnect and they are done for the day. Also, my clients aren't software development firms. They are banks and factories. They buy a software based on features and we figure out how to make it work, and most of the vendors in this space are doing on-prem non-saas products. A few do all their stuff in IAAS or colo but a lot of these places are single-rack operations and they really don't care as long as it all works. A lot of people in small/midsize banks feel like they are being left out. They go to conferences and hear about all the cool stuff in the industry but the established players are not bringing that to them. If you can stomach the regulatory overhead, someone with drive could replace finastra/fiserv/jackhenry. Or get purchased by them and get turned into yet another forever-maintenancemode graveyard app. ------ sdan Founder of a growing startup: Started with a cluster of Raspberry Pis and expanded onto an old desktop. Primarily did this for cost (raspberry pis alone were more powerful than a GCP $35/mo instance). Everything was fine until I needed GPUs/handling more traffic than those Raspberrys could handle. So I expanded by including cloud instances in my Docker Swarm cluster (tidbit: Using Traefik and WireGuard) So half on-prem half in the cloud. Honestly just scared GCP might one day cancel my account and I'll lose all my data unless I meet their demands (has happened in the past) so that half on-prem stores most of the data. 
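A minimal sketch, in Python, of the "fixed base plus elastic burst" policy that the game-server comment and the hybrid Raspberry Pi/cloud comment above both describe: keep the owned machines busy first and only spill excess load to rented capacity. The slot count is made up and there is no real scheduler or cloud API behind it.

    # Illustrative placement policy: fill owned capacity first, burst the rest to the cloud.
    ONPREM_SLOTS = 8   # assumed number of concurrent jobs the owned hardware can run

    def place_jobs(pending_jobs, running_onprem):
        free = max(ONPREM_SLOTS - running_onprem, 0)
        return [(job, "on-prem" if i < free else "cloud-burst")
                for i, job in enumerate(pending_jobs)]

    # Example: 5 of 8 slots busy, 6 new jobs arrive -> 3 stay on-prem, 3 burst out.
    print(place_jobs([f"job{n}" for n in range(6)], running_onprem=5))

The point of the sketch is only the ordering: fixed capacity is a sunk cost, so it should be saturated before any pay-per-hour capacity is rented.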
~~~ chickenpotpie At $35/month though GCP would only have to save you a half an hour of maintenance for it to be worth it though. ~~~ sdan Well, given that I am using Docker it doesn't really matter much... but the bigger issue is: GCP in the past has completely blocked access from accounts when they detect random things. Unless I meet their demands, my entire infra is gone/down for days, which I can't deal with. ------ XCSme I use a $5/mo DigitalOcean VPS droplet instead of AWS or other "cloud" service. I only have to host an analytics dashboard ( [https://usertrack.net/](https://usertrack.net/) ), I don't need scaling and this way I know exactly how much I will pay. The resources are more than enough for my needs and I don't think it could be much cheaper even on the most optimized pay-per-minute of use cloud platforms. I also have some other APIs hosted in the same way (eg. website thumbnail generation API), for the very low traffic I have and no chance of getting burst traffic I think the use case of a VPS or dedicated server is perfect. ~~~ chickenpotpie Whenever I need to host something small and I’m trying to decide between DO and AWS I always ask myself. Would I rather be surprised by the bill or my website crashing from too much traffic? I almost always pick DO because I don’t want to mess something up and lose a few hundred dollars. ~~~ jackson1442 Wholeheartedly agree. I think AWS is moving in the right direction with Lightsail[0], which is a service very similar to DO droplets and includes transfer. Nice if you want to use AWS for like one or two other services, but I tend to still go with DO for small things. [0]: [https://aws.amazon.com/lightsail/](https://aws.amazon.com/lightsail/) ~~~ XCSme That sounds interesting. By "moving in the right direction" do you mean that it's still in beta or not released yet? Or that it's just the first step of many to come? ~~~ adventured Lightsail works well now. It's a little over three years old. In the first year or so after it was released, they were notorious for being slow in most regards compared to their peers (it launched using rebranded instances from AWS, and used spinning disks, going up against SSDs their competitors were all using). They've largely caught up on performance with DigitalOcean, Linode, Vultr and similar. That said, I've stuck with DigitalOcean even though Lightsail tests fine. I've had a great experience over the years with DO and see no reason to leave. ------ michaelt On-prem makes your cost control proactive, rather than reactive. Nobody gets a new server without first having a purchase order approved - and the burden to get that approval falls on the person who wants the server. In the cloud, at least the way it's generally used, cost control is reactive: You get a bill from AWS every month, and _if you 're lucky_ you'll be able to attribute the costs to different projects. This is both a strength and a weakness: on-premise assets will end up at much higher utilisation, because people will be keen to share servers and dodge the bureaucracy and costs of adding more. But if you consider isolation a virtue, you might prefer having 100 CPUs spread across 100 SQL DBs instead of 50 CPUs across two mega-databases. ------ doctor_eval Lots of great insights here, which fully accord with my experience, even in the small end of town. About a year ago, I was in a meeting with my new CEO (who had acquired my company). My side of the business had kept hardware in-house, his was in AWS. 
We had broadly similar businesses in the same industry and with the same kind of customers. My side of the business needed to upgrade our 5+ year old hardware. The quote came to $100K; the CEO freaked out. I asked him how much he spent on AWS. The answer was that they spent $30K __per month__ on AWS. The kicker is that we managed 10x as many customers as they did, our devops team was half the size, and we were rolling out continuous deployment while they were still struggling to automate upgrades. Our deployment environment is also far less complicated than theirs because there isn't a complex infrastructure stack sitting in front of our deployment stack. There was literally no dimension on which AWS was better than our on-prem deployment, and as far as I was able to tell before I quit, the only reason they used AWS was because everyone else was doing it.
~~~ pickle-wizard With all the job hopping that goes on in tech, there is a lot of Resume Driven Development. People want to use AWS because it will help them get their next job. I'm finally in a job that I'm happy with and can see myself staying here until retirement. I have noticed that has changed my technology recommendations. For example, we recently started looking at configuration management tools. Ansible is the obvious choice from a resume perspective as it is very popular. I ended up recommending PowerShell DSC. Why? Because our environment is mostly Windows, the team is familiar with PowerShell, and for our use case it is much faster. PowerShell DSC is not as popular, so it won't help me get another job. When it comes time to expand the team, I can hire someone who understands configuration management tools or PowerShell and get them up to speed in a day or two.
------ adreamingsoul Personally, I'd rather have capex than opex. My observations from working with, and in, the "cloud": The "cloud" does benefit from its scale in many ways. It has more engineers to improve, fix, watch, and page. It has more resources to handle spikes, whales, and demand. Almost everything is scale tested and the actual physical limits are known. It is downright impressive to see what kind of traffic the cloud can handle. Everything in the "cloud" is abstracted, which increases complexity. Knowledgeable engineers are few and far between. As an engineer you assume something will break, and with every deployment you hope that you have the right metrics in place and alarms on the right metrics. The "cloud" is best suited for whales. From special pricing to resource provisioning, they get the best. The rest is trickled down. Most services are cost-centers. Very few can actually pay for the team and the cost of their dependencies. It's insane how much VC money is spent building whatever the latest trend of application architecture is. Very few actually hit their utilization projections.
------ erulabs We hear from our customers mostly what has been said here: cost and mental overhead. There is a bit of a paradox - companies that plan to grow aggressively are wary of AWS bills chopping their runway in half - they're very aware of _why_ cloud providers give out a year for free to most startups - they recoup that loss very fast once the cash faucet opens up. What really gets me is that most cloud providers promise scalability, but offer no guard-rails - for example diagnosing performance issues in RDS - the goal for most cloud providers is to ride the line between your time cost and their service charges.
Sure, you can reduce RDS spend, but you'll have to spend a week to do it - so bust out the calculator or just sign the checks. No one will stop you from creating a single point of failure - but they'd happily charge consulting fees to fix it. There is a conflict of interest - they profit from poor design. In my opinion, the internet is missing a platform that encourages developers to build things in a reproducible way. Develop and host at home until you get your first customers, then move to a hosting provider down the line. Today, this most appeals to AI/ML startups - they're painfully aware of the idle GPUs in their gaming desktops and their insane bill from Major Cloud Provider. It also appeals to engineers who just want to host a blog or a wedding website, etc. This is a tooling problem that I'm convinced can be solved. We need a ubiquitous, open-source, cloud-like platform that developers can use to get started on day 1, hosting from home if desired. That software platform should not have to change when the company needs increased reliability or better air conditioning for their servers. Whether it's a WordPress blog or a Minecraft server or a petabyte SQL database - the vendor should be a secondary choice to making things.
~~~ sbrother I've found that Kubernetes mostly solves this problem. I say mostly because for AI/ML workloads that require GPUs, we still rely on running things on bare metal locally, and deploying with GKE's magic annotations and Deep Learning images. But for anything else, I haven't had an issue going all in on k8s at the beginning, even with very small teams.
~~~ erulabs Yep! My startup is [https://kubesail.com](https://kubesail.com), so I agree :) As for ML on Kube, I agree, there have been and still are some rough edges. The kernel drivers alone make a lot of out-of-the-box Kubernetes solutions unusable. That said, we've had a lot of success helping people move entirely onto Kube - the mental gain alone from ditching the bash scripts or Ansible playbooks (etc.) is pretty freeing.
------ dogecoinbase Yes. Three major reasons:
\- Cost. It's vastly cheaper to run your own infra (like, 10-100x -- really!). The reason to run in cloud is not to save money, it's to shift from capex to opex and artificially couple client acquisition to expenditure in a way that juices your sheets for VCs.
\- Principle. You can't do business in the cloud without paying people who also work to assemble lists of citizens to hand over to fascist governments.
\- Control. Cloud providers will happily turn your systems off if asked by the government, a higher-up VP, or a sufficiently large partner.
EDIT: I should add: cloud is great for something -- moving very fast with minimal staffing. That said, unless you get large enough to renegotiate, you will get wedged into a cost dead end where your costs would be vastly reduced by going in-house, but you cannot afford to do so in the short term. Particularly for the HN audience, take care to notice who your accelerator is directing you to use for cloud services -- they are typically co-invested.
~~~ PaulWaldman Regarding the shift from CapEx to OpEx, on-prem servers can also be leased, keeping their costs in OpEx.
------ reilly3000 I've been studying like a fiend to get AWS certs and thoroughly understand the cloud value proposition, especially for ephemeral workloads and compliance needs. I'm all for cloud solutions that make sense, and love when serverless/usage-only systems can be deployed.
That said, I recently started work on a friend's system that he has had running in colo for a long time. It's absolutely insane how long his systems have been up. There are processes that have been alive since 2015, with some hosts having uptime longer than that. He's got a nice HA configuration but hasn't had any incidents that have triggered failover. He recently built a rack for his home with 384GB RAM and gobs of CPU across 3 nodes, with rack, nice switch and UPS for just shy of $2500 (he is quite the bargain hunter...). I did some quick math and found a similarly equipped cluster (of just VMs, not dedicated hosts) has a 1.1 month break-even with on-demand costs, no bandwidth considered. Sure, maybe a 1 year reservation could make it a 2-3 month break-even instead, but why? Those machines can easily give him 3-5 years of performance without paying another dime. If you can feasibly run workloads on-premise or colo and have a warm failover to AWS, you could probably have the best of all worlds.
~~~ whatsmyusername If he has processes that have been up since 2015, how is he patching? That's one of my biggest gripes with on-prem: it's easy to leave something that works alone... until it gets popped by a 5 year old vuln. In cloud I'm constantly looking at what we have because I have good billing tools in place to see what we're paying for.
------ grantlmiller I always find it important to separate "cloud" into 2 categories:
1\. IaaS - which I mainly define as the raw programmable resources provided by "hypercloud" providers (AWS, GCP, Azure). Yes, it seems that using an IaaS provider with a VPC can provide many benefits over traditional on-prem data centers (racking & stacking, dual power supply, physical security, elasticity, programmability, locations, etc).
2\. SaaS - I lump all of the other applications by the hundreds of thousands of vendors into this category. I find it hard to trust these vendors the same way that I trust IaaS providers and am much more cautious of using these applications (vs OSS or "on-prem software" versions of these apps). They just don't have the same level of security controls in place as the largest IaaS providers can & do (plus the data is structured in a way that is more easily analyzed, consumed by prying eyes).
~~~ opportune What about first-party SaaS? Those can also be big features that bring people to some cloud providers. Not all SaaS requires you to trust your data/availability to some random vendor. Of course those first-party SaaS offerings aren't typically suitable for lift-and-shift by their very nature, and they can still have some rough edges, but IMO you can expect them to be almost as reliable as IaaS.
~~~ grantlmiller First-party SaaS meaning things like RDS, DBaaS, queues, LBs, etc? Most of that I would sort of put into an IaaS-controlled PaaS, rather than true IaaS SaaS. Yes, these are generally higher on the trust spectrum as they don't involve additional vendors accessing/managing/storing data.
~~~ opportune A major one I'm thinking of is BigQuery, also of course all the various db/queue solutions outside of your typical S3 clone as you mentioned. That would make sense viewing them as platforms though.
------ dijit I work for a large video games publisher; as you might expect, we use a lot of Windows. Windows server licenses on AWS and GCP are hundreds of times more expensive at our scale. Incidentally, we actually do have some cloud infra and we like it, but the licensing cost is half the total price of the instance itself.
In fact, you might not know this but games are relatively low margin, and we have accidentally risked the companies financial safety by moving into the cloud. ~~~ whatsmyusername TBF windows licensing in general is a shit show, to the point where just handling that is a specialized ability potentially warranting a full time position. ------ tr33house I'm a solo founder who's bootstrapped a saas that's in one state. I'd started out with the cloud then moved to a private cloud in a colocated data center. Saved more than 80% in monthly costs. Got faster speeds, better reliability and a ton of extra compute and network capacity. I just bought used servers from eBay that are retired from big corps. Nothing significant has really changed in the last five years on compute so I'll happily take their depreciated assets :) __Modern__ servers are really awesome and I totally recommend them. You can do a ton remotely. ------ drej Many of the stories here are from large companies, where the costs are quite a different beast. I want to offer an opposite view - from a small company (20-30 people), which is usually the kind of company best suited for the cloud. We ran a number of modelling jobs, basically CPU intensive tasks that would run for minutes to hours. Investing in on-prem computers (mostly workstations, some servers), we got very solid performance, very predictable costs and no ops issues. Renting beefy machines in the cloud is very expensive and unless you get crafty (spot and/or intelligent deployment), it will be prohibitive for many. Looking at AMD's offering these days, you can get sustained on-prem perf for a few dollars. Three details of note: 1) We didn't need bursty perf (very infrequently) - had this been a need, the cloud would make a lot more sense, at least in a hybrid deployment. 2) we didn't do much networking (I'm in a different company now and we work with a lot of storage on S3 and on-prem wouldn't be feasible for us), 3) we didn't need to work remotely much, it was all at the office. Obviously, it was a very specific scenario, but given how small the company was, we couldn't afford people to manage the whole cloud deployment/security/scaling etc. and beefy workstations was a much simpler and more affordable endeavour. ------ snarfy The idea of the cloud is to only pay for what you use. Your on-premise server is idle 99% of the time so why are you paying for a full server? If that's not true, it turns out it's quite expensive to run things in the cloud. If your workload is crunching numbers 24/7 at 100% cpu, it's better to buy the cpu than to rent it. ~~~ Polylactic_acid Cloud servers tend to be more reliable as well if you don't run your own datacenters. We have lost our internet connection or power 3 times in the last year in the office. Its not the end of the world since we can go to 4g for our own usage but if our servers were hosted locally this would be a huge issue. ~~~ dathinab Don't forget that between cloud and servers in the company there are still VPS and rented dedicated hardware in a data center. So you: 1\. Don't manage hardware. 2\. But manage a server (OS+software stack). 3\. Have reliable internet, power and physical security from the data center you are renting your hardware from (if you trust them fully!). 4\. Have fixed cost but also fixed resources. Tends to be cheaper for many tasks. Especially CPU/GPU heavy ones. ~~~ Polylactic_acid I consider VPSs to be cloud servers. Is this not common? 
~~~ XCSme I mentioned in another comment that I use a VPS and not cloud services. I think of cloud as the auto-scaling infrastructure with dynamic pricing. I think of a VPS as just sharing a dedicated machine with others, so each one gets a few cores and shares other resources. The implementation of VPSs nowadays is probably more similar to cloud services, where your own space might be moved around to another physical machine without any downtime.
~~~ Polylactic_acid So you consider cloud servers to be what most people call serverless (S3/serverless functions/etc)?
~~~ XCSme I do hate the term "serverless" as it makes no sense, but I think of cloud as a system that automatically spins up/down VPSs based on your current usage. This means the infrastructure/software also allows for automatic load-balancing between those VPSs. So I think of cloud as the VPS servers that are used to host the actual data + the layer on top that does all the scaling, provisioning, load-balancing, etc.
------ mattbeckman We spend ~$50k/mo on serverless infrastructure on AWS. It hurts sometimes, given we were fully colocated about 4 years back, and I know how much hardware that could buy us every month. However, with serverless infra we can pivot quickly. Since we're still in the beta stage, with a few large, early access partnerships, and an unfinished roadmap, we don't know where the bottlenecks will be. For example, we depended heavily on CloudSearch, until it sucked for our use case, so we shifted to Elasticsearch, and ran both clusters simultaneously until we were fully off of CS. If we were to do that on-prem, we'd have to order a lot more hardware (or squeeze new ES cluster VMs in across heavily utilized nodes). With AWS, a few minutes to launch a new ES cluster, dev time to migrate the data, followed by a few clicks to kill the CloudSearch cluster. Cloud = lower upfront, higher long term, but no ceiling. On-prem = higher upfront, lower long term, but ceiling.
~~~ brickbrd If "Cloud = lower upfront, higher long term, but no ceiling. On-prem = higher upfront, lower long term, but ceiling" is true, then how come the revenue of cloud companies keeps going up? That would mean the rate of incoming users who are just starting off and find cloud worthwhile is higher than the exit rate of mature users who are finding on-prem more worthwhile than cloud.
~~~ wvenable If you're spending more than 50k/month on AWS, where is the money to move to on-prem? When they got you, they got you.
------ walterbell The (startup) Oxide podcast has good history/stories about on-prem servers, from veterans of pioneering companies. They are fans of open-source firmware and Rust, and are working to make OCP-based servers usable for on-prem. In one podcast, they observed that cloud is initially cheaper, but can quickly become expensive with growth. There is a time window where you can still switch from cloud to on-prem, but if that window is missed, you're left with high switching costs and high cloud fees. [https://oxide.computer/podcast/](https://oxide.computer/podcast/)
~~~ mapgrep Their co-founder Bryan Cantrill gave a talk at Stanford on what they are trying to do, essentially offer on-prem servers comparable to what “hyperscalers” like Google and Facebook put in their data centers — highly efficient and customizable (in low-level software) iirc. [https://youtu.be/vvZA9n3e5pc](https://youtu.be/vvZA9n3e5pc)
------ PaulWaldman Manufacturing. The cost to a factory if the internet is down is too great.
Each facility has its own highly redundant virtualization infrastructure hosting 50 to 100 VMs.
~~~ eitally I was in manufacturing IT before moving to big tech. Our big campuses in the US & Europe had 40-80mbps internet circuits. The remote facilities in developing countries often only had 10mbps MPLS connections to a regional hub. To be 100% honest, we had 10x more outages caused by crappy local infrastructure than by anything having to do with a SaaS service or IaaS/PaaS provider. Seriously, things like bad storms, a snake (cobra!) sneaking into the server room and frying itself and a machine it was snuggling against, utility workers accidentally severing cables, generators failing during power outages, labor strikes, and so much more. Moving to the cloud -- or even just hosting everything centrally -- was much more stable than maintaining a fleet of distributed machines.
------ MaulingMonkey I work in gamedev. Build servers, version control, etc. are almost always on-premise, even if a lot of other stuff has been transitioned to the cloud. There are a few reasons:
1) Bandwidth. I routinely saturate my plebeian developer gigabit NIC links for half an hour, an hour, longer - and the servers slurp down even worse. In an AAA studio I am but one of hundreds of such workers. Getting a general purpose internet connection that handles that kind of bandwidth to your heavily customized office is often just not really possible. If you're lucky your office is at least in the same metro area as a relevant datacenter. If you're really lucky you can maybe build a custom fiber or microwave link without prohibitive cost. But with those kinds of geographical limitations, you're not so much relying on the general internet as you're expanding your LAN to include a specific datacenter / zone of "the cloud" at that point.
2) Security. These servers are often completely disconnected from the internet, on a completely separate network, to help isolate them and reduce data exfiltration when some idiot installs malware-laden warez, despite clear corporate policy threatening to fire you if you so much as even _think_ about installing bootleg software. Exceptions - where the servers _do_ have internet access - are often recent, regrettable, and being reconsidered - because of, or perhaps despite, draconian whitelisting policies and other attempts at implementing defense in depth.
3) Customizability. Gamedev means devkits with strict NDAs and physical security requirements, and a motley assortment of phone hardware, that you want accessible to your build servers for automatic unit/integration testing. Oddball OS/driver/hardware may also be useful for such testing. Sure, if you can track down the right parties, you might be able to have your lawyers convince their lawyers to let you move said hardware into a datacenter, expand the IP whitelists, etc... but at that point all you've really done is made it harder to borrow a specific popular-but-discontinued phone model from the build farm for local debugging when it's the only one reproducing a specific crash you want to debug and you lack proper remote debug tooling.
...there are some inroads on the phone farms (AWS Device Farm, Xamarin Test Cloud) but I'm unaware of farms of varied desktop hardware or devkits. Maybe they exist and just need better marketing? I have some surplus "old" server hardware from one such gamedev job. Multiple 8gbit links on all of them. The "new" replacement hardware is often still noticeably bottlenecked for many operations.
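To put rough numbers on the bandwidth point above, here is a small Python sketch of how long a large artifact set takes to move over different links, and what the same bytes would cost as metered cloud egress. The link speeds and the per-GB rate are assumptions for illustration, not figures from the comment.

    # Illustrative only: link speeds and the egress rate below are assumptions.
    build_gb = 2000.0                      # e.g. a 2 TB build/asset sync

    def transfer_hours(gigabytes, gbit_per_s):
        return gigabytes * 8 / (gbit_per_s * 3600)

    for label, speed in [("1 Gbit/s office uplink", 1), ("10 Gbit/s LAN", 10)]:
        print(f"{label}: {transfer_hours(build_gb, speed):.1f} hours per copy")

    egress_per_gb = 0.09                   # hypothetical $/GB metered egress price
    print(f"as metered egress: ${build_gb * egress_per_gb:,.0f} per copy")

Even with generous assumptions, repeated multi-terabyte copies are the kind of workload where a flat-rate LAN or colo link wins easily, which matches the build-server and CDN stories elsewhere in this thread.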
------ mrmrcoleman There's a renewed interest in on-prem bare metal recently with a lot of different offerings helping to make various parts of the stack easier to manage. Awesome bare metal is a new repo created by Alex Ellis that tracks a lot of the projects: [https://github.com/alexellis/awesome- baremetal](https://github.com/alexellis/awesome-baremetal) Also we (Packet) just open sourced Tinkerbell, our bare metal provisioning engine: [https://www.packet.com/blog/open-sourcing- tinkerbell/](https://www.packet.com/blog/open-sourcing-tinkerbell/) ------ chime 1\. 500TB of storage for 3-6mo of CCTV footage. 2\. Bought a hanful of $700 24 core Xeons on eBay 2 years ago for 24/7 data crunching. Equivalent cloud cost was over $3000/mo. On-Prem paid off within a month! 3\. Nutanix is nice. Awesome performance for the price and almost no maintenance. Got 300+ VDI desktops and 50+ VMs with 1ms latency. ~~~ junar > $700 24 core Xeons on eBay 2 years ago Can you clarify? I don't think such a product exists as a single chip at that price point. The Threadripper 3960X costs $1400, and that released less than a year ago. Edit: Looking up Intel chips on Wikipedia, I think you might be using 12-core/24-thread chips... [https://en.wikipedia.org/wiki/Skylake_(microarchitecture)#Xe...](https://en.wikipedia.org/wiki/Skylake_\(microarchitecture\)#Xeon_Bronze_and_Silver_\(dual_processor\)) ~~~ fiveguys94 I picked up three Dell R900's for an average of $200 each, with 4x Xeon E7450 and 128GB of ECC RAM. No hyperthreading (2011!), it's 24 real cores. They're noisy and use lots of power, but you can't argue with the value for money. ~~~ icedchai I have one similar to this. I don't think it is an R900, but it's an older 1U rack mount. I forget if it's a 12 or 24 core xeon, but it was dirt cheap, came with 72 gigs of RAM, and sounds like a jet engine turning it on. I recently built a Ryzen box with 128 gigs of RAM and it's much quieter... ------ mattmireles CEO of an AI startup & former AWS employee here. The cloud sucks for training AI models. It's just insanely overpriced in a way that no "Total Cost of Ownership" analysis is going make look good. Every decent AI startup––including OpenAI––has made significant investments in on-premise GPU clusters for training models. You can buy consumer-grade NVIDIA hardware for a fraction of the price that AWS pays for data center-grade GPUs. For us in particular, the payback on a $36k on-prem GPU cluster is about 3-4 months. Everything after that point saves us ~$10k / month. It's not even close. When I was AWS, I tried to point this fact out to the leadership––to no avail. It simply seemed like a problem they didn't care about. My only question is why isn't there a p2p virtualization layer that lets people with this on-prem GPU hardware rent out their spare capacity? ~~~ blueblisters Are TPUs too application specific to replace GPUs? It seems cloud TPUs could be competitive with GPUs in terms of $ per number of target epochs, provided you can do data parallelism for your workloads. Also, IBM offers bare metal pricing which is _somewhat_ cheaper than virtualized instances attached to GPUs (and faster too). I think GPU virtualization is not quite there yet because Nvidia does not give access to core GPU functionality needed for efficient virtualization - you're stuck with using their closed-source libraries and drivers. ------ kasey_junk Bandwidth is the big reason to stay on-prem. 
Good peering contracts can more than make up for any cloud advantages for bandwidth-intensive uses. Now the hard part is turning those cost advantages into operational improvements instead of deficiencies.
------ INTPenis Because we don't trust a foreign cloud provider with our clients' data. Why is that so hard to understand? All the best cloud providers are from the US, and as a European company with clients in European government and healthcare we are often not morally or legally allowed to use a foreign provider. The sad thing is that this is an ongoing battle between people on a municipal level who believe they can save money in clouds, and morally wiser people who are constantly having to put the brakes on those migration projects.
~~~ v4dok What about the existing/upcoming technologies that let you use the cloud without trusting it?
~~~ INTPenis Doesn't matter, because all US companies are subject to US laws and agencies. Even if the data is in Ireland, they are obliged to cooperate, and before you know it all our patient records are leaked in the States. And speaking of Ireland, I have a memory of a Microsoft EULA saying that even if the data is stored in Ireland they can't guarantee that it won't be transferred to the US.
------ shortlived Small company here (30 total people, 8 in IT/software).
\- unwillingness to cede control of the critical parts of our software infrastructure to a third party.
\- given our small team size and our technical debt load we are not currently able to re-architect to make our software cloud-ready/resilient.
\- true cost estimates feel daunting to calculate, whereas on-prem costs are fairly easy to calculate.
~~~ aspyct I agree with the last 2 points. Estimates are hard to get right, and rearchitecting an existing app is probably not worth it. What about your first point though? Do you not trust a 3rd party to maintain infrastructure properly? In what way?
------ janstice I'm an application architect at an enterprise-type org. We have a few SaaS applications, but all the big dogs, including custom dev, run in-house on a dual data centre VMware environment. It's cheaper for us to spin up more VMs in-house, so there's no real cloud driver for things that just live on VMs. On the other hand, our ops team are still working on network segmentation and self-service, but I regularly get a standard VM provisioned in less than 20 mins. If we had to buy new data centres it might be different. But the real reason we're not deeper in the cloud is that our business types insist on turn-of-the-century, server-based software from the usual big vendors, and all the things that integrate with them need to use 20th century integration patterns, so for us migrating to the cloud (in stages at least) would have drawbacks from all options without the benefits. It's only where we have cloud-native stuff that we can sneak in under the radar for stand-alone greenfields projects, or convince the business types that they can replace the Oracles and PeopleSofts with cloud-first alternatives, that things will really change.
~~~ jasonv Last company I was at was more or less as you describe. Now, at a company in a different industry, there's a 5+ year plan to move 100% to cloud. Nascent efforts are about 18 months old already; no apps are live yet. Fortunately, they've been using a container approach for their on-prem stuff for a while, so some stuff can move over pretty easily, and a lot of things will get a touch-up or more interesting upgrade along the path to the cloud environment.
Not even talking about decommissioning the DCs yet, but those will get defunded as things go on.
------ olivierduval Security... not against hackers but against the provider's government. I worked for some French or European companies, with IPs and sensitive information, and US business competitors. Under US law, US companies may have to let the US gov spy on their customers (even non-US ones, even in non-US locations), so this may be a problem for strategic sectors, like defense for example. In that case, sensitive information is required to be hosted in-country by a company of that country, under that country's law. Of course, it's not against "cloud" in general... only against US cloud providers (and Chinese, and...)
------ MattGaiser I work on two projects, neither of which uses the cloud. For my day job, it is privacy and legal constraints. I work for the government, and all manner of things need to be signed off on to move to the cloud. We could probably make it work, but in government the hassle of doing so is so large that it is not going to happen for a long time. In my contract project, it is a massive competitive advantage. I won't go into too many details, but customers in this particular area are very pleased that we do not use a cloud provider and instead host it somewhat on-premise. I don't see a large privacy advantage over using the cloud, but the people buying the service do, simply because they are paranoid about the data and every single one of them could personally get in a lot of trouble for losing the data. Not my project, but intensive computing requirements can be much more cheaply filled by on-premise equipment (especially if you don't pay for electricity), so my university does most of its AI and crypto research on-premise.
------ axaxs My company moved from all on-prem to all in AWS. Having used both, I'd much rather go back to on-prem. I did architecture, capacity planning, system tuning, deployments, etc. I knew everything about all of them, and treated them as sacred. The next generation came in, decided not to learn anything about systems, and brought in the 'systems as cattle' attitude and everything that comes with it. I try to remain objective; there are some pros to AWS, but I still much prefer my on-prem setup. It was way cheaper, and deployments were way faster.
------ sqldba
\- Uptime
The number and frequency of outages in Azure are crazy. They happen non-stop all year round. You get meaningless RCAs but it never seems to get better, and if it did, you'd have no way of knowing. Compare this with doing stuff internally - you can hire staff, or train staff, and get better. In the long run, outsourcing and trusting other companies to invest in "getting better" doesn't end very well. Just because they moved their overall metrics from 99.9 to 99.91 may not help your use case.
\- Reliability
Their UIs change every day; there's no end-to-end documentation on how things work. There's no way to keep up.
\- Support
Azure's own support staff are atrocious. You have to repeatedly bang your head against the wall for days to get anyone who even knows the basic stuff from their own documentation. But it's also difficult to find your own people to do the setup too. Sure, lots of people can do it, but because it's new they have little experience and end up not knowing much, unable to answer questions, and building garbage on the cloud platform. Because there's no cloud seniority - it hasn't been around for long enough.
\- Security Cloud providers have or can get access and sometimes use it. \- Management I've seen too many last minute "we made a change and now everything will be broken unless you immediately do something" notifications to be happy about. \- Cost It's ridiculously expensive above a certain scale, and that scale is not very big. I don't know if it's because people overbuild, or because you're being nickel-and-dimed, or if you're just paying so many times above normal for enterprise hardware and redundancy. It's still expensive. Yes, owning (and licensing) your own is expensive too. For smaller projects and tiny companies, totally fine! It's even great! \- Maturity People can't manage cloud tools properly. This doesn't help with costs above. PS: I don't think any other cloud service is better. ------ TedLePoireau Not exactly on-premise but we rent 2 big dedicated server (ovh) + install VMWare ESXi on them. Going to the cloud would cost more, the price would be unpredictable, only to solve a scale problem we won't have. And customers love to know their data are hosted in France by a French company, not by Google or Amazon :) ------ ROFISH Network storage. MacBooks don't have a lot of space and artists like to make HUGE psd files. Bonus points for small stuff like RADIUS for wifi and stuff. Groups charging $5/user for junk like that is absolutely awful with a high number of staff. With a staff of 100, a single box with a bunch of hard drives is two months worth of cloud and SaaS. TCO needs to come down by like at least 100x before I consider going server- less. ------ throwaway7281 We own infra because we need to own and control it and also because it's just vastly cheaper at the scale we use. Besides, we do have things like our own S3, k8s and other cloud-ish utilities running so we do not miss out that much, I guess. ------ nemacol This conversation sounds a lot like the debate around globalization and outsourcing manufacturing to me. It might be a stretch but there is something here. There is room for both Cloud and On-Prem to exist. This endless drive by industry to push everyone to cloud infrastructure and SaaS, in my humble opinion, will look exactly like the whole supply chain coming from the east during a pandemic. The economics of it look great in a lot of use cases, but putting our whole company at the mercy of a few providers sounds terrible to me. Even more so when I see posts on HN about folks getting locked out of their accounts with little notice. It does not take much to bring our modern cloud to a grinding halt. For example, a mistake by an mostly unheard of ISP lead to a massive outage not less than a year ago(1). It was amazing to see the interconnections turn to cascading issues. 1 ISP goofs. 1-2 major providers have issues and the trickle down effect was such that even services that thought they were immune from cloud issues were realizing that they rely on a 3rd party that relies on a different 3th party that uses cloudflare or AWS. So, even though I think the cloud is (usually) secure, stable, resilient, etc... I still advocate for its use in moderation and for 2 main use cases. 1 - elastic demands. Those non-critical systems that add some value or make work easier. Things we could do without for several days and not hurt the business much. 2 - DR / Backup / redundancy. We have a robust 2 data center / DR fail over system. Adding cloud components to that seems reasonable to me. 
(1) [https://slate.com/technology/2019/06/verizon-dqe-outage-inte...](https://slate.com/technology/2019/06/verizon-dqe-outage-internet-cloudflare-reddit-aws.html)
Edit: Spelling and clarity
Edit2: New reasons to stay on prem are happening all the time. [https://www.bleepingcomputer.com/news/security/microsofts-gi...](https://www.bleepingcomputer.com/news/security/microsofts-github-account-allegedly-hacked-500gb-stolen/)
------ throwaway028374 I worked for a famous large tech company that makes both hardware and software, on stuff that runs in customer datacenters. There are plenty of companies that run their own infrastructure to keep their data secure and accessible. It's not the type of company that blogs about its infra or is popular on HN. Banks and financial institutions, telcos, airlines, energy production, civil infrastructure. Critical infrastructure needs to survive events larger than a datacenter outage. FAANGs don't protect customers from legal threats, large political changes, terrorist attacks, war.
------ sudhirj The way I think about it is this: not using the cloud is like building your own code editor or IDE after assembling your own laptops and desktops. It may make you happier and it's a great hobby, but if you're trying to run a business you need to do a cost-benefit analysis. We currently have double-digit petabytes of data stored in our own data centres, but we're moving it to S3 because we have far better things to do with our engineers than replace drives all day, and engineering plus hardware is more expensive than S3 Deep Archive - but it wasn't until Deep Archive came out. We put out hundreds of petabytes of bandwidth, and AWS is horribly expensive at first glance, but if you're at that scale you negotiate private pricing that brings it to within spitting distance of using a colo or Linode/Hetzner/OVH - the distance is small enough that the advantages of AWS outweigh it, and it allows us to run our business at known and predictable margins. Besides variability (most of our servers are shut nights, weekends and when not required), opex vs capex, and spikes in load (100X to 1000X baseline when tickets open), there's also the advantage of not needing ops engineers and being able to handle infrastructure with code. If you have a lot of ops people and don't need any of the advantages, and you have lots of money lying around that you can use on capex, and you have a predictable load pattern, and you've done a clear cost-benefit analysis to determine that building your own is cheaper, you should totally do that. Doesn't matter what others are doing.
~~~ nihil75 You're not building everything from scratch on-prem. There are excellent tools for deploying and managing infrastructure, like Terraform, Ansible and Puppet, which funnily enough are used to deploy to the cloud as well. Add a self-hosted Kubernetes cluster to that and your on-prem is not that different from a cloud. As for ops people - you might not need an engineer to replace failed hard drives, but you'll need a DevOps person to manage CloudFormation templates and such, and they cost more.
------ skiril One of the reasons is legacy systems. Some companies are too tied to old custom-made systems built on old software and hardware, virtually impossible to convert to the cloud. You'd be surprised, but there are big corporations still using AS/400 and not planning to switch anytime soon. As you may have heard in recent news, US unemployment systems were still built on COBOL... in 2020... Another reason is cost. I love AWS!
It's fantastic to be able to create and launch servers, or a whole farm of servers, in a matter of minutes! And the ability to convert physical servers to virtual and upload them to the cloud is breathtaking! But my monthly bill started at $300 and grew to $18K per month in less than 3 years. And that was for just a few virtual servers with Windows OS and SQL. My company realized that we could have a state-of-the-art datacenter with VMware and a SAN on premises for a fraction of that price. Put a second one on the other coast (one of our other offices) and you have your own cloud with better ping and six-figure savings a year. Last, I would name vendor lock-in. With vSphere it's very easy to move your virtual servers between AWS, Azure and Google (assuming you can afford all 3 and the licensing cost of VMware), but have you ever tried to "download" your server back on-premises? It's virtually impossible, or made so hard by cloud players trying to keep you up there in the clouds. With all that said, I read that Netflix (I believe it's Netflix) is saving hundreds of millions of dollars per year by using Amazon instead of its own servers. I also read somewhere that Dropbox moved away from AWS...

------ bluedino A job or two ago: everything on-site, for a couple of reasons (50 servers). Mainly because as a manufacturing company, machines on the shop floor need to talk to the servers. This brings up issues of security (do you really want to put a 15-year-old CNC machine 'on the internet'?). Also, if our internet connection has issues, we still need to build parts. The other big part of it is the mindset of management and the existing system, which was built to run locally. Does Amazon offer cloud-hosted Access and Visual Basic workers?

------ nitwit005 We're looking at moving from AWS to having some machine space rented in two data centers. The reason is purely cost. There are still some computers on site due to equipment being tied to it, telephony stuff, etc. My last company was looking at "moving to the cloud", with the idea that its data centers were too expensive, but found out that the cloud solutions would be even more expensive, despite possible discounts due to the size. They still invested in it due to some Australian customers wanting data to be located there.

------ tcbyrd I haven't personally done a detailed cost analysis lately, but if you have systems that regularly operate at 80+% of capacity, I can't see how the operating costs of any cloud operator can be cheaper than operating it yourself. Their whole pricing model is based on being able to over-provision compute and move workloads around within a datacenter. As much as people talk about failing hard drives and other components at scale, failure rates are still low enough that you could operate dozens of systems at full capacity for 3+ years with maybe a handful of legit hardware failures. To rent that same compute from any cloud provider would cost significantly more. The cheapest "Dedicated Host" on AWS will cost you almost $12k over 3 years if you pay for it on-demand, and it's equivalent in specs to something you can buy for ~$2k. > am I missing something? I'd want more background behind what you mean by "at least for business". What kind of business? Obviously IaaS providers like DigitalOcean and Linode are the type of business that would not use other clouds. Dropbox and Backblaze as well would probably never use something like S3.
And there are legitimate use cases outside of tech that have needs in specific teams for low latency compute, or its otherwise cost and time prohibitive to shuttle terabytes of data to the cloud and back (3D rendering, TV news rooms, etc). If you're talking about general business systems that can be represented by a website or app with a CRUD API, then most of that probably doesn't require on-prem. But that's not the only reason businesses buy servers. ------ marvinblum As most have mentioned already: cost We started out with Emvi [1] on Kubernetes at Google Cloud as it was the "fancy thing to use". I like Kubernetes, but we paid about 250€/month just to run some web servers and two REST APIs. Which is way too much considering that we're still working on the product and pivoting right now, so we don't have a lot of traffic. We then moved on to use a different cloud provider (Hetzner) and host Kubernetes on VMs. Our costs went down to about 50€ just because of that. And after I got tired managing Kubernetes and all the complexity that comes along with it, we now just use a docker-compose on a single (more powerful) VM, which reduced our cost even futher to about 20€/month and _increased_ performance, as we have less networking overhead. My recommendation is to start out as simple as possible. Probably just a single server, but keep scaling in mind while developing the system. We can still easily scale Emvi on different hardware and move it around as we like. We still use Google Cloud for backups (together with Hetzners own backup system). [1] [https://emvi.com/](https://emvi.com/) ------ hdmoore Anything with massive storage and massive compute that doesn't need low latency is a great fit. I still host ~300TiB and ~250 cores at home because the cloud cost would be astronomical. Edit: This is for personal stuff related to internet-wide scan data and domain tracking. See [https://github.com/hdm/inetdata](https://github.com/hdm/inetdata) ------ desc 1\. Our customers run our software on their own machines for security and data-control reasons. As soon as something's running on someone else's hardware, the data is out of your control. Unless you're going to accept the (often massive) cost of homomorphic encryption, AND have a workload amenable to that, it's a simple fact. 2\. Everything we do in house is small enough that the costs of running it on our own machines is far less than the costs of working out how to manage it on a cloud service AND deal with the possibility of that cloud service being unavailable. Simply running a program on a hosted or local server is far far simpler than anything I've seen in the cloud domain, and can easily achieve three nines with next to no effort. Most things which 'really need' cloud hosting seem to be irrelevant bullshit like Facebook (who run their own infrastructure) or vendor-run workflows layered over distributed systems which don't really need a vendor to function (like GitHub/Git or GMail/email). I'm trying to think of a counterexample which I'd actually miss if it were to collapse, but failing. ------ lettergram I actually am moving my startups servers from AWS to a home server. Reasoning: * We know how much compute is need. * We know how much the new servers can compute. * We have the ability to load balance to AWS or Digital Ocean or another service as needed. * This move provides a 10x speed improvement to our services AND reduces costs by 70%. For reference, had to call the ISP (AT&T) and they agreed to let me host my current service. 
It’s relatively low bandwidth, but has high compute requirements. ------ ggm We operate an X509 PKI with a Trust anchor and CA. Its not impossible to run the Hardware Security Module (HSM) in the cloud but its off the main path. Its more plausible to run it in a D.C. but it invites security threats you don't have, if you run it inside your own perimiter. Of course you also then invite other risks, but its a balancing act and it has to be formally declared in your Certification Practice Statement (CPS) We also run some registry data which we consider mission critical as a repository. We could run the live state off-prem, but we'd always have to be multi-site to ensure data integrity. We're not a bank, but like a bank or a land and titles administration office, registry implies stewardship in trust. That imposes constraints on "where" and "why". Take central registry and the HSM/related out of the equation, if I was building from scratch I'd build to pub/sub, event-sourcing, async and in-the- cloud for everything I could. private cloud. If you don't control your own data and logic, why are you in the loop? ------ SoylentOrange We are a research group at a company with 2000 or so employees. We have a few GPU machines to train models, which are utilized nearly around the clock. AWS and co’s GPU-enabled servers are exceedingly expensive. Most of the GPU models on those machines are also very old. We pay maybe 1/3 or less to maintain these machines and train models in-house vs paying AWS. Mind you, we use AWS for plenty of stuff... ------ Blackadderz I work in R&D for a telecomms/optics company. All servers are on premises. Not allowed to have a laptop. No access to emails/data outside of the office. No USB drives, printing documents, etc. Reason? Protect IP. From who? Mostly Huawei. Good and bad: When I walk out the door... I switch off. The bad is that working from home isn't realy an option. Although they have accommodated somewhat for this pandemic. ------ mrweasel We sell hosting to large number of different customers who for whatever reason, mostly legal, are required to keep data within the borders of Denmark. There is no Google, Amazon, Azure or DigitalOcean data centers in Denmark, so cloud isn't an option for them. Regarding cost, well it depends. We try to help customers to move to cloud hosting if it's cheaper for them. It almost always will be if they take advantage of the features provided by the cloud providers. If you just view for instance AWS as a VMware in the cloud, then we can normally host the virtual machines for you cheaper and provide better service. You have to realize that many companies aren't developing software that's ready for cloud deployment. You can move it to EC2 instance, but that's not taking advantage of the feature set Amazon provides, and it will be expensive and support may not be what you expect. You can't just call up Amazon and demand that they fix your specific issue. ------ pmlnr > The more I learn, the more I believe cloud is the only competitive solution > today, even for sensitive industries like banking or medical. Then learn A LOT more and start with mainframes and their reliability. ------ gameswithgo Our business has had greatly increased load due to COVID-19. It would have been very nice to buy a 128 core EPYC bare metal server to run our SQL Server on at this time, to buy us time to rearchitect to handle the load. Instead we are stuck with 96vCPUs because that is the most Amazon can do. 
Its also very very expensive to have a 96vcpu VM on amazon! ------ apetersonBFI I'm one of two IT persons at a food processor, in a small town. Despite living in an area where a majority of IT & Programmers work at the hospital or an insurance company, my boss has run our own networks and servers since the days of Novell, and we continue to run Windows servers on-premise, instead of the cloud. It does lead to interesting situations, like finding out a Dell server shrieking next to you is running max fan speed because the idrac is not connected. Neither of us have any experience with the cloud, whereas we have a lot of Microsoft experience. We still rely on OEM licenses of Office, because Office 365 would be 3x or more expensive. We have a range of Office 2019, 2016, 2013 OEM, and we get audited by them nearly every year. We use LastPass, Dropbox and Github, but only the basic features, and LastPass was an addition last year after someone got into our network through a weak username/password. In our main location, we have three ESX boxes, running several virtual servers, and then we have a physical server for our domain controller, file sharing and DHCP, DNS in other locations. We also switched to a physical server for our new ERP application server, which hasn't yet been rolled out. Projects like upgrading our ERP version can take months, but we have a local consulting team, with a specialist in our particular ERP solution, as well as a Server and Network specialist, and we also have a very close relationship with our ISP, who provides WAN troubleshooting. Our IT budget is small relative to our company revenue, so most cloud proposals would raise our costs manyfold. We continue to use more services like Github and Lastpass, and we both have multiple hats. I'm a developer, in-house app support, Email support, HR systems support, ERP support, PC setup, and I run our Data Synchronization operation and my boss runs EDI. I do a lot of PowerShell and Task Scheduler, but I've got familiar with bash through git bash. ------ mikorym Here is a stupid example: Excel vlookups work on a network drive, but not on a cloud service like Dropbox or OneDrive. The absolute path can't resolve if it's used across multiple Excel users. If the users store the file locally, each will have a different path on their computers. Excel stores actual paths. [1] There is one way around it: Mounting the cloud server as a network drive (some providers do this by default, but OneDrive is not one of them, neither is Dropbox). I don't know of a way of mounting OneDrive as a virtual drive; I would be interested to know. It sounds stupid, but the above was a real life scenario. [1] Only if the files are closed. Excel can change the path if you have the file open, but it can't change it to multiple option across different PCs. But as I have mentioned before, Excel doesn't seem to document all of their more subtle features. ------ jolmg Unreliable internet. A retail company may decide that the best place to put up a new branch is coincidentally (though there might be a correlation) at the edge of what the available ISPs currently cover. They might have to make a deal to get an ISP to extend their area to where the store is going to be. However, because of lack of competing ISP options on the part of the retailer, and the lack of clients in the retailer's area on the part of the ISP, that service is probably not going to be all that reliable. 
Also, that retail company may experience a big rise in sales after a natural disaster occurs, when communications (phone/cell/internet) are mostly down for the area. One tends not to think about stuff like that until it happens at least once. It's very important for the ERP/POS systems to be as operational as possible even when the internet is down.

------ joshuaellinger On the smaller end of the scale, I have a $12K/mo spend with Azure. I decided to go back to colo. For under $50K, I have 4 machines with an aggregate 1TB RAM, 48 cores, 1 pricy GPU, 16TB of fast SSD, 40TB of HDD, and InfiniBand @ 56GB/sec. Rent on the cabinet is less than $1K/mo. It's going to cost me about $20K in labor to migrate. So the nominal break-even point is six months, but the real kicker is that this is effectively 10-30x the raw power of what I was getting on the cloud. I can offer a quantitatively different set of calculations. It also simplifies a bunch of stuff: 1\. No APIs to read blob data -- just good old files on a ZFS share. 2\. No need to optimize memory consumption. 3\. No need for docker/k8s/etc to spin up under load. Just have a cluster sitting there. There are downsides, but colo beats the cloud for certain problems.

------ busterarm I'm going to buck the trend and say cloud is great. We do cloud, on-prem and colo (16 racks in two different DCs). Procurement is a nightmare, especially when your vendor is having problems with yields (thanks Intel!), and the ability to scale up and scale down without going through the hardware procurement process saves us millions of dollars a year. We avoid the lock-in by running on basic services on multiple cloud providers and building on top of those agnostically. Spend is in the millions per month between the cloud providers, but the discounts are steep. We've essentially had to build our own global CDN, and the costs are better than paying the CDN services and better than running our own hardware & staffing all those locales. It's a no-brainer. We'll continue to operate mixed infrastructure for quite some time, as certain things make sense in certain places.

------ the_svd_doctor PhD student/researcher. Most of the compute I do is HPC/scientific-computing style and runs on university or US National Lab machines. We thought about cloud, but the interconnect (MPI-style code) is very important for us and it's not very clear to me what's available there in the cloud.

------ frellus Self-driving company here. We're doing on-prem because we have a clear projection of the amount of data we'll need to ingest, store and process using GPUs. The advantages of running things in a cloud are clear, and as an infrastructure team we have challenges around managing physical assets at scale. However, with the cost of cloud providers, it's clear that eventually we would have to pull data into a datacenter to survive. Co-location costs are fixed, and it's actually easy to make a phenomenal deal nowadays given the pressure these companies are under. The real trick of it all is that regardless of running on-prem or in the cloud, we need to run as if everything is cloud native. We run Kubernetes, Docker, and as much as possible automate things to the point that running one of something is the same as running a million of it.

------ jaboutboul There is not one clear-cut answer on this. It depends on what your company values, i.e. cost vs. agility.
If you are using the cloud for what it was meant for availability, scalability, elasticity--and those are the things that your org values--its the right fit for you. If on the other hand you value cost then it clearly isn't the right fit. One other point I'll make, the true value of cloud isn't in IaaS, renting VMs from anyone is relatively expensive compared to the costs of buying a server and maintaining it yourself for a number of years. The true value of the cloud is when can architect your solution to utilize the various services the cloud providers offer, RDS/DynamoDB, CDN, Lambda, API Gateway, etc. so that you can scale quickly when you need to. ------ jcrawfordor For hobby projects, I own a moderately outdated 1U "pizzabox" installed in a downmarket colocation facility in a nearby major city. Considering the monthly colocation rate and the hardware cost amortized over two years (I will probably not replace it that soon but it's what I've used for planning), this works out to appreciably less than it would cost to run a similar workload on a cloud provider. It costs about the same or possibly a bit less than running the same workload on a downmarket dedi or VPS provider, but it feels more reliable (at least downtime is usually my fault and so under my control) and the specs on the machine are higher than it's easy to get from downmarket VPS operations. Because my application involves live video transcoding I'm fairly demanding on CPU time, which is something that's hard to get (reliably) from a downmarket VPS operation (even DO or what have you) and costly from a cloud provider. On the other hand, dual 8 core Xeons don't cost very much when they're almost a decade old and they more than handle the job. There are a few fairly reputable vendors for used servers out there, e.g. Unix Surplus, and they're probably cheaper than you think. I wouldn't trust used equipment with a business-critical workload but honestly it's more reliable than an EC2 instance in terms of lifetime-before-unscheduled-termination, and since I spend my workday doing "cloud-scale" or whatever I have minimal interest in doing it in my off-time, where I prefer to stick to an "old fashioned" approach of keeping my pets fed and groomed. And, honestly, new equipment is probably cheaper than you think. Dealing with a Dell account rep is a monumental pain but the prices actually aren't that crazy. Last time I purchased over $100k in equipment (in a professional context, my hobbies haven't gotten that far yet) I was able to get a lot for it - and that's well less than burdened cost for one engineer. ------ _bxg1 One of my favorite things (at least for personal projects) about using the cloud is so-called "platform as a service" systems like Heroku, where I don't have to get down in the weeds, I just push code and the process starts (or restarts). Is there something like that I could use on my own hardware? I just want to do a fresh Linux install, install this one package, and start pushing code from elsewhere, no other configuration or setup necessary. If it can accept multiple repos, one server process each, all the better. I know things like Docker and Kubernetes exist but what I want is absolute minimal setup and maintenance. Does such a thing exist? ~~~ mappu You're looking for Dokku Same git push deploys, heroku-compatible buildpacks or Dockerfiles, all on your own hardware, MIT license. ~~~ _bxg1 This looks perfect, thank you! 
I knew there was no way I could've been the first person to think of this ------ frogbox12 After a big cloud-first initiative, several managers left; leaving implantation to Linux sysadmins now in charge of cloud. Treated cloud as some colo facility, dump all apps in one big project/account, cloud costs spun quickly out-of-control and lots of problems with apps not being segregated from each other. Cloud declared 'too expensive' and 'too insecure', things migrated back on-prem, team now actively seeks to build and staff colo facilities with less than 10ms latency somewhere outside coastal California (Reno, Vegas, PHX) which just isn't gonna happen because physics. ------ annoyingnoob International Traffic in Arms Regulations (ITAR) Compliance - much easier to keep it on site, off-site compliance is costly. Also, cost over time. Better control of performance requirements for certain applications. ------ otabdeveloper4 We're in the process of moving a greenfield project from AWS to a more traditionally hosted solution. AWS turned out to be 5-10 times more expensive; what's worse, our developers are spending more then half their time working around braindead AWS design decisions and bugs. A disaster any way you look at it. There are good reasons to chose AWS, but they're never technical. (Maybe you don't want to deal with cross-departmental communications, or you can't hire people into a sysadmin role for some reason, maybe you want to hide hosting in operational expenses instead of capital, etc.) ------ Nextgrid Bandwidth costs. Most dedicated servers come with unmetered bandwidth so not only is it cheap to serve large files but your bandwidth costs won't suddenly explode because of heavy usage or a DDoS attack. ------ TuringNYC Here is how we went about w/ CSPs (AWS, Azure, GC, Oracle). Thoughts welcome Getting Started --> Definitely go w/ CSPs. No need to worry about infra. Pre Product Market Fit + Steady Growth --> On Premise, because CSPs might be expensive until you find a consistently profitable business. Pre Product Market Fit + HyperGrowth --> CSPs since you wont be able to keep up [we never got to this stage] Product Market Fit w/ Sustainable Good Margins --> CSPs, pay to remove the headache [we never got to this stage] Side Note: w/ GPUs, CSPs rarely make sense ------ acwan93 I agree with you OP. Our company provides on-premise ERP systems to small (we’re talking at most 20 person companies) wholesale distributors. Pre-COVID, I was pushing for a cloud solution to our product and pivoting our company towards that model. We’re at a hybrid approach when COVID hit. What ends up happening with an on-premise/hybrid cloud model is we end up doing a lot of the sysadmin/IT support work for our customers just to get our ERP working. This includes getting ahold of static IP addresses (and absolving responsibility), configuring servers/OSes, and several other things along the same vein that’s wholly irrelevant to the actual ERP like inventory management and accounting. Long story short, these customers of ours end up expecting us to maintain their on-premise server without actually paying for help or being knowledgable about how it all works. We keep pitching them the cloud but they’re not willing to pay us a recurring fee even though it actually saves the headaches of answering the question “who’s responsibility is it to make sure this server keeps running?" 
I think a lot of these answers here are dealing with large-scale products and services where the amount of data and capital costs is so massive it makes sense to start hiring your own admins solely to maintain servers. For these small mom-and-pop shops who are looking for automation, the cloud is still the way to go. ~~~ jolmg Deja vu. I think you totally hit the nail on the head with that last paragraph. On-premise ERP systems probably only make real sense for (non- small) companies that wish to avoid relying on the internet (because e.g. their business strategy requires that freedom) and can hire long-term sysadmins/programmers that can provide support to those systems. ~~~ acwan93 Have you had experience selling on-premise systems? I’m really curious how other companies handle the sysadmin and IT support issue. ------ allenrb Finance space, under 100 people. Most servers are either latency-sensitive or 24/7 fully-loaded HPC. Neither case fits the cloud model. We do use cloud for build/test, though. ------ Jugurtha Many organizations do have private clouds. If by cloud you mean a public cloud like Google, Amazon, or Microsoft, then forget about it; not with these companies piping data directly to U.S intelligence. ~~~ p1necone What's the difference between private cloud and on prem? Does it just mean you let everyone spin up VMs etc rather than requiring them to go through IT? ~~~ toomuchtodo Mostly. Cloud is just bin packing compute and data storage. There are many use cases where it’s cheaper for you to host the hardware you cloud on instead of a public cloud provider. ------ krageon You've been managing servers for quite some time but you've never considered the security implications of hosting all of your sensitive data on someone else's computer? You say you're not trolling but I genuinely don't see how those two facts are compatible, except if you work in an industry where the data just doesn't matter. If that were the case though, you shouldn't feel compelled to judge what banking or medical institutions are doing. ~~~ aspyct We use a public cloud, and believe me: we do consider the security implications. We veeery much do. ------ nikisweeting Quebec power and internet pricing is really competitive. For residential services I pay $0.06/kw + $80/mo for 1Gbps fiber with 1ms ping to 8.8.8.8 (USD). As a result, I run a power-hungry Dell r610 with 24 cores and 48GB of ram with 20+ services on it for many different aspects of my company. All the critical stuff runs on DigitalOcean / Vultr, but the 20+ non-critical services like demo apps, CI/CD, cron workers, archiving, etc. run for <$200/yr in my closet. ~~~ blaser-waffle This is also why there are a lot of data centers in the Greater Montreal area, FWIW. ------ ex3ndr We are small team startup and i am personally annoyed about pricing of a clouds. I have a 3 smallish VM for build server + managed SQL. It cost 500$/mo. It doesn't make sense. Having my own VMs on ESXi makes everything very different - most of the time this VMs do nothing, but you want to make them performant from time to time, so there are a plenty of resources because all other vms are too mostly IDLE. In cloud they are billed as if they are 100% loaded all the time. I am not really satisfied with latencies and insane price for egress traffic. I just can't do backups daily since it could cost whooping 500$/mo just for the traffic. This is just insane, i can't see how it could scale anywhere for B2C market. 
For B2B it might work really well though, since revenue per customer is much higher. We are not moving to our own DC; we just keep the realtime stuff in the cloud, and anything that is not essential is being moved somewhere else. A bonus is that you need off-site backups anyway, in case the cloud vendor just bans you and deletes all your data. Startups might move fast and iterate, but if you don't have your own servers you always end up reducing your usage because costs could grow fast, effectively reducing your delivery capacity.

------ leroy_masochist Background: I consult extensively in the SaaS space and ask people this question all the time in the course of strategy reassessments, transactional diligence, etc. 1\. Physical control over data is still at a premium for many professional investors. As a hedge fund CIO told me recently when I asked her why she was so anti-cloud migration, "I want our data to live on our servers in our building behind our security guard and I want our lawyer, not AWS's, to read the subpoena if the SEC ever comes for us." 2\. There are a lot of niche ERP- and CRM-adjacent platforms out there -- e.g., medical imaging software -- where the best providers are still on-prem focused, so customers in that space are waiting for the software to catch up before they switch. 3\. A lot of people still fundamentally don't trust the security of the cloud. And I'd say this distrust isn't of the tinfoil hat, "I don't believe SSL really works" variety that existed a decade ago. Instead it's, "we'd have to transition to a completely different SysAdmin environment and we'd probably fuck up an integration and inadvertently cause some kind of horrendous breach".

------ Scaevus For our case, the need is very specific, as we are working with mobile apps. Building iOS apps requires macOS, and even though there are some well-known "Mac hosting" services, none of them are actual cloud services similar to DigitalOcean, Azure, AWS, etc. So it is much less expensive and actually easier to scale and configure if we host the Macs on-prem. (Off the record: if it is for internal use only, you can even stick in a few hackintoshes for high performance.)

------ arghwhat Quite frankly, "cloud" is a convenience and elasticity service at a steep premium, with downsides. Contrary to popular belief, it does not in the slightest save you a sysadmin (most just end up unknowingly giving the task to their developers). And contrary to popular belief, the perf/price ratio is _atrocious_ compared to just buying servers. For some of the loads I had been doing the math for, I could rent a colo _and_ buy a new beefy server every year, with money to spare, for the yearly cost of something _approximating_ the performance in AWS...

------ ai_ja_nai It's the people cost, not the hardware. Hardware is super cheap:
\- A 40-slot rack, with gigabit fiber, dual power and a handful of public IP addresses, costs on average 10000€/y.
\- A reconditioned server on eBay with 16 cores and 96GB of RAM costs 500€ (never seen them break in 3 years).
\- A brand new Dell PowerEdge with a 32-core AMD EPYC and 64GB of RAM will cost 3000€.
\- Storage is super cheap: 500GB of SSD costs 80€ (consumer stuff is super fine as long as you wisely plan between redundancy and careful load) and rotational disks are even cheaper. Never seen a rotational disk break.
Once bought, all of this is yours forever, not for a single month. You can pack very remarkable densities in a rack and have MUCH more infrastructure estate at your disposal than you would ever afford on AWS.
The flip side of the coin is that you need operations expertise. If it's always you, then ok (although you won't be doing much more than babysitting the datacenter). Otherwise, if you need to hire a dedicated person, people are the most expensive resource yet, and that should definitely be added to the cost of operations.

------ dathinab A company which:
1\. Does security-critical stuff (as in, a breach would affect the security of people, not just data).
2\. Besides certain kinds of breaches, has lowish requirements for performance and reliability (short outages of a few minutes are not a big problem; even outages of half a day or so can be coped with).
3\. Has slightly paranoid founders, with a good amount of mistrust of any cloud company.
4\. Has founders and a tech lead who have experience in some areas but badly underestimate the (time) cost of managing servers themselves, and how _kinda_ hard that is to do securely by yourself (wrt. long-term DDoS and similar).
So was it a good reason? Probably not. But we still went with it. As a side note, while we did not use the cloud, _we didn't physically manage servers either_. Instead we had some dedicated hardware in some compute center in Germany which they did trust. So no "physical" management, securing etc. needed, and some DDoS and network protection by default. Still, we probably could have had it easier without losing anything. On the other side, if you have dedicated server hardware in some trusted "local-ish" compute center, it's not _that_ bad to manage either.

------ prirun I did a startup with a co-founder in 1998, before cloud was a thing. We hosted at above.net first, then he.net following above.net's bankruptcy. Both were very good and we never had colo-related problems with either, though he.net was significantly cheaper. We started with 2 white-box PCs as servers, with 2 mirrored RAID1 drives in each. We added a 3rd PC we built ourselves: total nightmare. The motherboard had a bug where, when using both IDE channels, it overwrote the wrong drive. We nearly lost our entire business. Putting both drives on the same IDE channel fixed it, but that's dangerous for RAID1. A few years in, we needed an upgrade and bought 5 identical SuperMicro 2U servers with hardware RAID1 for around $10K. Those things were beasts: rock solid, fast, and gave us plenty of capacity. We split our services across machines with DNS, and the 5 machines were in a local LAN to talk to each other for access to the database server. The machines' serial ports were wired together in a daisy-chain so we could get direct console access to any machine that failed, and we had watchdog cards installed on each so that if one ever locked up, it automagically rebooted itself. When I left in 2005, we were handling 100s of requests/s, every page dynamically generated from a search engine or database. Of course it took effort to set all this up. But the nice thing is, you control and understand _everything_. Some big company doesn't just do things to you, where you have no idea what is happening and they're not talking. And if things do go south, you can very quickly figure out why, because you built it. The biggest mistake we made was in the first few years, where we used those crappy white-box PCs. Sure, we saved a couple thousand dollars, but we had the money and it was a terrible trade. Night and day difference between those and real servers.

------ quanto > The more I learn, the more I believe cloud is the only competitive solution today, even for sensitive industries like banking or medical.
> I honestly fail to see any good reason not to use the cloud anymore, at least for business. Cost-wise, _security-wise_, whatever-wise.

[emphasis mine]

Most people here seem to point out cost and utilization. I would like to offer another perspective: security. I worked in both of these industries: finance ("banking", not crypto or PG) and medical (within a major hospital network). The security requirement, both from practical and legal perspectives, cannot be overstated. In many situations, the data cannot leave an on-prem air-gapped server network, let alone use a cloud service. It cost us more to have on-prem servers, as we needed dedicated real estate and an engineering team to maintain them. Moreover, the initial capital expenditure is high -- designing and implementing a proper server room/data center with climate control, power wiring, and a compliant fire-extinguishing system is not trivial.

------ gorgoiler For business logic at a fairly large school, the free and open tools that make the cloud so productive get used here a lot. We get to leverage commodity hardware and network infra very effectively for on-premises[1]. You have to have a good recovery plan for when equipment X's power supply fails, but when deploy is all automated it's very easy to overcome by swapping bare metal, and easy to drill (practice) during off hours. This makes it _much_ easier to meet regulatory compliance: either statutory, or regulations your org has created internally (e.g. financial controls in-org, working with vulnerable people or children, working with controlled substances, working with sensitive intellectual property.) Simply being able to say you can pull the plug on something and do forensic analysis of the storage on a device is an important thing to say to stakeholders (carers, carers' families, pupils' parents.) I'm so grateful to be living in the modern age when "cloud" software exists[2], but I don't have to be in the cloud to use it. The downside: you need trained staff, and it's completely inappropriate if you need any kind of serious bandwidth or power consumption, or to support a round-the-clock business (which we do not because, out here on the long tail, we work in a single city so still have things like evenings, weekends and holidays for maintenance!) — [1] Premise vs premises is one of those "oh isn't the English language awful" distinctions. Premise is the logical foundation for some other theory ("the premise for my laziness is that because the sky is grey it will probably rain, so I'm not going to paint the house") whereas the premise_s_ means physical real estate property ("this is my freshly painted house: I welcome you onto the premises".) [2] Ansible, Ubiquiti, ARM SBCs like the Raspberry Pi, Docker, LXC, IPv6 (making global routing far more tractable; IPv4 for the public and as an endpoint to get on the VPN.)

------ hourislate If you're interested, here is an article about how Bank of America chose to build out its own cloud to save about 2 billion a year. [https://www.businessinsider.com/bank-of-americas-350-million...](https://www.businessinsider.com/bank-of-americas-350-million-internal-cloud-bet-striking-payoff-2019-10?op=1)

------ ps Cost. We recently moved one rack to a different DC in the same city and used DigitalOcean droplets to avoid downtime. Services running on Linux were migrated without high availability (e.g. no pgsql replication, no Redis cluster, a single Elasticsearch node...)
and we just turned off the Windows VMs completely due to licensing issues and no need to have them running at night. The price of this setup was almost 4x higher than what we pay for colo. Our servers are usually <5-year-old Supermicro; we run on OpenStack and Ceph (S3, rbd) and provide VPNaaS to our clients. AWS/GCP/Azure was out of the question due to cost. We considered moving the Windows servers to Azure, with the same result - the cost of running Windows Server (AD, Terminal, Tomcat) + MS SQL was many times higher than the price of colo per month. It is bizarre that you could buy the server running those VMs approximately every 3 months for the Azure expenses (Xeon Platinum, 512GB RAM).

------ majkinetor There is no cloud solution that lets you ship and forget a system. Come back in 15 years? It still works. Is that possible with cloud even over short periods, like 2 years? No. Will it ever be possible? No. That's the primary reason for me. I can use cloud only for stuff that is nice but not mandatory for the service to work, like a status page. Plus, work is more enjoyable than using somebody else's stuff.

------ whatsmyusername Cloud falls over if you have a lot of data egress. I don't work on those types of workloads (mainly in PCI and a little bit of HIPAA) so I stick to cloud for the sheer convenience factor (it's easier to fix a security group than having to drive to the office and plug in somewhere like I had to earlier today). Dealing with hardware has become a very specific skill set. I have it, but I don't enjoy it, so I don't look for it. I still have to build physical networks occasionally (ex: we are building a small manufacturing facility in a very specific niche that's required to be onsite for compliance reasons) but the scale is so small that I can get away with a lot of open source components (pfSense Netgates are great) and not have to use things that are obnoxious to deal with (if I never have to deploy Cisco anything ever again I won't be upset).

------ darrelld I have a friend who is an IT manager for a large chain hotel in the Caribbean. I keep asking him why they still use on-premises equipment and it boils down to:
* Cost for training / transitioning + sunk cost fallacy
* Perceived security risk (right or wrong)
* IT is mostly invisible and currently "works" with the current arrangement, so why change?

------ avifreedman We're (Kentik) a SaaS company and one key to having a great public margin is buying and hosting servers. In our case, we use Equinix in the US and EU to host, with Juniper gear for routing/switching, and the customary transit and peering at the edge. One secondary factor is that we've only monotonically increased, and it's way cheaper to keep 10%-15% overprovisioned than to be on burst pricing with 50%+ constant load. But the simplest math is - we have > 100 storage servers that are 2U, 26x2TB flash, 256GB RAM, 36 cores. They cost $18k once, which we finance at pretty low interest over 36 months (and they really last longer than that). Factor in $200-400/mo to host each, depending (I think it's more like $200, but it doesn't matter for the cloud math). That same server would be many $thousands/month on any cloud we've seen. Probably $4-6k/mo, depending on the type of EBS-ish storage attached. Or with the dedicated server 'alternative' they are moving to offer (and Oracle sorta launched with). It'd be cheaper but still > 2x as expensive on Packet, IBM dedicated, OVH, Hetzner, Leaseweb (OVH and Hetzner the cheapest, probably).
Three other factors for us: 1) Bandwidth would be outrageous on cloud but probably not as outrageously high as just the servers, given that our outbound is just our SaaS portal/API usage 2) We'd still need a cabinet with router/switch infra to peer with a couple dozen customers that run networks (other SaaS/digital natives and SPs that want to send infrastructure telemetry via direct network interconnect). 3) We've had 5-6 ops folks for 3 of the 6 years, 3-4 for the couple years before that. As we go forward, as we double we'll probably +1. It is my belief that we'd need more people in ops, or at least eng+ops mix, if we used public cloud. But in any case, the amount of time we spend adding and debugging our infra is really really really small, and the benefit of knowing how servers and switching stuff fails is huge to debugging (or, not having to debug). All that said - we do run private SaaS clusters, and 100% of them are on bare metal, even though we _could_ run on cloud. Once we do the TCO, no one yet has wanted to go cloud for an always-on data-intensive footprint like ours. Good luck with your journey, whichever way you go! And happy to discuss more, email in profile ------ kokey I think when the economy is hit hard a lot of companies are going to have to look at what they can do to make sure they remain profitable since investor appetite has changed. This means some companies will have to look at what is being spent on cloud providers. Renting RAM by the hour makes sense if you are optimistic about future revenue growth, but if the market changes and you have to worry about how you can sustain profits while just keeping your current customer base then this makes a lot less sense. The cloud vs on prem argument also really includes all the things in between, e.g. colo, managed servers, VPSes and also better tooling to manage your own VM and container clusters on these, which I think will now get increased attention and competition when people are considering alternatives to the big cloud providers in order to bring down costs. ------ MorganGallant I picked up a few refurbished dell servers off Ebay for super cheap a while back - and usually use these with Cloudflare Argo tunnels to host various servers. However, since these are just sitting on the floor next to my desk, usually rely on cloud for any applications with high uptime requirements. Recently though, I've been working on some distributed systems type projects which would allow these servers to be put in different physical locations (and power grids), and still continue to operate as a cohesive whole. This type of technology definitely increases my confidence in them being able to reliably host servers. I wouldn't want to be reliant on the cloud for large scale services though, from my understanding you can get some crazy cost savings by colocating some physical servers (especially for large data storage requirements). ------ pavelevst AWS is one of most expensive options, and far not perfect, I can’t understand why people consider it as default choice... To compare - dedicated server on hetzner (core i7, 32gb ram, 20tb network) is cheaper than medium VM on AWS. If the product is growing, cloud cost can quickly become the biggest expense for company. 
Then it will make sense to spend some time making things run in a more cost-effective way. I think if you choose cloud hosting that costs about the same as renting a dedicated server plus setting up virtualization yourself, then it's a fair choice (you can check on [https://www.vpsbenchmarks.com/](https://www.vpsbenchmarks.com/) or similar). Another sweet configuration is dedicated servers with Kubernetes: good user experience for developers, easy to set up and maintain, easy to scale up/down.

------ bpyne My employer is a mid-sized university. Cost is the main issue. Our environment is a mixture of in-house developed apps and COTS. Until recently, our major COTS vendors didn't have cloud solutions. Now they have cloud solutions, but they're far too costly for us to afford. So we need to keep them in-house and continue to employ the staff to support them. Our in-house apps integrate the COTS systems. Our newer apps are mostly in the cloud. But the older ones are in technologies that need to stay where the database server is, which is in our server room for the reason stated in the last paragraph. Rewriting the apps isn't on our radar due to new work coming in. Historically, outsource vs. in-source seems to ebb and flow. The clear path is usually muddied when new technologies come out to reduce cost on one side or the other.

------ gen220 My SO's brother works in the studio video recording industry, and is a very IT-savvy guy. We had a long discussion last holiday season about the state of cloud adoption in that industry. He told me (this is obviously secondhand) that most of the movie industry is not only off the cloud, but exclusively working in the realm of colocating _humans_ and data (footage). This is for many reasons. The one that comes back to me now is that the file sizes are HUGE, because resolution is very high, so bandwidth is a major concern. Editors and colorists need rapid feedback on their work, which demands beefy workstations connected directly, with high-bandwidth connections, to the source files. Doing something like this over a long-distance network (even if the storage was free) would be prohibitively expensive, and sometimes literally impossible. So the write loads are basically the antithesis of what the cloud is optimized for: "random sequential reads of typically short length, big append-only writes". The big production houses (LucasArts famously) are also incredibly secretive about their source material, and like to use physical access as a proxy for digital access. It leads to some seemingly strange (to me as a cloud SWE guy) decisions. He pretty much exclusively purchases top-of-the-line equipment (_hard drives/SSDs_), and keeps minimal if any backups for most projects because there simply isn't any room. It's a recipe for disastrous data loss, and apparently it's something that happens quite often to this day. It's just extremely prohibitively expensive to do version control for movie development. I don't know to what extent cloud technologies can solve for this domain. I asked him if Netflix was innovating in this area, since they're so famously invested in AWS, but he said that they mostly contracted out the production stuff and only managed the distribution, which makes sense. The contractors don't touch the cloud at all, for the most part. Again, most of this is secondhand; I'd be curious to hear more details or reports from other people in the movie industry.

------ kgc We moved all AWS servers to a colo. Saving 80% of the cost.
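For a rough sense of how a saving on that order can happen, here is a toy back-of-envelope comparison; every number below is an illustrative assumption, not anyone's actual bill:

```python
# Toy comparison of on-demand cloud vs. colo + owned hardware.
# All figures are made-up monthly USD amounts for illustration only.

SERVERS = 10               # assumed fleet size
CLOUD_INSTANCE = 700       # assumed on-demand price of one comparable VM
CLOUD_EGRESS_PER_TB = 90   # assumed egress price per TB
EGRESS_TB = 20             # assumed monthly egress volume

cloud_monthly = SERVERS * CLOUD_INSTANCE + EGRESS_TB * CLOUD_EGRESS_PER_TB

SERVER_PRICE = 4000        # assumed purchase price of a comparable box
AMORT_MONTHS = 36          # write the hardware off over three years
COLO_RACK = 1000           # assumed cabinet rent incl. power and transit
REMOTE_HANDS = 300         # assumed budget for spares and remote hands

colo_monthly = SERVERS * SERVER_PRICE / AMORT_MONTHS + COLO_RACK + REMOTE_HANDS

print(f"cloud ~${cloud_monthly:,.0f}/mo vs colo ~${colo_monthly:,.0f}/mo")
print(f"saving ~{1 - colo_monthly / cloud_monthly:.0%}")
```

Most of the gap in this sketch comes from amortizing hardware over several years and from colo bandwidth being flat-rate rather than metered; plug in your own numbers and the picture can obviously flip.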
------ ocdtrekkie There are very few niches where the cloud makes sense: Namely, where you are either too small to benefit from a single server and a single entry-level IT guy (think three or four person companies with low need for technical competency), or where you are expecting rapid growth and can't really rationally size out your own hardware for the job (in this case, the cloud is useful initially, leave it later once your scale is more stable). In every other case, you are paying for the same hardware you could buy yourself, plus the cloud provider's IT staff, plus your own IT staff which you likely need anyways to figure out how to deal with the cloud provider, _and_ then the cloud provider's profit margin, which is sizeable. ------ amq Surprised no one mentioned a third option: cheap vps providers like digitalocean or vultr. They've become real contenders to big clouds recently, providing managed databases, storage and k8s. And their bandwidth costs are close to what you'd get in colo. ------ cpascal My company cannot run our infrastructure in the cloud because we do performance/availability monitoring from ~700 PoP’s around the world. Not running our infrastructure in the cloud is part of our value proposition. Our customers depend on us to detect and alert them when their services go down. We _have_ to be operational when the cloud providers are not, otherwise we aren’t providing our customer with a valuable service. Another reason we don’t run in the cloud is because we store a substantial amount of data that is ever increasing. It’s cheaper to run our own SAN in a data center than to store and query it in the cloud. The final reason is our workloads aren’t elastic. Our CPU’s are never idle. In that type of use case, it’s cheaper to own the hardware. ------ znpy Many of the customers of my previous employer had hardware on premises, including the customer that I was handling. It had both compute and storage (netapp). It had two twin sites in two different datacenters. The infra in each site consisted basically in six compute servers (24c/48t, 128gb ram) and netapp storage (two netapp heads per site + disk shelves). Such hardware has basically paid itself across its seven or eight years of life, and having one of the sites basically in the building meant basically negligible latency. The workload was mostly fixed, and the user base was relatively small (~1000 concurrent users, using the services almost 24/7). It really checks all of the boxes, does all it is supposed to do and in a cheap manner. ------ alkonaut What do you mean by "servers"? Anything that isn't a client machine, or just customer-facing infra? We have a pool of 15 build servers for our CI. They run basically 100% cpu during office hours and tranffer terabytes of data every day. They have no real requirements for backup, reliability etc, but they need to be fast. If I run a pricing calculator for hosting those in the cloud it's ridiculous. We are moving source and CI to cloud, but we'll probably keep all the build machines on-prem for the foreseeable future. For customer facing servers the calculation is completely different. More traffic means more business. Reliability, Scalability and backup is important and so on. ------ urza I would like to add, that even for small teams/projects the cost is the reason. We have small business project, with only one server, few hundred customers, with varying traffic (database sync between thick clients, web portal,..). 
We were considering cloud, but with the features we needed (a few different databases, a few APIs, ...) it would be circa $1000/month (with reasonable response times - it could be cheaper, but terribly slow). Having our own on-premises server, the price is recouped after just a few months, and then it's just the minimal cost of connectivity, energy and occasional maintenance... it just didn't make any sense for us to choose cloud.

------ adev_ I worked in Switzerland, and a reason to use on-premise here is _security_. Many retail banks, asset management companies or high-security companies refuse to use any public cloud. They want to have a strict and traceable list of people who have physical access to their hardware. This is in order to control any risk of data leaks [1]. In practice they generally use on-premise installations. They rent space in a computer center and own a private cage there, monitored with multiple cameras. Meaning they know exactly who touches their hardware and can enforce security clearances for them. [1]: [https://en.m.wikipedia.org/wiki/Swiss_Leaks](https://en.m.wikipedia.org/wiki/Swiss_Leaks)

------ jordanbeiber My last three employments have had me and my team build three different platforms with three different "providers". Many lessons learned! In chronological order:
1\. E-commerce, low volume (1000-5000 RPM), very high-value conversions, highly localized trade. We built an on-prem stack using Hashicorp here. This place had on-prem stuff already in place, the usual vendor-driven crap - expensive hypervisor, expensive spoffy SAN, unreliable network. Anyway, my platform team (4-5 guys) built a silo on commodity hardware to run the new version of this site. This is a few years back, but the power you get from cheap hardware these days is astounding. With 6 basic servers, in two DCs, stuffed with off-the-shelf SSDs, we could run the site and dev teams no problem. Much less downtime compared to the expensive hyperconverged blade crap we started on, at basically no cost. There's a simplicity that wins out using actual network cables and 1U boxes... LXC is awesome btw! Using "legacy" VMware, EMC, HP etc for non-essential on-prem? Cloud is tempting!
2\. Very high volume (billions of requests per day), global network. AWS. Team tasked with improving on-demand scalability. We implemented Kubernetes on AWS and it really showed what it's about! After 6-7 months of struggle with k8s < 1.12, things turned around when it hit 1.12-1.13-ish and we got it to act how we wanted. Sort of, at least. Cloud is just a no-brainer for this type of round-the-clock, elastic workload. You'd need many millions up-front to even begin building something matching "the cloud" here. Lots of work spent tweaking cost though. At this scale, cloud cost management is what you do.
3\. Upstart dev shop. No RPM outside dev (yet). Azure. About 30 devs building cool stuff. Azure sucks as IaaS; they want you to PaaS, that's for sure. The cloud decision had been made already when I joined. Do you need cloud for this? No. Are there benefits? Some. Do they outweigh the cost? Hardly.
In the end it will depend on how and where your product drives revenue. We pay for a small local dev datacenter quarterly, which I find annoying. Just some quick thoughts off the top of my head (on the phone, so excuse everything). Happy to discuss further!

------ Cthulhu_ Our software runs in core (mobile) network systems, but that's at our customers.
We ourselves have a rack in our office that runs things like Git repos, project management, virtual machines for development / testing, build servers, and instances for trainings. We're concerned about corporate espionage and infiltration, so we can't trust our servers being out of our sight. Most people don't have the code on their physical machines either; I'm a cocky breath of fresh air in that regard in that I prefer my stuff to run locally instead of the (slow, underpowered) VMs, I trust Apple's encryption a lot. ------ rjgonza I work for a US Stock Exchange, and some of the technologies that we rely on are not permitted in the cloud. The performance metrics we need are usually only achievable on highly tuned bare metal deployments as well so cloud is usually not an option. I guess it really depends on your workload, but I think there is a very healthy amount of production being deployed and worked on businesses own datacenter/private clouds. ------ sokoloff We have mostly switched to, or are in the midst of switching to, the cloud. Services that we will continue to run on-premises (as an exception to that rule) are some machine learning _training_ clusters (where we need a constant, high-level amount of GPU and cloud provider pricing for GPU machines is very far off the mark of what you can build/run yourself) and some file shares for our production facilities where very large raster files are created, manipulated, sent to production equipment, and then shortly afterwards deleted. Most everything else is going to the cloud (including most of our ML _deployed model_ use cases). ------ DyslexicAtheist this assumption doesn't consider threat models where the vendor could be part of your problem. E.g. if you're based in country A and work on super secret new Tech for an emerging industry, then hosting in country B may not be an option. Imagine a company in Europe that decides to host it's files on Alibaba Cloud in the US. Imagine the US Department of State hosting it's files with Google. Imagine an energy company working on new reactor Tech, ... Imagine a Certificate Authority which has an offline store of root certificates which need to come online to sync twice a day. Imagine cases where you need a hardware HSM. Then there is also Cost as others have pointed out. AWS cost structure is so complex that whole business models[1] have sprung up to help you optimize the price tags or reduce the risks of huge bills. that's right: you need to have a commercial agreement with another partner that has nothing to do with your cloud just to work around aggressive pricing. The guy who started this ~2 years ago has grown to 40+ people (organically), is based in Switzerland and is still hiring even in this recession. It should give you an idea of how broken the cloud is. Lastly there is also the lock-in. All the hours that you have to sit down and learn how the AWS IAM works is wasted once you decide to move to another cloud. The cost for learning how to use the 3rd party API is incurred by you not the cloud vendor. For people who think lock-in isn't much of a problem remember your whole hiring strategy will be aligned to whatever cloud vendor you're using (look at job description that already filter out based on AWS or GCP experience). Lock-in is so bad that for a business it is close to the rule of real-estate (location, location, location), only that it's to the advantage of the cloud vendor not you as the customer. 
[1] optimyze.cloud [2] _" I have just tried to pull the official EC2 price list via JSON API, and the JSON file is 1.3gb"_ [https://twitter.com/halvarflake/status/1258161778770542594](https://twitter.com/halvarflake/status/1258161778770542594) ------ blackflame7000 If you have a lot of data but not a lot of users, it's prohibitively expensive to pay monthly hosting and network egress fees when you can buy cheap hard drives, use ZFS, and whatever server protocol you desire. ------ mcv I notice all the banks I'm working for are moving to the cloud. A few years ago they all had their own data centers, sometimes really big, well-designed custom data centers. But they're all moving to the cloud now. I've personally been wondering whether that's wise, because financial data and the handling of many banking processes are a bank's core business. It makes sense that a bank should be in control of that. And it needs to obey tons of strict banking data regulations. But apparently modern cloud services are able to provide all of that. ------ Xelbair Cloud costs way way more compared to on-prem solution for my company. We need random access to about 50TB of files, and quite a decent number of VMs. For storage on-perm vs cloud: buy was cheaper to have after 3(!) months. For VMs(some of them could be containerized though): 1 year It was cheaper to buy second-hand decent server, slap SSDs and just install a decent hypervisor. Those costs also include: server room, power usage, admins etc. We do use cloud backups for the most important stuff. Cloud is cheaper if your business is a something that is user-based - as in you might need to scale it, hard. If you aren't doing anything like that it is absurdly expensive. ------ benbro Can you recommend dedicated hosting provider that: 1\. Has US, EU and Asia regions. 2\. Let you rent servers per month. 3\. Has decent pricing. Not premium, doesn't have to be low cost. I expect excellent egress pricing and 1/2-1/4 cost for CPU compared to the cloud. 4\. Reliable network. GCP premium network is very good. How does dedicated providers and VPS providers (Linode and DO) compare? 5\. Easy to use management and dashboard. Experienced really bad dashboards and hard to use Java tools to install and manage dedicated servers. ------ lrpublic \- cost, as well evidenced in other comments here. The hyperscalers are orders of magnitude more expensive than dedicated hosting or using collocation providers. \- lock in, all the hyper scalers want to sell you value add services that make it hard or impossible to move away. \- concentration risk, hyper scale providers are a well understood target for malign actors. It’s true they are better protected than most. \- complexity, if you think about how little time the hyperscalers have been operating in comparison with corporate IT they have created huge technical debt in the race to match features. ------ sgt We do a hybrid approach which I think makes sense for a lot of companies. Our mission critical stuff runs in the cloud, but anything that has to do with staging environments and development we do on-premises. It's pretty easy to host yourself, if you have a couple of decent engineers looking after it (depending on scope, of course!). Redundant power, redundant internet connections, and a few racks of Dell servers and gigabit switches. Why did I mention Dell? They just don't seem to die. We used HP for a few years but had a few bad experiences. 
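For a rough sense of what "looking after it" means day to day: most of it ends up being small glue scripts rather than anything exotic. A toy sketch of the kind of liveness check we mean, in Python (the hostnames and ports are made-up placeholders, and a real setup would lean on whatever monitoring stack you already run):

    # Toy liveness check for a handful of on-prem boxes.
    # Hostnames/ports are placeholders, purely for illustration.
    import socket

    HOSTS = {
        "staging-01.internal": 22,   # SSH
        "staging-02.internal": 22,
        "git.internal": 443,         # self-hosted Git over HTTPS
    }

    def is_up(host, port, timeout=3.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host, port in HOSTS.items():
        print(f"{host}:{port} {'up' if is_up(host, port) else 'DOWN'}")

Something like that on a cron, plus the vendor's own hardware monitoring, covers a lot of the day-to-day; the rest is mostly firmware updates and swapping the occasional disk.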
------ 2rsf Security (we're a bank)
~~~ aspyct How is the cloud less secure than your on-prem servers? I would argue that it's easier to keep track of all the threats with the tools available from big cloud providers.
~~~ badpun The big question is - how much do you trust these providers.
~~~ quicklime I would guess that approximately zero banks own data centers that are operated by their own employees. There might be a few exceptions to this, but the reality is that most banks don't view technology as part of their core business. So this largely gets outsourced to IT consulting firms like Infosys, IBM, Wipro, etc. The big question is - how much do you trust _these_ providers, and do you think they are more competent at security than Amazon/Google/Microsoft?
~~~ dathinab Or formulate it differently: 1\. Trust a provider whose whole existence relies on trust, and which you can audit or at least cross-check the audits and security processes of (as a bank you are normally not a small customer). 2\. Trust a provider which might, 1st, be a potential competitor in some business fields; 2nd, is so big that it can easily afford losing you; 3rd, for the same reason doesn't allow you any insights into their internal processes; etc. Plus, many of the banks that have their own hardware also have their own IT team. So it's often about trusting your own people. I mean, either you keep your IT or you outsource _and_ go into the cloud. Keeping local servers but outsourcing IT at the same time seems kind of not very clever tbh.
------ samcrawford Cost is the sole reason for us as well. We have ~600-700 dedicated servers around the globe, and generate a lot of egress traffic (~20PB/mo). We last ran the figures a year or so ago, and it'd cost us around 13-15x in network costs alone. A common thread of a lot of the replies to this post is network traffic costs. If one of the cloud providers can figure out a way to dramatically (and I mean at least 10x) reduce their network transfer pricing, then I think we'll see a second wave of companies adopting their services.
------ irrational It is so so so much cheaper. We moved to AWS and tried setting up our servers with specs that were comparable to what our physical servers had been. We just about died after the first month. Our bill was higher for one month than for multiple years running our physical servers. We had to dial them way back to get the run rate down to a reasonable number. Now we have serious buyer's remorse. Performance is terrible. The cost is still more per month than we ever had with our physical servers, by a large amount.
------ fpierfed We do not use the cloud. We operate (24/7) facilities in remote locations where we do not have a super reliable internet connection (we do have redundant links, including three different fibres on distinct paths plus radio links, but still). For this reason alone nothing critical can be in the cloud. In our experience, however, cloud offerings are not that cheap compared to purchasing and operating your own machines. Besides, one still needs sysadmins even when operating infrastructure in the cloud.
------ kjgkjhfkjf On-prem can make sense where your computing needs are constant and predictable. You can arrange to have exactly what you need, and the total cost of buying and operating it may be less than it would be to get comparable resources in the cloud. If your computing needs vary over time, then provisioning on-prem for peak load will mean that some of your resources will be idle at non-peak times.
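To make that trade-off concrete, here's a rough back-of-the-envelope sketch in Python. Every number in it is made up purely for illustration (real hardware, colo and instance prices vary wildly), but it shows where the idle-capacity cost comes from:

    # Toy comparison: own enough servers for peak load vs. renting for actual use.
    # All figures are illustrative placeholders, not real quotes.
    PEAK_SERVERS = 20              # capacity needed to survive the busiest hour
    AVG_SERVERS = 6                # capacity actually used in an average hour
    ONPREM_MONTHLY_PER_BOX = 150   # amortised hardware + power + space, per server
    CLOUD_MONTHLY_PER_BOX = 400    # equivalent on-demand instance, per month

    onprem = PEAK_SERVERS * ONPREM_MONTHLY_PER_BOX      # you pay for peak 24/7
    cloud = AVG_SERVERS * CLOUD_MONTHLY_PER_BOX         # you pay only for what runs
    idle = (PEAK_SERVERS - AVG_SERVERS) * ONPREM_MONTHLY_PER_BOX

    print(f"on-prem, sized for peak: ${onprem}/month (${idle} of that is idle capacity)")
    print(f"cloud, pay per use:      ${cloud}/month")

The wider the gap between peak and average load, the more of the on-prem bill goes to machines that sit idle most of the time.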
It may be cheaper to use cloud resources in cases like these, since you only need to pay for the extra capacity when you need it.
------ blodkorv I am the CTO of a small company making payroll software. We don't have on-premise servers, but we are currently moving away from Azure to renting VPSs from a local provider. The cost benefits are huge, and since our app is mostly a normal web app we don't need that many fancy cloud things. And I don't see us needing them in the future. I really don't understand why a company doing similar things would want to go the cloud route. It's so damn expensive, and it's not always easy to use and set up.
------ iseletsk We are a software development company. Most of our compute/storage needs are for build/test cycles. We recently bought ~100K worth of additional hardware to migrate some of that work off AWS. The storage / virtualization is done using Ceph & OpenNebula. Including colocation/electricity/networking costs, the investment will pay for itself in ~9 months. If I include deployment costs & the work to migrate the jobs off AWS -- it will pay for itself in 11 months.
------ starpilot PII and draconian security policies. We are not a tech company, so we can't fine-tune or have nuanced policies; we just have to build a wall around everything. In our web password recovery process, we can't tell people if their login was correct or not, because that might help a brute force attacker infer they got that right. Even though we have that rate limited anyway. I don't know why we can't just tell people the login was found or not found, like banks etc.
~~~ vikramkr What industry? It's good practice not to share information like that in any context: attackers who have a bank of email addresses, and who are trying to figure out which of a few reused passwords might be used on a given website, would have a harder time if they don't even know whether the email has an account with that website, or if an alternate email is used instead, etc.
------ kkielhofner Not a company but a personal project: [https://github.com/krisk84/retinanet_for_redaction_with_deep...](https://github.com/krisk84/retinanet_for_redaction_with_deepstream/wiki) I haven't analyzed the TCO yet, but the bandwidth costs alone from hosting my curated model of 100GB in the cloud (Azure blob) have greatly exceeded my total operational spend from downloading the entire dataset and running training. By an order of magnitude.
------ jll29 The smartest approach is to be able to run anywhere, which is increasingly practical due to VMs, Docker etc. (At least funded) startups should start with the cloud, as speed to completion is key, but they can later optimize for cost. Elasticity of the cloud is also great, dealing with peak demands dynamically without having to purchase hardware. I'd suggest larger companies use at least two cloud vendors to add resilience (when MS Teams went down, so did Slack - I was told they both use MSFT's cloud).
------ tpae Because Elon doesn't believe in the cloud. We were one of the few teams that got AWS access, and we were told not to rely on it too much because it's temporary...
------ thecolorblue Just talking about my side project here: a local server + ngrok is easier to use and cheaper than anything in the cloud. In general, I would say I'd host any noncritical system on-prem.
------ edoceo Cost. We're a small company, a five person team, and we need a development environment. All our stuff is built around VM and Docker.
So scores of little test nodes that get run, or beta environment in the cloud was costly (>$300/mo base, sometimes 3x). For $1000 we put a box in the office we all VPN to that runs all the necessary VM and Docker. The variable, sometimes expensive cost for Test/Beta in cloud was replaced with low fixed cost. ------ nurettin Sure, for tiny intel servers it makes sense to rent vms. It won't start to hurt until 1.5 years later at which point the project needs to become profitable anyways. I run a couple of on-premise xeon gold machines with 96 gb ram and 40+ cores on each. Their total purchase cost was the monthly cost of renting them on the cloud. Also, you will never get the full benefit of the servers you use unless they are dedicated instances with no virtualization layer. ------ pachico It depends on what you have to do. If your stack includes a series of microservices/monoliths connected to the typical OLTP DB then you might very well sit entirely on cloud. Things change when you need heavy lifting like having big ElasticSearch or ClickHouse clusters, or any other operation that requires heavy CPU and high RAM capacity. In that case using providers like Hetzner can cost you 1/10 of the bill compared to AWS. ------ AdrianB1 Manufacturing plants across the world controlling production lines: no way to go in the cloud for that. We put the reporting data in the cloud, no problem there. ------ Yizahi "Server" is such a broad term. In our case, aside from cost as others already mentioned, the distance and latency is very important. The servers must be located as close to client devices as possible and reasonable, and they are synced with clients using PTP to microseconds (depends on the actual distance and topology). Cloud is a no go, and we a using bare metal K8S for graceful handling of failures and updates. ------ iso1631 Most of my equipment has physical interfaces, video and audio in and out Some equipment is very latency sensitive -- I'm talking microseconds, not milliseconds. More generic tasks need easy access to that specialist equipment (much of which doesn't quite grasp the concept of security) Given that we therefore have to run a secure network across hundreds of sites on multiple continents, adding a couple of machines running xen adds very little to the overhead. ------ thorwasdfasdf Well, for one thing: jobs. It takes a lot of IT man-power to manage an on- premise solution, especially when you run everything on-premise. Just imagine if the CTO were to switch the company to a cloud based solutions, it would save the company millions of dollars but also it would mean cutting a lot of jobs. Gov departments that use on-premise do so for security reasons and to maintain existing jobs. ------ timbre1234 Amazon/Azure/GCP - they're _businesses_. They charge you 50-70 points of margin in order to run your computers. If you're R&D-limited, that's not important to you, but if you're a more mature company that's cost-limited then it matters a lot. If it's a core part of your business, it'll never be cheaper to pay another company to profit from you. Period. ------ StreamBright Yes, some of the leadership thinks that they can build a better cloud than MS or AWS. It is pretty hilarious to watch how spectacularly they fail. [https://forrestbrazeal.com/2020/01/05/code-wise-cloud- foolis...](https://forrestbrazeal.com/2020/01/05/code-wise-cloud-foolish- avoiding-bad-technology-choices/) ------ mathattack We are going to the cloud, but you have to be careful. 
With on-prem the limit of the cost is the server. Someone writes inefficient code and it just doesn’t work. In the cloud there are 1000 ways to overspend and the vendors purposefully don’t make it easy to track or keep things under control. It’s kind of like outsourcing. If you don’t know what you are doing, cost goes up and quality goes down. ------ Mave83 Cloud is only good if you don't care about costs and plan to scale without looking back. For example, building a Ceph based Software Defined Storage with croit.io for S3 comes for 1/10 to 1/5 of the AWS price in TCO. Same goes for any other product in the cloud. If you only need it a short time up to 6 months, go to the cloud. If you plan to have the resources longer than 6 months go to Colocation. ------ gbasin Cloud will continue to dominate, even if it's more expensive. Why? Because the best companies are able to focus on what makes them special, and outsource the rest. Cost and security are important, but they may not be most important. In a business, the scarcest commodity is FOCUS. By outsourcing anything that isn't core to your product, you can excel at what differentiates you. ------ Darkstryder On top of what others have said, when outside the US the Cloud Act has been a big one for most previous companies I worked for. Using AWS / Azure / Google Cloud (even using datacenters from your own country) implies that the US government can access your data at will. As soon as you treat sensitive information, especially related to non-US governments, this becomes a blocking factor. ------ Areading314 In some industries, cloud is not an option. For example, certain privacy laws like HIPAA preclude uploading data to third parties, in which case, you need things to be on-prem. There are also a lot of places in the world where internet access is limited. Sometimes you need to solve problems beyond the simple "web saas in cloud" use case ------ reverseengineer Own servers cheaper than cloud. We calculated two years ago, it was up to 4x for HPC. Best cost-wise option is used servers. ------ sradman Legacy seems to be missing from the comments. Before the advent of Cloud IaaS (Infrastructure as a Service) a very large ecosystem of On-Premise hardware and software flourished. The question needs to be considered in the context of greenfield vs brownfield systems as the trade-offs involved differ drastically. ------ Spooky23 It all depends. Outside of SaaS, If you have a mature data center operating model and truly understand your costs, there won’t be a strong cost savings story for many types of workflows. If you suck and don’t understand costs, or don’t automate, or spend a lot of time eating steak with your Cisco team, you’ll save money... at first. ------ mister_hn There's a fake believe in my company that all the data must be on premise because of privacy concerns. We've never used cloud services and we do not want to use it. Some are saying it's a matter of costs, but you know? For a dual node server (hot standby) we were asked 120K € + 50K € only for configuration fees. ------ 32gbsd Cloud servers tend to hide their limitations behind payment tiers which makes it hard to really know how far you can push things. Also there are various turns, conditions, cache rules, change management strategies that are hidden when dealing with someone else's constantly changing box of magic. ------ lvturner We install in to remote locations - you can't access the cloud if your connectivity is down so local resources is a hard must. 
Though we have adopted something close to an "Edge Computing" solution... I guess it comes down to "Why not both?" :) I think it also depends on your definition of "server" ------ sergiotapia Just want to say I love threads like these. I hope one day where I work we can have two or three obscenely beefy servers and be done with it. I'm planning for something similar probably Q2 2021 as our expenses grow too large on a manged hosting platform like Heroku/Render/Aptible. ------ dahfizz There are some things you just can't do in a VM. The company I work for actually develops and hosts an AWS clone for the Linux Foundation, but with very specific requirements. They have special needs that requires baremetal machines and "real" networking between them across 6+ NICs per server. ------ withinrafael Not many services are available in gov cloud regions, so we're stuck with on- prem nearly everything. ------ jarym my 2cents: the 'big' clouds (AWS, GCP, Azure) and the big-brand clouds (Oracle, IBM, etc.) are attractive for BigCustomerCo because: 1\. Replace capital expenditure of in-house infrastructure + staff with OpEx that can be dialled down 2\. Get to benefit from the economies of scale that the cloud vendors get (those Intel CPUs get a lot cheaper when purchased in the 1000s) 3\. Get to leverage big shiny new tech like Big Data and AI that's 'advertised' as 'out-of-the-box' My only concern really is that the big cloud players are all fighting for dominance and market share. What happens in the next 5-10 years time when they start raising prices? Different kind of lock-in - customers won't have the expertise in-house to migrate stuff back. ------ trelliscoded Zseries are a huge deal to move to the cloud; it’s just not worth the risk for most organizations. ------ icelancer Yes. Cost of GPUs if you want them on regularly and reliably, and you don't need 2000 of them. We run small/mid-sized operations on-demand and the latency of spinning up instances is not competitive and the cost is outrageous to have them on standby. ------ collyw There is extra complexity with managing cloud based solutions. Logging in, setting up ssh keys. Ok it's all automatable but if I want a basic server set up for doing a simple task quickly it's often a lot easier to run it on an internal server. ------ dryst If your product doesn't run connected to the internet, it is difficult to make the case for cloud development. You need developers who understand hardware, and abstraction layer cloud provides is a handicap instead of an enabler. ------ julienfr112 I think it depends on what is the alternative and what hardware you are buying. If you go for top of the line HPE or dell or Cisco hyperconverged stuff that allow you to be, sort of, your own cloud, you will end up with the same large bill. ------ skunkiferous I work for a German company. They run their own DC (unfortunately, I'm not privy to precise numbers I could share, but we must be in the 1000s of hardware servers). Why? Because our (German) clients don't _thrust US cloud providers_. ------ exabrial Hybrid cloud is the moneymaker solution, but there are no out of box solutions for it. ------ maltelandwehr Gigantic Elastic Search cluster - according to Elastic the largest in Europe (as of 2 years ago) - used in production. Broke again and again on AWS. We needed more shards than AWS supports. Moved to bare metal again. ------ iamgopal When cloud computing started, alternative software deployment was complicated, after a decade, much has been improved. 
So ease of management is also one of the factor. Not all need state of the art data and AI. ------ kevlar1818 We use on-prem GPU nodes for training deep learning models. Our group estimated the cost vs. cloud and it was significantly cheaper to go on-prem. I can't speak to security-wise and whatever-wise though :) ------ caseyf We colocate and as a small tech business, I can't imagine doing anything else. We don't spend more on payroll due to colocating and AWS/etc would easily double our annual non-payroll expenses. ------ cm2187 One good reason for your list: diversification. You don't want all the banking systems of a country all running on AWS. It's an unacceptable single point of failure risk for a country. ------ joaodlf Like many said, cost. But also - legalities. Most cloud providers have very unclear rules and what exactly happen should you be in breach. For this reason, our business prefers to have most of the control. ------ z3t4 There are several levels, not just on-prem vs cloud. You can for example co- locate, rent dedicated servers, rent a single VPS, or put your website up on a web hosting provider... ------ benbro How do you manage installation and upgrades of dedicated servers? What do you use for block storage and object storage? Kubernetes and Ceph seems to be hard to setup and maintain. ------ altmind to control the costs. after the original purchase, the MRC is a fixed cost for hosting and for the network access. also, for the total control - the network, io, cpu performance does not change randomly. with better predictability our IT team can give more precise SLA. we're not 100% on-prem, but aws, gcloud and azure are the worst examples of 3rd party hosting - unpredictable and with complicated billing. we're considering the alternatives to big 3 for the "cloud hosting" ------ manishsharan I work in a very large financial institution in small tech team. I would give an arm and a leg to not deal with soul sucking bureaucracy that is our IT department. ------ PudgePacket This thread has been illuminating, a lot more non-cloud people than I thought! Drinking the cloud kool-aid had me thinking cloud was the only realistic way to go. ------ JoeAltmaier My client's customers are geolocated (continental US) and their personal data is sensitive. So their server is in their own firewalled server closet. ------ jerzyt Client data confidentiality. I know it's a weak argument, but if the contract requires that we store data in-house, there's no choice. ------ kortilla It’s not secure if you have air gap requirements or issues with employees from another company technically having access to all of your data. ------ FpUser I normally host on prem and also rent dedicated servers elsewhere as standby. Way cheaper than the cloud and full control the way I want. ------ GaryNumanVevo We run one of the largest Hadoop clusters on the planet. On-prem is very cost efficient if you're running jobs flat out 24/7 ------ natmaka Confidentiality. Protecting sensible data (avoiding letting some hostile obtain or even modify them) seems impossible on the cloud. ------ hans_castorp We are staying on-premise for security and privacy reasons. We can't store our data outside of the company (or even worse: outside of the EU) ------ moonbug TCO. ~~~ aspyct Which, all things considered, seems lower on the cloud. Could you give more details on this answer? ~~~ lazylizard A 2u asus with 2 xeon silver, 64gb ram and 4 x 2080ti is maybe us$15k? We'll use it for as long as its producing useful output. 
Lets say 5 years? Probably a little longer? A 60bay 4u western digital with 14tb drives is under us$50k? definitely got a dell md1280+1u server with 70x10tb 2yrs ago for under us$50k. Fully populated the following year.. A 2u dell with maybe 20 cores n 128gb ram each should cost less than us$10k. And we just got 4 or 5 dell switches with 48x10g ports for 50-70k? I'm not sure. What's the equivalent in the cloud? ~~~ aspyct Are you keeping tabs on the cost of the building and associated security? Video cameras, locks... Also the manpower needed to install and maintain that equipment. Similarly, your 70*10tb tells nothing. What's the redundancy on that, how much of that space did you lose to it, where do you store your backups and at what cost? As for networking, having those switches is nice, but you still need your internet connection if you're serving anything online from this. ~~~ lazylizard The manpower to run that stuff is already doing desktop support. Storage is usually a stripe of 10 disks in raidz2 + 2 spares. Then we do a nightly zfs send.. The internet connection for the hosting is 100mbps. The users share 1gbps of internet. We pay only for hardware. Rent and utilities. And the internet connections. Everything else we can do it ourselves. I mean. Seriously. Whats the cost of running 100 x 4 x 2080ti in the cloud for a year? Or storing 500tb(1 instance only,no redundancy) ? ------ leothecool If your customers still want the system to run air-gapped from the internet, cloud is basically off the table. ------ mlang23 Cost and data security, while cost actually weights more. It is simply not true that the cloud is cheaper. ------ totorovirus I work in a data science company with 30+ engineers. We've spent 80k dollars on GKE last month only.. ------ yread Yes just moved from 4 servers in AWS costing 700 usd/month to a single dedicated one that costs 40. ~~~ gizmodo59 Just out of curiosity, what downtime is in acceptable range for that single server? ~~~ yread At night noone cares ------ fulafel Cloud is even more muddled as a term in this space than usual (Hybrid/private added to the mix). ------ som33 >I honestly fail to see any good reason not to use the cloud anymore, at least for business. Cost-wise, security-wise, whatever-wise. Problem is point of failure, many businesses need to be independent and having data stored in the cloud is a bad idea overall. Because it produces point of failure issues. Consider if we ever got a real nasty solar wind and the electric grid goes down, the more we rely on the internet and centralize infrastructure into electric devices, the more it becomes a costly point of failure. While many see redundancy as "waste" in terms of dollars, notice that our bodies have many billions of redundant cells and that's what makes us resilient as a species, we can take a licking and keep on ticking. Trusting your data to out-side sources generally is a bad idea any day of the week. You always want to have backups and data available in case of diaster, mishap, etc. Like no one has learned from this epidemic yet. Notice that our economic philosophy didn't plan for viral infections and has forced our capitalist society to make serious adjustments. Helping people is an anathema to liberals and conservatives / republicans / democrats, so for void to come along and actually force co-operation was a bit tragically humorous. 
As a general rule you need redundancy if you want to survive, behaving as if the cloud is almighty is a bad idea, I'm not sold on "software as a service" or any of that nonsense. It's just there to lull you into a false sense of security. You always need to plan for the worst case scenario for surviveability reasons. ------ funny948 At least a few years ago, tons of company were afraid of the 'cloud'. It does change right now. ------ blaser-waffle Big public clouds have a genuine purpose but there is a shit-ton of marketing and FUD being thrown around about them -- I'd bet my hat that's what this post is, given the phrasing up top. I'm not a fan. In short: Cost. CapEx and depreciation vs. OpEx. The numbers look amazing for ~3 years until the credits and discounts wear off. Then it's just high OpEx costs forever. Meanwhile I can depreciate my $10k server over time and get some cash back in taxes; plus it's paid for after a couple years -- $0 OpEx outside of licenses, and CentOS has no license cost. Once you have significant presence in someone's cloud, they're not going to just lower costs either -- they've got you now. What in American Capitalism circa 2020 makes you think they won't find a way to nickle and dime you to death? It's not going to reduce headcount, either. Instead of 14 devops/sysadmins, now I have 14 cloud admins, sitting pretty with their Azure or GCP certs. Automation is what's going to reduce those headcounts and costs, and Ansible+Jenkins+Kubernetes works fine just with VMware, Docker, and Cisco on- prem. Trust. The Google Cloud just had a 12-hour outage -- I first read about it here on HN. AWS and Azure have had plenty of outages too... usually they're just not as open as Google is about it. You also have to trust that they won't get back-doored like what happened to NordVPN's providers, and that they're not secretly MITM'ing everything or dup-ing your data. We (and some of our clients) compete with some of the cloud providers companies and their subsidiaries, and we know for a fact that they will investigate and siphon any data that could give them an advantage. Purpose. _We just don 't need hyper-scalable architecture._ We've got a (mostly) fixed number of users in a fixed number of locations, with needs that are fairly easy to estimate / build for. Outside of a handful of sales & financial processing purposes, we will never scale up or down in any dramatic fashion. And for the one-off cases, we can either make it work with VMware, or outsource it to the software provider's SaaS cloud offering. If we were doing e-commerce -- absolutely. Some sort of android app? Sure, AWS or Azure would be great. But it's a lot of risk and cost with no benefit for the Enterprise orgs than can afford their own stuff. ------ hendry Bandwidth is the problem. You can't run a Youtube on any cloud provider. ------ alpenbazi Security. And Independency. ------ aripickar I work for AWS, so (in the most pedantic way possible) technically yes ------ whorleater Yes, but we work in financial information so very weird requirements. ------ gramakri We use servers on the cloud (IaaS) but still self-host all our apps. ------ _wldu Local caching recursive DNS servers work best close to the clients. ------ Abukamel Yes online.net and the like companies save costs ------ itworker7 In physics, a gallon of water weights 8.34 lbs. (For this analogy, a gallon of water is a unit of work.) 
And the gallon of water weighs 8.34 lbs irregardless if it is sitting on my desk in a physical building, or on your desk, in the cloud. Same weight, same unit of work. same effort. For a brand new, greenfield application, the cloud is a no brainer. I agree 100%. But for legacy applications, and there are so, soooo many, the cloud is just some one else's computer. Yes, the cloud is more scaleable, yes, the cloud is more manageable, and yes, you can control the cpu/storage/memory/network in much finer amounts. But legacy applications are very complicated. They have long tails, interconnections to other applications that cannot immediately be migrated to the cloud. I have migrated clients off of the cloud, back to on premise or to (co-lo) local hosting, because without rewriting the legacy application, the cloud costs are simply too great. The essence of IT is to apply technology to solve a business problem. Otherwise, why would the business spend the money? The IT solution might be crazy/stupid/complex but if it works, many business simply adopt it and move on. Now, move that crazy/stupid/complex process to the cloud and surprise, it is very, very expensive. So, yes, the cloud is better, but only for some things. And until legacy applications are rewritten on-premise will exist. One final insight. The cloud costs more. It has been engineered to be so, both from a profitability standpoint(Amazon _is_ a for profit company) but also because the cloud has decomposed the infrastructure of IT into functional subcomponents, each of which cost money. When I was younger, the challenge for IT was explaining to management, the ROI of new servers, expanded networking, additional technology. We never quite got it right and often had it completely wrong. That was because we lacked the ability to account for/track and manage the actual costs of an on-premise operation. Accounting had one view, operations had another view and management had no idea really, why they were spending millions a year and could not get their business goals accomplished. The cloud changed all of that. You can do almost anything in the cloud, for a price. And I will humbly submit, that the cost of the cloud - minus the aforementioned profitability, is what on-premise organizations should have been spending all along. Anyone reading this and who has spent time in a legacy environment, knows that it is basically a futile exercise of keeping the plates spinning. On-premise failed because it could not get management to understand the value on in-house IT. As I said, the costs are the same. A gallon of water weighs what it weighs regardless of location. It will be interesting to see, I predict the pendulum will swing back. ------ shakkhar Because we don't trust either amazon or google. ------ loeg Cost-wise is still a pretty compelling argument. ------ Stierlitz Annual costs, backups, security and latency. ------ markc on-premises not on-premise ------ sarasasa28 we are not zoomers