Dataset columns: text (string, lengths 44 to 950k); meta (dict)
Weird Things Teens do in 2018 - Ephil012 https://medium.com/@etp5501/weird-things-teens-do-in-2018-8322b7896f5f ====== abenedic These are only weird if you give them weird names like the author. Teens have been doing the same things forever.
{ "pile_set_name": "HackerNews" }
Freelancer services co Fiverr raises $15m - sparknlaunch12 http://www.globes.co.il/serveen/globes/docview.asp?did=1000746276 ====== ecubed While people do sell a lot of pretty useless stuff on that site, there are also a lot of really useful services. I wish they would do more filtering of the nonsense and get it focused on the more serious stuff, but I guess part of the fun of the site is getting a hipster to dance in spandex and sing happy birthday... Congrats to the team on a successful fundraising round
{ "pile_set_name": "HackerNews" }
How does one go about choosing a graduate school for computer science? - scarface548 ====== stonemetal Look at what they research. It will more or less define any research you do. If you don't do research it will color what your professors know, and how well it is covered in your classes. Another factor is how well the school is ranked; generally, better-ranked schools attract better candidates, better professors, cooler research, etc. For instance consider DARPA grand challenge entries. How many were done by top universities and how many were done by community colleges or even universities ranked middle of the pack? And last but not least, soft issues: cost, location, etc. ------ CyberED I'm assuming that you have an idea of what interests you and that you are shooting for a PhD or a master's by research. Research your topic of interest on Google Scholar. Read widely to identify who is doing comparable work. Write a research proposal and send it to the professors who are doing and supervising such work. You need to be very passionate about your topic in order to sustain yourself through 3-5 years of grad school. Of course, what you write your thesis on won't resemble your research proposal. That's just the way it is. You discover so much as you dig deeper and deeper into your chosen area.
{ "pile_set_name": "HackerNews" }
Products Over Projects - kawera https://martinfowler.com/articles/products-over-projects.html ====== mikekchar I have found that what is important for a software team is having members that can work well together. It takes time to make such a team and some people can _never_ work well together. Organisations should spend considerable effort building and maintaining effective teams. I am often surprised at how poorly this is understood. Technology also takes time to understand and be effective with. A good team can usually work on anything and still be a good team, given that they have enough time to ramp up, though. My personal feeling: build strong teams. Ignore technology for the most part. Put all your effort into getting people who work well together, working together. Allocate time and money to making sure that the team has time to gel and that they have the resources (tools, equipment, and working environment) that they need (hint: each team requires different resources. Think hard about what each individual team needs). After that you have choices. Keeping a team on a single "product" eliminates ramp up time. On the other hand, it also leads to singular ways of thinking and silos. If you have a "product" that absolutely needs responsive development all the time, then consider having a "product" team. On the other hand, not everything requires instantaneous response. A good team will usually be able to ramp up to very near full speed in about 3-4 months. For some things, that is completely reasonable. You have to consider the cost of maintaining a "product team" vs the cost of ramping up a "project team". The advantage of the "project team" is that a new team can often come in with new ideas that will improve the project. Moving project teams around from time to time can ensure that teams are exposed to other ways of thinking and this can help your organisation. The advantages and disadvantages are not necessarily obvious. ------ cateye It doesn't take the conflicting interests into account. People in IT should really stop pretending that they are the only smart people that know how to run a business. A CFO can easily understand the benefits of an organizational capability to continuously improve a core business asset. But he also understands the difference between capex and opex and needs to make decisions in order to achieve the strategic goals of the company. This strategy may or may not define software development (for a specific goal) as a core in-house capability that gets an investment. This also relates heavily to the defined core business, etc. Without understanding the long term strategy of a company, it does not make a lot of sense to come up with generic advice. The CEO and the management board do generally (almost intuitively) understand which activities will generate competitive advantage. If they are not caught up in the Silicon Valley hype, this can mean that they make a deliberate choice to invest in things other than software teams. Is it really that unthinkable that things other than software can bring value to a company? Do all companies need to act like Google or Facebook? >>for greater responsiveness and a higher benefits realization ratio, “product-mode” is a more effective way of working than projects. This is a sweeping statement. It could definitely lead to higher benefits and lower responsibility for the consulting company. That is almost for sure. But that doesn't mean that a short-lived project organization is a bad idea by default.
The choice needs a much more substantial argument, like the probability of a positive realization result (depending on the complexity and uniqueness of the software) versus the risk if it does not get realized, etc. If it can't be bought off the shelf, it can be outsourced or developed entirely in house by long-term permanent employees... >>it may feel unsound to those who are used to approving big change programs with detailed roadmap... If a project can be predicted with a high enough level of certainty, I would say that there is nothing wrong with a detailed roadmap. Also here, a false dichotomy. ~~~ maaaats > _People in IT should really stop pretending that they are the only smart > people that know how to run a business._ But one problem I have seen as a consultant is companies don't realize they are software companies now. For instance, they are no longer a bank with a small IT department; IT is now their core business and should be treated as such. ~~~ sidlls No, IT is not their core business. Banking is. IT is a means to that end. ~~~ PeterStuer Not to be nit-picky, but in the core business of the enterprise the 'means' to provide the result are inseparable from 'the business'. Being in the 'conceptual' banking business without the means to do banking isn't going to fly. Same as casting a farm as a 'food produce provider' and telling them that farming is not what their core business is about. Semantically correct, but not useful for most purposes. ~~~ ckastner Grandparent said IT is _a_ means. There are other means to provide the core business of banking. There are banks who outsource / spin off not only their IT, but even transaction processing and settlement. > Same as casting a farm as a 'food produce provider' and telling them that > farming is not what their core business is about The correct analogy would be the other way round: a 'food produce provider' whose core business it is to distribute food produce. Here, too, the farm is a means to the end, but it's not the only means. ------ doug1001 I would think Martin Fowler would know better than this.
To argue that the meaningful rubric of effort ought to be a "product", he creates a straw man called a "project". The much more meaningful comparison is products versus "platforms" and "tools/infrastructure". If he had done that, then I bet he would conclude that all three rubrics of effort are necessary, and preferring one vs the other is a question of priority, not "do one instead of the other". I.e.: some things you build to sell; other things you build so you can build those things better and more efficiently; and still others you build for use as reusable or modular components across multiple future products, so you don't have to build the same component over again for each product that requires it. "Tools", e.g.: the 250 developer-hours spent on a "project" to create an automated testing & deployment pipeline will likely net out in a few months and result in higher quality products almost immediately. "Platform": a "project" to factor out code common across many workflows in a typical web app, and re-implement that code as microservices having endpoints available to any of the multiple data flows that need them (for instance, "user-login & authentication", "user-profile fetch", "re-ordering user-search results using a learning-to-rank algorithm", etc.). A project to build such reusable utility micro-services obviously doesn't result in a "product", and yet such a project is usually cost-justified (in CFO terms) even over short-term periods (< 90 days) ~~~ pjmlp Those 250 hours need to be paid by someone, which at an hourly rate of 100 euros, pretty average for some consulting gigs, gets us 25k euros. Who is going to pay for them, and how do you validate how much money would have been saved if those 25k euros hadn't been spent? Edit: should learn math. :) ~~~ oatmale that would make it 25k not 250k ~~~ pjmlp Thanks. ------ skybrian The opposite of this would be "build it to walk away." It's not obvious that every project worth writing some custom software for is worth having a dedicated team working on it. You need to think seriously about how to avoid changing dependencies, though. It's like planning an exercise in retrocomputing in advance. ~~~ mr_toad In my experience, most software development projects have been more like "build half of it, throw it over the wall, and make yourself scarce." Especially when consultants are involved. I have never seen traditional project management techniques successfully applied to software. ~~~ pjmlp I used to think like that, then I became a consultant and got to enjoy how companies whose main business is not at all related to IT actually work. 100% of them don't care about code beauty or maintainability, just something that does what they need for their actual business, done in X days within Y euros/dollars/yuan/... So they get what they want, with the quality they are willing to pay for. ~~~ tonyedgecombe Yes, it's hard to underestimate how poor most corporate software is. ------ baxtr I find it interesting to observe that Apple - THE product company - works in project mode. Apple is mainly organized by function. Presumably to avoid product silos. ~~~ tensor My impression is that Apple works in project mode only for building a new product, not improving existing ones. The reason quoted for this was secrecy, not breaking down silos. Where did you read that they work in project mode more widely? ~~~ baxtr This is what I assume from reading articles like [1].
Obviously, there is no direct product P&L responsibility [1] [https://stratechery.com/2016/apples-organizational-crossroad...](https://stratechery.com/2016/apples-organizational-crossroads/) ------ brucephillips This should be called "problem-mode", not "product-mode". The distinguishing feature is that teams continuously work to solve a problem. This doesn't end when the product is finished. ~~~ confounded Software products are never finished. ~~~ rubidium neither are houses or humans or science or forests or roads or hardware products or butterflies .... til they die at least. ------ HeroOfAges I'd like to see more organizations adopt this way of thinking in terms of the way they build software. Turns out business functionality is a great way to define boundaries for a service in a micro-service architecture. There's really no reason why the application code that gets a list of book recommendations (functionality provided by the business) can't be completely separate from the code that provides the user with a bio for an author (functionality provided by the business), or ratings for a book. This allows teams to work independently and in parallel. Organizing software development around projects makes it more difficult for software developers to leverage tools for continuous delivery and deployment. Project-based development really does slow you down. Hmm... I guess my way would be to have teams that work together to deliver business functionality over products or projects. Functional teams, if you will. I see thinking in terms of products as a step in that direction. *edited to clarify my earlier statement ------ dboreham I'm not sure much can be done about this. It is what it is. Some organizations can afford to maintain an "A-team" to crunch through their problems with a good folk-memory. Other organizations can't. In addition the first kind of organization often needs something done pronto, ahead of when the A-team can get to it. ------ gadders Working in big orgs, I've seen the problems that too extreme a project mentality can cause. Developers move on to the next project on another system, losing institutional knowledge; systems are left to "rot" because no-one has budget to fix them or keep them up-to-date. However, what they are describing is really a series of projects with the same team on the same system or "product". I think there are advantages to both ways of working, but the appropriate one needs to be chosen. ------ gcb0 both are already the norm at bigger companies. the money-making products have product teams, pumping billions every year while handling tons of incidents and stress. the new initiatives have project teams, burning millions while having a coin-flip chance of success and getting all the promotions and bonuses. ------ mannykannot It is ironic that, right now, we are in the midst of an event that shows this to be a false dichotomy, and that projects and products can be cross-cutting concerns. I am, of course, referring to the multi-product project of dealing with Meltdown and Spectre.
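HeroOfAges' point about drawing service boundaries around business capabilities can be sketched in a few lines. The following is only an illustration, not anything from the article or the thread: the service names, the data, and the plain-function style are all assumptions made up for the example.

```python
# A minimal sketch of two services split along business capabilities.
# Each "service" owns its own data; nothing is shared between them,
# so separate teams could build, deploy, and evolve them independently.

# --- Recommendation service (hypothetically owned by team A) ---
RECOMMENDATIONS = {
    "user-42": ["Dune", "Hyperion", "Blindsight"],
}

def get_recommendations(user_id):
    """Return the list of recommended book titles for a user."""
    return RECOMMENDATIONS.get(user_id, [])

# --- Author-bio service (hypothetically owned by team B) ---
AUTHOR_BIOS = {
    "Frank Herbert": "American author best known for the Dune saga.",
}

def get_author_bio(author):
    """Return a short biography for an author, or a placeholder."""
    return AUTHOR_BIOS.get(author, "No biography available.")

if __name__ == "__main__":
    # The two calls share no code or state, which is the property that
    # lets the teams behind them release independently and in parallel.
    print(get_recommendations("user-42"))
    print(get_author_bio("Frank Herbert"))
```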
{ "pile_set_name": "HackerNews" }
Zappos is offering severance to employees who aren’t all in with Holacracy - SonicSoul http://qz.com/370616/internal-memo-zappos-is-offering-severance-to-employees-who-arent-all-in-with-holacracy/ ====== SonicSoul I love this idea, but wondering if this will backfire with mostly the best people (that can easily find other options) leaving
{ "pile_set_name": "HackerNews" }
Presumption of Stupidity - benzofuran http://www.aaronkharris.com/presumption-of-stupitidy ====== benzofuran Previous Discussion (~2 years ago) here: [http://www.aaronkharris.com/presumption-of- stupitidy](http://www.aaronkharris.com/presumption-of-stupitidy)
{ "pile_set_name": "HackerNews" }
BlackBerry accuses Snapchat of infringing its messaging patents - Zeta_Function https://www.theverge.com/2018/4/4/17196840/blackberry-snapchat-patent-infringement-messaging-app ====== bitumen Is this Blackberry’s new strategy? Become the next massive patent troll?
{ "pile_set_name": "HackerNews" }
A list of awesome minimalist frameworks - neiesc https://github.com/neiesc/awesome-minimalist ====== atrilumen Very cool. Please add [https://github.com/yoshuawuyts/choo](https://github.com/yoshuawuyts/choo) and [https://github.com/tachyons-css](https://github.com/tachyons-css). (Too lazy to send a pull; sorry ) ~~~ neiesc Thanks, added.
{ "pile_set_name": "HackerNews" }
The Science of Liberty, An Interview with Murray N. Rothbard (1990) - ableal http://mises.org/journals/aen/aen11_2_1.asp ====== ableal _Rothbard's law, which is that people tend to specialize in what they are worst at._ I suspect he was just, tongue-in-cheek, illustrating the specific cases he made immediately after. It does have a certain Karl "Half-Truths and One-and-a-Half Truths" Kraus ring to it.
{ "pile_set_name": "HackerNews" }
Google's $350B haircut - ForHackernews https://medium.com/s/which-half-is-wasted/googles-350-billion-haircut-fa1a0f33ace1 ====== smn1234 [https://webcache.googleusercontent.com/search?q=cache:h6l1gR...](https://webcache.googleusercontent.com/search?q=cache:h6l1gRN_rVoJ:https://medium.com/s/which-half-is-wasted/googles-350-billion-haircut-fa1a0f33ace1+&cd=1&hl=en&ct=clnk&gl=it&client=firefox-b) ------ masonic (paywalled)
{ "pile_set_name": "HackerNews" }
The Future of Neuromorphic Computing - anthotny http://www.newyorker.com/tech/elements/a-computer-to-rival-the-brain ====== bcatanzaro The reason AI has been so successful recently is that the research community has assumed a ruthlessly empirical philosophy: no idea, no matter how beautiful or interesting, is considered truly useful until it bears measurable results on some dataset. The reason neuromorphic computing gets such skepticism from AI researchers is that so far it has resisted any attempts at this kind of empiricism. No neuromorphic implementation has shown state-of-the-art results on any important problem. If/When neuromorphic computers show groundbreaking results, the community will pivot quickly to using them. But expecting AI researchers to show deference to neuromorphic computing because it "mimics the brain" is to ignore the empirical philosophy that has led to AI's success. ~~~ andreyk To be fair, this whole Deep Learning renaissance was made possible and kicked off only after decades of research in multi-layer neural nets (going back to the 80s) by Hinton, LeCun, etc. They stuck to their chosen method despite it not having great empirical results (the research community shunned NNs in the 90s for SVMs because they worked better), because they believed it should and would work - and it did, eventually. So a similar argument for 'basic research' could be made for neuromorphic computing. ~~~ bcatanzaro Yes, I totally agree. Yann LeCun, Geoff Hinton, Jurgen Schmidhuber and others did unpopular work for a long time. And they deserve tons of credit for their perseverance which paid off. Similarly, I think it's great that there are AI researchers working on techniques which are currently out of favor. It's important to have diversity of viewpoint. What irritates me about neuromorphic computing is that much of the work I see publicized (including the work in this article) isn't being presented as basic research on a risky hypothesis. Instead it's presented as the future of AI, despite the current lack of any demonstrated utility, and the almost complete disconnect between the AI researchers building the future of AI and the neuromorphic community. The burden of proof is always on the researcher to show utility, and if the neuromorphic computing community can do that, I'll be super excited! Until then, I'll be waiting for something measurable and concrete, and rolling my eyes at brain analogies. ~~~ Russell91 > Yes, I totally agree. Yann LeCun, Geoff Hinton, Jurgen Schmidhuber and > others did unpopular work for a long time. ... > Until then, I'll be ... rolling my eyes at brain analogies. Maybe you don't realize this, but these guys made more brain analogies than you can count over the same period to which you attribute their greatness. Meanwhile, they were attacked year after year by state-of-the-art land grabbers saying the same things you just did. > isn't being presented as basic research on a risky hypothesis. It is basic research, but it's not a risky hypothesis. Existing neuromorphic computers achieve 10^14 ops/s at 20 W. That's 5 Tops/Watt. The best GPUs currently achieve less than 200 Gops/Watt. Where is the risk in saying that a man-made neuromorphic chip can achieve more per dollar than a GPU? There is no risk, and suggesting that this field somehow has too much risk for advances to be celebrated is absolutely crazy. ~~~ deepnotderp Non-neuromorphic (analog) deep learning chip startup here. We're forecasting AT LEAST ~50 TOPS/watt for inference.
~~~ Russell91 Sure - I guess it's productive for me to answer why this doesn't disagree with my comment. By the time you get the software to hook up that kind of low bit precision (READ: neuromorphic) compute performance with extreme communication-minimizing strategies (READ: neuromorphic), which will invariably require compute-colocated, persistent storage (READ: neuromorphic) in any type of general AI application, you're not exactly making the argument that neuromorphic chips are a bad idea. We literally have to start taking neuromorphic to mean some silly semantics like "exactly like the brain in every possible way" in order to disagree with it. Edit: also, to ground this discussion, there is an extremely concrete reason why current neural net architectures will NOT work with the above optimizations. That's the primary motivation for talking about "neuromorphic", or any other synonym you want to coin, as fundamentally different hardware. AI software ppl need to have a term for hardware of the future, which simply won't be capable of running AlexNet well at all, in the same way that a GPU can't run CPU code well. I think the term "neuromorphic" to describe this hardware is as productive as any. ~~~ p1esk Which existing neuromorphic computers achieve 10^14 ops/s at 20 W? If you compare them to GPUs, those "ops" better be FP32 or at least FP16. Also, you forgot to tell us what is that "extremely concrete reason why current neural net architectures will NOT work with the above optimizations". ~~~ Russell91 >Which existing neuromorphic computers achieve 10^14 ops/s at 20 W? If you compare them to GPUs, those "ops" better be FP32 or at least FP16. The comparison is of 3-bit neuromorphic synaptic ops against FP8 Pascal ops. That factor is important (as it means that the neuromorphic ops are less useful), but it turns out to be dwarfed by the answer to your second question: > Also, you forgot to tell us what is that "extremely concrete reason why > current neural net architectures will NOT work with the above > optimizations". This is rather difficult to justify in this margin. But the idea is that proposals such as those above (50 Tops) tend to be optimistic on the efficiency of the raw compute ops. But these proposals really don't have much to say about the costs of communication (e.g. reading from memory, transmitting along wires, storing in registers, using buses, etc.). It turns out that if you don't have good ways to reduce these costs directly (and there are some, such as changing out registers for SRAMs, but nothing like the 100x speedup from analog computing), you have to just change the ratio of ops / bit*mm of communication per second. There are lots of easy ways to do that (e.g. just spin your ops over and over on the same data), but the real question is how to get useful intelligence out of your compute when it is data starved. This is an open question, and (sadly) very few ppl are working on it, compared to, say, low-bit-precision neural nets. But I predict this sentiment will change over the next few years. Edit for below: no one is suggesting 50 Tops/W hardware running AlexNet software to my knowledge (though I would love to hear what they are proposing to run at that efficiency).
Nvidia among others are squeezing efficiency for CV applications with current software, but this comes at the cost of generality (it's unlikely the communication tradeoffs they're making on that chip will make sense for generic AI research), and further improvements will rely on broader software changes, esp. revolving around reduced communication. There are a lot of interesting ways to reduce communication without sacrificing performance, such as using smaller matrix sizes, which would reverse the state-of-the-art trends. ~~~ deepnotderp Our hardware can run AlexNet... ~~~ Russell91 In an integrated system at 50 Tops/watt? How are you going to even access memory at less than 20 fJ per op? Like, you're specifically trying to hide the catch here. If we were to take you at face value, we'd have to also believe that Nvidia is working on an energy-optimized system that is 50x worse for no good reason. For reference, reading 1 bit from a very small 1.5 kbit SRAM, which is much cheaper than the register caches in a GPU, costs more than 25 fJ per bit you read. ~~~ deepnotderp So this is locked up in "secret sauce". But as a hint, the analog aspect can be exploited. ~~~ Russell91 Look, it sounds like you're implying compute-colocated storage in the analog properties of your system (which is exactly what a synaptic weight is btw), on top of using extremely low bit precision. So explicitly calling your system totally non-neuromorphic is a little deceiving. But even then I find this idea that you're going to be running the AlexNet communication protocol to pass around information in your system to be a little strange. If you're doing anything like passing digitized inputs through a fixed analog convolution then you're not going to beat the SRAM limit, which means that instead you have in mind keeping the data analog at all times, passing it through an increasing length of analog pipelines. Even if you get this working, I'm quite skeptical that by the time you have a complete system, you'll have reduced communication costs by even half the reduction you achieve in computation costs on a log scale. It's of course possible that I'm wrong there (and my entire argument hinges on the hypothesis that computation costs will fall faster than communication - which is true for CMOS but may be less true for optical), but this is really the only projection on which we disagree. If I'm right, then regardless of whether you can hit 50 Tops (or any value) on AlexNet, you'd be foolish not to reoptimize the architecture to reduce communication/compute ratios anyway. ~~~ p1esk Oh, I see what you meant now. Yes, when processing large amounts of data (e.g. HD video) on an analog chip, DRAM to SRAM data transfer can potentially be a significant fraction of the overall energy consumption. However, if this becomes a bottleneck, you can grab the analog input signal directly (e.g. current from a CCD), and this will reduce the communication costs dramatically (I don't have the numbers, but I believe Carver Mead built something called "Silicon Retina" in the 80s, so you can look it up). Power consumption is not the only reason to switch to analog. Density and speed are just as important for AI applications. ------ jjaredsimpson I never understand the odd advantage that brains are assumed to have over machines when comparing power consumption. >... AlphaGo ... was able to beat a world-champion human player of Go, but only after it had trained ... running on approximately a million watts.
(Its opponent’s brain, by contrast, would have been about fifty thousand times more energy-thrifty, consuming twenty watts.) A human brain has a severe limitation though. It can't consume more or less energy even if it wanted to. AlphaGo could double, triple, etc. its power consumption and expect to improve its performance. The brain also took decades to train. Computers also have the advantage of being identical. You can't train any brain to be a master-level Go player. I just don't see brains as the high watermark of intelligence. They occupy a very specific niche in what I assume is a vast unbounded landscape of possible intelligences. ~~~ pron > The brain also took decades to train. The brain of an insect doesn't take decades to train, and we're currently unable to match its capabilities, either. > I just don't see brains as the high watermark of intelligence. They occupy a > very specific niche in what I assume is a vast unbounded landscape of > possible intelligences. That is a hypothetical claim because we don't know what intelligence _is_. Surely, some algorithms are much better at some tasks than the human brain, but that has been the case since the advent of computing, and it does not make them intelligent. Intelligence, or how we would currently define it colloquially and imprecisely, is an algorithm or a class of algorithms with some specific capabilities. Could those capabilities be taken further than the human brain? We certainly can't say that they cannot, but it's not obvious that they can, either. The only kind of intelligence we know, our own, comes with a host of disadvantages that may be features of the particular algorithm employed by the brain and/or due to limitations of the hardware, but they could possibly be essential to intelligence itself. Who knows, maybe an intelligence with access to more powerful hardware would be more prone to incapacitating boredom and depression or other kinds of mental illness. This is just one hypothetical possibility, but given how limited our understanding of intelligence is, there are plenty of possible roadblocks ahead. Even if a higher intelligence than humans' is possible, its hypothetical achievements are uncertain. Some of the greatest problems encountered by humans are not constrained by intelligence but by resources and observations, and others (e.g. politics) are limited by powers of persuasion (that also don't seem to be simply correlated with intelligence). For example, what's limiting theoretical physics isn't brains but access to experiments, and what's limiting certain optimization problems are computational limits, for which our own intelligence, at least, does not give good approximate solutions at all. ~~~ return0 > The brain of an insect doesn't take decades to train, and we're currently > unable to match its capabilities, either. It's not particularly useful to simulate insects. We can far surpass some of their capabilities, but the goal is not to make an insect-robot, just like we didn't care to make a mechanical horse. ~~~ nharada Both of these are under active development. Robotic insects: [https://en.wikipedia.org/wiki/RoboBee](https://en.wikipedia.org/wiki/RoboBee) Robotic horses: [http://www.bostondynamics.com/robot_bigdog.html](http://www.bostondynamics.com/robot_bigdog.html) ~~~ return0 Those are mostly interested in biomimetic movement rather than intelligence. They do have some applications, but I don't think they've convinced the world that mimicking organisms is necessarily optimal.
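To ground the efficiency numbers traded in this thread (Russell91's 10^14 ops/s at 20 W versus under 200 Gops/W for GPUs, deepnotderp's ~50 TOPS/watt target, and the article's AlphaGo-versus-brain comparison quoted above), here is a quick back-of-the-envelope check in Python. It only re-derives the figures as quoted by the commenters; none of the numbers are independently verified.

```python
# Re-deriving the efficiency figures quoted in the thread.

# Claimed neuromorphic figure: 1e14 ops/s at 20 W.
neuro_ops_per_watt = 1e14 / 20
print(f"neuromorphic: {neuro_ops_per_watt / 1e12:.1f} Tops/W")        # -> 5.0 Tops/W

# Claimed GPU figure for comparison: < 200 Gops/W.
gpu_ops_per_watt = 200e9
print(f"ratio vs GPU: {neuro_ops_per_watt / gpu_ops_per_watt:.0f}x")  # -> 25x

# Energy budget per op implied by a 50 Tops/W target, next to the
# ~25 fJ/bit SRAM read cost Russell91 cites as a reference point.
joules_per_op = 1 / 50e12
print(f"energy budget per op: {joules_per_op * 1e15:.0f} fJ")         # -> 20 fJ

# The article's AlphaGo comparison: ~1 MW of compute vs a ~20 W brain.
print(f"AlphaGo vs brain: {1e6 / 20:,.0f}x more power")               # -> 50,000x
```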
------ JackFr "Neuromorphic" = "Ornithopter of the mind" Giving up on flapping wings was the first step to flight. ~~~ varjag Indeed. Imagine that Wright Flyer never happened, but some time in 1940s, the progress in engines' specific thrust made a wing flapping machine able to take off. That's where we are with machine learning. ------ deepnotderp It bugs me when people always talk about "neuromorphic computing" and explore crazy ideas that never work and look at them in awe, but when anyone brings up a somewhat novel architecture for deep learning (nets that are being used today, successfully...) people say "that'll never work". For example, our startup uses analog computing to achieve accuracy roughly equivalent to digital circuits, yet we're told that we're crazy? Meanwhile people dreaming about memristors are showered with grants and money.... ~~~ petra You're from Isocline, right ? Your GPS chip was really good. But your SIMD chip will be much more impressive, right? ~~~ Quanticles No, they are not from Isocline... There are groups at UCSB and U-Tenn working on analog neural network technologies as well. ~~~ petra Could you please share a bit more about your chip and when it would be ready ? ------ lend000 There's a lot of backlash and/or dismissiveness on HN every time someone brings up neuromorphic architectures, and I think it has a lot to do with the same defensiveness that people display when their political beliefs are challenged. _When_ neuromorphic architectures start bearing fruit, programmers will no longer be so in-demand for configuring the machines, as it will shift the balance of power towards hardware engineers and hard scientists. ~~~ return0 Computational neuroscientists have been using simplified models like these for decades, and in principle the operation of these 'neuromorphic' neurons can already be simulated in large numbers in 'ordinary' computers. So, it's not clear at all what is to be gained. AFAIK, most of the neuroscience community considers Truenorth a marketing ploy. I don't think programmers should wait for these chips before they panic. They should already panic now, because deep learning works. ------ startupdiscuss If these articles get into the math behind it, I think they will realize that, currently, the brain is just a metaphor for a style of computation. The article does state this towards the end: "Given the utter lack of consensus on how the brain actually works, these designs are more or less cartoons of what neuroscientists think might be happening." We don't really know how the brain does what it does. ------ pron > the recent success of A.I. I guess they mean the recent success mostly due to modern hardware of 1960s statistical clustering and classification algorithms that for PR and historical purposes some people call "AI", but are currently unknown to have any significant relationship with what we call intelligence. When we achieve the capabilities of an insect we would be able to call our algorithms "AI" without getting red in the face, as we'd know there's a decent chance we're at least on the path to intelligence. Until then, let's just call them statistical learning. That wouldn't make them any less valuable, but would represent them much more realistically and fairly. It's funny how how statistics was once considered the worst kind of lie, and now for some it's becoming synonymous with intelligence. 
------ partycoder In the movie Terminator 2, a futuristic robot with advanced AI was developed by reverse engineering a futuristic chip. In reality, we do not need to reverse engineer a chip. We can just reverse engineer our own brains. ------ return0 I see nothing in these "neuromorphic" architectures than hogwash trying to bullshit governments into giving them money. There's no conceptual advancement offered by these computers that can't be simulated with matlab. Until the day when we actually learn how neurons work, these will just be extremely premature optimizations. ~~~ p1esk These designs are advances in the field of computer architecture. They look at how brain processes information for ideas to make hardware more efficient, for some applications (such as pattern matching). Did you expect something more? ~~~ return0 They use very rudimentary sketches that have little to do with real neurons. ANNs have been mimicking these things in a slightly lower detail since the 60s. We can do better pattern matching with ANNs. ~~~ p1esk I think you might be confused about terminology. Neuromorphic computing is running some known ANN model directly in hardware. Why do we want it? Because ANN models in software work well for pattern matching, and we want to speed it up/make it more efficient. ~~~ return0 Nope, ANN's and deep learning are not used by these boards (Neurogrid, zeroth, truenorth). ~~~ p1esk [https://arxiv.org/abs/1603.08270](https://arxiv.org/abs/1603.08270) They have been designed, and are being used either for more efficient pattern matching, or to speed up brain simulations (again, using known neuronal models). You seem to expect something else from neuromorphic computing, why? ~~~ return0 I stand with Yann Lecun's criticism on the article: [https://m.facebook.com/yann.lecun/posts/10152184295832143](https://m.facebook.com/yann.lecun/posts/10152184295832143) > [the truenorth team had] to shoehorn a convnet on a chip that really wasn't > designed for it. I mean, if the goal was to run a convnet at low power, they > should have built a chip for that. The performance (and the accuracy) would > be a lot better than this. They used their 'neuromorphic' chip in an explicitly non-neuromorphic way, basically approximately mapping deep learning processes to their chip. There is very little neuromorphicity (brain-likeness) about it (plasticity rules out of their ass, for starters). And they still get less than state-of-the art performance in most tasks! I expect 'neuromorphic' to be used when sound neuroscience is used in large scale implementations that allow us to actually simulate parts of the brain. Anything else, we call it what it is, ANNs. ~~~ p1esk Well, none of those chips are brain-like at all. For example, TrueNorth is fully digital, it uses separate compute/memory blocks, signal multiplexing, signal encoding, routing protocols, instruction set, etc, none of which is in any way related to what brain is doing. What makes you think it's "neuromorphic"? Whether you like it or not, get used to people calling their hardware ANN implementations "neuromorphic". ~~~ return0 Nope, neuromorphic means the hardware would simulate the neurobiology, not ANNs. More practically, they would never publish in Science if their title was "printing ANNs in hardware". ~~~ p1esk TrueNorth hardware, as I illustrated, does not resemble neurobiology at all. There are no brain-like components there, on any level. Moreover, it can run ANN algorithms just as easily as more "neuromorphic" algorithms. 
Pointing to how they chose to name it for publication is not exactly a very convincing argument to support your view, is it? :) ~~~ return0 My view is they're useless. I don't get your point, sorry. ~~~ p1esk My point is architectures like TrueNorth are very impressive from the point of view of a computer engineer, and they are very efficient when running their intended applications (neural network algorithms). The fact that they are not "brain-like" does not make them any less impressive. ~~~ return0 > very impressive from the point of view of a computer engineer Maybe, I suppose as much as a Bitcoin ASIC is.
{ "pile_set_name": "HackerNews" }
Joe Stump: Why I switched from PHP to Python - dannyr http://www.joestump.net/2009/09/why-i-switched-from-php-to-python.html ====== steveklabnik While Python is a great language, aren't "Things Guido doesn't like aren't supported at the language level" and "Python treats developers as adults" sort of at odds? * TCO is dangerous and hard, let's not support it in the language * ++ is ugly, let's not support it in the language * Assignment during comparison leads to bugs, let's not support it in the language * Python gives you more than enough rope to hang yourself, but at least hanging yourself is an option. One of these things is not like the others. ~~~ hristov I wonder why supporting TCO or not has anything to do with the language. I mean isn't it up to the compiler or interpreter? If a compiler can get the same final results with TCO, then why shouldn't it? And the definition of a language should not define how a compiler works internally, only what the actual effect of the language instructions should be. Compiler writers should be free to do what they want to realize the defined effect. BTW as someone that moved from Pascal to C and C++, I can appreciate the ugliness of a lot of C shorthand syntax. It is confusing and can cause a lot of bizarre and hard to catch bugs. ~~~ gaius That's true; Guido only says that TCO isn't going to be written into the Python you download at python.org. ~~~ steveklabnik You are incorrect. "Second, the idea that TRE is merely an optimization, which each Python implementation can choose to implement or not, is wrong." - [http://neopythonic.blogspot.com/2009/04/tail-recursion-elimi...](http://neopythonic.blogspot.com/2009/04/tail-recursion-elimination.html) ------ zokier Does one really need to explain why he switched from PHP to anything? ~~~ sophacles This sort of thing is still needed. I have a friend whose boss won't let him use anything except C, PHP, and Perl. Any attempts to get Python in there are met with resistance and vague hysteria. More posts like these can slowly lower people's loyalty to PHP, and contribute to saving my friend and many like him. ~~~ jrockway Well, is Perl really a problem? Anything you can do in Python you can do in Perl just as easily; Perl actually has _more_ abstractions available for the programmer than Python. (Grammars in regexes, real closures, coroutines, a metaobject system, etc.) Also, last time I checked the programming language shootout, Perl 5.10 was slightly faster than Python 3. So with that in mind, the logical "step up" from Perl would be something like Haskell. Python is pretty much Perl with different syntax, fewer features, and the impression that it's "cleaner" because nobody ever bothered using it for "quick and dirty" scripting. ~~~ codexon Python 3 smokes Perl. [http://shootout.alioth.debian.org/u32q/benchmark.php?test=al...](http://shootout.alioth.debian.org/u32q/benchmark.php?test=all&lang=python3&lang2=perl&box=1) And most people are using 2.x which is even faster. [http://shootout.alioth.debian.org/u32q/benchmark.php?test=al...](http://shootout.alioth.debian.org/u32q/benchmark.php?test=all&lang=python&lang2=perl&box=1) ~~~ kingkongrevenge That's winning at the special olympics. They are both so slow you'd never use either for CPU-constrained tasks. ~~~ sandGorgon That is plain offensive. The special olympics are an even greater achievement of the human body as a machine, since it must push itself with far fewer degrees of freedom. ~~~ pieceofpeace As I saw it, it was not meant to be offensive.
Special olympics are for special people. They are not the best among all competitors. Most interpreted languages are not comparable to compiled languages. ~~~ jrockway Neither Perl nor Python are interpreted. They are both just slow. ------ lux You have no idea how much I wish I could do this. Just testing my code from PHP 5.2 to 5.3 and there are several serious problems now that I get to find workarounds for, all while maintaining backward compatibility too. How is it that a minor upgrade changes functions and their behaviour so much that it breaks existing PHP5-compatible code? WTF is wrong with the PHP developers? If I didn't have so much code (whole product with contracts to support) invested in PHP, I would switch in a heartbeat and never look back. ~~~ Maascamp Python 2.x -> Python 3 Ruby 1.8 -> Ruby 1.9 Do you actually know anything about languages other than PHP and want to change for legitimate reasons? Or are you just jumping on the bandwagon because it sounds cool? ~~~ lux I'm perfectly aware 1.8 to 1.9 was a pretty big move for Ruby folks (maybe could've been called 2.0?), and 2.x to 3 in Python, that's the time for big changes. I've also written real production software in about half a dozen languages, so yes I do. I just happen to have a codebase of 100,000+ loc in PHP that I support, so I'm kinda heavily invested in it more than the others. It sucked trying to rework so much PHP4 code for PHP5 while keeping b/c, since that felt like the slowest community migration to a new version I've ever seen... But it seems like each additional release from PHP changes and breaks something else. My codebase is older than many, so it's possible that's made it a bit more brittle over time, but things like completely changing the allowed characters in .ini file keys from 5.2 to 5.3 I find baffling. Yes, speed things up or improve things, but consider that existing liberties you've allowed may come to be relied on by your users. If you let them do something in 5.2, make sure they can still do it in 5.3. I'm always careful myself to make sure existing code will continue to work, and new features are offered as optional or via new APIs. Code from 3.x of my software continues to run fine on 5.x as a result, which lowers my support burdens. So again, no I'm not just fanboying for other languages or falling for the "grass is always greener" line of thinking. PHP just has certain inelegant things that I find add to my overall frustration over time. I value terseness, so things like -> versus . for objects add up, just like $ versus nothing for variables wears at the wrists too. I value consistency, so not remembering which parameter goes first for in_array() versus strpos(), or why the differences in naming add up too. Those are minor things that have been present from the start, but then when I start getting emails from people saying my software is messing up and I find out it's just that the latest version of PHP totally broke something I'd been relying on, that frustration is amplified tenfold. ------ sophacles Hah, just be careful, having to go back the other way is kinda painful. I keep a file for myself called pythonisms.php. In it I define a few functions that let me continue to think in ways I've come to love in Python. One example is the get function. It takes 3 arguments, and works like Python's dict.get(). I seriously have no idea how I did PHP without it. ------ ivankirigin I'm going in the other direction right now. I have a sheet of notes titled "PHP WTF".
Seriously, PHP is such a mess. Thank Jesus for Thrift <http://incubator.apache.org/thrift/> ------ electronslave As a former academic gone mercenary, I love that programmers of all skill levels can get into organization, language and specification through Python. It's a wonderful gateway to learning, and in this case, it seems to have done exactly what Python was specified to do! For my own projects, I've used pretty much all the fad languages from BASIC to Pascal to C. I've used industry-adopted languages like Tcl, Java and Perl. I've experimented with Lisp and Erlang. After all is said and done, I like Python/Cython/C. It gives me the option to use a glue language for the Windows programming I've done, which is perfect. It gives me a compiled (albeit ctypes-restricted) subset language for writing hashing/storage/whatever functions. It gives me a super-awesome library set that comes built-in. I know it'll go away someday, but I'm glad to have worked in Python for as long as I have. ~~~ hristov You must be old. Basic hasn't been a fad since the 80s. ~~~ gaius I assume he means VB(.NET). ~~~ electronslave Now that this conversation is closed up, I'll date myself further. I do not mean Visual Basic (nor Victoria Bitter.)
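For readers coming from PHP, sophacles' get helper above refers to the semantics of Python's dict.get(), which returns a default value instead of raising KeyError when a key is missing. Below is a minimal Python illustration; the standalone three-argument get() is only a guess at what a pythonisms.php helper would emulate, and the example data is made up.

```python
# dict.get(key, default) never raises KeyError: it returns the default
# (None if no default is given) when the key is absent.
config = {"host": "localhost", "port": 8080}

print(config.get("port", 80))     # -> 8080  (key present, default ignored)
print(config.get("timeout", 30))  # -> 30    (key missing, default returned)
print(config.get("timeout"))      # -> None  (key missing, no default)

# A standalone three-argument version, roughly what a PHP
# get($container, $key, $default) helper would mimic.
def get(container, key, default=None):
    """Return container[key] if the lookup succeeds, else default."""
    try:
        return container[key]
    except (KeyError, IndexError, TypeError):
        return default

print(get(config, "host"))            # -> localhost
print(get(["a", "b"], 5, "missing"))  # -> missing
```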
{ "pile_set_name": "HackerNews" }
Sysdig Raises $13M and Launches Container-Native Monitoring - msarmento https://sysdig.com/monitoring-as-a-microservice ====== wasnaga Looks cool but do I need to be running containers? ~~~ msarmento Actually no you don't. Sysdig specializes in container visibility but we have customers that are getting plenty of value out of the product without using containers.
{ "pile_set_name": "HackerNews" }
How Complex Systems Fail (1998) - parentheses https://how.complexsystems.fail/ ====== dang A fine submission, except that it was last discussed less than a year ago, which puts it in the dupe window for HN (see [https://news.ycombinator.com/newsfaq.html](https://news.ycombinator.com/newsfaq.html)). [https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...](https://hn.algolia.com/?dateRange=all&page=0&prefix=true&query=How%20Complex%20Systems%20Fail%20comments%3E0&sort=byDate&type=story&storyText=none)
{ "pile_set_name": "HackerNews" }
Show HN: Buy the Humble Indie Bundle #3 with Bitcoins - elliottcarlson https://www.btcdeals.com/humble-indie-bundle-3/ ====== cjzhang This doesn't seem to be through the main humble bundle site. Are you reselling the bundles or something?
{ "pile_set_name": "HackerNews" }
PxCode Challenge Day 2 – Give us your Sketch, and we give you the Code - pxcode https://www.youtube.com/watch?v=kbQkXGBj2ho ====== pxcode Give us your Sketch, and we give you the Code! We made this page in 32 minutes. Check our results below! Preview the result here: [https://bit.ly/33hARHI](https://bit.ly/33hARHI) Final source code at CodeSandBox: [https://bit.ly/35rn8Rl](https://bit.ly/35rn8Rl) ------ HolaMan A revolutionary way of programming - visualize the coding process to achieve 10-20 times the productivity
{ "pile_set_name": "HackerNews" }
Soyuz User Manual - samlittlewood http://www.arianespace.com/launch-services-soyuz/Soyuz_Users_Manual_CSG_June06.pdf ====== russss To elaborate, this manual refers to the Soyuz _rocket_ , not the spacecraft which is flown to the ISS. Specifically, it's the interface documentation for attaching your payload to it. (The Russians/Soviets have a long history of naming their rockets and spacecraft identically.) The ESA are collaborating with the Russians to build a Soyuz launch site at Kourou in Guiana. It's pretty much a complete clone of the site at Baikonur: [http://1.bp.blogspot.com/-SBynwXjyxKI/TZiECQqNtGI/AAAAAAAACS...](http://1.bp.blogspot.com/-SBynwXjyxKI/TZiECQqNtGI/AAAAAAAACSo/QRr2B4CxUdw/s1600/SoyuzESA.jpg) This gives commercial customers an increase in payload capacity compared to launching from Baikonur because Kourou is nearer the equator (especially relevant for geosynchronous orbits). Plus you get the legendary Soyuz reliability. I believe they're planning on the first launch from Kourou before the end of 2011. ------ arethuza It has some great historical details of the Soyuz launcher at the end: "Vehicles in this family have followed a conservative evolutionary path of development, and have been in continuous and uninterrupted production and flight for more than 45 years." ~~~ Typhon It's more or less the same technology since 1957. It's the most reliable rocket in the world. <http://en.wikipedia.org/wiki/R-7_%28rocket_family%29> ------ mckoss The chart on page 3-2 gives you a good idea of the G forces you'd experience on a launch. Imagine the feeling of instantly taking away 3.5 Gs as each stage blows away. Also notice how the Gs increase as the mass of fuel volume is reduced during stage firing.
{ "pile_set_name": "HackerNews" }
Scientists predict green energy revolution after new graphene discoveries - 3eto http://www.independent.co.uk/news/science/scientists-predict-green-energy-revolution-after-incredible-new-graphene-discoveries-9885425.html ====== trishume This may well be a very important and interesting discovery but I cringed a few times at the poor explanation of some of the science. Example: "a million times thinner than human hair, yet more than 200 times stronger than steel" the word "yet" suggests that it is very strong in an absolute sense _despite_ being so thin. Yet in reality the comparison is to an atom-thick sheet of steel, which is not very strong at all. This strength doesn't easily scale, as evidenced by the fact that graphite (lots of graphene layers) is not 200 times stronger that steel. (note that my explanation of the science may not be perfect but it is better than this article) ~~~ saosebastiao > This strength doesn't easily scale, as evidenced by the fact that graphite > (lots of graphene layers) is not 200 times stronger that steel. This is an even worse explanation of the science. On the level of saying that the strength of steel doesn't scale because I can bend pig iron with my hands. ------ Animats Materials-science articles about a modest advance in surface chemistry (which is usually called "nanotechnology") are regularly being hyped into "big commercial breakthrough real soon now" articles. Nature and MIT Technology Review (which, despite the name, is a commercial company) are the big offenders. Super-battery or fuel cell articles appear frequently, but never seem to result in actual products. Hydrogen isn't an "energy source", anyway. You have to crack it out of water or pull it out of hydrocarbons. Those processes are uphill energetically. At best, hydrogen is a storage medium. ~~~ ChuckMcM I agree with your assessment of the materials science articles but in this case I think you missed the 'breakthrough' part. Graphene allows positively charged hydrogen to pass through it. The thought experiment is you create a vessel with an inner porous core, you wrap it in graphene, and then you apply a vacuum. That device will pull any positively charged hydrogen out of the atmosphere (which frankly isn't much except during thunderstorms) This behavior with graphene isn't unknown, the first example was noting that graphene made for an exceptional water filter [1]. [1] [http://www.manchester.ac.uk/discover/news/article/?id=11561](http://www.manchester.ac.uk/discover/news/article/?id=11561) ~~~ Animats Even if that works, it may be subject to all the usual problems, such as filter clog. This is a big problem with hydrogen fuel cells, which require extremely pure hydrogen. It's a general problem with materials whose carefully constructed surface structure has some unusual property. The surface structure may be fragile. A good example is NeverWet, the ultrahydrophobic coating. It repels water because the surface is composed of tiny spikes, and water's surface tension keeps the water supported on the surface of the spikes. This works so well that mud will run off of shoes treated with it. The effect, though, doesn't last long; even slight wear on the surface damages the micro-spikes. Reviews of the product on Amazon agree: great at first, useless within hours to days of use: ([http://www.amazon.com/Oleum-274232-Never-Multi- Purpose/produ...](http://www.amazon.com/Oleum-274232-Never-Multi- Purpose/product-reviews/B00DNQBFAW)). 
------ upofadown The article goes on about the idea of collecting hydrogen from the atmosphere using a graphene filter. There is no appreciable amount of hydrogen in the atmosphere. ~~~ Gravityloss Using air humidity? Say we have 10 grams of water per cubic meter of air (so about 1% of the total air mass), so about 1 gram of hydrogen. At about 140 kJ/kg, and assuming a 10% reduction efficiency, and 100 watts per square meter solar panel, it should be able to reduce 0.071 grams per second or 260 grams per hour. Assuming 10% of air moisture can be extracted, the wind passing through the system to keep the water flow would need to be about 0.7 m/s. It's almost always as windy as that. The rough numbers are frighteningly good. All we need is the system to be cheap. ~~~ danieltillett ... and some way to store and transport hydrogen. We can make hydrogen quite cheaply right now (it is kindergarten technology), but once you try to store and/or transport it then this is where things get really, really difficult [1]. 1. [http://en.wikipedia.org/wiki/Hydrogen_storage](http://en.wikipedia.org/wiki/Hydrogen_storage) ~~~ Gravityloss No, I assumed (in hindsight I could have mentioned it) that you could use the hydrogen locally to generate electricity. The thing is connected to the electricity network. But I realized that is roughly the momentary day power of the panel. The real yearly average production is maybe 15% of that. Forgot the panel efficiency with this [http://en.wikipedia.org/wiki/Photovoltaic_system#mediaviewer...](http://en.wikipedia.org/wiki/Photovoltaic_system#mediaviewer/File:SolarGIS-Solar-map-World-map-en.png) ------ thret A bit OT but this reminded me of Maxwell's demon: [http://en.wikipedia.org/wiki/Maxwell's_demon](http://en.wikipedia.org/wiki/Maxwell's_demon) ------ andretti1977 Oops, the site says "you've been hacked by the Syrian Electronic Army (SEA)" ------ SixSigma No surprise there, they always hook it onto the cause du jour
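Gravityloss's back-of-the-envelope above can be checked step by step. The short script below simply follows the commenter's stated assumptions (140 kJ per kg of hydrogen, 10% reduction efficiency, a 100 W solar panel per square meter, roughly 1 gram of hydrogen per cubic meter of air, and 10% moisture extraction); it verifies the arithmetic, not the physics.

```python
# Reproducing Gravityloss's back-of-the-envelope numbers.

panel_watts = 100.0             # assumed solar panel output per square meter
efficiency = 0.10               # assumed reduction efficiency
joules_per_gram = 140e3 / 1000  # 140 kJ/kg -> 140 J per gram of hydrogen

useful_watts = panel_watts * efficiency          # 10 J/s available
h2_grams_per_s = useful_watts / joules_per_gram  # hydrogen produced per second
print(f"{h2_grams_per_s:.3f} g/s = {h2_grams_per_s * 3600:.0f} g/h")
# -> 0.071 g/s = 257 g/h, i.e. the "about 260 grams per hour" in the comment

# Wind needed to supply that much hydrogen from air moisture.
h2_grams_per_m3_air = 1.0  # ~1 g of hydrogen per cubic meter of air
extraction = 0.10          # only 10% of the moisture is assumed captured
air_m3_per_s = h2_grams_per_s / (h2_grams_per_m3_air * extraction)
print(f"{air_m3_per_s:.2f} m/s through a 1 square meter face")
# -> 0.71 m/s, matching the "about 0.7 m/s" figure
```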
{ "pile_set_name": "HackerNews" }
Ask HN: What's your favorite corporate power move? - jppope I've never been able to find a list so I figure I would ask y'all. Examples of corporate power moves: - Showing up 5 minutes late to a meeting, to show someone your time is more valuable. - "The Hard CC" where you cc someone's boss to throw them under the bus for not being accountable. - Having someone prepare a report or do a lot of work on something and then tell them to "simplify it" or just give you the "key points" in a meeting. What are your favorite (or least favorite if something drives you nuts) corporate power moves? ====== celim307 Giving my boss a heads up that I was gonna quit if things didn't change. Things didn't change, so I quit. They had almost no other technical staff so they called me asking to fix "just a couple little things". I quoted my rate at $300 an hour, minimum 20 hours. They balked. Three days later my former boss calls me and agrees. I ended up billing 80 hours. ~~~ agumonkey So weird how human nature works. The more I grow old, the more I see everything is a struggle/war/negotiation. I'm always looking for friendly high drive collaboration but it's so rarely the case. ~~~ celim307 My new place is much better :)
People with actual power won't. So get to those meetings on time. ~~~ loco5niner > Being late isn't a power move, it's just makes you look disorganized and > disrespectful. Sometimes yes, sometimes no ------ nathan_long The classic programmer corporate power move is wearing whatever you want to the office. ~~~ aloisdg Metal/Video Game/Nerd T-shirt + Bermuda short + Espadrille was my main dress code this summer. Skater shoes + cargo pant + hoodies right now. My power move would be to switch to a sirwal (best pant ever!). I may try it. My god level move would be to wear a mundu or a pareo. ~~~ ljm I've done the pareo before on multiple occasions. The cooling factor in 30°+ heat is unrivalled. Easily one of my favourite things to wear at work when the weather's good for it. Probably want to practice putting one on and keeping it secure for a bit first though - don't want it slipping off in front of people! (That would be a different kind of power move.) ~~~ stronglikedan How much cooler if you freeball it? ~~~ ljm Hard to quantify it but it's far more comfortable due to the soft fabric and lack of friction around the thighs. It's super refreshing not just with the breeze but also because it's like shade for your legs. It's honestly a shame that it's not socially acceptable (in the West) for men to wear skirts and dresses. ~~~ AstralStorm One of these days I should go to work in the fencing outfit. (Mask optional.) Maybe even the full historical reconstruction set I have. ------ airbreather The response of "noted". You do a little scribble in your book or tablet or whatever and look up and say "noted" and then just stare with a pregnant pause, indicating you are waiting for the next meaningless "concern" to be articulated. ~~~ cyberpip I do this all the time and didn't realize it was a power move. I find myself saying it to curb additional discussion I feel like is going to be superfluous - especially over slack or whatever. Noted! ------ seiko988 Employ or feign total ignorance of operations so no one can expect a raise, because the boss/manager doesn't even know what you do here. Assume loyalty only from those whom you have directly hired; remove all those who predate you. Make a strict deadline, but merrily add features at your whim, when the deadline is missed, then act astonished "how could this have happened?" Elicit sympathy from your underpaid minions for crashing your Porsche again while auto racing. ~~~ farazbabar I am sorry. ------ Blackstone4 Having confidence in myself and my own future. Enough so that I can say exactly how I see a situation and I can deal with the consequences whether positive or negative. ~~~ Blackstone4 To add to my comment, having learnt this 10 years into my career, I feel like this thinking is a virtious circle. If I am able to communicate exactly what I am thinking in a diplomatic manner, the more it is appreciated and the more success I have. This leads to greater confidence and the more I express myself in such a fashion. ------ mtmail At $bigcorp we had a senior vice president continuing his phone calls in the mens room (toilet) at the pissoirs. I can imagine he saw himself doing a power move (I certainly didn't) showing how important or busy he is. He did wash his hands at least. ~~~ smacktoward U.S. President Lyndon Johnson used to take this power move a step further. 
In one-on-one meetings with members of his staff, he would go into the bathroom attached to his office, start urinating, and then _call the staff member into the bathroom to continue the conversation._ They would then have to either go into the bathroom and literally watch him relieve himself, or risk offending their boss, who happened to also be the most powerful man in the world. ~~~ mieseratte > They would then have to either go into the bathroom and literally watch him > relieve himself, or risk offending their boss, who happened to also be the > most powerful man in the world. Those are the times you politely demur, and when pressed ask the "reducto ad absurdum" question - "Sir, you wish for me to watch you urinate?" Helps if you have a solid poker-face. ------ akman From the comments, seems there is confusion on what you're looking for. I sense sarcasm in your request, i.e., 'What are cargo cult things one can do to show power?' But some responses are more genuine. If you make the ask clearer in what you're really looking for, it'll make it easier to decipher the responses. My 2 cents... ~~~ jppope Mea culpa. I could have been clearer. I was mainly looking for a "zoo of power moves". definitely not advocating for the use of dis-respectful social tools. I was mainly trying to identify what they look like out in the wild - the good, bad, and ugly included. ------ allworknoplay Document dump. Not software-related, but someone needs files for diligence and either has some obnoxious request list or wants something you'd prefer they not really know about; you just dump literally everything on them in terrible formats to potentially obfuscate negatives and consume an enormous amount of their time. Many people will find whatever they need to check the box and move on. It's a dick move. ~~~ paleotrope You mean hundreds of scanned pdf files without ocr and password protected with different passwords isn't helpful? ------ wdhodges CC boss is called "escalation" and is useful if direct requests don't work. It's not throwing under the bus, its ensuring accountability. CC boss on the first request is usually unnecessary. Showing up late to meetings is indeed a "power move" but not respectful. Its the same as "busy bragging", talking about how much work/meetings you have. Both these are completely lame. The biggest thing I see is holding the floor in meetings, even timing your words and sentences to prevent interruption. This is sometimes necessary when making brief, complete points, in a very aggressive environment. I like to teach people how to break into the monologues. ~~~ mbrodersen "busy bragging" tells me that you are unorganized/inefficient/trying to impress (i.e. not powerful). ------ NoNotTheDuo "Per my previous email" is my favorite. I usually attach the prior email that answers the question. ~~~ Blackstone4 Isn't this passive aggresive? Would you do this in real life? I feel like this thinking would breed bad culture and is a short-term win at the expense of a long-term relationship. ~~~ jerf "It depends". I've used this when someone is claiming to their boss that we never communicated something, generally in situations where we've been communicating like crazy, but possibly not with them _directly_ , but via emails to the several affected parties. Generally I assume that the person is just forgetting because they are very busy rather than out to get me, but, nevertheless, you have to defend yourself against that sort of thing. 
Passive-aggressive in this case is superior to aggressive-aggressive. If you don't defend your reputation, nobody will. ~~~ Blackstone4 I generally agree with what you said. Maybe I take issue with the phrasing: "Per my previous email" I feel like it triggers some people. I prefer to try and soften it by saying something along the lines of: "I don't know if you got my previous email (see attached)" ~~~ AstralStorm That one is additionally condescending, implying the recipient cannot handle their email. Good job! ------ RocketSyntax These just sound petty, not powerful ~~~ ljm Wholesome corporate power moves, also known as being a good leader: \- Allowing your team to work remotely if they want to \- Assuming responsibility for something that needs doing when it would be easier to pass the buck \- Shielding your team mates from unwarranted external criticism the complete list is practically endless ------ stepvhen The "Hard CC" isn't a power move its how things get done. I constantly check in to make sure I'm doing my job right and when I am and the problem isn't fixed, its time for it to be someone else's problem. source: working in Support. ------ thefrenchsmith My boss never accepts calendar invites for meetings, and may or may not show up to the scheduled meeting. Brilliant filter since people will ask again if she's really needed, vs just invited for show. ------ cspags Being the only remote employee when the rest of the company is onsite. ~~~ TeMPOraL Huh. Achieved that at my previous job (3 days out, 2 days in), but the effect was ultimately opposite to what you'd expect. Sure, initially I was _the_ person that managed to convince management to give them remote work. Fast forward couple months, I was next to nobody on the floor, because I couldn't keep up with the social aspects of the workplace and wasn't present at the relevant watercooler conversations. ------ human20190310 I like to leave at exactly 5pm. ~~~ RocketSyntax Ha. Slave to the man. Do you badge out too? ~~~ human20190310 Few people can entirely avoid having to work for a living. ------ legohead Refusing to sign NDAs and non-competes ------ cryptoz How are these power moves? They transfer your power to someone else? Showing up late for a meeting purely to waste time tells the other person you are extremely inconsiderate and someone they should actively avoid. If I value my time, why would I spend any of it around you?? The rest of these things are just extremely rude, I cannot understand why you'd call them 'power moves'. They show insecurity and a desire to destroy the company you work in, not 'power'. ~~~ rgoulter They're certainly quite rude. It's possible the OP isn't sincere in asking for more ways to be rude. In a sense, being rude is a signal of power, because if you're rude and not powerful then you'd get punished for being rude. (As you say, I wouldn't wanna work anyone who behaves so rudely, though). [https://medium.com/incerto/how- to-legally-own-another-person...](https://medium.com/incerto/how-to-legally- own-another-person-4145a1802bf6) ~~~ jppope (OP here) I am 100% not advocating the use of "power moves". I don't believe they grow mutual respect or help build constructive work environments... but let's be real, any of us that exist in the real world have seen this stuff from time to time and like everything, there are good and bad versions of it. The "Ask HN" is meant to be of an "observational" nature. 
My aim is that the discussion provides utility for understanding social dynamics, and hopefully people use discretion with the info (i.e. Hanlon's razor). Hopefully the info can be used to help people navigate their lives better. ------ RocketSyntax Scheduling an offsite or business trip to avoid trivial initiatives you don't want to deal with. ~~~ jppope nice. ------ ghostbrainalpha I didn't start doing this because it was a "power move" although its been pointed out to me that it was since. My company like big 20 person meetings where my portion of the agenda may only be 5 minutes long. The rest of the meeting I will listen to what is going on, chime in when relevant, but I keep on coding so I can meet my deadlines. This bothers some people who think your full attention should be on the meeting, but honestly it just isn't required for me so I multitask. ------ mturmon The re-org in which dead wood is shunted off to an out-of-the way planning or strategy assignment. Extra points for dotted-line reporting to another strategy person. Further extra points if the planning assignment is not funded adequately and/or has no personnel authority. ------ maliker Pretty much every scene in the movie Margin Call. [https://www.youtube.com/watch?v=Hhy7JUinlu0](https://www.youtube.com/watch?v=Hhy7JUinlu0) ------ stunt I don’t know what kind of corporate culture you are dealing with but I personally believe in Open,Honest,Direct approach to solve my problems. ------ segmondy Canceling the meeting after 3 minutes to show them you don't tolerate lateness. ------ zwieback "What I'm hearing you say .." and then say what you want to happen. ~~~ jppope lol. Yep seen many a manager use that ------ tpae Working from home
{ "pile_set_name": "HackerNews" }
10 Must-Have Google Chrome Extensions for Web Developers - mdolon http://devgrow.com/top-10-google-chrome-extensions/ ====== elxx Web Developer is a Firefox favorite of mine that also has a Chrome version available now. (<http://chrispederick.com/work/web-developer/>)
{ "pile_set_name": "HackerNews" }
Reddit users could be held accountable for suicide... say what? - ryangilbert http://thenextweb.com/insider/2012/04/12/nine-reddit-users-could-be-slapped-with-wrongful-death-suit-in-suicide-case/ ====== killnine "Our family has decided to take legal action not only against his ex-wife, but those who urged him on to take his own life. Next week, our lawyer will be filing a wrongful death suit in Washington State against nine individuals. Our lawyer hired a private investigator and three of the individuals have been identified from those who urged Jerry to kill himself. Subpoenas will be issued to find the identity of the other three, though it is possible that they were the same people. We don’t know yet. We were told by our lawyer not to give any other information out such as our full names or the people to be named in the lawsuit" I am bad at math, but.........? ~~~ kylemaxwell Six of the defendants are people other than the Reddit commenters, apparently. ------ itg This is what happens when blogs and tech "news" sites report on stuff without doing any fact checking at all. Plenty of redditors did a bit more digging around and it seems to be a hoax since there is no link between the guy who committed suicide and the posting. There is no subpoena served and the police report doesn't have any links between them. It's one group of redditors who dislike another group and seem to be trolling. Anyway, I don't think these stories belong on hn, so flagged. ~~~ dgabriel Agreed. This seems to be part of a troll feud between ideologically opposed subreddits. I find it tedious. ------ kylemaxwell I've no idea of the legality involved, but from a moral and ethical perspective, egging on a depressed, suicidal person is immoral and unethical, with a few possible exceptions (e.g. a terminally ill person wishing to cut short a remaining painful period). Ironic, perhaps, that these Reddit users probably got karma on the site at the same time they in fact earned negative karma under whatever value system you prefer. ~~~ ryangilbert "Ironic, perhaps, that these Reddit users probably got karma on the site at the same time they in fact earned negative karma under whatever value system you prefer." Probably true... so sad.
{ "pile_set_name": "HackerNews" }
CEO dares Microsoft to sue him over virtual desktops that flout licensing - GreekOphion http://arstechnica.com/business/news/2012/03/ceo-dares-microsoft-to-sue-him-over-virtual-desktops-that-flout-licensing.ars ====== forrestthewoods This really is a fascinating problem. If Guise Bule gets his way the the maximum number of software licenses sold would be equivalent to the maximum number of concurrent users. I don't think that's particularly desirable for either developers or users. A similar situation exists with video games. The market is going pure digital and gamers want the ability to sell "used" digital games. If that were possible then a middleman service would appear which would instantly "buy" and "sell" the game as you launched and closed a game. A wildly successful game such as Skyrim sells millions of copies but only had ~300,000k concurrent users (on PC) at launch. A couple of months later and that number is only ~50,000. Suffice to say this will never be allowed to happen. ~~~ psykotic This already happened for video games under similar circumstances with Internet cafes. These days the typical video game EULA prohibits use in Internet cafes without a special license, but as far as I know that has not been tested in court. The easier workaround from the developer and publisher's point of view is to just go online-only, which to a large extent has already happened, especially in Asia where gaming in Internet cafes is widespread. ~~~ forrestthewoods Nice point! Valve has a special program just for cyber cafes (<https://cafe.steampowered.com/index.php> ). I have no idea how much it costs but you can pay some recurring subscription for access to a large number of titles. My impression is that most popular cafes pay this and it does very well for both Valve and the cafes. ~~~ bane "Add a wide range of 100+ popular titles to your catalogue. We provide complimentary promotional materials and marketing support for your cafés. We are happy to support for in-café tournaments and special events. Please let us know about your upcoming events. All participants have access to the private Café Forums where they may discuss trends and issues in the cybercafé industry with fellow café owners and operators." Once again Valve is showing the world how things should be done, is wildly successful, and nobody else seems to pay attention. It shouldn't be such a huge issue to just meet your customer in the middle. ------ jlawer The funny thing is the worst thing Microsoft could do to this is ignore them.... That would leave the issue unresolved, exactly what the Desktop as a Service people don't want. I suppose the real question is if Microsoft's position on the desktop makes it illegal to do a favorable deal with On-Live? I am not aware if this would be illegal or not? One would think it would be at least against the spirit of the law to carve out a market niche for a microsoft "old boy" and protect it through their monopoly position... but it wouldn't surprise me if there is no regulation in place to stop this. I am wondering what the shareholders think. They are refusing to license to most players in order to protect windows and office on the desktop. On the other hand they are giving up a new non-trivial revenue stream during the transition period. I would think a lot of investors would want to take the short term cash and run. 
Having previously worked for Red Hat on the Red Hat Enterprise Virtualization support team, it always was strange looking at the VDI products that it was SO SO expensive to run the infrastructure purely due to Microsoft Licensing. Microsoft don't want to loose the grip so they make VDI licensing difficult, instead pushing mixed virtualization where you need to run windows on your desktop as well. VDI struck me as almost purely a windows solution. If your running linux you have a range of other options to have multiple users on a single infrastructure in a workable way without having hundreds of VMs. ~~~ toomuchtodo Why don't more Desktop as a Service organizations get together and refine LibreOffice to heavily compete against Microsoft? It clearly works; look at other open source solutions that are a collaborative effort. Linux as a whole, Python, Ruby, OpenStack, etc. ~~~ rbanffy LibreOffice's codebase is enormous and complicated. It's being cleaned up, but it may still be hard to maintain. It's doable, of course, but if you implement too much Office compatibility, you risk Microsoft suing you for some frivolous patent they keep just for the purpose of crushing LibreOffice when it becomes a threat. ~~~ rbanffy It's interesting how the downvoting of comments critical to Microsoft is time- related (down trends around noon and midnight UTC, up trends shifted about 6 hours). One day I'll have to write a bot to check up/down votes and relate them to sentiment. The demographics may prove interesting. ~~~ rbanffy to say nothing about downvoting of comments that are made to the wrong comments ;-) That should teach me not to post right before going to bed. ------ vDesktop Guise Bule here ! Am just about to begin an IAmA on reddit if you fancy joining me there : <http://bit.ly/redditor> Or follow my updates on twitter about this : <https://twitter.com/#!/Guise_Bule> OR go read my blog post : <http://bit.ly/MicroFap> ~~~ Drbble Wow, with that bitly URL for your AMA, you could launch a second career as an Internet marketing consultant. ~~~ vDesktop Yeah, attention to detail is important :) ------ Osiris I'm curious, why don't they just provide Windows Server 2008 R2? With the Desktop Experience installed, it's functionally identical to Windows 7, and you can license an unlimited copies on Windows Server Datacenter Edition, which is $3,000 per CPU. I have a hosted WS2008R2 instance that I can RDP into and use for whatever I want. Are there some other licensing constraints that prevent WS2008 from being used in that manner? ------ savrajsingh This is definitely an interesting space. I keep a Parallels Windows 7 VM on my machine, and I use it for the amazing features in the Windows version of Microsoft Office (former MS Office UI PM here). If I could replace that with an always up-to-date cloud-based copy (I'm always installing security patches since I rarely boot it) I'd do so in a heartbeat. ------ jpdoctor Not that I don't admire the guy's bravado, but there is no obligation for Microsoft to sue everyone equally. IANAL so correct as necessary, but he seems like he's tilting at windmills. ~~~ rbanffy nknight points this out very well: <https://news.ycombinator.com/item?id=3726555> ~~~ jpdoctor Not really. The amount of time and money for DOJ to prosecute microsoft the first time was huge, and there are competing virtual desktops, so good luck getting them on this point. 
~~~ nknight Antitrust cases against big companies are always going to be expensive, but one of the many stupid things the DOJ and states did in the Microsoft case was making Microsoft fight for its life. It was unnecessarily aggressive and drove up the expense. And, there were competing browsers and operating systems during the first case, too. The mere existence of competitors does not preclude antitrust action. Just ask AT&T and T-Mobile. ~~~ jpdoctor > _The mere existence of competitors does not preclude antitrust action._ Agreed. I just don't see the anti-competitive behavior around virtual desktops that we did with WinOS and IE, so I think it would be a very tough sell. ~~~ rbanffy What are the reasons to favor one specific vendor giving it a key advantage over the other? Microsoft seems to be clearly playing favorites here. What are they trying to do? Give a monopoly to their favorite vendor? Is it legal to use your monopoly to expand someone else's? Can we really call this company, with its seemingly deep and strong ties to Microsoft, a separate entity? If it's not, this is clearly something the DoJ should look into. ~~~ chc "Use your monopoly"? What monopoly is Microsoft using? ~~~ regularfry That over the Windows desktop. There's nobody else you can get a license from. ------ powertower I don't get it... If I provide a virtualized hardware desktop hosting platform that can enable the client to load an ISO of Windows and enter his own serial key, am I violating any Windows hosted desktop restrictions? Or am I only violating it if I supply the serial keys (from legit separate/per-customer/never re-asigned OEM copies)? I think it's the latter, because in the former there is absolutly no contract between myself and Microsoft. What the client does with my platform is his business and doing. If he activates Windows there, that's between him and Microsoft. So what is the problem that these articles are trying to point out? How is it difficult for customers to buy their own OEM copy and supply a serial key?... Cusomter goes to your panel, clicks to load some already predefined or prehosted VHD or ISO, then supplies serial key him/herself. Problem solved? Or is this all about the cost of OEM copies? ~~~ wvenable From the article: "We do know from Microsoft's blog post that vendors can only host Windows 7 desktops in a virtual desktop infrastructure setting if the customers buy their own licenses from Microsoft. Even if this requirement is met, the vendor must host the desktops on separate physical hardware for each customer, ruling out a multitenant arrangement." You might be able to enable the client to load an ISO of Windows and have him enter his serial key _but_ you'd have to separate hardware for each customer making any virtualization completely pointless. ------ brisance I don't think Microsoft is playing favorites with OnLive; rather, they are trying to save their hardware "partners" from certain doom since they do not have any such offerings. This would indirectly impact Microsoft itself since OEM licensing makes Microsoft a lot of money. ------ J3L2404 Well it is digital so a copy of it is only like savoring the scent. ~~~ vonkow It's software, everything's a digital copy. ------ alanh I’m going to lose mad karma, but we have done all this before, so: _zzzzzZZZZZZZ_
{ "pile_set_name": "HackerNews" }
VUE, Behavioral analytics for mobile users is crowdfunding - waynesutton https://ramen.is/projects/get-vue-analytics ====== kumarski How does this affect the speed of the app...? ~~~ blaurenceclark Hey Kumarski, the app performance change is negligible and a majority of the work is done in the background, being a UX/Behavior analytics tool, focusing on not changing the application performance at all is key. ~~~ glovedotcom blaurenceclark - When you say "one line of code to add the SDK" can you add some light? Is your plan to track ALL interactions of the user...thats a lot of data to triage :) ~~~ blaurenceclark We track interactions/behaviors that matter. From working with a number of apps we've been able to identify and combine interactions to make the amount of data to manage much smaller and easier to triage ~~~ blaurenceclark Using similar language to the asked question :) ~~~ glovedotcom could become a scaling problem quick...but i suspect those are problem for future blaurenceclark to solve after the customers start flowing in ... im intrigued indeed
{ "pile_set_name": "HackerNews" }
Packing heat gets you shot, say profs - billpg http://www.theregister.co.uk/2009/10/04/guns_attract_bullets/ ====== newsdog not true - those guys got shot due to their gangsta lifestyle, guns were orthogonal to that ~~~ CWuestefeld Right. The only explanation offered for the shootings was muggings. I wonder how much that explains, though. There are certainly those, but also gangsta stuff as the parent post notes; crimes of passion; drunken idiots; and on and on. Seems like a stupid study, as we really can't draw any kind of conclusions from it.
{ "pile_set_name": "HackerNews" }
Ask HN: Should I learn Vala to develop cross-platform software? - yapcguy Vala is a C#-like language, which compiles down to C. It uses GObject from GTK+ as its base object class. For UI programming, the bindings to GTK+ are pretty good. There are no bindings to QT. I have been playing around with the language on both Linux and Mac OS X. While Vala is still a work-in-progress, it is used in production in applications like Geany and Shotwell. Should I learn GTK+ and Vala or should I use QT and C++ to develop cross-platform desktop apps? Who has more momentum and is likely to be dominant in 5 to 10 years? ====== dubcanada This is just my opinion, but... I don't see GTK and Vala lasting very long. For one, GTK3 doesn't even have a Windows version; there are some random people who made one, but no community-supported Windows version. GTK3 is also less popular than QT, GTK3 only being more popular on Linux, while QT looks better and runs better on both Windows and Mac. I like Vala and GTK and I've contributed to elementary OS for a while now. But every day they get questions about why they use Vala. So... People don't generally like it or use it. But... really use whatever you want. If you like or need C, use GTK. If you like or use C++, use QT. If you use C and want a little more, try Vala. If you need Windows or really good Mac support, use C++ and QT, or do the backend in C or C++ and a platform-specific frontend. Even with QT5, Mac users will know, because certain things still look off. Another popular idea seems to be using HTML and JavaScript for your frontend with things like CEF. ------ switch33 Learn whatever you want I would say. It's your own choice. But in general it's good to learn a language that is applicable to other software development. Learning C or C++ for that matter is a lot better because C has a close relationship to assembly and shows you what the machine is actually doing. QT also has its strengths/weaknesses. Try using [http://libnui.github.io/nui3/](http://libnui.github.io/nui3/) if you want; it may be easier. ~~~ yapcguy I already know C and have experience with ObjC and some C#. I've gone through the Vala tutorials, not really too complicated. I prefer to use a language like ObjC or Vala to do GUI programming, but QT appears to have momentum (e.g. LXDE porting to QT, Linus Torvalds porting his diving app Subsurface to QT [1]) which means having to use C++, as the bindings to other languages are always somewhat lagging. Perhaps C++11 is better for keeping one's sanity, but I haven't used C++ in a long time and I never really missed it, so I'm not sure I would enjoy QT and C++ programming. [1] [https://plus.google.com/105872806106213007611/posts/MwiTc3cH...](https://plus.google.com/105872806106213007611/posts/MwiTc3cHKgi) ------ rprospero Is there any particular reason that you've chosen to limit yourself to just Vala/GTK+ or C++/QT? For my current job, I've had to write cross-platform GUIs in Python/wxWidgets, Mathematica, Racket, Tcl/Tk, Haskell/GLUT, LabView, Clojure/Seesaw, and Perl/Tk. From a longevity perspective, Clojure is the only language that's not older than Vala by at least a decade. None of those languages are a silver bullet and there's a couple I hope that I never have to touch again, but sometimes using the right tool can save you a bunch of heartache. ~~~ yapcguy I build desktop apps for the Mac but want to build cross-platform software going forward. A native look'n'feel is a bonus, as well as easy deployment.
I also don't want to be tied down to proprietary APIs like I am on the Mac with Cocoa, Core(Whatever), etc. From a language perspective I prefer Objective-C, C#, Vala over C++. Personal taste. Sadly those languages don't have bindings to the toolkit which seems to have the most momentum, QT. There are third-party C# bindings to QT, Qyoto, but that requires using Mono, which for some Linux users is a no no no. ------ AsmMan2 Short answer: no (IMHO). What about D? It may be dominant in some years. I don't see Vala in real-world development, or a good community around Vala... In fact, I'm not a Vala expert, but what are Vala's great features? Who is working to make this a real-world language and to improve it? I don't see a good reason to invest in the Vala language. Just change to something like D. ------ FrankBro This might be off-topic, but if you want to see some good examples, elementary OS uses it for most of their applications. I tried it a while ago and really loved some of the features (integrated contract-driven programming, yes please). However, as with most new languages, I had problems with the availability of tools, tutorials and libraries. ------ artificialidiot I see no use of Vala code in the Geany source. Am I missing something? Vala is not suitable for anything production-ready because of its half-assed memory management semantics. I believe it is better to invest in C++11 at this point than Vala. At least, it would be a little bit easier to use libraries which don't have a binding. ~~~ yapcguy Apologies, I meant Geary the email client, which is written in Vala. Geany is an IDE which has support for Vala.
{ "pile_set_name": "HackerNews" }
Ask HN: How should a software engineer go about making a website? - AgathaTheWitch I have been a professional software developer for four years now and must confess, I have never made a website. Mostly I have done systems engineering (Deployment, security, database infrastructure) but also some QA work and Java app development.

I wrote a fiction book in my spare time and would like to create a site to promote it and possibly develop the IP into other products. I already have a domain. The thing is, there are so many ways to create websites. I want a framework that is easy to use but also not too cookie-cutter. I've heard good things about Squarespace.

My best languages are Python, Java, and Ruby. Pretty solid with Unix (I've written hundreds of BASH scripts) and I wouldn't mind writing HTML and CSS if necessary. I know Javascript a bit too.

Anyway, what do you folks recommend? ====== jqm Wordpress? (Kidding...). I like Python Flask or Python Bottle. For a small site I would just write the HTML. I realize this isn't a popular opinion nowadays, but sometimes it beats monkeying with frameworks in my opinion. Especially when you don't need to. And it's easier to set up hosting on shared hosting services. Check the purecss.io post from earlier this morning. They have some nice layouts. ~~~ tomek_zemla Exactly! You will probably enjoy learning a few things by building a simple site from scratch and you will have a better feel for selecting any frameworks after. I would recommend getting this beautiful book by Jon Duckett that covers both the design and development basics for a website: [http://www.amazon.com/HTML-CSS-Design-Build-Websites/dp/1118...](http://www.amazon.com/HTML-CSS-Design-Build-Websites/dp/1118008189/ref=sr_1_1) ------ atmosx Hello, Ruby should be your language of choice for anything 'web', among the ones you know. Might be helpful in the future for other projects as well. As most people said, Sinatra (+HAML or LESS) is an excellent framework to build up a simple website really fast. However, I'd say go a step 'up' and use Octopress[1]. It's a Jekyll-based blogging platform. You can set it up on Github, to avoid hosting. If you're not expecting extremely high traffic, Github is fine for static content and JS. It's flexible enough to do whatever you really wanna do and there's plenty of documentation online. You can find many themes and howtos on how to use 'Liquid' in order to roll your own or modify one of the current themes. It shouldn't take more than 2 days to have something acceptable, a little longer if you plan to design a theme from the ground up. Good luck :-) [1] [http://octopress.org/](http://octopress.org/) ------ DanBC FORM OVER FUNCTION. You have a fiction book. People will want to know what genre it is; what the reviews say; what it looks like; a sample chapter; where to buy it; publisher, ISBN, etc. If you're lucky they'll want to discuss it with other readers or to ask you questions. If I'm in a bookshop and I see a book, and I websearch it, there are a few things that happen. 1) I get a great website. I am more likely to buy the book. 2) I get a terrible website. I am less likely to buy the book. A terrible website is one that does a bunch of stuff before I get to the information I need. ------ insky Well, it seems to me that you are very happy to write scripts so you wouldn't be allergic to writing a little bit of code. I'd suggest you pick out of your best languages the one you prefer. And go with that.
Write a few pages of text in something like Markdown or RST (Python). Use a microframework like Bottle (Python) that resolves a route (example.com/about/) to a function. In that function, convert your micro-markup to HTML, which you then wrap further in a template. You can find some free HTML templates available on-line. Before that, plan the site, define what it is and jot down your expectations of what it will do. Look at similar sites. Note the ones you like. Draw up how the pages will be connected. For what you describe, it sounds like you only need a few pages, with the homepage listing news/updates. When you add something to the site (news), syndicate it elsewhere, with links back (e.g. Twitter, Facebook). If I was a fan, I'd probably want to join a community. Perhaps a newsletter and/or fan forum would be a good addition. You may want to manage a mailing list and/or a web forum. If you have a web forum, you might want to stick that on another domain (or subdomain). You should be able to find an off-the-shelf solution for that. It doesn't need to be in the same language as your main site. Or hosted in the same place. Getting different web components (forums, etc.) to match cohesively theme-wise is a bit of a challenge. But it's not necessarily needed in my opinion. Personally, as a fan I'm more interested in content. You could however pay someone to theme the site after you have it working. That's easier if you restrict yourself to a minimal number of templates. There are millions of frameworks out there to choose from. Start with something very simple like Bottle; it will introduce concepts, and if you want to take it further later on you can. If you haven't written a web page at all before, then basic HTML is a good place to start (elements: html, head, body, h1, h2, p, a, ul). Compose two HTML pages, link them back and forth to each other and try and add an image and text to both. You don't need a fancy web server to do that, you can just open a text file and start writing, and test in your web browser. ~~~ insky I wrote a quick shell one-liner that aggregates text files and simple templates into resulting HTML files. find ./txt/ -type f -name '*.txt' -print0 | xargs -0 -I£ sh -c 'newname=$(basename £ .txt).html; cat templates/header.html £ templates/footer.html > html/$newname' Each new page (HTML file) is a sandwich. The filling could be transformed Markdown rather than plain text. A little find-and-replace for titles, and such like would make for quite an easy static site generator. I even found a bash Markdown transformer: [https://github.com/chadbraunduin/markdown.bash](https://github.com/chadbraunduin/markdown.bash) ------ canterburry Honestly, for what you are trying to do, I would go Bootstrap or Foundation framework, HTML5 and CSS3. Not even JavaScript. That's it. Skip the SASS, LESS, Python, WordPress, Flask or other stuff everyone here is talking about. You don't need it. Some plain static webpages. Period. ------ chriswessels Ruby is a terrible choice. Use a Node.js-based static site generator. You could even host on Amazon S3 (super low cost but globally distributed and redundant)! Look at Whitesmith/Blacksmith/etc. ~~~ mahesh_gkumar Pray tell, why is Ruby a terrible choice? You can get a basic website up and running with Rails in less than 10 mins. ~~~ not_kurt_godel Because you're running a webserver which can crash or otherwise fall over to serve static content when you could be running a literally zero-maintenance alternative.
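A minimal sketch of the Bottle-plus-Markdown approach insky describes above: a route resolves to a function, the function converts a Markdown file to HTML, and the result is wrapped in a template. The pages/ directory, the template string, and the third-party markdown package are illustrative assumptions on my part, not anything the thread prescribes.

    # pip install bottle markdown   (assumed third-party packages)
    from pathlib import Path

    import markdown
    from bottle import Bottle, abort, run, template

    app = Bottle()

    # A tiny HTML "sandwich": header, converted body, footer.
    PAGE = """<!DOCTYPE html>
    <html><head><title>{{title}}</title></head>
    <body>{{!body}}</body></html>"""

    @app.route('/')
    @app.route('/<name>')
    def page(name='index'):
        src = Path('pages') / (name + '.md')          # e.g. /about -> pages/about.md
        if not src.is_file():
            abort(404, 'No such page')
        body = markdown.markdown(src.read_text(encoding='utf-8'))
        return template(PAGE, title=name, body=body)  # {{!body}} inserts the converted HTML unescaped

    if __name__ == '__main__':
        run(app, host='localhost', port=8080, debug=True)

Write pages/index.md, run the script, and browse to localhost:8080. Closer to the one-liner further up, you could also loop over the same files once and write the output to disk, then host the result as the plain static pages several commenters recommend.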
------ YoAdrian You might start with Github Pages: [https://pages.github.com/](https://pages.github.com/). It's all Ruby and pretty easy to use. ------ mkobar This is what Tails is doing: [https://tails.boum.org/blueprint/replace_truecrypt/](https://tails.boum.org/blueprint/replace_truecrypt/) ------ adam419 Ruby + Sinatra
{ "pile_set_name": "HackerNews" }
Assange's Op-Ed in The Australian - teoruiz http://blogs.theaustralian.news.com.au/mediadiary/index.php/australianmedia/comments/julian1/ ====== sachitgupta Similar discussion: <http://news.ycombinator.com/item?id=1978955>
{ "pile_set_name": "HackerNews" }
The Mystery of the Creepiest Television Hack - mjschultz http://motherboard.vice.com/blog/headroom-hacker ====== mjschultz Also, here is the Wikipedia article on the matter [1]. And a Reddit thread (presented as one of the theories in the article) [2]. [1]: [http://en.wikipedia.org/wiki/Max_Headroom_broadcast_signal_i...](http://en.wikipedia.org/wiki/Max_Headroom_broadcast_signal_intrusion_incident) [2]: [http://www.reddit.com/r/IAmA/comments/eeb6e](http://www.reddit.com/r/IAmA/comments/eeb6e) ------ Mithaldu This is a very long article. Is it just a rehash of what happened, or does it contain any new info figured out in the past year? ~~~ MWil If you haven't read the reddit thread from within the last year then it certainly has a great amount of new information. In fact, I thought the reddit theory was a more compelling story than this article ~~~ dfc I couldn't get past the writing style in the reddit post. Maybe it will be less abrasive in the morning with fresh eyes and some rest. ------ middleclick What a well-written and researched article. An absolutely fascinating read! ------ stox I still think someone from the Ripco crew were involved. They were mighty conveniently located. Maybe, someday, someone will spill the beans. ------ jccalhoun great article. One correction, however: They say one "suspect" was in "nearby" Bloomington, Indiana but Bloomington isn't really near Chicago. It is in Southern Indiana, about 4 hours away from Chicago and to get to Chicago you have to go through Indianapolis, so if someone from Bloomington were to do something like this it seems much more likely they would do it in Indianapolis or even Louisville or Cincinnati which are both closer than Chicago. ~~~ todd3834 Unless that was part of how they intended to avoid getting caught ------ elwell is that the same character on the tv's in eminem's new video? [http://www.youtube.com/watch?v=XbGs_qK2PQA](http://www.youtube.com/watch?v=XbGs_qK2PQA)
{ "pile_set_name": "HackerNews" }
EC2 Instance Update – X1 (SAP HANA) and T2.Nano (Websites) - runesoerensen https://aws.amazon.com/blogs/aws/ec2-instance-update-x1-sap-hana-t2-nano-websites/ ====== kimcheekumquat >X1 instances will feature up to 2 TB of memory, a full order of magnitude larger than the current generation of high-memory instances. These instances are designed for demanding enterprise workloads including production installations of SAP HANA, Microsoft SQL Server, Apache Spark, and Presto. >The X1 instances will be powered by up to four Intel® Xeon® E7 processors. The processors have high memory bandwidth and large L3 caches, both designed to support high-performance, memory-bound applications. With over 100 vCPUs, these instances will be able to handle highly concurrent workloads with ease. Can't wait to launch one of these. ~~~ kchoudhu I assume I'll have to declare personal bankruptcy as soon as I launch one. ~~~ jewel As a point of comparison, the Dedibox Extreme SP launched in 2013 and has 1TB of RAM and costs €1899.99/month. [http://documentation.online.net/en/serveur- dedie/offres/serv...](http://documentation.online.net/en/serveur- dedie/offres/serveur-dedibox-extreme-sp/server-extreme-sp) ~~~ toomuchtodo Physical servers at other providers are almost universally cheaper than AWS. At AWS, you're paying for it to be in your same account, have access to your other AWS resources, etc. EDIT: This is not to hate on AWS. I love AWS! (I do devops and infrastructure). Its to say, if you need what AWS offers, buy it. If you don't, architect your solution around other providers. ~~~ chimeracoder > At AWS, you're paying for it to be in your same account, have access to your > other AWS resources, etc. You're forgetting the biggest part: you're also paying for the _flexibility_. Subject to availability, you can provision and deprovision AWS resources at will, which gives you far greater granularity than you can do with your own hardware. This flexibility enables you to save in the long run if you manager your resources appropriately, but it also comes at a per-unit premium. ~~~ toomuchtodo Very much this. If you're starting out or have a very dynamic load pattern (Netflix), AWS is for you. If you have a fixed load pattern, you can see quite a bit savings going to dedicated hardware (Stackoverflow/Stackexchange). ------ jewel 2TB of RAM would cost $31k or so when built out of 32GB chips. I don't know if EC2 is done this way, but imagine if the T2.micro class was running on these servers (to save physical footprint). At 1.3¢ per hour, the server would bring in $227k/year. ------ trjordan I know it's happening, but I'm so surprised to see them cater to technologies like SAP HANA. 3 years ago, there was basically no overlap between people who wanted to run products from SAP/Oracle and people who influenced product offerings from AWS. Obviously, that's different today. ~~~ edwinnathaniel hi @trjordan, long time no hear ;) It's amazing to see the progress from anti-cloud (or private-cloud) to "we're cloud only" within that timeframe (I'm referring to companies that was hard-on on-premise). AFAIK, SAP HANA is being used by NBA (NBA.com/Stats), NFL, NHL ([http://www.nhl.com/ice/news.htm?id=754248](http://www.nhl.com/ice/news.htm?id=754248)) to provide real time statistics for the whole league. I wonder if they want to move their infrastructure to AWS. PS: I work for SAP ~~~ AndyNemmity I know way too much about this to respond, but I'll just say there are a lot of pieces driving this. 
:) ~~~ edwinnathaniel Send me an email :D, would love to hear the story. ------ AndyNemmity It'll be nice, I'm in the middle of building a 14 node HANA cluster, and it would be much easier to just have it all on AWS provided I get the same performance. edit: let alone the fact I could take it down when not in use. That's a ridiculously cool thought. the price of this thing all together is insane. ------ garyrichardson "640K ought to be enough for anybody" \- Bill Gates, 1981 I can't wait to spin up a 64 node cluster of these new X1's. ~~~ strictnein > "I've said some stupid things and some wrong things, but not that. No one > involved in computers would ever say that a certain amount of memory is > enough for all time … I keep bumping into that silly quotation attributed to > me that says 640 K of memory is enough. There's never a citation; the > quotation just floats like a rumor, repeated again and again." \- Bill > Gates, 1996 [https://en.wikiquote.org/wiki/Bill_Gates#Misattributed](https://en.wikiquote.org/wiki/Bill_Gates#Misattributed) ------ late2part Very cool. I'm not an AMZN or AWS fanboy, but this is really neat stuff, and very impressive how much they continue to increase their market lead. ~~~ monksy I just wish there was a good competitor that bundled the other services like they do. ~~~ nullrouted Both Azure and GCE are competitors, both have pretty feature rich clouds. They may not have feature parity but I'm guessing what they do have fits 70-80% of peoples needs (Compute, Database, Storage, DNS, Load Balancing). ------ nivertech My speculation is that X1 instances is where Amazon QuickSight in-memory BI service is running. ------ nabaraz Hacker news has become AWS news lately. In serious note, I am curious to see pricing structure on X1. ~~~ ShaneOG AWS re:Invent[0] is taking place this week, so there are a lot of news and announcements. [0] [https://reinvent.awsevents.com/](https://reinvent.awsevents.com/)
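To make jewel's back-of-the-envelope figure from earlier in the thread concrete: assume the 2 TB box is sliced into 1 GB t2.micro-sized instances at the on-demand price of $0.013 per hour, and ignore CPU limits, hypervisor overhead, and reserved or spot pricing. Those assumptions are mine, purely for illustration.

    # Rough revenue estimate for a hypothetical 2 TB box rented out as t2.micros.
    instances = 2000                  # 2 TB / 1 GB per instance (decimal GB, as in the comment)
    price_per_hour = 0.013            # on-demand $/hour for a t2.micro
    hours_per_year = 24 * 365

    yearly_revenue = instances * price_per_hour * hours_per_year
    print(f"${yearly_revenue:,.0f} per year")   # -> $227,760 per year

Counting in binary units instead (2 TiB = 2048 GiB) pushes the same arithmetic to roughly $233k per year, so the ~$227k/year quoted above is simply the decimal-GB version of the calculation.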
{ "pile_set_name": "HackerNews" }
Ask HN: Do you accept LinkedIn connection requests from recruiters? - johnhess Let's assume you're open to new opportunities.

Does it look odd to have a bunch of connections to recruiters? Is it worthwhile? ====== jsinkwitz Yes, but not for the reason you're thinking. Recruiters and sales people (guilty!) tend to network exceptionally well, which means by connecting with them, you've just significantly expanded your 2nd level connections within your own industry. Winz ~~~ k__ hehe, yes did the same. But then a recruiter I didn't know wrote me how he heard that people say I'm a real good JavaScript developer. I asked him who says this and his reference was another of these recruiters I didn't know. This I found rather suspicious... ------ taprun I went out of my way to connect with recruiters because it extends my network. Two benefits: 1) It makes me a 2nd level connection to lots of people I probably _do_ care about. 2) It ups my visible number of connections and makes me _look_ important. People see 500+ connections and assume I'm someone important. ~~~ Gustomaximus On this connections number: I now look at the connections-to-skill-recommendations ratio. Generally I see someone as genuinely connected if the skill recommendations for their top skill are around 10% of their connection count. If someone has 500+ connections and 10 people have hit a skill recommendation, you know they are spammy. ------ Davidbrcz I will accept one if the recruiter is targeting me. That means he/she wrote a custom message, read my profile, proposed me something... Each time that I was asked, they were in a frenzy and it looked like they were connecting to as many people as they could. ------ lmilcin LinkedIn is meant to create networks for recruiters and candidates. Waiting to build your network until you actually need work is like waiting to make your backups until after you had an outage. As a rule I will accept requests from recruiters unless: 1. They obviously don't know how to do their job properly. For example, offers sent out without getting acquainted with my profile, or asking me to do their job, are a reason for me to drop the recruiter. 2. They can't explain the offer, for example what I would be expected to do, or can't connect me with somebody who can. I understand that recruiters can't have my level of technical knowledge or they would most likely be working as engineers. Nevertheless, they are responsible for communication and if I can't get the required information I drop the recruiter altogether. 3. Recruiters who require me to do any work before I can get to know who I would be working for. 4. Recruiters who just outright give me the form to fill in to build their database, including all the information I already included in my CV or LinkedIn profile. ------ 505 I am open to new opportunities. If I get a request from someone with a lot of connections, my heuristic is I don't add that someone to my network. I don't think a single person with lots of connections is actually helpful to the information represented in 'my' network. If a recruiter looks interesting, I try and open a conversation without making them part of my network. And rules are made to be broken, of course. ------ dozzie It doesn't look odd, the same way it doesn't look odd that you have in your friends list a bunch of people you have never seen and know nothing about. People just accept anybody as a contact. Me, I don't accept any recruiter unless I know them in person.
I even have it stated clearly in my profile, though most of the recruiters don't even bother reading it. ------ ramtatatam I used to accept when I was potentially interested in getting new job :-) I even got quite an interesting interview from my LinkedIn network long time ago. Now recruiters contact me for slightly different purposes and mostly it's a waste of time from my perspective. Some of them will try to connect with you if you have many connections so then they harvest your connections too - I deal with it by making sure nobody can see my connections (you can set this up in privacy section) ------ adamb_ Generally no. The thing is the value of that connection is really low, because most recruiters connect with as many candidates as they can indiscriminately. The reason for that is financial: If you're directly connected then it doesn't cost any credits to send an in InMail message. ------ joeax I'll accept a connection every now and again, and only if we share connections with at least 2 other non-recruiters/technical people. I try not to overload my connections with recruiters as they don't provide much network feedback value (i.e. they rarely comment or like anything I post). ------ hulahoof I don't think it's odd from an employer's point of view. I accept all requests that may offer some value to me in the future. I used to dedicate some time to replying to the messages that follow but they became a bit overwhelming. ------ JSeymourATL > Is it worthwhile? Perhaps more compelling-- connect with senior executives in roles & companies that you can help. Imagine connecting with CTO/CIO's, VP's of Engineering in your space. ------ camhenlin Yes, I don't care to talk to them at this time, but my situation could change at any time and I may want access to their contact information ------ jeena I don't use LinkedIn. ~~~ tonyedgecombe Me neither but if I was still in the jobs market I would.
{ "pile_set_name": "HackerNews" }
Why font rendering sucks in electron based editors? - xstartup Font always appears fuzzy or blurry. It doesn&#x27;t happen in Sublime Text. Is there any solution to this problem? ====== Rotareti I ran into this issue when I installed Atom two years back. Today I installed VSCode and run into the exact same issue again. I tried a bunch of stuff to fix it; none of it worked. It was a pain two years back and it's a pain today. Except for the ugly fonts the Editor seems nice. Maybe I'll give it another try in 2019... ------ folknor Several solutions/workarounds, explanations, and links to lower-level bugs are here [https://github.com/Microsoft/vscode/issues/35675](https://github.com/Microsoft/vscode/issues/35675) I would explain more if I knew - but I haven't read it. I just found the solution and applied it.
{ "pile_set_name": "HackerNews" }
Why the developers who use Rust love it so much - ayoisaiah https://stackoverflow.blog/2020/06/05/why-the-developers-who-use-rust-love-it-so-much/?cb=1 ====== plerpin I've read the Rust book, maintained someone else's Rust code once, and written a few of my toy programs. But I find that idiomatic Rust is difficult to grok. All the constructs required for safety seem to obfuscate the control flow. There also seems to be a weird schizophrenic combination of imperative and functional styles in idiomatic Rust. ~~~ oconnor663 > All the constructs required for safety seem to obfuscate the control flow A couple thoughts on this point, speaking as an admitted Rust fan: \- A lot of this has gotten better since "non-lexical lifetimes" were stabilized. In particular, it used to be somewhat common to need extra pairs of nested curly braces, to convince the borrow checker that some reference you'd taken really wasn't going to be used again. But today that is almost never necessary. \- In some cases, I find Rust's safety-oriented APIs clearer and nicer than their equivalents in other languages. For me, the big one is Mutex<T>. In Rust, a Mutex is a _container_ , and you can only access its contents by locking it. It's _impossible_ to forget to lock a Mutex when you access the thing it protects. ------ zozbot234 Discussed before at: [https://news.ycombinator.com/item?id=23437202](https://news.ycombinator.com/item?id=23437202) Should be flagged as a dupe. ------ zx14 Does Rust have a reach outside the C/C++ section of the market? Or is it only popular in its own bubble, where developers are now excited to see many of their once painful problems now fixed? ~~~ gameswithgo a lot of ruby people are into it, and there is some Go crossover. a lot of us who don’t like that almost every new language turned into a jitted or interpreted universe like Go and Rust ~~~ zx14 I can see the crossover with Go, but Ruby?! What is the reason behind its attractiveness in the Ruby community? I get that some developers believe that Rust is "better than Ruby" and that Rust has far more momentum, but their use cases are very different. ~~~ jkachmar Steve Klabnik and Sean Griffin were both early adopters of Rust (and are now fairly visible members of the community), and I believe that helped sow the seeds of the language in parts of the Ruby community [0]. I think Rust managed to also cultivate a community that was very much like Ruby’s [1], which brought people over. [0] cf. Klabnik’s “Rust for Rubyists” [1] mostly being very “people-oriented” in terms of inclusivity and documentation, as well as putting effort into ensuring that the language and its tooling provide a lot of “developer happiness” ~~~ steveklabnik Don't forget Yehuda Katz. Fundamentally, I don't think it should be surprising that someone who loves Ruby would look at Rust. That it's very different is the entire point! Why learn something that's extremely similar, that won't expand what you can do very much? ------ mangix Any cool projects to try out? I remember there was one to replace coreutils. ~~~ jabirali I am a daily user of fd (find), rg (grep), exa (ls), bat (cat), and am planning to try out sd (sed). These tools are in my opinion significantly better than the classical UNIX tools: they can be orders of magnitude faster, understand .gitignore, have more intuitive command-line options, and make better use of colors.
{ "pile_set_name": "HackerNews" }
Google woos developers by releasing cloud platform code to GitHub - jeffreyfox http://www.zdnet.com/google-woos-developers-by-releasing-cloud-platform-code-to-github-7000010190/ ====== manidoraisamy Smart move! This will neutralize cloudfoundry's advantage. We should see Appfog like platforms on Google cloud in future.
{ "pile_set_name": "HackerNews" }
Ask HN: Why hasn't the email client been reinvented lately? - dc-tech-fan For a tool we are all required to use but hate using, I&#x27;m surprised there haven&#x27;t been huge improvements in email clients since Google introduced Gmail 10 years ago.<p>There have only been small features added over time, like inline attachment viewing, and minor UI tweaks, like matching OS UI trends, but for the most part it&#x27;s the same Outlook, Apple Mail, Gmail, etc as we had 10 years ago: a bunch of folders, a single column for the inbox, a box to read or write mail, some icons to delete, forward, reply, etc.<p>There are plugins, like ActiveInbox and Rapportive, but these can be kludgy or not well integrated, especially now that we expect to be able to seamless jump from desktop to mobile email.<p>Am I alone in expecting a significantly better experience from my email client than where we were 10 years ago?<p>Is there somebody out there working on a new type of email client? ====== stevekemp I seems like you're restricting yourself to graphical clients, ruling out the (minor) changes introduced by things such as notmuch, etc. At the end of the day what people do with email is very much the same as it was 20 years ago: * Send it. * Read it. On that basis there's little innovation that seems missing, unless you're thinking of something special? Me? I like console mail clients, so I wrote one with built-in scripting via lua. It makes me happy, but it'll never take over the world: [http://lumail.org/](http://lumail.org/) I suspect my next task is to try something graphical, but I'm in no rush. ------ kohanz First, I disagree that we don't have today a "significantly better experience" with e-mail than we did 10 years ago. Gmail was not even available 10 years ago [0]. Do you remember what Outlook, Yahoo and Hotmail were like back then? There's a reason GMail was able to scoop up a massive market share. There hasn't been a complete paradigm shift, if that's what you mean, but I don't feel it has been necessary. Second, I don't "hate using" e-mail and I don't feel that I am alone in that. Your basic argument boils down to saying that e-mail is broken, but I don't agree with that premise. [0] [http://en.wikipedia.org/wiki/Gmail](http://en.wikipedia.org/wiki/Gmail) ------ PaulHoule In the late 90's it was said that Venture Capitalists wouldn't give a dime to anyone who competed with Microsoft. Today, VC's don't want to give money to any company that competes with Google. Google has a good product with several competitive advantages. gmail has great deliverability, you can easily join mailing lists from gmail which could exclude you if you were hosting from your_name.com. It handles large mail spools with ease because it uses parallel programming tricks. Perhaps when you boot it, it runs a quick Map/Reduce job to build an in-memory index to support all the searching and sorting you do. I quit Thunderbird a few years ago because it just couldn't cope. It would overload, it would fail and get corrupted and I couldn't live with that. I was using Outlook at work so I switched to Windows Live Mail, which held up pretty well to the load. With Live Mail I had two problems. First, People often couldn't read my attachments. The other was that I felt overwhelmed with the various categories of mail I was getting. Some of this was spam, but a lot of it wasn't, such as receipts when I made transactions, newsletter subscriptions, etc. That really got me to gmail, which goes a long ways to relieve that overload. 
Any product has to make decision about client-server components; I can read my gmail anywhere, and that is a big plus. Some people imagine a massed up Outlook or Thunderbird with 64 bit addressing and designed to make the most of multicore processors and large RAM and SSD disks. The trouble is that most corporations don't give out good laptops so you can't sell this to large numbers of salespeople unless you bundle it with the hardware. The excitement in the client world is really on the other end, you could shoehorn it into a tablet and they even have octo-core phones so maybe you can go somewhat far, but the work is going to all be in super-efficiency and managing differences between tablet platforms. No matter what a big part of the intellectual work is some system that maintains document storage full-text index, metadata index, contact book etc. I'd like to see this subsystem reusable so that it could be used to handle other sort of document collections. If you move the database away from the client to the server, you lose many big differentiations against gmail. (ex. You can physically destroy your mail spool if it is on your laptop, who knows who can grab it from the firm you work for. On the other hand, industrial spies can grab a laptop from out in front of you, and if you didn't encrypt the volume, they have your mail spool.) Move the database to the "cloud" and then you are head-to-head with gmail and the advantage they have is a lot more data. It's much easier for them to know what a "discussion group", a "promotion", or a "update" is than it would be for you to do it. Some kind of CRM functionality could be a big plus. The idea is to put structure in the communications so that there could be much more automation and also teamwork in that you can hand off the job to someone else and have people held accountable if it doesn't happen. I think people hate CRM, Project Management and those kind of tools worse than they hate e-mail, so a "business process automation construction set" might be big success if someone can find a way to make it fun. ~~~ PaulHoule See reddit [http://www.reddit.com/r/programming/comments/20fmbu/ask_hn_w...](http://www.reddit.com/r/programming/comments/20fmbu/ask_hn_why_hasnt_the_email_client_been_reinvented/)
{ "pile_set_name": "HackerNews" }
“So, you decided to contribute to open source” - kawera
https://twitter.com/eranhammer/status/881614401471520768
======
x1798DE
I don't know why people take this stuff so personally. I maintain open source resources and for the most part people have been fine. Most of my complaints are that I don't always get reproducible bugs and I don't have enough time to do everything. Even if someone comes in acting all entitled, that's their problem, not mine - since in reality I'm under no obligation to support their use case.

I understand that as infrastructure maintainers, you only get noticed when your stuff breaks, so you may feel under siege for doing a thankless job, but honestly the incentives will never change, so it probably helps to try to recognize how much control you have in that kind of situation.
~~~
voltagex_
[https://twitter.com/eranhammer/status/881615072283340800](https://twitter.com/eranhammer/status/881615072283340800)
------
aarohmankad
My biggest pain point when I contribute to open source has always been the owner/maintainers being unresponsive to new issues/pull requests. I wonder if this is a consequence of the behavior Hammer is describing?
~~~
voltagex_
Issues I can understand not responding to, but pull requests that fix things and pass tests should be a one-click merge.
~~~
klez
That may be dangerous. What if the patch works and passes all tests, but contains non-obviously malicious (or simply non-obviously buggy, which no current test catches) code? I prefer maintainers taking a bit more time to review it instead of doing one-click merges.

Of course, there's a world between one-click merge and total unresponsiveness, but still.
------
erlend_sh
This is also available as a blog post: [https://medium.com/@eranhammer/so-you-decided-to-contribute-...](https://medium.com/@eranhammer/so-you-decided-to-contribute-to-open-source-93b640cf2ae2)
{ "pile_set_name": "HackerNews" }
The Early History of F# [pdf] - strangecasts
https://dl.acm.org/doi/pdf/10.1145/3386325
======
tejasv
We, at Chaldal (YC S15, chaldal.tech), use F# in production. Most of our backend projects are in F#; the last one is being migrated over slowly from C#. Our new frontend projects are also in F# (using Fable). I want to share some insights from using F# with the community.

We started from a C# codebase, and realized that a better language can help weed out most bugs in our system. And yes, it works. Pretty much all bugs we face these days are parts where the F# world touches something non-F#, like .NET and other libraries written for C#, where interaction (like nulls) is not well-defined.

We've taken Scott Wlaschin's (fsharpforfunandprofit.com) teachings to heart, and we have a giant banner in our office that reads "Make illegal states unrepresentable". It took a bit of learning for everyone to jump in, but our dev team has loved the experience as the language is a pleasure to use; when they need to go back to write C# or TypeScript code, a lot of these learnings transfer. People just become better programmers (as is true with learning any functional language).

To get all the benefits of F#, you must adopt the whole paradigm. While F# allows C#-like OOP, and while this can be an initial stepping stone in the path towards F#, you must go all the way. If you simply do OOP, the trade-offs aren't worth it, IMO, as F# is a functional-first language, and the OOP is mostly provided for interop with the rest of .NET.

IDE support has been janky in the past, but it's improving. Latest VS 2019 is pretty good, and JetBrains Rider works pretty well on the Mac.
~~~
throwaway_pdp09
I get the "Make illegal states unrepresentable" in theory but how do you do it actually for nontrivial preconditions like "this list must be sorted"? (as a precond for a binary search on a vector, for example).

OOP is fine with functional if you make it immutable too, so I don't see the problem (I've not really got a problem with mutable OOP, or state generally, if it's done carefully).
~~~
choeger
> how do you do it actually for nontrivial preconditions like "this list must
> be sorted"?

This depends on your data model, of course. But for a list of integers you could do the following:

1. Have a unique data type that is the list of Deltas, plus the initial element (so for instance the list [5, 3, 6], once sorted, would be encoded as (3, [2, 1])).

2. Provide a sort function that creates such a sorted list.

3. Make the sorted list your input parameter.

This is obviously oversimplified, but I hope you get the idea. A cheaper alternative is to

1. Make an opaque datatype "sorted list", together with a function that translates this type to a normal list. Implement this type just as a list, but keep the implementation private.

2. Provide a function sort, that is the only function that can yield that type.

3. Demand that type as input.
------
chrisaycock
_Type providers_ were the biggest innovation I learned from F#. Type providers are a way to have static typing determined from an external source, like a CSV file or SQL database.

I ended up using a similar trick (with different syntax) in my own language, Empirical. I needed a way to infer a Dataframe from a file while in a REPL. It wasn't until I read the MSR paper that I realized I could do it entirely in the compiler without creating separate logic for the interactive users.
[https://dl.acm.org/doi/10.1145/2908080.2908115](https://dl.acm.org/doi/10.1145/2908080.2908115)
~~~
ridiculous_fish
Will you please share your insight? What are Type Providers? What is their advantage over, say, naively dumping type definitions from pre-processing a CSV file?
~~~
chrisaycock
The user could certainly dump a type definition, but for users who just want to point to some data in one go, the F# type provider[0] works with just:

CsvProvider<"trades.csv">.Load("trades.csv")

I personally didn't like having to list the file twice, so in Empirical[1] it's just:

load$("trades.csv")

My current version requires the dollar sign to tell the compiler that the parameter will be _static_ (known at compile time). I'm planning on an updated version[2] that will eliminate the dollar from the function call:

load("trades.csv")

Basically, it means that users can just load the file like in a dynamically typed language, except that Empirical is statically typed.

[0] [https://fsharp.github.io/FSharp.Data/library/CsvProvider.htm...](https://fsharp.github.io/FSharp.Data/library/CsvProvider.html)

[1] [https://www.empirical-soft.com](https://www.empirical-soft.com)

[2] [https://github.com/empirical-soft/empirical-lang/issues/23](https://github.com/empirical-soft/empirical-lang/issues/23)
------
bern4444
I've recently become interested in F# after I saw its use in a bunch of videos by Scott Wlaschin. I came across his talks, and subsequently his blog, as I'm getting more into functional programming and he has some awesome content.

I use Javascript/Typescript professionally and I really like both of them. F# though has a minimalism in its design that is very appealing. Code I write in F# feels a lot less cluttered as opposed to the same function in Javascript/Typescript. This minimalism seems to be part of the language design, as the following specific examples show.

- `let` is used to declare variables and functions.

- The last statement in a function is implicitly the return value.

- The type safety/mechanism is great and seems to be extremely similar to Typescript's, so it's easy to pick up for me (I'm not sure which came first or if there's a relationship but it certainly wouldn't surprise me).

This, along with some classic functional ideas that are built in (immutability by default, automatic currying of functions, etc.), allows for a language that seems to be made to get out of the way. That's a very compelling idea for a language design to me that I previously haven't seen.

Because functions and variables are both declared with let, perceiving functions as data kind of finally clicked for me since I first heard that concept 3 or so years ago.

Another feature was to not require a semicolon as a line terminator and instead use it for another meaning as a separator. While JS doesn't require semicolons at the end of a line, it's reserved for that purpose.

Overall, I find F# to be a refreshing language (from javascript, python, java and even others like Rust or Go but I haven't used those as much so don't want to make bigger claims) and I'm glad to see a post like this on HN almost like validation of a community behind the language. I'd be curious to get a feel for some companies that actively use it as well.
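To make those bullets concrete, here is a tiny, self-contained F# sketch (the names are contrived for illustration, not taken from any real project):

    // `let` declares both values and functions; there is no `return` keyword,
    // because the last expression in a function body is its value.
    let greeting = "hello"

    let shout (s: string) =
        s.ToUpper() + "!"          // this final expression is the return value

    // Functions are curried by default, so partial application just works.
    let add a b = a + b
    let addTen = add 10            // addTen : int -> int

    printfn "%s %d" (shout greeting) (addTen 32)   // prints "HELLO! 42"

`addTen` is simply `add` with its first argument fixed, which is the curried-by-default behaviour mentioned in the bullets.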
I also love that there's a compiler for F# to Javascript so I can use it to build front end code as I learn the language. It's pretty incredible we can do cross compilation of high level languages like this, and with others like Rust and Go to wasm etc.

Edit: formatting of bullet points
~~~
logicprog
Wow this is a great comment about a new-to-FP viewpoint on F#. I came to it after I'd already experimented with Haskell and OCaml, used Rust a lot, and been programming with Lisps for years - and I was really impressed with it nevertheless!

I really love the ML-family type systems. They're really great aids to programming, and make planning really nice. F# has that, plus that usability from C# and TS. It's also, like you said, got a really beautiful syntax IMHO.

I keep coming back to Rust though, because Rust is more general purpose, more performant, and applies better to the stuff I like to work on for my hobby projects. I'd love to use F# more but can't find a reason to at the moment. TypeScript, Python and Rust have become my comfort languages, but I like Rust the best, because it's got a lot of really nice features from ML languages and a powerful and safe type system.
~~~
kowalgta
We use F# to build fairly complex portfolio management apps. It's a great match for F# to Javascript transpilers (Fable, WebSharper). It makes it easy to share functions and types between UI and backend and to have powerful type safety. I.e. you change your DB data type and the compiler informs you where in the UI you need to make appropriate changes. This works great with the "make illegal states unrepresentable" approach. It helps to reduce the need for boring unit tests and lets you focus more on expressing domain in code directly.
~~~
bern4444
I love this idea too. Is it possible to create a type (in F# or Typescript) to represent an idea like greaterThan2? A value whose type is greaterThan2 would have the obvious constraint that the value is always bigger than 2. Having the compiler check such a condition would be awesome.

I kind of can do it for Strings by doing something like this:

type ValidStrings = 'name' | 'age' | 'dob';

The easy way around this is a function to determine it:

function greaterThan2(num) { return num > 2; }

but it'd be cool to express this level of dynamism as a type.
~~~
kowalgta
As others mentioned - using the smart constructor technique, but not directly, as F# has no dependent type capability. The smart constructor technique works well with the 'parse, don't validate' approach [0]. You can push type construction to the boundaries of your system so that you can work on domain code with more precise types. It's not always so rosy however, as too many types can become a burden.

[0] [https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...](https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/)
------
xvilka
The original OCaml is more interesting. It has language flexibility F# lacks, gets new features every year, is cross-platform, and compiles to native code, so it is faster. Once the Multicore OCaml [1] project is finished, its major pain point will go away.

[1] [https://discuss.ocaml.org/t/multicore-ocaml-may-2020-update/...](https://discuss.ocaml.org/t/multicore-ocaml-may-2020-update/5898)
------
jolux
I love OCaml.
F#, if you squint, is kind of philosophically like Clojure for the CLR: it leverages a heavily object-oriented but nonetheless very powerful runtime to allow developers to create strongly-typed functional programs that interoperate with a massive array of first- and third-party libraries that already exist. I wouldn't go so far as to say OCaml is more interesting. F# has things like computation expressions, async, units of measure, and type providers, none of which are trivial (or exist in OCaml), several of which are incredibly influential, and all of which are interesting. The long slog towards multicore for the OCaml community has been kind of embarrassing, with Haskell having offered arguably the best concurrent and parallel programming support in all of software development for years and years now. I will also note that F# gets new features fairly frequently as well, is cross-platform, and is _extremely_ fast by virtue of running on the CLR, which I would not underestimate: [https://benchmarksgame- team.pages.debian.net/benchmarksgame/...](https://benchmarksgame- team.pages.debian.net/benchmarksgame/fastest/fsharp.html) The features that it is missing from OCaml are admittedly pretty major, though. Row polymorphic records and the structurally typed object system are missed, not to mention rich modules and functors. ~~~ yawaramin OCaml has the equivalent of computation expressions since 2018: [http://jobjo.github.io/2019/04/24/ocaml-has-some-new- shiny-s...](http://jobjo.github.io/2019/04/24/ocaml-has-some-new-shiny- syntax.html) It definitely has async: [https://github.com/ocsigen/lwt](https://github.com/ocsigen/lwt) It doesn't have units of measure but due to strong abstraction properties of modules we can fairly easily roll abstract types. It doesn't have type providers but it does have plugins for type-directed derivation of JSON codecs, equality, comparison, printing, etc. > The long slog towards multicore for the OCaml community has been kind of > embarrassing IMHO it's been refreshing to see the pushback against 'multicore at all costs'. There is a benefit to taking the time to do things the right way. OCaml is a 30+ year-old language with an even longer ML heritage. It's going to be around for a while; it doesn't need to rush into a sub-par implementation. Meanwhile, people are writing highly concurrent, multi- threaded applications with it right now, and using multiple processes just fine for parallel computing. ~~~ jolux I’m glad to see that the language is moving forward in these areas, but I don’t think you understand how F# implements units of measure. It’s not just another data type, they’re fully inferred and checked and work with generics: [https://stackoverflow.com/questions/21538563/can-f-units- of-...](https://stackoverflow.com/questions/21538563/can-f-units-of-measure- be-implemented-in-ocaml) Type providers are a native feature, no plugins required. I have no animosity towards OCaml (in syntax and semantics, I prefer it!) but this is a bit of a stretch. I don’t think anyone would say Haskell does multicore the “wrong” way, and OCaml as you say has had decades to solve these problems. I love the language but the community is very small and the language evolution has been extremely conservative for the most part. ~~~ yawaramin > I don’t think you understand how F# implements units of measure. 
It’s not > just another data type, they’re fully inferred and checked and work with > generics Yes, units of measure are really cool, and I understand they support dimensional analysis out of the box. It's quite unique, and I fully agree that abstract types don't _fully_ replace them. Just saying that that's what people do in OCaml. > I don’t think anyone would say Haskell does multicore the “wrong” way And it certainly is not something that I said. > OCaml as you say has had decades to solve these problems. The 'problems' being the lack of multicore support? As I said, this is not really stopping people from getting real work done in industry–see everyone using OCaml currently in highly concurrent and multi-process parallelized applications, or in fact even people using NodeJS, Python, Ruby. > the community is very small Correct, but it is definitely growing. > and the language evolution has been extremely conservative for the most > part. Not so correct. There have been massive amounts of changes in the last few years: [https://www.ocamlpro.com/2019/09/20/a-look-back-on- ocaml/](https://www.ocamlpro.com/2019/09/20/a-look-back-on-ocaml/) And this is not even taking into account the features which OCaml had before then beyond F#: \- Polymorphic variant types \- Structural subtyped object types \- Named arguments \- Optional arguments \- Functors ~~~ jolux F# has named and optional arguments and I named everything else as real differences in my initial comment in this thread. I’ve interviewed with Jane Street and gotten an offer on the basis of my OCaml and F# skills. I understand where these ecosystems differ. I rest my case. ~~~ yawaramin From [https://docs.microsoft.com/en-us/dotnet/fsharp/language- refe...](https://docs.microsoft.com/en-us/dotnet/fsharp/language- reference/parameters-and-arguments#named-arguments) > Named arguments are allowed only for methods, not for let-bound functions, > function values, or lambda expressions. > ... > Optional parameters are permitted only on members, not on functions created > by using let bindings. ~~~ jolux I concede that point. ------ algorithmsRcool F# is the language that I keep coming back to. And as the paper notes, it has kept up well with the rapidly changing landscape of .NET Core and newer defacto standards introduced by modern C#. ~~~ GiorgioG It had a very rocky start in the .NET Core world thanks to Microsoft's lack of interest in supporting F#. It lagged behind C# in significant ways and continues to get very few resources behind it. ~~~ jackfoxy Actually Microsoft has quite a respectable team of developers dedicated to F#. It is true development suffered because the Roslyn compiler that MS invested a lot of effort into cannot be made to work for the language. F# has been made compatible with surrounding Roslyn tooling over time. (There is a distinction between the Roslyn compiler and the Roslyn tooling I am not qualified to explain.) The fact is that as a functional first language it does not require as much surrounding tooling as C#. Requirements for refactoring tooling, for instance, are far simpler. ~~~ GiorgioG > Requirements for refactoring tooling, for instance, are far simpler. If that's the case, it's sad that Visual Studio's refactorings for F# are so meager. ~~~ JanneVee Yet I don't miss any refactoring tools in VS for F#. It is a simpler language and has some nice design affordances that makes refactoring superfluous in some cases. I have a small story around it. 
A couple of years back I had the nasty habit of enumerating sequences multiple times with LINQ. Resharper constantly complained about it. Programmed some F# for Project Euler and Advent of Code. My nasty habit of multiple enumeration went away even in C#, because F# design affordances made me rethink enumeration and sequences. So since then I haven't triggered multiple enumeration in Resharper with my own code. It is always somebody else's.
------
lihaoyi
I have a special place in my heart for F#. While I do a lot of Scala these days, professionally and open source, F# was my first introduction to functional programming that eventually got me into Scala.

F# and Scala are extremely similar languages: garbage collected, hybrid OO/FP guest languages hosted on a widely-used runtime. Even though the superficial syntax is very different, and the advanced language features they provide are pretty different, the core experience of modelling your data as "dumb" data structures and transforming collections of data using higher-order functions is almost identical.

Scala ended up winning out for me due to a broader ecosystem (Much more OSS Java than OSS C# out there), better tooling (Visual Studio's F# support was always disappointing...), and easier interop (F# <-> C# feels a lot more clunky than Scala <-> Java).

But I can easily imagine an alternate universe where I'm happily writing F# all day, and almost nothing would be different from my current work writing Scala.
------
rurban
Oh my so many warts. In haskell, ocaml but also F#.

> Specifically, method constraints were added, introduced by a deliberately
> baroque syntax:

let inline (+) (x: ^T) (y: ^U) : ^V = ((^T or ^U): (static member op_Addition : ^T * ^U -> ^V) (x, y))

> This definition says that any use of + is implemented via inlining a call to
> an appropriately-typed op_Addition method, defined on the type ^T or ^U,
> i.e. the type of either the left-hand or right-hand argument. The ^T
> notation for type variables indicates statically resolved type parameters
> (SRTP), i.e. type parameters which are resolved to a nominal type at
> compile-time.

Why ^ and not using the existing : for separating types from values? :T * :U -> :V would have looked like types, not pascal ptrs.

Equally interesting are ocaml warts like let x2 = 1.0 +. 2.0 (+ not overloaded or defined for floats), or haskell's decision to torpedo the "class" keyword. Or ocaml hijacking :: let xs = 1 :: xs, with :: being cons, which was initially the lisp dot notation '(1 . xs)

In retrospect perl's syntax is sane compared to this.
~~~
smabie
I agree, though Haskell is certainly the most advanced. ML languages tend to be... primitive, at least to the modern eye. The lack of function polymorphism in OCaml is such a huge bummer. Modular implicits have been proposed (in a paper from 2014), but it doesn't seem that they are being actively worked on. If I could get multicore and modular implicits, OCaml would be _the_ ideal language.
~~~
octachron
Modular implicits are being worked on actively, it is just a hard feature to get right in a scalable and future-compatible way.
------
addictedcs
There are a couple of concepts in F# that I really like:

* actor-based approach to concurrency[1]. Very useful when you want to design your code lock-free. Treating instances as agents that read messages from a mailbox frees the developer from using low-level threading primitives. The first time I used this approach in production was with AKKA and Scala.
In F# it is much cleaner because the language itself is built on a better platform.

* exhaustive pattern matching[2]. Writing in F# means using pattern matching a lot. Exhaustive matching gives you the confidence to refactor and maintain your code. The compiler will warn you when you add a new case to your type and don't cover the execution path that uses it. It catches a lot of errors before you even push the changes to CI.

* obviously immutability, conciseness, currying; these have already been mentioned by others.

One nitpick in F# is exceptions [3]. They approached it similarly to how it is done in C#. You are allowed to define and throw an exception that will "jump" somewhere in the execution path. I prefer when a method returns Option-style types. This way, you know what to expect from a call and can pattern match on the result, without adding one more execution path to your code that covers exceptions separately. Exceptions were added to support C#-style error handling, though I very much prefer an error-code-based approach.

[1] [https://fsharpforfunandprofit.com/posts/concurrency-actor-mo...](https://fsharpforfunandprofit.com/posts/concurrency-actor-model)

[2] [https://fsharpforfunandprofit.com/posts/correctness-exhausti...](https://fsharpforfunandprofit.com/posts/correctness-exhaustive-pattern-matching)

[3] [https://fsharpforfunandprofit.com/posts/exceptions/](https://fsharpforfunandprofit.com/posts/exceptions/)
------
dang
A draft was discussed last year: [https://news.ycombinator.com/item?id=18874796](https://news.ycombinator.com/item?id=18874796)
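Pulling together a few recurring themes from the thread above ("make illegal states unrepresentable", the smart constructor technique, and exhaustive matching on Option instead of exceptions), here is a minimal, self-contained F# sketch. The payment domain and every name in it are invented purely for illustration; this is not code from Chaldal or any other project mentioned in the comments.

    // Smart constructor: the only way to obtain a PositiveAmount is through
    // PositiveAmount.create, so a non-positive amount is unrepresentable
    // everywhere else in the program.
    type PositiveAmount = private PositiveAmount of decimal

    module PositiveAmount =
        let create (x: decimal) =
            if x > 0m then Some (PositiveAmount x) else None
        let value (PositiveAmount x) = x

    // A discriminated union models exactly the legal states, and nothing else.
    type Payment =
        | Pending of amount: PositiveAmount
        | Settled of amount: PositiveAmount * receipt: string

    // Exhaustive matching: add a new case to Payment and the compiler warns
    // about every match expression that does not yet handle it.
    let describe payment =
        match payment with
        | Pending amount ->
            sprintf "waiting on %M" (PositiveAmount.value amount)
        | Settled (amount, receipt) ->
            sprintf "settled %M (receipt %s)" (PositiveAmount.value amount) receipt

    // Option at the boundary instead of exceptions: "parse, don't validate".
    let toPayment (rawAmount: decimal) =
        rawAmount
        |> PositiveAmount.create
        |> Option.map Pending

Adding a hypothetical Refunded case to Payment would trigger the incomplete-match warning on describe, which is the "caught before you even push to CI" effect described in the comments.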
{ "pile_set_name": "HackerNews" }
Gem install will soon be significantly faster, thanks to tenderlove 💙💚💜 - midas007 https://twitter.com/tenderlove/status/449699452895240192 ====== rancor Every Ruby developer on the planet thanks you for this. Lengthy `gem install` cycles are one of the less fun things about the Ruby ecosystem.
{ "pile_set_name": "HackerNews" }
Are You a Hacker? - thefox
http://www.textfiles.com/hacking/ruhacker.txt
======
ssebro
I have a problem with this: I'm extremely curious and I wind up going down long paths to find the ultimate solution to problems, when I could have just asked someone about it. I do it because I learn + remember better, and because the "How" is important to me.

My problem is that wanting to know the "How" is often at odds with getting stuff done very quickly, and that being someone who gets stuff done quickly often becomes your personal brand - something that you are known for. So tradeoffs between the two aren't encouraged in the workplace - it's often one or another.

Thoughts?
~~~
lloeki
_Sometimes_ it's worth asking because researching it yourself brings little value compared to an honest exchange with a knowledgeable person. This exchange is a two-way process and a worthwhile path to travel along in itself, to which asking is only the entry point.

Core to the hacker ethos and at the heart of the Internet, Unix, open-source and free software is that exchanging ideas increments both persons' knowledge. The key lies in the way and the intent with which you ask: don't ask to receive, but ask to build yourself.

(I can't seem to write something about the workplace without getting personal and it wouldn't be wise currently)
------
ramdac
Are you asking because you are looking for answers? Either way, yes.
~~~
thefox
No, that's not my question. It's only the title of this text. ;)
------
hc
cool story, bro
{ "pile_set_name": "HackerNews" }
The Exclusive, The Embargo and The Arrington - humanlever http://www.centernetworks.com/arrington-embargo ====== truebosko It seems whenever something doesn't go right for Arrington a huge story comes out of it, 40 blogs write about it, and we hear about it for an entire week. All I can say is: Yawn.
{ "pile_set_name": "HackerNews" }
Show HN: Super Wizard Fever – A retro style infinite runner for Android - kcbanner https://play.google.com/store/apps/details?id=com.caseybanner.wizardrunner.android ====== kcbanner Hi everyone! I've been working on my first Android game in my off time from my day job as a game dev, and I've finally released it. I was inspired by games like Canabalt, but wanted to put my own twist on the genre. The gameplay is simple but challenging (I can't even beat the current highscores on the leaderboard, dangit!). I have full Google Play Games integration with leaderboards and achievements. Screenshots: [http://imgur.com/a/sUFYG](http://imgur.com/a/sUFYG) I'd love to get some more feedback!
{ "pile_set_name": "HackerNews" }
Ask HN: Will this community avoid an Eternal September? - w0de0 If so, how?<p>Is it not a risk for this community? Why?<p>Is the idea too nebulous for this to be a meaningful question?<p>Reference: https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Eternal_September ====== psyc Regret to inform you that HN's Eternal September passed circa 2011. Before that, according to my recollection, the community was mostly YC founders and employees. Discussions were mostly about web startups, ideas, and tech. Tone was professional, comments were substantive, and people used their real names. Best of all, people were free to share their experiences without being harangued at every turn by the "citation needed" and "my science is more science than your science" cargo cults that became entrenched here at some point. ~~~ psyc Almost forgot: Paul Graham used to comment frequently, and now he comments never. ~~~ fabrice_d He's too busy since he joined the resistance... ------ jasonkester We're something like 8 years beyond HN's Eternal September, and seem to be doing fine. Early on, this was a place for entrepreneurs to share actionable advice about building startups and software businesses. Over time, the developer-friendly nature and hostility-free discussion attracted refugees from Slashdot and Reddit. Over time, other people from those communities trickled across, and many of them were just average developer folk without any real interest in all this entrepreneuring. But there was the occasional Haskell story here that they could discuss, so they stuck around. Eventually, the less civil members of those now empty communities arrived, bringing their attitude and inclination to snark and pun threads with them. You'll notice them voted down by the grown-ups, but less frequently than before. Still, it's a generally good vibe, and nothing yet has emerged to take its place. But it's a little late to worry about the word getting out. This is now the most popular place on the internet to discuss tech. ------ greenyoda > _" Every September, a large number of incoming freshmen would acquire access > to Usenet for the first time, taking time to become accustomed to Usenet's > standards of conduct and "netiquette". After a month or so, these new users > would either learn to comply with the networks' social norms or tire of > using the service."_ This phenomenon is unlikely to occur in a moderated forum. Articles and comments that violate HN's social norms (off-topic, uncivil, etc.) get flagged pretty quickly. Users whose behavior is persistently bad get banned. ------ bigiain Your question assumes it hasn't happened already... ------ angersock I think that has already been answered by [http://www.n-gate.com/hackernews/](http://www.n-gate.com/hackernews/) rather conclusively. ~~~ soneca I think the author doesn't read the same comments that I do. It is easy enough to have a deeply cynical view of the world, especially if you chose to ignore all things that contradict your cynicism.
{ "pile_set_name": "HackerNews" }
UX Professional Isn't a Real Job - webwright http://thinkvitamin.com/opinion/ux-professional-isnt-a-real-job/ ====== sp332 What if I'm not designing a web page?
{ "pile_set_name": "HackerNews" }
Fake Steve Jobs Was A Blog Hater - transburgh http://www.techcrunch.com/2007/09/03/fake-steve-jobs-was-a-blog-hater/ ====== byrneseyeview Lyons blogged as Fake Steve Jobs because he knew that when he came out, the discussion would go from "Fake Steve Jobs says..." to "Daniel Lyons is..."
{ "pile_set_name": "HackerNews" }
Tackling the misinformation epidemic with “In Event of Moon Disaster” - MindGods http://news.mit.edu/2020/mit-tackles-misinformation-in-event-of-moon-disaster-0720 ====== gambler The very idea that deep fakes are somewhere near the top of the list of significant concerns for the media right now is in itself disinformation. And it seems to be very persistent. I keep seeing articles and videos about it all the time. Someone is pumping significant amount of money into this narrative. The irony is that all this "anti-disinformation" research will clearly be used to run better disinformation campaign if/when the primitive shit being used right now stops working. ~~~ nsnow70 Care to elaborate? ~~~ owenmarshall I can't speak for OP, but consider: the very strong scientific consensus is that the earth is warming and that warming is caused by human activity; the best-case study I've seen showed 75% consensus among Americans and that's sharply up over previous years. The gap between scientific consensus and common belief has been driven by "regular" misinformation campaigns: promotions of conspiracy theories, manufactured doubt, cherry-picking of claims. Why worry about the novel when the basics are still working? ~~~ mudsnail What would you think about a deep fake depicting secret video footage (from a cell phone) of real climate scientists at at a real conference discussing how they are manipulating the data to convince the public and the lawmakers that climate change is a real phenomena. This deep fake would contain real people who really are climate scientists who really did attend a conference together. Its just that this discussion never took place. It was created by deep fake technology. This type of scenario seems like a very real concern to me. You can extend this example into practically any hot button issue. ~~~ pessimizer I wouldn't think it would matter at all. Nobody would know who the scientists were, so you wouldn't even need to use real ones for the same effect. I don't even think most warming deniers would really care, and it wouldn't even spend a week in the news cycle, if aired at all. Do a video of them sacrificing a child to Beelzebub, and maybe you'd get some attention. Just telling everybody that you were told that this meeting happened through secret messages from a secret high-level traitor from the Soros Foundation would work just as well. It would work on hundreds of people even if you said that you were receiving these messages psychically or encoded through subtle changes in reruns of _Law & Order._ ------ Press2forEN There seems to be this hopeful belief out there that if only indisputably correct information was available to the general public, at long last they would finally support the correct policies and elect the correct leaders. I think that is a false premise. ~~~ troughway And yet even on HN this is disputed with such frequency as to warrant raising eyebrows as to what kind of mess we're walking into. Book burners are with us today, and they're embedded within the "cancel culture" and every online upvote/downvote system. ------ bsenftner 15 years ago I created a "pre-deep fakes" automated actor replacement visual effects production pipeline. 
My ambition was to create "Personalized Advertising" where ordinary people would appear in video advertising for desirable products like film trailers, and various other products and services that typically have a celebrity spokesperson - the basic idea was the celebrity spokesperson would be "with you" in the advert explaining how great the product was while you are depicted nodding and enjoying it. The system worked, and still would work if I un-mothballed it. I even globally patented the system, with an ungodly expense doing so: [https://patents.justia.com/patent/7460731](https://patents.justia.com/patent/7460731) However, nobody believed what I was pitching 15 years ago was possible. After I'd demonstrate a scaled down implementation, I'd get interest and an investment pool would start to form. Then one of the investors would realize the tech could be used for porn, and then no matter how I explained the economics of failure such an application would cause, their stupid dicks would take over and I could not get them to realize "porn with anyone inserted" is a lawsuit engine and not a sell-able product. While pursuing this, I became extremely jaded about what people think occurs in a film production, and the time and expense necessary to add visual effects to any media. The majority of angels and VC I pitched were flabbergasted at what current media and VFX productions are like, and their current level of expense and sophistication. My proposing a fully automated incarnation struck them as complete fantasy. Yet I had a working implementation. Slowly I'd convince them of the possibility... and then they'd get fixated on porn, ignoring the fact that there is no way to make money producing deep fake porn. I eventually went bankrupt and left media production entirely. After creating my fully automated VFX pipeline, working in VFX without my automation tools drove me nuts. I work in facial recognition now. ------ quacked Forget faking news events. What about petty crime? The cops get sent a video of you defacing property. Your phone data and several eyewitnesses tie you to the area. How do you escape, without a blanket ban on video evidence? If there's a blanket ban on video evidence, what if you have video proof that you were somewhere else? ~~~ neixidbeksoxyd If this becomes a problem, I think cameras and smartphones would be upgraded with a way to certify they are real videos. I have no idea how, and it would likely have to keep evolving, but if the financial incentives are there I'm sure someone will figure it out. Maybe the video gets signed by the phone and can only be verified by that device? This comment was generated by GPT-3 ~~~ JoshuaDavid Flash the flashlight in a sequence corresponding to the most recent hash on the bitcoin blockchain, then take a hash of the video and send 0.00000001 BTC to that address from a wallet associated with your personal identity? Would pretty much prove that a specific person recorded this video between two fairly close together times, and the blinking lights would probably do things with shadows that deepfakes aren't great at replicating (and retouching the video to fix those artifacts would make the hash not match). ------ mellosouls Very interesting and quite creepy in this specific context - genuinely enlightening as to wider possibilities. 
The actual faked Nixon speech is in the "Special Report" starting at 3.37 here (after a scene setting prelude): [https://moondisaster.org/film](https://moondisaster.org/film) The original contingency letter from which the speech is derived: [https://www.archives.gov/files/presidential- libraries/events...](https://www.archives.gov/files/presidential- libraries/events/centennials/nixon/images/exhibit/rn100-6-1-2.pdf) ~~~ Angostura Fascinating. It's good, but there are lots of little emphatic "head nods" that don't make sense in the context of the speech - unless that was just an odd mannerism of Nixon's. ~~~ owenmarshall I wonder if a more recent subject would've been better. We all know the tics and habits of our current president, but Nixon was half a century ago and I wonder how many of us have seen his speeches in any real depth. Hell, play me Billy West reading Nixon as "Richard Nixon's head" from Futurama and I might buy it. ------ camjohnson26 Here’s the video: [https://moondisaster.org/film](https://moondisaster.org/film) Not bad but the audio doesn’t sound quite right. ~~~ ghaff The thing is that you know it's a fake and are looking for clues that it's a fake. But for many purposes, it doesn't matter if you can fool careful study or expert analysis, it matters whether you can fool a quick glance from someone already predisposed to believe the contents who will retweet or share on facebook and move on. ~~~ najarvg Well said. Imagine a similar deepfake with a Nixon "confession" about having to stage the moon landing to make America look superior. You now suddenly have the next viral facebook or whatsapp forward for conspiracy theorists who are fully convinced of their beliefs. ~~~ rightbyte Ye but it has been doable with modern movie production technology for some time without any fancy AI. The biggest difference is the price drop of it? ~~~ gowld Not just price drop. Also co-conspirator drop, to reduce the number of helpers you need to create the artifact. But, overall, "deep fake" isn't a critical waterfall aspect. It one piece of the overall trend toward fake archaeology. Fakeaelogy? Farkchaeology? ~~~ CamperBob2 The term "simulation" comes to mind as a description of the emerging art and science of ultra-realistic bullshit generation. From Baudrillard ([https://cla.purdue.edu/academic/english/theory/postmodernism...](https://cla.purdue.edu/academic/english/theory/postmodernism/terms/simulacrum.html)): "Simulation is no longer that of a territory, a referential being, or a substance. It is the generation by models of a real without origin or reality: a hyperreal.... It is no longer a question of imitation, nor duplication, nor even parody. It is a question of substituting the signs of the real for the real." ~~~ lioeters > Hyperreality, in semiotics and postmodernism, is an inability of > consciousness to distinguish reality from a simulation of reality, > especially in technologically advanced postmodern societies. > Hyperreality is seen as a condition in which what is real and what is > fiction are seamlessly blended together so that there is no clear > distinction between where one ends and the other begins. > "The authentic fake." – Umberto Eco ------ myself248 The irony is that the people most susceptible to this kind of BS also don't believe we landed on the moon, so the effect may be somewhat muted... ~~~ smabie Everyone is susceptible to deep fakes. ------ ivanhoe For some reason shirt collars seem to be the weakest point on all deep fakes?? 
~~~ acdanger I noticed that, too. I kept noticing the chin skin tone "smudging" over the shirt collar throughout the video. ------ shadowprofile77 Cute and well made job. I suspected that the video simply took a standard Nixon televised speech and used CGI to remake just the lower half of his face (thus the odd head movements even though his words and lips synced correctly, and if you look closely you can see that the lower part of his face looks somehow "cleaner" than the upper half), what stumped me a bit though was the audio. His voice sounds odd but recognizably his own, especially since Nixon had a rather unique voice. so I thought maybe they cut together words he'd said in different recorded contexts and digitally modulated the audio to construct a speech with an even tone, but no, they actually faked his voice, which impresses me more than the face CGI. ~~~ chrisdalke I don't think they are using conventional CGI techniques to "remake" his mouth/lips and lower face, if that is what you are suggesting. Seems like the dialog replacement is using a technology called VDR: [https://www.fxguide.com/quicktakes/cannyai-vdr-face- replacem...](https://www.fxguide.com/quicktakes/cannyai-vdr-face-replacement- as-a-service/). There's a demo video on that page and some more details about the technique. ~~~ shadowprofile77 Sorry, maybe I'm conflating terminology, but I was referring to the use of AI and digital editing to reconstruct the lower part of his face so that it looks real while moving in ways that are false and say things he didn't actually say. This is what they did in some way at least, no? ~~~ chrisdalke Yeah, I see your point, AI-generated CGI is still CGI! ------ foldr To be honest, you could probably get more convincing results than this just by using spaghetti-Western-style dubbing of existing footage. ------ DubiousPusher > an Information Ecosystem at Risk At Risk? Where have these people been for the last 40 years? I don't want to say the authenticity of information doesn't matter. But what clearly matters much more is collective trust in information institutions and from that standpoint the ecosystem has already utterly failed. ------ prateek_mir A really good example to showcase the capability of the technology ! In context of India, where news channels have reneged from due diligence on evidently doctored videos, this is something which can have huge consequences. ------ 14 I am very curious how the legal system will handle deep fakes as they progress to perfect quality that even forensics can not tell. ~~~ Nasrudith There is already a tool in place that handles much of it, chain of custody for evidence. The video evidence would need corroborating evidence to "pin it down" essentially. Just showing say Hillary Clinton making infant stew wouldn't cut it without a chain of evidence proving there was a location, time, that a camera would actually be there, and there is some actual physical evidence of her murderous cannibalism. ------ jmount A tangent: the movie "The Landing" (2017) is a fun treatment of a non-existent Apollo 18 and disaster.
{ "pile_set_name": "HackerNews" }
Markets are efficient if and only if P = NP - confluence http://arxiv.org/abs/1002.2284v2 ====== meric A while ago I studied a paper where the author proposes using high frequency trading to exploit price patterns. If that can be done profitably it would mean markets aren't even weak form efficient! Lately however there have been emerging views that "efficiency" is not black or white but a continuum. i.e The market will only be efficient enough such that the marginal cost of making the market more efficient is equal to the marginal benefit of doing so. Anyway, I wonder if solutions to NP-complete problems can be approximated using a market. EDIT: According to the paper, they can. ~~~ justincormack "The market will only be efficient enough such that the marginal cost of making the market more efficient is equal to the marginal benefit of doing so." Do we have a model of what markets are like under this type of model? It is not clear to me that prices are even close to what efficient ones might be under this situation (though they could be). Also of course most of the missing markets are the contingent and future ones, which suggests the ways in which things might be biased (forward planning, the firm, the business cycle etc) which are quite important inefficiencies. ~~~ meric "Do we have a model of what markets are like under this type of model? It is not clear to me that prices are even close to what efficient ones might be under this situation (though they could be)." I don't know what markets are like under this type of model but here are a two papers recommended by my lecturer on how efficiency is not an either-or proposition. _Adaptive Markets and the New World Order_ "Under the AMH, markets are not always efficient, but they are highly competitive and adaptive, and can vary in their degree of efficiency as the economic environment and investor population change over time." [http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1977721&#...](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1977721&); _Efficient Markets II_ (by Fama who was the one who originally came up with the efficient market hypothesis describing the three forms of market efficiency) "A weaker and economically more sensible version of the efficiency hypothesis says that prices reflect information to the point where the marginal benefits of acting on information (the profits to be made) do not exceed the marginal costs (Jensen (1978))." <http://efinance.org.cn/cn/fm/Efficient%20markets%20II.pdf> "Also of course most of the missing markets are the contingent and future ones, which suggests the ways in which things might be biased (forward planning, the firm, the business cycle etc) which are quite important inefficiencies." You can often find sets of stock options' prices that you can arbitrage and make a profit, but only if there were no transaction costs. Information will only be taken into account of if the cost of the information (after taking into account all costs including opportunity costs and risk) is less than the benefit from exploiting that information. So, there are probably lots of information from the future that are missing in the market, which if properly exploited will provide a lot of benefit; but the costs of gathering that information is even greater, possibly requiring the use of a time machine. 
_My thinking: It could very well be that markets are only efficient as far as everything else allows them to be; If investors are not able to evaluate information over more than one business cycle (due to the cost of doing so, economic or psychological or otherwise), then so be it. The market will move its efficiency to match the environment and its participants._
------
stevenrace
Previous discussions [Aug. 2011]:

- <http://news.ycombinator.com/item?id=2895474>

- <http://news.ycombinator.com/item?id=2868498>
------
josephlord
This is I think a bogus debunking of a bogus theory. Try 'Debunking Economics' by Steve Keen for a good breakdown of real problems with many economic theories including EMH. Mostly by showing the logical fallacies in assumptions but also the naive maths and divergence from empirical reality.

[http://www.amazon.com/Debunking-Economics-Expanded-Dethroned...](http://www.amazon.com/Debunking-Economics-Expanded-Dethroned-ebook/dp/B006BG8UFY/ref=sr_1_1?s=digital-text&ie=UTF8&qid=1348943992&sr=1-1&keywords=steve+keen)

His blog is <http://www.debtdeflation.com/blogs/>
------
kenster07
Economics is clear when explaining its foundation: Markets are only as efficient as their participants are rational and fully informed. The entirety of modern economics is built upon the presumption that the last two conditions are 100% true.
~~~
aristidb
No, it's based on them being true enough that other approximations are less useful models.
~~~
wisty
Unfortunately, most economists don't develop other models, so the other models are very immature.
------
csense
From the paper:

> Now, one may argue that a weaker form of market efficiency as per Fama
> (1991) and Jensen (1978) is that patterns are exploited until the marginal
> revenue of further discovery equals the marginal cost of further search. A
> counterargument to that would be graduate students and other hobbyists and
> day traders searching for an edge: to them, the marginal cost is zero and at
> times negative because the value of the search itself, regardless of the
> outcome, is a positive learning experience for them...

A hobbyist has to spend more computation time for every additional strategy they search, which costs real money. The hobbyist's cost is near-zero for small searches -- those that can be done by one person using the computers they already own. Once the size of the search increases beyond what they can manage with their immediately available resources, they have to hire assistants and/or buy computation time like anybody else.
------
anonymouz
Is this a serious paper? I can't judge at all if it is, because I cannot find any useful definitions, exact statement of the claimed theorem, or proofs.

Then again, I'm not an economist (coming from a mathematics background), so maybe this is the expected form of a paper here?
~~~
yummyfajitas
The original papers proving similar results do prove theorems. This is one example which springs to mind:

[http://dpennock.com/papers/chen-ec-2007-betting-on-permutati...](http://dpennock.com/papers/chen-ec-2007-betting-on-permutations.pdf)

Here is a review of the literature from a while back; it cites plenty of actual math papers:

[http://dpennock.com/papers/pennock-ijcai-workshop-2001-np-ma...](http://dpennock.com/papers/pennock-ijcai-workshop-2001-np-markets.pdf)
------
Bakkot
Of course this doesn't mean markets aren't within some small constant factor of being perfectly efficient.
~~~ confluence Generally with NP-complete problems - that small constant factor is still often an order of magnitude away from optimal (path finding/search/salesman/shortest subsequence etc). ~~~ sold In general, approximation of NP-complete problems is not that simple. Assuming P /= NP, Subset-sum can be approximated within 1+epsilon for any requested epsilon>0 [a PTAS] MAX-3SAT can be approximated within 8/7 and not better. Vertex cover can be aproximated within 2 and its not known if that's optimal. Set cover can be approximated within O(log n) and not better. Clique and TSP have no sensible approximation. I would argue that potentially a loss of factor 2 is not an order of magnitude away from optimal. I don't know what is the situation for markets. ------ Symmetry Does this actually contradict the _weak_ form of the EMH, that "future prices cannot be predicted by analyzing prices from the past"? If it's computationally unfeasible for the market to incorporate new information, surely its then equally computationally unfeasible for some outside observer to predict future prices? I'm not sure this even contradicts the semi-strong form of the EMH, though it clearly does rule out strong EMH. ~~~ confluence Weak form EMH is so pointless so as to be essentially useless in predictive value. It's like those horoscopes that state - "you will meet someone of great personal importance soon!" ------ brador Direct link: <http://arxiv.org/pdf/1002.2284v2> [PDF] ------ confluence A slightly long and/or dodgy video by the professor who wrote this (I'm setting your expectations): <http://www.youtube.com/watch?v=7iOJZZFDKpc> I found it rather enlightening - but the paper is still highly readable and I encourage HNers to peruse it and perhaps check out the video as well. Previous discussions: <http://news.ycombinator.com/item?id=2895474> <http://news.ycombinator.com/item?id=1144548> More from the professor: <http://philipmaymin.com/cv.pdf?q=phil/cv.pdf> ------ drfloob ... and if P = NP, the market doesn't matter because everyone will have cracked all your bank passwords. ------ pron I have only had time to skim through this very quickly, but it seems entirely based on false definitions. He begins so: _The efficient market hypothesis claims that all information relevant to future prices is immediately reflected in the current prices of assets. In other words, you cannot consistently make money using publicly available information._ Then, he encodes a 3-SAT problem into market orders and claims: _So what should the market do? If it is truly efficient, and there exists some way to execute all of those separate OCO orders in such a way that an overall profit is guaranteed ... then the market,_ by its assumed efficiency, _ought to be able to find a way to do so._ So he defines market efficiency to be all-knowing, and then gives at an NP- hard problem to solve. It seems like he could have given it the halting problem as well, only it might be harder to encode as market orders. But the point is that an efficient market is "all knowing" only that which is "knowable". The market isn't an oracle that can answer any question about the past, but a true machine that can answer all questions about the past that are answerable using the machine. What he's doing seems similar to saying "if God exists then P=NP". After all, if God is omniscient and omnipotent, He could solve all NP problems in polynomial time. :) ~~~ uvdiv I skimmed it quickly too, but I think you've misread. 
The "interesting" claim isn't the 3-SAT encoding; it's earlier where he claims that markets "naturally" have to solve NP-hard problems, in particular that to optimize an investment strategy you have to solve the Knapsack problem. (?!) It's hand-waving nonsense. ~~~ pron It doesn't matter. When people say efficient market, they mean that the market knows whatever is knowable by the present time using current technologies. They don't mean that the market knows everything that's theoretically knowable using unknown/impossible algorithms. Before there were computers, in an efficient market (supposing one exists) the current prices would reflect everything that is known and can be computed/deduced without computers by the present time. Anyway, the paper misunderstands "known" to mean "anything which can theoretically be known", which obviously includes solutions to NP-hard problems. It's just that nobody claims that's what efficient markets are. ~~~ 001sky RE: _When people say efficient market_ The hypothesis is predicated on perfect rationality. <http://en.wikipedia.org/wiki/Perfect_rationality> vs <http://en.wikipedia.org/wiki/Bounded_rationality> ------ confluence Honestly - stuff like this reinforces a feeling I have had recently that the social sciences are nothing more than pseudoscience dressed up most of the time. Economists have essentially no idea what is going on, and neither do their financial compatriots, and the EMH is by far the clearest example of this. I'm trying to make it a personal rule that anything outside of the hard sciences (math/physics/chemistry/statistics etc) is bullshit to me until further notice. I mean really - I do advanced math and statistics in moderately complex systems - AI/robotics - and I can barely wrap my head around them. I shake my head in disgust at those who apply platonic and unrealistic theories to the extremely complex system that is the world. Indeed, the surety they display in their theories amuses me, because to even think that one can reason about the entire world all at once is, in fact, hilarious. ~~~ throwaway64 This kind of armchair criticism is not particularly enlightening, or productive. You can still make useful and largely correct predictions without understanding all of the dynamics in a system. After all, all science amounts to an approximate model, even your vaunted "hard sciences". ~~~ confluence I'm sorry, but were you alive during the last couple hundred financial crises? I mean, if your entire field is dedicated to predicting the future of finance and economics and you don't predict them - it kind of shows that you really don't know what you are doing. All I'm doing is pointing out that the social sciences have no clothes on. ~~~ jbrechtel right....because the field of math has never changed course (see: Godel), nor physics (see: Newton, Einstein)... You're reading too much into the disproportionate effect mistakes in economics have had on society. Science gets things wrong all the time...failure is part of the learning process. ~~~ confluence True, but at least the hard sciences modify their theories after being falsified. The social sciences don't have to because they aren't based on the scientific method - mainly the design and independence of repeatable experiments with controlled variables. Hard science equations have no room for bullshit whereas the social ones do - hence EMH is still taught. 
~~~ yummyfajitas _The social sciences don't have to because they aren't based on the scientific method - mainly the design and independence of repeatable experiments with controlled variables._ The same is true of many physical sciences - geophysics, oceanography, climate science, astronomy, etc. Your criticism applies to basically any scientific theory which has poorly understood microfoundations (i.e., a lot of them). The EMH is taught because it's a useful approximation to reality, even if it's imperfect. Or, as the article puts it: _Whether markets are efficient or not, and whether P = NP or not, there is no doubt that there will be markets that can allocate resources very close to efficiently and there will be algorithms that can solve problems very close to efficiently._ Incidentally, the EMH claims that financial crises are unpredictable. So the lack of useful predictions of the financial crisis is evidence in favor of the EMH. [edit: Note, in response to Dn_Ab, that 3SAT, the problem considered by the paper, is NP-complete.] ~~~ Dn_Ab But he is not, in that quoted statement, saying much. For example, if the problem is NP-Hard but not NP-Complete then we will not even be able to tell how well we are doing. Or for markets, aspects of them may involve solving NP-Hard problems with efficient approximations that are themselves NP-Hard (you are better placed to opine on whether such a possibility is likely). ------ realize The ability of some traders to make consistent significant profits should be an "existence proof" of the inefficiency of markets. The EMH is just wrong. ~~~ tatsuke95 Now you just need to point to some traders who are able to make "consistent significant profits". ~~~ rafcavallaro And not just "some" but a proportion of profitable traders greater than what we would expect due to chance - i.e. the existence of some lottery winners does not prove that playing the lottery is in general profitable. ------ michaelochurch Efficient Market Hypothesis isn't very well defined. It's more like a class of assertions, some of which are demonstrably true and some of which are false. Loosely speaking, it says the market price of an asset (or exchange rate between two assets) will be fair, which means that it corresponds to the expected value of its basic value. If you're talking about a bond, you can look at the (known) payment stream, discount the cash flow, and compute a fair value based on known market conditions. For an equity, there is no fair value other than "the expected value of its price in the future". Some stocks pay dividends, but many don't, so their values are based on something other than a current dividend stream, and largely that "something" is: what is the expected value of the thing in the future? The question is: what is the timeframe? In the very short term, EMH is false for computational reasons. Efficient markets require someone (i.e. an arbitrageur) to keep them efficient. The profits of arbitrage are the incentive for people to keep markets as efficient as possible, and they tend to have this effect. This is an extremely competitive business, and in this day and age, it often comes down to _microseconds_, but it can be done and billions of dollars are made every day by people who are doing it (and, contrary to popular depictions, arbitrage is actually _good_ for the markets and economy). In the very-long term, EMH is probably also false. By very-long, I'm talking about 10+ years.
The reason for this is rooted in the scarcity of money: there are a lot of projects and companies that will produce and capture economic value in the future, but people don't have enough money to fund them all. Hence, stocks are "cheaper" than they should be, and risky growth stocks (much less illiquid private equity) especially so. ("Equity premium.") There are some people who (demonstrably) "get" value investing and are better at predicting long-term corporate futures than others. The issue here is the long feedback cycle. If your frequency of investments is that low, you have no way of knowing if you're _actually_ good, or just getting lucky, especially in the context of the non-normal (i.e. fat-tailed) distribution of equity returns driven by "black swan" events. EMH is true enough that if you don't have the technical machinery to trade at microsecond latency, nor the reputation that will allow you to invest for the very long term despite market caprice-- i.e. even if the market tanks, people will trust Warren Buffett's judgment-- you probably can't reliably make a better profit on the stock market than you'd get if you invested in an index fund. What does EMH rely upon? Ultimately, it says that if there is _expectancy_ to be made selling or buying a security at a price other than P, it will be sold or bought until the price reaches P. This assumes an infinite amount of capital ("smart money") that people are willing to deploy in order to exploit pricing inefficiencies or inconsistencies. This is an obviously false assumption, but for liquid securities of _known_ expected value, it's close enough. Leverage (borrowing) generates a lot of "additional" smart money, so that even a 20bp (0.2%) discrepancy can be levered up into a 10% gain. (If you have $100, borrow $4900, and turn that $5,000 into $5,010, your equity position has gone from $100 to $110.) Microprofit opportunities will be exploited so long as there's sufficient leverage to make them worthwhile, but the willingness of lenders is not infinite. EMH is usually used to make mathematical analyses work. It's a guideline, but no one who understands financial markets believes it to be literally and universally true. No one can actually predict the future or human behavior, but the (false) assumption that there is no arbitrage produces closed-form numbers that are often very close to the real values. Also necessary is the distinction between smart and dumb money. The latter isn't a pejorative; "dumb money" means that there are incentives other than informed speculation. For example, when you buy a house because you want to live somewhere, that's dumb money. Or when an index fund buys stocks because of its chartered requirement to do so, that's "dumb money", not because the buyer is an idiot, but because his purchase doesn't convey information about the stock's real value in the way that smart money would. Markets are efficient when there's enough smart money to keep the dumb flow from pushing the price around. This is going to be true of highly liquid stocks, currency rates, and commodities, but not true of assets like real estate. Financial engineers tend to discount "dumb" activity as harmless Brownian motion, but the 2008 subprime mortgage meltdown established that not to be always wise. ~~~ confluence What happens when the smart money realizes that shorting a massive amount of dumb money would leave them freight trained by the crowds? Smart money can be dumb as well - because it pays to do so. 
I'm quite sure that the number of rational investors/capital is greatly outnumbered by the irrational investors/capital, such that many correct trades become essentially insolvent before the market becomes rational again. See value shorters for the last two booms. ~~~ mason55 _such that many correct trades become essentially insolvent before the market becomes rational again_ Keynes was quoted as saying "Markets can remain irrational longer than you can remain solvent."
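To make the approximation-ratio claims earlier in this thread concrete (the "within a factor of 2" guarantee for vertex cover that sold mentioned), here is a minimal, generic sketch in Python. It is a standard textbook algorithm and an editorial illustration only; nothing in the paper under discussion or in these comments uses this code.

    # Classic factor-2 approximation for minimum vertex cover, via a greedily
    # built maximal matching. Any optimal cover must contain at least one
    # endpoint of every matched edge, so this cover is at most twice the
    # optimum size.
    def approx_vertex_cover(edges):
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    # Tiny example: a path 1-2-3-4 plus the chord 2-4.
    edges = [(1, 2), (2, 3), (3, 4), (2, 4)]
    print(approx_vertex_cover(edges))  # {1, 2, 3, 4}; an optimal cover such as {2, 4} has size 2

The point of such a guarantee is exactly the one debated above: you may not be able to find the optimum efficiently, but you can bound how far from it you are.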
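michaelochurch's leverage arithmetic above reduces to: return on equity is roughly the leverage ratio times the captured spread, ignoring borrowing costs. A minimal sketch of that calculation, using the same made-up numbers as the comment:

    # Levered return on equity, with borrowing costs, haircuts and margin
    # calls all ignored for simplicity.
    def levered_return(equity, borrowed, spread):
        position = equity + borrowed
        profit = position * spread
        return profit / equity

    # 20 basis points captured on a 50x position: $100 of equity, $4900 borrowed.
    print(levered_return(equity=100, borrowed=4900, spread=0.002))  # 0.1, i.e. a 10% gain on equity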
{ "pile_set_name": "HackerNews" }
NSA and Facebook undermine spontaneous gatherings (german) - radiospiel http://www.heise.de/newsticker/meldung/NSA-Skandal-Facebook-unterwandert-Flashmob-Verabredungen-2592853.html Sorry for double posting, see https://news.ycombinator.com/item?id=9310402 ====== radiospiel Sorry for double posting, see [https://news.ycombinator.com/item?id=9310402](https://news.ycombinator.com/item?id=9310402)
{ "pile_set_name": "HackerNews" }
Windows 8 tablet freezes in Microsoft keynote demo - fvbock http://www.theregister.co.uk/2012/03/19/microsoft_demo_trouble/ ====== ugh So, uhm, I’m not a fan of Windows 8, but why exactly is this in any way a big deal? It's not finished. It will break. You have a backup device (or two) ready when you present it in front of a crowd. Apple does it (I remember keynotes where they had to switch to their backup Mac), Microsoft does it, everyone does it. ~~~ mitchty I'm an Apple fan, and I agree, stuff breaks. This is a non-news story. ------ ArthurLozinski "We're essentially enabling you to break the electronic concrete of the past and move your business to the future, by connecting your people, by enabling your people to communicate and collaborate in real time, and by taking all the wealth of communication, collaboration, and social networking opportunities, and apply those into very specific business scenarios." Now that sounds good! What would be even better, is if you build the whole thing on web-standards ;-) ------ jaems33 "Luckily, there was a second tablet available on stage that was working, so the changes for the hypothetical app were successfully written to the pretend database, and all was well." That's not luck. I would think that a backup is almost always a must for any presentation. ------ sigvef "Windows 8, which Microsoft touts as the first operating system to run across multiple devices" Doesn't Linux already do this? Or does it not count, as it is only a kernel? ~~~ freehunter It's marketing speak, and no one of importance to Microsoft is going to call them on it. I would imagine their line of thinking would be Ubuntu doesn't ship on tablets or phones. OSX doesn't ship on tablets or phones. What they run are stripped down and re-imagined versions of the parent OS. Approval of the magnitude of this claim is up to the reader. ~~~ rbanffy Still, Linux runs on just about everything between a 68000 and an IBM mainframe. As does NetBSD. I find the propensity to lie disturbing. ~~~ freehunter This could get into deep discussions about what is an OS (versus a kernel), what constitutes the "same" OS (versus a modified version of the OS) and even what can be put in the same class as Windows. I think what Microsoft is getting at is, OSX and Ubuntu don't run the same code on their mobile devices. Windows 8 will. It's marketing, and NetBSD doesn't even cross their mind. ~~~ rbanffy It all depends on where is the line defining what's an OS. A Linux machine doesn't need X to be a Linux machine - Unix machines have been serving terminals for decades. So much, in fact, I joke that, in order to be a serious computer, one has to have no monitor, keyboard and mouse ports - if you really need a physical console, a serial port will do. So, I've seen Linux running programs on a very broad selection of hardware, from ARM to zSeries (68020+ has always been more of a curiosity, albeit there were serious Unix machines using them). Really, it's marketing. Not truth. ------ firefoxman1 Has anyone figured out why exactly Microsoft makes such buggy software? Is it a company culture thing? ~~~ freehunter Massive company. Massive software projects. Massive amount of end users. Massive amount of use cases for their software. Time restrictions on their development. The wording you used makes it sound like they make buggy software on purpose, which makes no sense. ~~~ firefoxman1 Oh, sorry. 
Yeah I meant why they end up producing buggy software when competitors like Apple have proven it's possible to both ship _and_ make stable software. And Apple does both hardware and software. ~~~ stevejabs I seem to recall an iOS presentation where people were told to turn off their wifi or Jobsy wouldn't continue... ~~~ Zirro Which had absolutely nothing to do with buggy software. ------ Radzell Lol, maybe Microsoft hasn't changed. I like windows on desktop, but I love android on tablet. I guess I'll stick with android for now. ~~~ freehunter Android never freezes or crashes. Come to think of it, neither does iOS or OSX, or Linux. Oh wait, almost all software crashes. ~~~ twargoth No iOS product would crap itself like that during a keynote demo. Sure, I've had them panic and reboot on me in real world use, but Apple takes the time to make sure the demo runs smoothly. When you can't or don't make your demo run smoothly during a high visibility presentation, it makes you look careless, either in the development of your product or of your demo. In either case, it reeks of incompetence. This might not be a justified impression (accidents happen), but that's how life is. ~~~ dangrossman No iOS product comes close to the complexity of a Microsoft Dynamics product either, nor are developers at Apple expected to develop and demo such complex products on pre-beta operating systems created by a different group. ~~~ twargoth You're right about apple developers not being expected to demo apps on pre-beta software. Because doing that makes your software look shitty. Don't do that. Or script out the demo really carefully. PR exists to make the products look good. ------ conradfr Classic Microsoft :)
{ "pile_set_name": "HackerNews" }
Apophis is currently considered the largest threat to our planet - ca98am79 http://en.wikipedia.org/wiki/99942_Apophis ====== ca98am79 at least that is what it says here: <http://en.rian.ru/science/20091230/157423845.html>
{ "pile_set_name": "HackerNews" }
California Zero-Emission Push Grows to 8 States, 3 Million Autos - tocomment http://www.bloomberg.com/news/2013-10-24/california-zero-emission-push-grows-to-8-states-3-million-autos.html ====== tocomment I'd be curious if any of these 8 states is also blocking Tesla sales with dealership laws. Or have special taxes for electric vehicles. I know VA charges something like $64/year.
{ "pile_set_name": "HackerNews" }
Quora is down - gauravsc http://www.quora.com/ ====== raghav305 I am able to access Quora
{ "pile_set_name": "HackerNews" }
The End of the Past - diodorus https://medium.com/@MarkKoyama/the-end-of-the-past-2f028cb970ed ====== adrianratnapala A good article with an opaque headline that gives no clue that it compares the ancient Roman economy to the early modern one. I agree slavery was very important, but early modern Europe had slavery too. It begs the question to just say slavery was less important to the modern economy: why was that? One difference is that early modern Europe, for all its wars, had no dominant apex predator like the Roman Republic. True, the Republic fostered advanced trade -- it was wise enough not to destroy provincial economies. But aristocrats still lived off the taxes and bribes paid in conquered lands. This meant that if you had some wealth, the best investment for it was to bribe your way to a government post. This general pattern led to the empire, and to its corruption. It also explains the rentier culture that the article describes. ~~~ PeterisP Yes, it's important to read such claims exactly and literally - as the OP states, "_Roman Italy_ had comparable per capita income to the Dutch Republic in 1600". Assuming that this is true, it is still clear that the Roman _Empire_ and even more so Roman _Egypt_ did not have a comparable income. Roman Italy was fed and clothed by products of non-Italian economies; it's easy to get a multiplier of your GDP if you have landowners that "farm" overseas land with overseas slave labor that isn't part of your economy's consumption. ~~~ abecedarius There's a recent book by Ober, _The Rise and Fall of Classical Greece_, pointing out that the pre-Roman Greek world had per-capita income that high. I think a better question is why did _that_ culture not keep advancing, and maybe the biggest part of the answer is that they got conquered by Rome. ~~~ adrianratnapala But first they were conquered by Macedonia -- which was Greek in many ways, but did not have the political culture that Ober says distinguished the Poleis. ~~~ abecedarius I'm including the Hellenistic era. It continued to grow economically at well above normal preindustrial rates (according to Ober), if not quite as high as in the classical period. Technological progress might've been even faster then -- it was the time of Archimedes and the Antikythera device, and much else. (Russo, _The Forgotten Revolution_.) Ober concluded with a post-Alexander chapter: the Hellenistic cities were not completely at the mercy of empire-building despots because a walled city at the time was very hard to conquer (instead of starving it out), leaving a Nash equilibrium where cities paid substantial taxes but weren't messed with too much in their internal politics, so that much of that culture survived for quite some time. (From my memory of a book I read a year ago.) I'd guess that even without Rome, military technology & organization would've made cities increasingly vulnerable, and it'd have been a race between increasing despotism from such developments and the fact that freer, more dynamic societies also have a kind of military advantage (see Archimedes again at Syracuse). But of course I'm indulging in total speculation. ------ Animats (Title is poorly chosen.) I've asked this question before - why didn't the Roman Empire progress to an industrial revolution? The Roman Empire never developed the concept of the corporation. They never got beyond the "one rich guy" or "rich family" stage of business organization. They never had much in the way of inter-city businesses. 
They didn't have common carriers for shipments. They had the roads and the legal system to make that work, but somehow never developed something like Wells Fargo, the stagecoach line. They lacked the organizational tools to scale a business. The Roman Empire had figured out how to scale government, training provincial executives in Rome and sending them out to govern. The Empire had a sizable grain and oil shipping operation, but this seems to have been done as a Government contracting operation, not as a private business. Another argument is that the industrial revolution needed coal, iron, and water power in reasonably close proximity. Italy doesn't have much of that. England and France do. So do parts of the US. Once you've got railroads, the proximity doesn't matter as much, but until then, it's hard to get started. The political importance of land ownership may have been an obstacle, but England had a landed gentry all through the Industrial Revolution. The landowners couldn't stop progress, although some of them opposed it. They couldn't even stop railroads; Parliament could and did approve "compulsory purchase" of the right of way. It's a good question. For a thousand years, the Roman Empire couldn't solve this basic economic problem. What are we missing about our own society? ~~~ milesrout The concept of a company is not important to economic and technological progress. In fact, it's actively detrimental to it. ~~~ MR4D I'm not sure how you come to that conclusion, let alone your confidence in it. A corporation allows for both pooled and limited risk. This permits a civilization to take on large projects such as building a railroad or a refinery - activities that were both expensive and financially risky. The only other alternative is the State, but then that involves taxes and keeping the populace happy, which tends to impede progress. Things like the Coliseum get built instead of railroads. Individuals have a high tendency not to do it because once people have attained a certain amount of wealth, they are more interested in protecting against losses than taking on expensive risky projects. Daniel Kahneman won a Nobel prize for work related to this kind of decision making ([http://www.apa.org/monitor/dec02/nobel.aspx](http://www.apa.org/monitor/dec02/nobel.aspx)). ------ maldusiecle The author links to pseudoerasmus's blog post about Empire of Cotton as if it were a refutation of Empire of Cotton's thesis. But that blog post is a summary, not a critique--the author says this explicitly in its comments. The closest the author comes to a critique is an aside in this post: [https://pseudoerasmus.com/2015/04/26/mccloskey-cotton-ir/](https://pseudoerasmus.com/2015/04/26/mccloskey-cotton-ir/) with similar points made earlier in: [https://pseudoerasmus.com/2014/11/10/slavery_and_industriali...](https://pseudoerasmus.com/2014/11/10/slavery_and_industrialism/) ...which is far from open-shut, in my reading of it. It's a tangential point, I guess, but worth considering in evaluating the plausibility of the whole argument. ------ adrianratnapala Just in case anyone is wondering, the linked pseudoerasmus post is: [https://pseudoerasmus.com/2016/06/16/eoc/](https://pseudoerasmus.com/2016/06/16/eoc/) ------ zeteo > Was the Roman economy only as developed as that of Europe circa 1300 or was > it as advanced as that of western Europe on the eve of the Industrial > Revolution in say 1700. The question is ill posed. 
It assumes that economies move on a linear scale - with definite positive and negative directions. Was the economy of the Soviet Union in the 1930s more advanced than that of the Dutch Republic in the 1600s? Well, the USSR could produce tractors and icebreakers. But the Dutch had a stock market and a really good internal transportation network. ------ b_emery Makes me wonder: Would the slavery-based economy of Roman times represent a model for what will be the robot-based economy of the future? ~~~ bdrool That was my first thought as well. Another thing that came to mind is that the way manual work / craftsmanship was looked down upon has its parallels in the present day, particularly in the US. It's often said that other parts of the world see engineering in particular as a prestigious line of work, but the same cannot be said for the US. It's strange how the US lauds only the extremes: either very blue-collar manual laborers, or very white-collar 0.001%-ers (very often rent-seekers who don't actually add much value to society). People who actually engineer things and drive innovation are not looked up to. ~~~ ci5er > People who actually engineer things and drive innovation are not looked up > to. In the US? Compared to whom? Doctors? Bankers? Lawyers? Politicians? Who do you think that you are better than? ~~~ bigger_cheese I don't know about the US, but here in Australia Engineers are consistently rated as one of the most ethical and trusted professions eg. [http://thenewdaily.com.au/money/work/2016/05/14/most-trusted...](http://thenewdaily.com.au/money/work/2016/05/14/most-trusted-professions/) Most of my ethics course from university focused on US-based case studies (Tacoma Bridge, Kansas City walkway collapse, Space Shuttle etc.), so it could be possible Engineers in the US have a worse reputation. ------ thisrod There is potentially a really simple answer to the question, "What happened in England in the 17th Century and changed everything?" Isaac Newton. I'm surprised that historians don't consider that possibility, at least to the extent required to exclude it. ~~~ GlennS
{ "pile_set_name": "HackerNews" }
Judge Shoots Down ‘Bitcoin Isn’t Money’ Argument in Silk Road Trial - pat2man http://www.wired.com/2014/07/silkroad-bitcoin-isnt-money/ ====== ChuckMcM This wasn't particularly surprising, either in putting it out there or shooting it down. Every trial starts with a whole bunch of motions which the judge evaluates. If your lawyer isn't throwing up every possible angle they aren't earning their pay. ~~~ absherwin -Bad: Do nothing -Better: Object to everything and you might win the lottery -Best: Object reasonably that they may be taken seriously -Beyond amazing: Admit everything in such a sympathetic way to minimize guilt* *Gerry Spence ~~~ dlss Source? I can't find your last quote online, and want to read more about it ------ mikeyouse This was always an idiotic premise. The Federal money laundering statute refers to 'monetary instruments' which it then defines: (i) coin or currency of the United States or of any other country, travelers’ checks, personal checks, bank checks, and money orders, or _(ii) investment securities or negotiable instruments, in bearer form or otherwise in such form that title thereto passes upon delivery;_ If you could convince a judge that Bitcoin wasn't currency, it would still qualify as an investment security / negotiable instrument by just about any definition. ~~~ Drakim I'm pretty confused over that even an idiot would try to argue this case. It's something that has "coins" in it's name, and is touted as "the internet money of the future" and that is being used in substitute for regular money when buying wares all over the world. If that isn't money, is ANYTHING money? Could one not just as well argue that dollars are just ink on paper, and therefore not money? ~~~ kaonashi Money is a loosely-defined word that ends up being a huge sticking point in economic conversations because people end up using different definitions. One thing Bitcoin is not is a unit of account; it is nobodies liability the way that dollars, treasuries or reserves are a liability of the issuer. When I think money, I think balance sheet; and by that criterion, Bitcoin is not money. It's a currency and an asset. ~~~ ahomescu1 Germany seems to disagree with your point: [http://www.spiegel.de/international/business/germany- declare...](http://www.spiegel.de/international/business/germany-declares- bitcoins-to-be-a-unit-of-account-a-917525.html) Edit: Wikipedia gives the following requirements for "unit of account": _To function as a 'unit of account', whatever is being used as money must be: Divisible into smaller units without loss of value; precious metals can be coined from bars, or melted down into bars again. Fungible: that is, one unit or piece must be perceived as equivalent to any other, which is why diamonds, works of art or real estate are not suitable as money. A specific weight, or measure, or size to be verifiably countable. For instance, coins are often milled with a reeded edge, so that any removal of material from the coin (lowering its commodity value) will be easy to detect. _ Except for having a specific weight, Bitcoin has these: 1 Bitcoin is the same as any other, and it can be broken into 0.5, 0.1 and even 0.0001 Bitcoins. ~~~ kaonashi I think the sticking point there is differences in definition of 'unit of account'. My main point is that Bitcoin is nobodies liability; this is by design. The dollar is a liability of the issuer, which can be used to extinguish tax debt. 
This goes into a much larger point about the nature of money and how it evolved from ad-hoc credit arrangements. [http://www.youtube.com/watch?v=9Tks7oJkFRg](http://www.youtube.com/watch?v=9Tks7oJkFRg) [http://www.youtube.com/watch?v=0zEbo8PIPSc](http://www.youtube.com/watch?v=0zEbo8PIPSc) ~~~ Mandelbug Regardless of its "technical" or "economic" definition, to the layman and all its users, Bitcoin is treated, traded, and consumed as money. The exact origins of Bitcoin do not change its utility, which is as a unit of exchange, which is what money is in its most general and flexible definition. ~~~ kaonashi All true, but it does not diminish my point; that since the dawn of capitalism, money has been a balance sheet phenomenon: being simultaneously someone's asset and someone else's liability, and in this Bitcoin is different. ~~~ lifeisstillgood That seems odd - if A visits will road and buys drugs off B then A owes B one bitcoin - that is a human level contract that is clearly understood. A balance sheet accounting of this transaction would as easily be In bitcoins as dollars (A has a liability of 1 bitcoin, B has an asset) As long as B wishes to denominate their accounts in bitcoins this is quite reasonable. Or am I missing something ? (Interestingly the Indian mathematician who invented negative numbers did so using accounts and debt as an example (bhagravita? 500AD) ~~~ kaonashi >Or am I missing something ? Banking. ~~~ lifeisstillgood I don't understand - why is banking meaning that I owe someone 100 bitcoins a problem? It's an unstable currency yes but the same can be said for Zimbabwean dollar ~~~ kaonashi In your case, the transaction is barter, a transfer of one non-financial asset for another. If you have a contract to provide 100 BTC, then that contract would be money, denominated in BTC (not the BTC themselves). It would exist on one balance sheet as an asset, and on another as a liability (at the same time). ------ canjobear The article says this was a motion to dismiss all charges, on grounds that bitcoin isn't money and that Ulbricht wasn't responsible for website users' actions. What happened to the murder-for-hire charges? Those seemed pretty damning... ~~~ herendin All dropped, except the Maryland case, which isn't being tried yet and could also be dropped when the time comes ------ svedlin The claim is that the term "funds" included in the definition of "financial transaction" [1] covers bitcoins. "Funds" typically denotes cash or an asset that is highly liquid, not just anything which can be purchased or sold. Black's Law Dictionary defines "fund" as a "sum of money or other liquid assets established for a specific purpose." Bitcoins, at this stage, are accepted almost nowhere, recognized by few people as currency, and represent a tiny, highly volatile and risky niche compared to other markets. It's much easier to convert a used iPhone (the top search on eBay) into cash, sell concert tickets on craigslist, or trade collectibles, than it is to convert bitcoins into legal tender, but no one would mistake those goods for "funds." The law isn't intended to cover anything which can be exchanged for cash. In fact, it specifically enumerates the following types of property separately when defining the scope of "financial transaction": "real property, vehicle, vessel, or aircraft." [1] Because currency is accepted nearly everywhere as payment, the government considers it difficult to police and subject to special penalties when used for criminal ends. 
Goods like collectibles or bitcoins don't operate at that scale, since they're accepted by relatively few businesses and specialized experts as payment. If a tiny (in comparison to the economy at large) experiment like bitcoin is "funds", then almost any object bought or sold in a zillion stores and marketplaces can be construed as "funds". Any quid-pro-quo could be considered laundering, which broadens the law beyond its intended scope. The ruling says: "Ulbricht's alleged conduct is more akin to a builder who designs a house complete with secret entrances and exits and specially designed traps to stash drugs and money; this is not an ordinary dwelling, but a drug dealer's 'dream house.'" [2] It's not illegal to build a house with secret entrances and compartments. Such a house could be used by anyone who wanted to hide their own possessions on their own property. The fact that someone might use it for illicit purposes doesn't implicate the owner or builder. It must be proved separately that the owner or builder knowingly entered into some kind of agreement with another conspirator to facilitate a crime [3]. Merely hosting a chat room isn't a conspiracy. Unfortunately, this is just a perpetuation of the same pointless drug war. The prosecution's position is wildly overblown. The alleged activities are akin to operating a motel where a victimless offense might have been committed by a third party. Drug prohibition is ineffective and is on the way out in favor of treatment and social programs that are more effective. [1] [http://www.law.cornell.edu/uscode/text/18/1956](http://www.law.cornell.edu/uscode/text/18/1956) (4) the term “financial transaction” means (A) a transaction which in any way or degree affects interstate or foreign commerce (i) involving the movement of funds by wire or other means or (ii) involving one or more monetary instruments, or (iii) involving the transfer of title to any real property, vehicle, vessel, or aircraft, or (B) a transaction involving the use of a financial institution which is engaged in, or the activities of which affect, interstate or foreign commerce in any way or degree; [...] [2] [http://www.scribd.com/doc/233234104/Forrest-Denial-of- Defens...](http://www.scribd.com/doc/233234104/Forrest-Denial-of-Defense- Motion-in-Silk-Road-Case) [3] [http://conspiracy.uslegal.com/elements-of-the- crime/intent/](http://conspiracy.uslegal.com/elements-of-the-crime/intent/) ~~~ rtpg >It's much easier to convert a used iPhone (the top search on eBay) into cash, sell concert tickets on craigslist, or trade collectibles, than it is to convert bitcoins into legal tender Really? What about all the services like Bitstamp? I am fairly sure that bitcoin is more liquid than used iPhones. ~~~ svedlin I think for most people, bitcoin isn't particularly user-friendly yet. By comparison, selling a used phone on eBay or craigslist is pretty easy... ~~~ a_c_s Being user-friendly has nothing to do with liquidity. Stocks, which most people don't understand and can't be sold without contacting a broker are considered liquid. Jewelry & stamp collections are not considered liquid assets, yet everyone understands what these are and where to purchase them. ------ sashanna Breaking news: judge exercises common sense. ------ BeyondTime Bitcoin is based of cryptography which is completely broken before implementation so, this would have to be a discussion about before time. No one has experienced security.
{ "pile_set_name": "HackerNews" }
How the backpropagation algorithm works - mblakele http://michaelnielsen.org/blog/how-the-backpropagation-algorithm-works/ ====== mblakele Direct link: [http://neuralnetworksanddeeplearning.com/chap2.html](http://neuralnetworksanddeeplearning.com/chap2.html)
{ "pile_set_name": "HackerNews" }
Monetization Tips For App Developers - SavvyGuard http://mobile.openx.com/blog/monetization-tips-app-developers/ ====== shahinh Great read, Best tips, everything makes sense logically. Would be interesting to crunch the data and see if everything plays out as described. ------ shahinh Wow Great tips, everything makes sense logically. Would be interesting to crunch the data and see if everything plays out as described. ------ jackson1990 Best tips, everything makes sense logically. Would be more interesting to crunch the data and see if everything plays out as described. ------ jackson1990 Great Idea, everything makes sense logically. Would be interesting to crunch the data and see if everything plays out as described. ------ jackson1990 Great tips, everything makes sense logically. Would be interesting to crunch the data and see if everything plays out as described. ------ shahinh Great tips, everything makes sense logically. Would be interesting to crunch the data and see if everything plays out as described. ------ jackson1990 Great tips, everything makes sense logically. Would be interesting to crunch the data and see if everything plays out as described. ------ jackson1990 Great tips, everything makes sense logically. ------ shahinh Great read, Best tips, thanks for posting. ------ shahinh Best tips, thanks a lot for posting.
{ "pile_set_name": "HackerNews" }
Google will now show bosses if employees are actually using its apps - Leary https://www.cnbc.com/2018/09/18/google-g-suite-launches-work-insights-tools-to-track-app-adoption.html ====== tannhaeuser I can see PHBs wanting to discuss G suite's performance reports, and leverage that for firing. Office 365 does the same thing? MS, for all their faults, was once a pioneer in personal computing, and pitched Word and Excel against centralistic mainframe text procesing and accounting software. How could MS not build on their pro-end user stance and deliver mainframe-like (borg-like) "telemetry" software to spy on you instead? I doubt this is even legal in EU under privacy and employment legislation.
{ "pile_set_name": "HackerNews" }
Why didn't Osama Bin Laden get a trial? - Khao I remember that Saddam Hussein got a trial when he was captured by american soldiers. Why didn't Osama get a trial then and got killed in his hideout instead? ====== jws Edit to include: it appears he was not shot on sight, but shot when he "resisted". This is tremendously important when you think about the order which must have been given. Saddam's trial was about documenting his government and demonstrating the authority of the new Iraqi regime. No one ever expected it to end in anything but a hanging. There is no similar motive to have a trial for Bin Laden. Another factor since 2006 is a development in the United States presidential interpretation of the constitution that gives the president the authority to order the death of individuals without judicial oversight[1]. This comes as a surprise to most Americans and should be addressed, but it won't happen soon. [1] Except for foreign officials or heads of state. Notice the gymnastics the US went through to say they were not targeting Ghadafi, but might target command centers where he might be present. ------ chickenorshrimp Because he resisted. Saddam Hussein surrendered. [http://www.google.com/hostednews/ap/article/ALeqM5j_PA3pyZRo...](http://www.google.com/hostednews/ap/article/ALeqM5j_PA3pyZRo9ICegSAKkm1e9xPk8A?docId%3D272f40ddeb4f42e48e3cb06132d9c7c3) ~~~ Khao Oh well, this looks to me as if they had to have an excuse to shoot him. What I find weird is how it doesn't look like this question gets asked a lot. Everyone seems to think that killing him without a trial is how we do it now, but Saddam Hussein had a trial and the nazi SS had a trial. Killing bad people on sight is normally NOT how we do it. ~~~ genericbrandx Saddam and the Nazis surrendered, bin Laden did not. The rules of engagement are clear, which also hold true for metro police actions, in that lethal force is authorized when the other party brandishes a weapon. If he had not had a weapon, if he had raised his hands instead of an AK then he would have faced trial but that is not how things went down. ~~~ Khao I cannot find any article that talks about how exactly did bin Laden resist. Do you have any link saying that bin Laden was armed? ~~~ chickenorshrimp I believe the story I linked from the AP has the most info that's out now. The President still hasn't authorized releasing the photos of the body yet either, so there is still more info to come. ------ thedangler I'm sorry, But were any of you guys there? How do we know he resisted. Assuming it was really him they killed and tossed in the water with in a couple hours. Where is the report and photos? He's resisting shoot him. You saw that right? he resisted. Yup. Cops do it all the time what makes you think the NAVY Seals are any different. History is written by the winners, Correct? This is fishy and really good timing. Around 2 min mark <http://www.youtube.com/watch?v=UnychOXj9Tg> He was a CIA asset. I really don't know what to believe any more. I'd really like to know if the guy who tweeted the whole thing knew he was there. Someone should interview him.
{ "pile_set_name": "HackerNews" }
DoDont: A New Social Network; Bring On The Snarky Comments - jblarge http://blog.dodont.com/2010/07/dodont-a-new-social-network-bring-on-the-snarky-comments/ ====== Jun8 Interesting idea: Two recommendations: (i) Add an openid-based login (e.g. GMail), nobody in their right mind would use Facebook connect (ii) The home page design is kinda weak, for example it doesn't show how the comments are grouped, e.g. will I be able to see all the Do's for restaurants, are they grouped by geotags, etc. ~~~ AmberShah I agree the Facebook Connect was enough to scare me off. I did end up signing up, but didn't actually post any information because I don't care to connect with people on my/through my Facebook account. For example, I thought about using my "Do/Don't" as a funny resource for hackers but none of my Facebook peeps really apply. By the way, I LOVE the write up. What a great way of circumventing the inevitable ridicule by doing it yourself. ------ gergles "How is this different from status updates? DoDont is a filter for important and useful information, and with great tag integration, your trusted information is always easiely available." Great tag integration? There aren't any examples of tags in any of the samples (unless they're hidden, in which case it still seems silly to discuss them as the only differentiator of this service from FB). Also, guys. Spell check your HTML. You're a startup, but you don't have to look like one. Modern HTML editors will helpfully put red squiggles under words like "easiely" to draw your attention to the error. They're not there for flair. nthing the complaint about FB connect; you don't need to know my real name, especially without a privacy policy other than "Trust us, we won't post anything unless you say it's OK!". Sorry, that's insufficient to give you complete, unfettered access to my FB account and to datamine all of my friends (part of the bad about that falls on FB, but since that's what you chose to use...) ------ ratcliffco I avoid FB connect - sites/services I try don't need to be linked to my identity on Facebook, that's just too private, sorry. ~~~ powrtoch Seconded. I think the move towards not having a new login at every site is a good one, but FB connect doesn't seem like the way to go. I'd like to see something like OpenID, but at the very least there should (always!) be a simple userid or email option. ~~~ c1sc0 Or maybe you're just an outlier who (a) understands how Facebook Connect works (b) actually cares. Does anyone have hard numbers on Social (FB/Twitter/...) signup vs. old-fashioned signup. In other words: as a site owner, why should I go through the trouble of setting up my own user management when I can get that for free by integrating FB Connect? ~~~ mikumetz As a site owner you should probably have both. FB connect isn't "free": implementing integration probably takes more time than installing authentication plugin for Rails (and doesn't Django have built-in user management?) But I think you _should_ have Facebookless login if you care about early adopters: too many of them are indeed "outliers" you've identified above. I also think you should have facebookless option simply because it's good for the Internet: I wouldn't want to see so much of it being controlled by a handful of large corps. ------ hexidecimal0 So how do I sign up? I don't see a link anywyere. There's FB button but I don't use FB. 
~~~ texasrgr453 I feel like I'm being forced to get a fake FB account (or a bunch of them) just for purposes to bypassing "Facebook tax" on the Internet. ------ dpnewman Maybe i missed something obvious, but why call yourselves a social network at all - why lead with that? Opinion engine is much more accurate and much more compelling. ~~~ what Opinion Engine reminded me of Bing calling itself a decision engine. ------ sjs382 "Sign in or Sign up here" doesnt work in Chrome. DONT make your new startup's page unusable in a major browser. ;) ~~~ DotSauce It's Facebook connect and works for me. Really hard time envisioning this taking off. Needs rewards, benefits, resources... something, anything besides people telling me what I should and should not do. ~~~ philcrissman the FB connect button did not appear for me, either (also in Chrome). Saw it in Firefox; but... I'm not interested in logging in with my facebook account, so I guess I won't be using the site. If it had you create a new account, quick and simple like, I would have checked it out. ------ limaya Interesting to note that half of the comments isn't about the service but about Facebook connect... ------ thinker There are plenty of niche networks that are interesting and useful (HN, Quora, Reddit). As long as creating friends isn't your primary activity, you're good. This reminds me of the FML-style websites out there. Your logo is a train crash right now. I hope its meant as an anti-web-2.0 joke otherwise there is no reason you needed to get Frank Gehry to design you a logo. ------ mindcrime Hmmm... gotta admit, it's an intriguing idea. I think you might just be onto something. A new "general purpose" social network, probably not a great idea. But something dedicated to a particular topic, or theme (like opinions) could very well succeed. Good luck! ------ shaunxcode "Never lose track of what you've thought and experienced" Man I would love to spend some time writing a debord-esque critique of that statement, but I am too busy also developing software to help people never lose track of their authentic experience... ------ cmars232 DONT require a sign in just to browse. I'm not going to sign up just to see if there's anything in there worthwhile. ------ dshupp witty. looking forward to tweetdeck picking it up ------ djb_hackernews why not just aggregate #dodont on twitter/facebook? There is no need to create a totally new messaging platform, just build something on top. ------ tysonlundbech i just did it.
{ "pile_set_name": "HackerNews" }
Has the TIME Person of the Year vote been fixed? - MattBearman A few days ago I voted for Edward Snowden, and at the time I&#x27;m sure he had nearly 200,000 votes. Now he only has 44,000 - http:&#x2F;&#x2F;poy.time.com&#x2F;2013&#x2F;11&#x2F;25&#x2F;vote-now-who-should-be-times-person-of-the-year&#x2F;slide&#x2F;edward-snowden&#x2F; ====== dylz It has always been. The online vote is just merely for fun, the actual choosing is done by the editors, and hsa absolutely nothing to do with the online vote. ------ api I love how the Internet is facilitating this slow awakening to the degree to which the media is controlled and spun. ~~~ sparkie The media is not controlled at all! [http://www.youtube.com/watch?v=9R9oJZswV6Y](http://www.youtube.com/watch?v=9R9oJZswV6Y)
{ "pile_set_name": "HackerNews" }
Amazon Joins the Instant Party - what http://amazoninstant.appspot.com/ ====== what Woo, got it to return relevant results, finally. If anyone ever does anything with the Amazon API, when doing an ItemSearch always include a BrowseNode. Post any queries that get bad results. ------ fookyong I'm guessing this is caused by their API rather than this app... but it's actually faster to search Amazon the regular way than it is to use this "instant" version. ~~~ what Yeah, I know--at least I didn't spend to much time on it :( Seems to be better if you set the category to books/music/movies. Problem is you can't sort the results with api if you search across all products. Even if you can sort the results, most categories seem to return a bunch of junk. Don't think it's the same search Amazon uses on their site. ------ ajennings Here's another one: <http://shoptivate.com/amazoninstant.php> ------ what I know it's a few days late, but I was bored between classes so I slopped this together. Not sure if anyone already did amazon. ------ mikecane I think I broke it. All I get is a spinning wheel now, searching for "Fleming, Ian," one of my test searches. ~~~ what No, that was me. I changed something without testing it : / Fixed though, I think. ~~~ mikecane Whatever you changed broke it in Opera. Won't clear a search now. ------ hrrld Interestingly, when I type 'kindle' I don't see any kindles...
{ "pile_set_name": "HackerNews" }
The webworkers driven UI framework – defining the scope of the v1.3 release - tobiu https://github.com/neomjs/neo/projects/16 ====== tobiu As an entirely free to use open source project, it relies on your input. I just started to define the scope of the next minor release and your feedback is not just only welcome, but makes a big impact on the current roadmap. So, what would you like to see next?
{ "pile_set_name": "HackerNews" }
Livestreaming Google Chrome Announcement @ 10:30 Pacific - panarky http://www.youtube.com/googlechrome ====== panarky Sure bets - Cloudprint (<http://www.google.com/chrome/intl/en/p/cloudprint.html>), Chrome App Store (<https://chrome.google.com/webstore/>) Likely - Chrome OS alpha/beta release (<http://www.chromium.org/chromium-os>) Speculative - Chrome netbook or tablet released with hardware partner, free or subsidized 'test drive' of Chrome notebooks ~~~ panarky Watching it live ... Chrome Browser: 1\. Google Instant baked into the Chrome omnibox -- type one character, favorite page loads 2\. Very fast PDF reader baked into the browser -- can load 1,990 page PDF in less than a second 3\. Hardware (GPU) graphics accelaration in the browser 4\. Chrome beta was V8 engine had 16x better Javascript performance ... today adding 'Crankshaft' to V8, will be 100x faster than IE from 2 years ago (<http://blog.chromium.org/2010/12/new-crankshaft-for-v8.html>) 5\. Sync bookmarks, themes, extensions across machines 6\. Security sandboxing pioneered by Chrome is extended to plugins like Flash and PDF Chrome Web Store 1\. Helps users find apps, allows developers to get paid; rich interactive apps from NPR, Sports Illustrated, New York Times HTML5 app 2\. Games from EA in Javascript + HTML5 instead of Flash 3\. 120 million regular Chrome users will make this the biggest app store in the world Chrome OS 1\. "Nothing but the web" -- browser running as close to the hardware as possible 2\. Fast boot, suspend with instant resume, reconnects to network subsecond 3\. "Friends let friends log in" -- share your notebook with other people and preserve everyone's privacy 4\. Offline Google Docs, games, apps -- resync automatically ​when reconnected 5\. Cloud print (<http://code.google.com/apis/cloudprint/docs/overview.html>) 6\. Partnered with Verizon for pay-as-you-go mobile data 7\. All data on the disk is encrypted by default 8\. Partnered with Citrix for 'Remoting' -- demoed running Excel, SAP, CAD/CAM hosted in the datacenter right in the browser 9\. Notebook pilot program! Jailbreaking built-in!
{ "pile_set_name": "HackerNews" }
SoundBrake 2.0 device makes headphone users less oblivious - sharieskenas https://www.kickstarter.com/projects/914595512/soundbrake-20-the-awareness-device-for-headphones ====== slang800 Cool concept, but why isn't it an app? Couldn't you use the mic on my phone to listen for volume spikes, or even fine-tuned sound patterns? Surely a modern phone CPU can do audio processing a great deal faster than a common microcontroller. Plus it would be far easier to keep the phone charged than to worry about a second device running out of battery. ~~~ smt88 I agree that an app has better ergonomics, but I believe it would murder the phone's battery life. Better to run out of battery on a third-party device than your phone. Plus, a separate device allows you to use it with your laptop or your phone. ~~~ slang800 > murder the phone's battery life I don't think that kind of audio processing would be an issue for battery life. They're able to do everything on a coin-sized battery in their system. From their prototype pics, they're using this: [https://www.adafruit.com/product/1572](https://www.adafruit.com/product/1572) That's a 120mAh coin cell that they say lasts for 50 hours. They might be using something different in their final product, but we can safely assume that their algorithm isn't super power-hungry and running a small mic isn't going to do anything to your 3000-something mAh phone battery. Obviously implementing this is a higher-level language like Java is going to incur some overhead, when compared to their microcontroller implementation... But the point is, they're not doing some advanced pattern matching on a database of sounds or anything that's going to use crazy amounts of CPU-time. > Better to run out of battery on a third-party device than your phone. I disagree on that - it's really easy to remember to charge my phone. I do it every single night. Other devices that I don't use for days at a time are easy to forget about. > a separate device allows you to use it with your laptop or your phone. True - it would need to be ported to work on those platforms. ------ smt88 This is a cool idea. When posting your own projects on HN, it's better if you prefix with "Show HN:" so that it's clear to us that you're advertising something, rather than a random person endorsing a product you like. ~~~ sharieskenas Thanks smt88!
{ "pile_set_name": "HackerNews" }
$10 ebook debate - who should decide? - francissson http://www.wired.com/epicenter/2010/02/panacea-or-poison-pill-who-gets-to-decide-about-the-10-e-book/ ====== Nogwater Where else do manufacturers decide on the price of the good and not the retailer? I know lots of items come with MSRPs, but they're just suggestions. Also, what about resale price maintenance? <http://en.wikipedia.org/wiki/Resale_price_maintenance> I just stumbled upon that page from the MSRP page. Edit: fixed typo ~~~ Confusion _Where else do manufacturers decide on the price of the good and not the retailer?_ Everywhere. If a retailer starts selling a product below a certain price, that will have its effect on the reputation of the product. Therefore the manufacturer often contractually requires that the retailer will not sell the product below a certain price. ------ francissson Are ebooks a tangible artifact? I don't have the answer. But, I think that ebooks should be cheaper than their paper counterpart. The paper version should be considered premium version. ------ Zak Ebooks should cost whatever the seller charges for them. The publisher's only influence over that should be the amount the publisher charges for wholesale copies. ------ waterlesscloud In the end, of course, the consumer will decide.
{ "pile_set_name": "HackerNews" }
On the History and Future of Cosmic Planet Formation - dstyrb http://arxiv.org/abs/1508.01202 ====== CaiGengYang This is a related article on alien life by NASA: [http://www.space.com/29041-alien-life-evidence-by-2025-nasa....](http://www.space.com/29041-alien-life-evidence-by-2025-nasa.html) I would love to be able to help build cheap spaceships that can travel to Europa, drill into the ocean underneath it and fish for life in there. There could very well be life in there, and to find it would be an amazing hack ...
{ "pile_set_name": "HackerNews" }
Python 3 Wall of Shame - iamelgringo http://python3wos.appspot.com/ ====== agentultra "Wall of Shame" sounds harsh. However I don't think it's entirely inappropriate. The premise is that it has been two years since py3k was released and the community has little to show for it. One would think two whole years would be enough time to port a library. Especially if most of the fundamental challenges of doing so are matters of syntax (not in all cases I'm sure) and changing some names. OTOH, maybe the community still refuses to switch? Maybe py3k is viewed as "that bad," that no one wants to bother with it? FWIW, py3k is quite fast now and the language "enhancements" do make a significant difference. I actually quite like it now that my primary OS, Arch Linux, gave me the boot and made py3 the default interpreter. Many of my projects are still in py2.7, but I have been working on converting them. Sometimes I get lucky and all I need to do is run 2to3! ~~~ briancurtin "Maybe py3k is viewed as "that bad," that no one wants to bother with it?" I don't think anyone's viewing it that way. A lot of people want to switch and many are, but there's the non-zero cost of porting and also the generally low demand. No one wants to spend 3 days of initial work on a port that no one currently wants, then tack on the continued maintenance of two branches (or a single code base that works with both and requires twice the testing). Part of the reason we have the PSF Sprints funding is to solve that problem. A group of 6-7 developers in Cape Town is taking advantage of the funding in early March and plans to complete more of the py3k port of matplotlib. matplotlib is a great one to work on in terms of community impact since it usually ranks highly in any poll of 3.x blockers. ~~~ agentultra Actually, my LUG is also considering doing some sprints. I'm more inclined to join them after using py3 and wishing there was more support for it (and feeling bad about my own laziness in porting my own stuff). The sprints are a great idea and we're quite lucky to have such support! ------ JonnieCache The equivalent for Ruby 1.9 is here: <http://isitruby19.com> And there's a version that shows rails3 compatibility amongst other things here: <http://railsplugins.org> ------ yuvadam Let's admit it. Once Django is Py3K-ready, all other projects will follow suit. ~~~ dagw Is Django really that big a deal in the python world? I know it is a big deal in the python web world, but the web seems a pretty small part of the python world as a whole. If I were to ask all the python developers I knew to prioritize the projects they want ported to py3, I'd imagine the results being numpy/scipy/matplotlib first, followed by wxpython or pyqt second, followed by PIL and a whole bunch of smaller libraries, with Django getting a "sure I guess" down at the bottom of the list. ~~~ glenjamin I would expect NLTK to have a fairly broad user base too, although possibly the community is less likely to be involved in the broader python landscape. ~~~ dagw The thing with Python is it has a very long tail of libraries, like NLTK, OpenCV, openopt, Mayavi etc. whose individual communities might not make up a significant portion of the python community, but taken together are probably more significant than that of any single high profile project. ------ paganel Funny to see zope.interface on that list.
I remember back in 2005, when Zope 3 had just been launched, about how everyone who mattered in the Zope world would say things like "only stupid, retrograde people won't switch to Zope3". I guess the stupid people won, because 6 years on Zope is like the Cobol of Python web frameworks. I just hope the same thing won't happen to Python itself. ~~~ rbanffy Let's just call Zope 2 the Common Lisp of Python frameworks... I guess it survives mostly because Plone needs it. Zope 3 and Grok are pretty cool. ~~~ jnoller twisted and a few others rely on zope.interface too. I never saw the draw personally ------ pnathan I've been doing a ton of Python work lately. Frankly, in my experience, Python 3 is more of a pain to develop with, primarily due to two factors: * Encoding specifications required. * maps/reduces returning iterators objects instead of a list. Having to call list() on anything I map is just a pain. I don't plan to switch to Python 3 until a compelling reason shows up. 2.7 works great for what I do. The real story here is how Python 3 appears to be a largely unwanted improvement, IMO. ~~~ briancurtin maps/reduces returning iterators objects instead of a list. Having to call list() on anything I map is just a pain. Why do you require it to be a list? ------ mapleoin This is a really good list for people who want to get involved in a Python project. I only wish it were bigger and it would contain smaller projects as well. DecoratorTools shouldn't really be on that list though, since it doesn't make any sense for py3. ~~~ jnoller The list is horribly wrong and misleading. The author and others are working to rectify it. There's a lot of packages on there that simply don't belong there. ------ garnaat "Wall of Shame"? Really? As someone who's project appears on this list (and in the WRONG color) all I can say is that I don't think anyone is trying to dis Python 3.x. Support will come when a critical mass of developers are using 3.x. I know it's kind of a chicken and egg problem but this seems to be saying that it's the package developer's fault and I don't really think that's fair or true. ~~~ DeusExMachina I think it's not really a chicken and egg problem though. From a user point of view, using Python 3 looks like giving up on a lot of libraries, which in turn means a lot more work to accomplish the same things. It's a big effort that does not benefit anyone. If switching to a new version of a language makes my life harder, then I will not, especially if working on a startup or a project which is time critical. On the other hand, this is a work that the developers of the libraries will have to do anyway, sooner or later. Doing it now would benefit a lot of people that would like to use the latest version of Python and now simply cannot afford. So why do not do it? In this case the effort would benefit a lot of people at once, which is the purpose of a library in the first place. ~~~ garnaat It's a chicken/egg problem in that developers are saying I'm not porting because no one is using Python 3.x and users are saying I'm not moving to Python 3.x because none of my packages are available. I can't just move to Python 3.x and abandon 2.x. And I have not been able to find a way to have boto support both with the same code base. So, then it becomes a matter of maintaining multiple versions of boto. Just shoot me now. If the barriers weren't so high, more packages would be running in Python3.x. ~~~ wisty Yep, everyone has limited time and resources. 
Another problem is that many libraries have dependencies, and they can't start porting until the dependencies have been ported (or a replacement is found). Finally, library developers only want to develop for languages they like using, and who likes a language with no good libraries? ------ jnoller Pretty much inaccurate data. For example many project release under a new "3k" name, and leave the old ones. One of those is a backport of a stdlib module (multiprocessing), etc. So, it's bad data, and misleading, but I admire the idea. ~~~ ubershmekel Thanks :) You'll notice I removed multiprocessing. I'm considering removing setuptools as well. Too bad I have to hardcode these things. I wish there was some metadata about a package having a py3k counterpart. ------ rm445 Is this a wall of shame for the packages in red, or for Python 3 itself? ~~~ ubershmekel I think my inspiration was a little of both. ------ overgard I think a large part of the problem is that there aren't strong incentives to switch to Python 3, or at least that was the case when it first came out, and first impressions tend to matter. To me, as a user, it looked like "slower and none of my libraries will work." A better way to sell python 3 should be to highlight python 2's pain points, and show why 3 is better. If they were to show a significant performance gain over Python 2.x, or some sort of killer new feature (ie, get rid of "self" everywhere) I suspect python 3 would get a lot more traction. As it is, Python 2 as a language works just fine for me, and most of what I want (a better interpreter) are being addressed by the PyPy project. ------ leejoramo At first I was going to say that we need some sort of dependency graph to focus on what needs to be ported first. As a Plone developer, I see lots of stuff down the stack that needs to be done before Plone could move to Python 3. However, simply ranking based on number of downloads seems to do a pretty good prioritization. Out of the first 10 packages that haven't been ported to Python 3, I think nearly all are needed for Plone. (Although things like virtualenv are not a requirement for Plone, they are widely used in Plone development) ------ Encave I am currently doing a similar sort of website, but trying to limit it to popular projects only. Also trying to group together a lot of the python 3 related articles and porting tips. My current research can be found here: <http://goo.gl/SCImr> Any corrections, or ideas for what can be on the site would be appreciated. ~~~ victorg5 You're welcome to use content from <https://bitbucket.org/pypy/compatibility>. There is a growing list of dependencies at <https://bitbucket.org/pypy/compatibility/wiki/depends.yaml>. ~~~ Encave That just saved me a hell of a lot of work. Thank you! ~~~ victorg5 You're welcome. You can even clone that and make a wiki for py3k from it if you want. There's a script (in the wiki repo) to parse the wiki into YAML, so reuse of the current content should be easy no matter what format you want. ------ sigzero So this is a small shout out to Pythonistas. Get involved with a project and help port it. ------ moe It will be interesting to see how Guido et al will judge the py3k transition- strategy in hindsight and whether they will repeat it or move back to a more traditional, incremental development model. ~~~ beoba If you 'incrementally' include changes which are incompatible, you end up with many more compatibility barriers to keep track of. "Oh wait, this machine has foo 1.5.2, not 1.5.6!!!" 
Though a solution to this scenario is to avoid including too many libraries in the standard distribution in the first place, so that incompatible changes in those libraries don't affect the base. This in turn means that big packages with lots of dependencies would need to say "you need fooliba-1.5.6, foolibb-4.3.2, etc" instead of just "you need foo-5.2", but other languages do this and they seem to manage it alright. ~~~ moe Well, most other languages use the incremental approach, python is the outlier here. I'm not saying one or the other is definitely better, just that I'm looking forward to the final judgement after this multi-year effort. ------ zeemonkee Arch Linux uses Python3 as the default system Python, which I found a bit daft as almost every Python library I use relies on Python 2.x. ------ lbolla Just picked one from the red list (pytz) and it looks like Py3.1 is supported... ~~~ briancurtin Too many authors don't update their classifiers before uploading to PyPI. Plenty of things on that list are incorrect, but it's the package author's fault. ------ s3graham There's a lot of improvements in 3, but I'm sad to say I'm really hung up on print (and that's why I'm still using 2.x). Eventually, I guess. If I have to. ~~~ metageek Yeah, there's no good reason to change print. Sure, making it a function can be handy; but they should recognize the "print foo" syntax and compile it to a function call. ------ RyanMcGreal The real issue is that for both developers and library maintainers, moving to Python3 entails a lot of aggravation for a comparatively small net benefit. ~~~ jnoller Porting is actually fairly trivial in many cases. If you go and look at most of the porting stories for Python 3, you'll find the authors saying "I was worried it would be hard... but it was really easy". ~~~ metageek I have the impression that translating the Python is easy (it was for the one project I did), but the C bindings are more of a pain. ------ enduser The wall of shame itself is written in Python 2. ~~~ danielsoneg Well of course - there's no libraries for Py3k. ------ j2d2j2d2 Interesting to see pymongo on that list. What about other databases? _Edit: Mysql appears to be represented too_ ~~~ zeemonkee SQLAlchemy is ported at least, don't know about all the low-level drivers though. ------ alifaziz You don't need to name it as Wall of Shame. ~~~ sigzero No, that is a sensational title. It got everyone to look though didn't it. ------ listic I just realized: Python is supposed to move slowly. Maybe they should have come up with a better name? ------ juanefren Why isn't reportlab in the list? ------ hazelnut well, it's the same with php4 and php5. and php5 has been released in 2004 ... ~~~ eel PHP 5 was still, as much as I can remember, compatible with PHP 4, wasn't it? Anything that worked in PHP 4 worked in PHP 5. ~~~ jinushaun That's not true. If one used OOP in PHP4, running that code in PHP5 will produce lots of bugs related to the new object model. ------ c4urself does this highlight how hard it is to move from python 2 to 3? sidenote: love the pink select must be html5 boilerplate :) ------ tonetheman funny... everyone needs to run out and convert so that the 3 people using py3k can have new packages... doh ~~~ dagw And the reason only 3 people are using py3k is that there are no packages for it... ~~~ muuh-gnu So what is the point in using py3 then? 
They either should keep using the version with the largest library base, in order to profit from others' momentum, or invest their own resources (money/time/skill) into making a py3-only library attractive enough to pull everybody else against their will. What they're doing right now is not investing anything at all to make py3 more attractive, but simply being loud and vocal and trying to coerce the majority through social pressure to do work to support their version of choice, even if there is no actually noticeable benefit in it for them to do so. ~~~ sigzero Well...eventually 2.7.x is going to be it except for security patches. The future of Python is the 3 branch and I am confident that everyone will port. You have to keep in mind though that P2->P3 was laid out as a 5 year process. We are only half way through it at this point. ------ zeynel1 "Get the source for this GAE app at google code." Google App Engine should be in the list as well. GAE supports Python 2.5. ~~~ lusis Google only supports 2.5 for anything python related. Take a look at the python protobuf client: [http://code.google.com/p/protobuf/issues/detail?id=66&q=...](http://code.google.com/p/protobuf/issues/detail?id=66&q=Python%202.6&colspec=ID%20Type%20Status%20Priority%20FixedIn%20Owner%20Summary) Don't expect google to upgrade anytime soon. ~~~ anamax Python 2.7 is now on the roadmap. <http://code.google.com/appengine/docs/roadmap.html> ~~~ lusis Oh now that'll be nice. Wonder if they'll let me future import ;) I seem to recall some restrictions in how they implemented the VM for AppEngine.
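For anyone who hasn't hit the map()-returns-an-iterator change complained about up-thread, the whole difference fits in a few lines of plain CPython (no third-party packages involved):

    nums = [1, 2, 3]
    doubled = map(lambda n: n * 2, nums)

    print(doubled)                # Python 2: [2, 4, 6]; Python 3: <map object at 0x...>
    print(list(doubled))          # materialize the iterator when an actual list is needed
    print([n * 2 for n in nums])  # the list comprehension spelling behaves the same in both

It is a one-character annoyance per call site, but multiplied across a codebase it is part of the porting cost the thread is arguing about.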
{ "pile_set_name": "HackerNews" }
Advice concerning rucx.com as a startup domain/brand name - rucx What do you think about the domain rucx.com for a startup? Will it lead to too many spelling issues? Does the X imply pornographic content too strongly (which it shouldn't)? Is it memorable enough? ====== mikerhoads X does not imply porn if there is only 1 of them. To me this is a decent name for a dev shop or someone that offers technical B2B service. If you want to do something that would appeal to the average person, I'd look for something a little more "natural languageish".
{ "pile_set_name": "HackerNews" }
Understanding Machine Learning: From Theory to Algorithms - Anon84 https://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/copy.html ====== nafizh The author has also kindly posted a solution manual for all the exercises [0]. Last time I checked (a year ago) this wasn't available publicly. I love books that have solution manuals available, crucial for self-learning. 0. [https://www.cs.huji.ac.il/~shais/UnderstandingMachineLearnin...](https://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/exercises.html)
{ "pile_set_name": "HackerNews" }
AMD's Radeon HD 6970 & 6950 Debut: Enter Cayman - MojoKid http://hothardware.com/Reviews/AMD-Radeon-HD-6970--6950-GPU-Reviews-Enter-Cayman/ ====== frisco _Hold on, hold on a second._ Let's put this in perspective. This _brand new_ ATI flagship card has a whopping 24 cores clocked at 880 MHz and a memory bus capable of 5.5 GB/sec. There's no general purpose API here; everything is shaders. Let's compare this to the latest NVIDIA desktop GPU, the GTX 580, capable of both shaders (graphics-native code) and general purpose programming: _512 cores clocked at 1.5 GHz with 192 GB/sec in memory throughput_. What? And this isn't even NVIDIA's most powerful card. How does ATI get out of bed in the morning? ~~~ sparky There is a rational debate to be had about the relative merits of the two architectures and programming ecosystems; this isn't it. 1) A GPU "core" is loosely defined. Those "512 CUDA cores" are 16 streaming multiprocessors with 32-wide SIMT. 2) Big parts of GF100/110's SMs are double-clocked; most of the rest of the chip runs at 750MHz (for the clock rates in your example). 3) 5.5 Gigabits per second per data pin (faster than GF100/110). 256 data pins (fewer). 176 Gigabytes per second per chip (close). 4) OpenCL. There are and have been others, but OpenCL is probably ATI's bet for general purpose programming. You could say it's inferior to CUDA (and I'd agree), but to act like it doesn't exist cheapens the debate. This does speak to the immense power of marketing though; it's easy to lay down such a thicket of buzzwords that you can spin a product any way you want. ~~~ MojoKid Exactly, it's all how the two companies market their architecture is all, that and branding. Technically speaking, AMD does more in less silicon area but also has to run at higher clock speeds to do so. The power draw is about comparable between similar price/performance ratios from each camp.
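The bandwidth correction above is easy to verify; the arithmetic behind the 176 GB/s figure, using the per-pin rate and bus width quoted in the thread, is just:

    gbits_per_pin = 5.5                     # effective gigabits per second per data pin
    bus_width = 256                         # data pins on the 256-bit memory bus
    print(gbits_per_pin * bus_width / 8)    # 176.0 -> gigabytes per second per chip

which is why the "5.5 GB/sec" number in the opening comment reads as a units slip rather than a real spec.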
{ "pile_set_name": "HackerNews" }
Ask YC: Books/Links on giving better demonstrations. - Hates_ I fudged my through a demonstration this morning and really want to improve the way I do it. I'm not looking for info on presentations so-much but just books on how to give a better run through of software. Are there techniques I can use to do a better job next time a client comes in for a meeting and I'm left holding the ball. ====== mixmax I think that this is not the kind of thing you learn by reading a book - or sitting in front of a computer. It's a people thing. People have a knack for seeing through other people, whether they believe what they are presenting, like what they do etc. These are the things I would recommend if you want to do a great demonstration: \- Love your product - nothing shines through like enthusiasm. Don't be afraid to show it either. \- Talk to people, study their reactions, smile to the waitresses, hook up with girls (or guys if that's your preference), start conversations with strangers on the bus. Like in all other walks of life if you practice interacting with people you will become good at it. \- Study great speakers when they do their thing - Start by looking at some of Steve Jobs keynotes. The guy is amazing. Bill Clinton and Barack Obama are great at it too. \- Be prepared - Do your presentation until your girlfriend starts complaining that you talk in your sleep and she has heard more about you product while you sleep than she has heard about her other girlfriends sexlife. And that's a lot... Steve Jobs recites until everything is absolutely pixel perfect, and every eventuality is covered - and it shows. On their keynotes I heard that they have three independent AV systems. Just in case two of them break down. \- Read Dale carnegies "how to win friends and influence people - it's much better than all the modern crap. It's from the 1930's if my memory serves me correctly. Good luck :-) ------ johnm (A) Presentation Zen both the book (that came out recently) and the blog: <http://www.presentationzen.com/> and Beyond Bullet Points, <http://beyondbulletpoints.com/> the book and online seminars. Yes, both of them are primarily focused on "static" presentations, however the focus on the telling a good story through your demonstration is the point. In presenting Krugle at DEMO06 (and winning a DEMOGod award, woohoo!), the biggest failing we saw in many of the other presentations was the fact that the "story" was a confusing, convoluted mess. (B) Spend a lot more time building and practicing your presentations/demonstrations than you think. Video yourself is great if you can do it but just standing up and actually running through the entire presentation/demo repeatedly goes a long ways. (C) For online/web demos, I always build a completely usable, static presentation (in Keynote/PowerPoint) using lots of e.g., diagrams and screenshots so that the presentation still works for the audience even if the network/server is down/slow. (D) Have a personality -- and bend it towards your audience. I.e., a presentation needs to be engaging/interesting to the audience. I.e., think at least as much about entertaining (in the best possible way) as you do about being e.g., informative. (E) Have fun! Hope this helps, John ------ dkokelley I don't know exactly what your background/experience is, but experience in sales will give you good, everyday practice interacting with other people and demonstrating on the fly. Like mixmax said, watch for Steve Jobs' presentations. 
He has a strange personality but his stage presence is outstanding. How to Win Friends and Influence People (Dale Carnegie) is also really good. I would avoid most of the current "sales/presentation" self-help books out there. Most of them are not worth the paper they're printed on. One last thought, try and find a mentor or a coach and/or join a speaking club. Toastmasters (<http://www.toastmasters.org/>) and the National Speakers Association (<http://www.nsaspeaker.org/>) are good, but they may be overkill if you don't do demonstrations regularly or to larger audiences. ------ wallflower There are many things I can list. But I'll provide just one - get a tripod/camera and videotape yourself presenting. "Speaking Secrets of the Masters: The Personal Techniques Used by 22 of the World's Top Professional Speakers" is the best book I have ever come across on speaking as an art/science because it has many different insights. ------ sanj 1. <http://presentationzen.blogs.com/> 2. Practice, practice, practice. ------ edw519 Run, don't walk to: <http://www.toastmasters.org/> Listen to mixmax - all the books/info in the world are absolutely no substitute for practice. Find the smallest toastmasters chapter close to you (so you get more chances to present), join, and go every week, no matter what. In 6 months you'll wonder why you ever had to post this here. I always used my Toastmasters group to practice my sales presentations. It forced me to be ready by a deadline, and the people there provided excellent feedback. It gave me a chance to try new things, and best of all, it never cost me a sale. (Oops, just noticed that dkokelley suggested the same thing. See?)
{ "pile_set_name": "HackerNews" }
First contact: what if we find not organic life but ET's AI? – Aeon Essays - rbanffy https://aeon.co/essays/first-contact-what-if-we-find-not-organic-life-but-ets-ai?__twitter_impression=true ====== mojomark The article discusses the history of the concept, but completely misses Disney's fair treatment of the topic in Flight of the Navigator (1), ca. 1986 ;) 1. [https://youtu.be/gVebPEYiq2o](https://youtu.be/gVebPEYiq2o)
{ "pile_set_name": "HackerNews" }
Threat Modeling Newsletter - threatmodeler https://www.toreon.com/threat-modeling/keep-up-to-date-with-the-latest-threat-modeling-news-and-insights/ ====== jvandenbroeck More appsec people should start performing threat models, I would definitely recommend the newsletter. You can read old editions on mailchimp: [https://us11.campaign-archive.com/home/?u=b3d749f15634df3ec1...](https://us11.campaign-archive.com/home/?u=b3d749f15634df3ec111f27ee&id=a9ff7b2f72)
{ "pile_set_name": "HackerNews" }
Next-gen Intel notebook chips to exceed 3.0GHz - jmorin007 http://www.appleinsider.com/articles/08/02/18/next_gen_intel_notebook_chips_to_exceed_3_0ghz.html ====== wmf They're at 2.8 GHz now, so I rate this rumor "duh". ------ simianstyle Moore would be spinning in his grave.
{ "pile_set_name": "HackerNews" }
How to undo accidentally clicking flag at hn? - jerven ====== ColinWright If you really do mean "flag" then it should have turned to "unflag" and you can just click it again. If you mean the down-arrow, there is no undo. ~~~ jerven Thank you, unfortunately I lost the story I clicked flag on in the list. ------ detaro the flag link should have turned to "unflag"
{ "pile_set_name": "HackerNews" }
Ask HN: Any good Python App Engine Frameworks? - gcmartinelli What framework would you recommend for Google App Engine (in Python), besides the default webapp?<p>If it has a module for integration with Facebook Connect it would be a plus.<p>Pre-built admin like Django's would be great also (I know I can use Django on GAE, but I believe the pre-built Admin area does not work with GAE). ====== aitoehigie take a look at web2py.com. I used it in developing gowork.com.ng and I had no issues, worked perfectly on localhost and also on app engine. It has no inbuilt module for FB integration but you can try Janrain ~~~ gcmartinelli thanks. I'll check it out
{ "pile_set_name": "HackerNews" }
White South Carolina Police Officer Shoots Fleeing Black Man in Back - dankohn1 http://daringfireball.net/linked/2015/04/07/south-carolina ====== dankohn1 Wait until every cop in America has a video camera running at all times, and lack of video footage implies they have something to hide. ~~~ GFK_of_xmaspast Wait until it still doesn't make a single bit of difference. ~~~ dankohn1 A cop has been charged with murder, while yesterday he was being lauded for a good shoot. Not "a single bit of difference"? ~~~ GFK_of_xmaspast They charged the guy who killed Eric Garner, and look what happened there. ~~~ dankohn1 No, they did not charge the cop that killed Eric Garner. So this case is already different.
{ "pile_set_name": "HackerNews" }
Re-implementing the XMonad window manager core in Coq: PDF - dons http://www.cs.ru.nl/~wouters/Talks/BrouwerExtraction.pdf ====== chwahoo This is a cool effort. Since agda : haskell :: coq : ocaml, I'm curious why agda wasn't used. Does it not not support extraction in the same way? I would liked to have seen the talk itself since there are a few places in the slides where more detail would be nice. For example, I saw no reason why focusLeft on slide 40 wouldn't be total---it's not recursive, so why wouldn't it terminate? (given that reverse is total) I also wonder whether most functions in functional programs can be rewritten so that termination is evident due to structural recursion (like the transformation described on slide 41). ~~~ colanderman Never heard of Agda, thanks for the pointer. From what I read, it seems that Agda proofs are written declaratively, rather than interactively using tactics as in Coq. From experience, Coq's tactic-based proof system is very nice as it "unravels" the proof term in a way that makes it more manageable than balancing lambda terms in one's head. Regarding the question of totality on slide 40, a naïve analysis would say that the deconstruction "let (y : ys) =" could fail (if the list returned from reverse were empty), and thus the function would not be total. Of course this is not the case, since reversing a non-empty list returns a non-empty list, but this requires that reverse be dependently typed. (Slightly off-topic: the Mercury logic programming language supports just the right amount of dependency to allow this specification of reverse, and would deduce totality for this function.) Back on topic, yes, most _practical_ functions can be rewritten to be structurally recursive. The most brain-dead easy way is add a "time-to-live" argument to the recursive function. This argument is a Peano number initially set to a value higher than the expected number of recurrences. (In the example from the slides, one could choose the total number of windows on the screen.) The function is then modified to decrement the time-to-live argument on each recurrence, and to return a dummy value should the time-to-live reach zero. It's then trivial to prove that the function terminates, and usually easy to prove that it does what it's supposed to do before the time-to-live zeros out. ~~~ chwahoo > Regarding the question of totality on slide 40, a naïve analysis would say > that the deconstruction "let (y : ys) =" could fail (if the list returned > from reverse were empty), and thus the function would not be total. Of > course this is not the case, since reversing a non-empty list returns a non- > empty list, but this requires that reverse be dependently typed. You are right, I missed the deconstruction. Thanks!
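The "time-to-live" trick described above is easiest to see on a tiny generic example. The sketch below is Python, used only to show the shape of the transformation (the function is hypothetical, not code from the talk); in Coq the fuel argument would be a nat that loses one constructor per call, which is exactly the structural decrease the termination checker wants:

    def iterate_until_stable(state, step, fuel):
        # 'fuel' bounds the recursion; pick it larger than the expected number of iterations.
        if fuel == 0:
            return state                    # dummy fallback once the budget is spent
        nxt = step(state)
        if nxt == state:
            return state
        return iterate_until_stable(nxt, step, fuel - 1)

    print(iterate_until_stable(37, lambda n: n // 2, fuel=64))   # -> 0

Proving such a function total is then trivial; the remaining (separate) work is showing the fallback branch is never reached when the initial fuel is large enough.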
{ "pile_set_name": "HackerNews" }
All The Metrics! Or How You Too Can Graph Everything. - pmoriarty http://sysadvent.blogspot.com/2011/12/day-23-all-metrics-or-how-you-too-can.html ====== SEJeff Graphite co-maintainer here. We are working through the issues to finish up the 0.9.13 release now, the last of 0.9.x. There are some exciting features coming up in the master branch (future 0.10) as we slowly merge megacarbon into master. We are also slowly upping the test coverage one project at a time (I spent some considerable effort on whisper over the Christmas break) to give us more confidence when merging contributions from our huge user community. So TL;DNR: if you've got graphite questions, feel free to ask here :) ~~~ pmoriarty How would you respond to the criticisms of graphite in a previous graphite- themed HN thread? [1] Some examples: "FWIW I've found influxdb considerably easier to install and manage than graphite (graphite doesn't play well with virtualenv, which makes dependency management horrible, compared to influxdb's single static binary) Also, I can see logging dictionaries being much more efficient and useful than logging single values -- with graphite if you want to track page hits per section of your site (of which you have 10) per user (100) per browser (5), you end up with 5000 individual metrics, and you need to have thought of them in advance. With influxdb you can log {"section": "front page", "user": "bob", "browser": "firefox", "hits": 1} as a single metric and then use an SQL-like query to filter by section / user / browser (or any combination of those) as and when you want to."[2] "I've spent the last week working on upgrading our Graphite system. I ultimately killed it and went with InfluxDB. The ease of installation and cluster creation were clear winners. Additionally the storage options for Influx trump Graphite across the board. I tried writing a custom backend and it went nowhere. The docs and code are terrible. I also noticed that Ceres hasn't had a commit in a year - kind of disheartening."[3] "I haven't had the best experience with Graphite. Namely, our main systems practically never crash but Graphite does fall over every few months. Seriously, Graphite is less reliable than the systems we use it to monitor."[4] "We've had the same problems."[5] "My biggest problem with Graphite was that it managed to grind an expensive large RAID array into the ground with a relatively small number (in my eyes) of metrics. We had the realisation that we'd waste a tremendous amount of hardware or have to cut down drastically on our data collection if we were to roll out Graphite across the board. (And yes, we had crashes too) The reason for the disk grinding was simple: The whisper storage system is ridiculously inefficient as it does tiny writes all over the places, and an excessive number of system calls to boot."[6] [1] - [https://news.ycombinator.com/item?id=8739208](https://news.ycombinator.com/item?id=8739208) [2] - [https://news.ycombinator.com/item?id=8739784](https://news.ycombinator.com/item?id=8739784) [3] - [https://news.ycombinator.com/item?id=8740058](https://news.ycombinator.com/item?id=8740058) [4] - [https://news.ycombinator.com/item?id=8739391](https://news.ycombinator.com/item?id=8739391) [5] - [https://news.ycombinator.com/item?id=8742242](https://news.ycombinator.com/item?id=8742242) [6] - [https://news.ycombinator.com/item?id=8739465](https://news.ycombinator.com/item?id=8739465) ~~~ SEJeff Ooooohhh a hard one. I better get an upvote for this (joking)! 
So WRT installation, it is a PITA to install graphite as it follows an antipattern of hardcoding /opt into setup.cfg, which we'll be changing in the 0.10 release (to making installing via normal python tools like virtualenv and pip work). That being said, there are really 3 main components to graphite: 1\. Whisper, the "improved" RRD. 2\. Carbon, the relay and caching daemon that writes out metrics (as whisper by default) 3\. Graphite, which is simply a webui for reading data from carbon and creating graphs, or returning JSON. The only part that is actually interesting is Graphite, whereas carbon and whisper can be though more of as implementation details for when Chris Davis first wrote graphite. There is a large collection of tools that work with carbon's super simple text based line protocol. Additionally, there are tons of tools that work with the json or png graph data that graphite-web returns. As an ecosystem, we've made it so that swapping out for a different backend is trivial. """ Also, I can see logging dictionaries being much more efficient and useful than logging single values -- with graphite if you want to track page hits per section of your site (of which you have 10) per user (100) per browser (5), you end up with 5000 individual metrics, and you need to have thought of them in advance. With influxdb you can log {"section": "front page", "user": "bob", "browser": "firefox", "hits": 1} as a single metric and then use an SQL-like query to filter by section / user / browser (or any combination of those) as and when you want to."[2] """ This is precisely what statsd is for, if you're not familar with it, you really should look into it. Now regarding "influx vs graphite", I honestly don't see influx as a competitor nor will I likely ever see it as a competitor. Sure you can do some aggregations and apply some functions to your data stored in influx, awesome, but Influx can be used as a backend for graphite[1]. So many people don't realize that writing a pluggable backend for graphite is pretty simple. For super scalable backends, I'm somewhat partial to the Cyanite[2] backend which stores metrics in Cassandra, but other people swear by opentsdb + graphite[3]. One thing I will say is that the carbon relay is not great software. Around 100k metrics per second even tuned on excellent hardware, twisted (python) falls over and it just eats it. I've been meaning to take a shot at rewriting this in golang, but don't have tons and tons of free time to do this on just yet. For a faster relay/aggregator, may I point you to the c version[4] which is super performant. Now for anyone who says that whisper is inefficient because it does tiny writes, perhaps they should understand the software they are attempting to use before deploying it. Whisper is more or less RRD, but allowing backfilling old data. Lots and lots of tiny writes are how the software it replaced worked and how it is meant to work. That being said, there have been several pull requests to batch the writes so that where possible, it has less of an IO penalty. However, you fundamentally need to understand how to troubleshoot a Linux system and tune a system / hardware for the application running on it. I can't stress that enough. Regarding other choices, I think quite highly of a drop in graphite-web compatible rewrite in flask named graphite-api[5] from one of our excellent contributors who works for exoscale. 
Graphite api is interesting in that it is basically just the good parts of graphite-web, namely the functions.py that do all of the transformations. For a dashboard with that, I also suggest you look into grafana[6], which also supports influxdb natively, but again, graphite has a ton more options for transforming the data so they are overlapping, but not competitors. Also please note that unlike Influx or many of the "competitors" of graphite, we're a small handful of developers spread out over 2-3 continents who don't have a ton of free time. We work on Graphite for fun and try to make things better for the community. There are plenty of good tools != to graphite to use, this is a good thing! #monitoringlove Does this answer most of your questions? [1] [https://github.com/vimeo/graphite- influxdb](https://github.com/vimeo/graphite-influxdb) [2] [https://github.com/brutasse/graphite- cyanite](https://github.com/brutasse/graphite-cyanite) and [https://github.com/pyr/cyanite](https://github.com/pyr/cyanite) [3] [https://github.com/mikebryant/graphite-opentsdb- finder](https://github.com/mikebryant/graphite-opentsdb-finder) [4] [https://github.com/grobian/carbon-c- relay](https://github.com/grobian/carbon-c-relay) [5] [https://github.com/brutasse/graphite- api](https://github.com/brutasse/graphite-api) [6] [http://grafana.org/](http://grafana.org/) ~~~ pmoriarty Fantastic answer! Thank you for taking the time to answer in such detail. You've definitely got my upvote, though you deserve far more. ~~~ SEJeff Thanks! I should also point out the Synthesize project, which makes setting up the entire graphite stack trivial. It was written by another one of the Graphite co-maintainers, the famous Jason Dixon aka obsfuscurity: [https://github.com/obfuscurity/synthesize](https://github.com/obfuscurity/synthesize)
{ "pile_set_name": "HackerNews" }
Yahoo acquires Astrid - tamersalama http://blog.astrid.com/blog/2013/05/01/yahoo-acquires-astrid/ ====== general_failure I don't want to take anything away from Astrid. I keeps dismissing my ideas as trivial and already done. When I think I have a new idea, my friends would dismiss it as trivial and already done. Take astrid's todo list or task sharing for example. My initial and only thought would be 'does the world need another todo app', 'another list making app' really? If I take the idea to my friends, they would bombard me with companies who do exactly the same thing. And yet here we are with Astrid being acquired. It seems to have some angel investment/funding even. The founders may not be millionaires but they definitely made more money than I did. Note to self: do something. anything. and get acquired. ------ greenyoda Already discussed here: <https://news.ycombinator.com/item?id=5641288> ------ shenanigoat Good for them. Astrid is best todo/task app I've ever tried...and I've tried many. In fact, testing out todo/task apps is a great way to procrastinate.
{ "pile_set_name": "HackerNews" }
Cooking Patterns - appplemac http://alexey.ch/cooking-patterns ====== batbomb They aren't called patterns, they are called techniques. There's tons of books about all the techniques you can use. When you understand some of the basic techniques, it's apparent what techniques are being used when you read a cook book. The problem is, even if you only got as technical as saying "make a velouté sauce" in half the cookbooks you see, then people would freak out if you didn't tell them how. When you learn the fundamental techniques, you can easily extrapolate them and realize half the recipes you read in your cookbooks are (necessarily) overcomplicated and can be reduced (no pun intended) to a few techniques. Jacque Pépin is an good resource for beginners and intermediate cooks to learn french techniques. You can find techniques online and in his book New Complete Techniques. The CIA book is good, but a big gripe with the CIA book and the FCI/ICC book (Fundamental Techniques of Classic Cooking) is that the portions are pretty huge because they are for professional chefs and caterers. That aside, they are still a good resource for learning that. IIRC the James Peterson "Cooking" book is pretty good at basic techniques. The Joy of Cooking is still one of my favorites, because the recipes are basic (but delicious), and because it's a compendium of recipes, it builds on itself more than nearly every cookbook you can find. So the recipes include in the ingredients do say "2 cups béchamel (Page 400)", and you can backtrack to that recipe and learn. One problem with these books people don't usually like is the basic recipe isn't often fancy enough to be novel. It's kind of up to the cook to understand "Oh Coca-Cola would be a good substitute for the acid and sugar here" or "maple syrup would be better than brown sugar here" or whatever. For that, it's nice to have McGee's "On Food and Cooking", as it goes into details about ingredients you've never really thought about. ~~~ wiz21 > They aren't called patterns, they are called techniques. I totally agree. That's weird to see the n+1-th geek discovering something people have been doing since the dawn of humanity. Damn, we're just talking about basic cooking. I understand many of us didn't learn the basics, but nonetheless, it's basic. Imagine somebody saying he discovered "patterns to ride a bicycle" and explaining how to go from A to B with a regular bike in the most obvious fashion... ~~~ mercer I think it's great that someone is trying to make cooking more approachable by using 'geek jargon'. Who cares that it's slapping a different label on an old thing? ~~~ VLM Confusion when you meet a chef and try to learn something or at least commiserate about cooking and what he calls a béchamel you try to provide ... a functional programing lambda statement based on map and reduce statements applied to lecithin proteins using heat as an anonymous lambda function. Sorta. Fooling around as a mental exercise is fun. Hey look at this, a floating point multiplier in BF! The problem is mis categorizing or mis titleing it as "learning floating point math". Describe Ops activity as a "insights from looking at cooking thru a programming lens" would sell a lot smoother than learn to cook using c++ design patterns. There is a minor area of danger in that there are many ways to hurt yourself cooking but working slowly with common sense should prevent serious accidents (I hope?) 
Perhaps a good analogy to "don't write your own crypto" would be "don't invent your own canning recipes" or "don't invent your own deep fat frying procedures (unless you like burn wards)" ------ mattdotc To those of you who got into cooking later in life, consider your experience if you [ever] have kids. You will enrich the rest of their lives if you involve them in the kitchen and teach them some of what you know. I learned how to cook by helping my mother and father for as long as I can remember. I don't honestly know when I started but it was definitely before 10, and likely around 7 or 8 when I could make meaningful contributions and not just get in the way. It has benefited me greatly and I should really make a point of thanking them more often for it. Sure, I might have groaned when being tasked with preparing my own school lunch, or being asked to help peel potatoes, but through the years I picked up lots of valuable experience without even realizing it. I learned these patterns that the author talks about, even if I didn't have a word for them. Cooking is one of my greatest pleasures and, to be honest, I feel sad that some people see it as only a means to an end. ------ tptacek This is the premise behind Ruhlman's books _Ratio_ and _Twenty_. Both are great. Another interesting prism through which to look at cooking is the format used by the CIA's _New Pro Chef_, which covers technique, still focuses on recipe, but also introduces evaluation criteria for each dish: you're not simply following steps, but also judging the outcome carefully, which forces you to focus on what you're actually doing. And then there are recipe books that use recipes as a vehicle for teaching a broader technique. A good example would be _Sauces_, which is compromised of recipes for sauces, but is a survey of the techniques involved in saucing a dish. ------ gms7777 I really like Mark Bittman's How to Cook Everything Series for this. It has a ton of recipes, but there is also a decent amount of discussion of the concepts behind recipes and multiple ways to alter most recipes, as well as tables that make this sort of concept explicit (e.g. There is a table for soups that has a list of well known soups with a column for the liquid base, protein and vegetable). I rarely pull recipes directly out of this book, but it has completely changed the way I think about cooking. ~~~ jschulenklopper Second that advice for Mark Bittman's book. The main advantage (for me) is that it briefly discusses the basic recipe (and some of the reasoning behind it), and then mentions 10-20 variations w.r.t. ingredient alternatives that are worth a try. ------ xutopia That's funny you say that. I am a computer programmer and I also do a once a year pop up restaurant. I cook on the level of some very good chefs without the formal training so I'm more wasteful and slower but create great dishes just the same. I started thinking of design patterns in cooking when I took a class on stocks, soups and sauces. In traditional French cooking you see the bones, shells and carcass of any animal you cook used to make a base liquid that can then be transformed (refined) further. Take a chicken for example. I'll debone it and use the bones, feet, head and excess skin to make stock with it. I'll either grill it before dipping it in water to extract the flavour or do a "white" stock by dipping in water without browning. To this I'll add aromatic veggies and spices. 
Once you understand how to extracts taste from the carcass you can expand on that and concentrate the flavour by reducing it and then you have a liquid with many good properties. You can then apply the same technique to any mammal, bird, fish or seafood you can think of. Perhaps my favorite "cooking pattern" is the demi-glace. This takes the (usually veal) stock, concentrates flavours further with tomatoes, mushrooms and a standard mirepoix but adds a roux to thicken it. You can then use any tasty liquid you can find to mix with it and you have an instant high quality sauce. I've made demi-glace that I've used for mushroom sauce, bordelaise (red wine), tarragon poultry sauce, porto and cherry sauce, etc... The reality is that a lot of the idea of patterns have been codified by the late Auguste Escoffier [http://en.wikipedia.org/wiki/Auguste_Escoffier](http://en.wikipedia.org/wiki/Auguste_Escoffier). His influence is huge in the cooking world. Kitchens and cooking just wouldn't be the same without him. ~~~ appplemac Sincerely saying, I have never made a demi-glace, although I should have. Will try after studying the process, thanks for sharing! For me, Escoffier’s book is probably comparable to Bjarne Stroustrup’s “The C++ Programming Language”, at least by complexity of the material, so I am somewhat afraid of using it. As far as I know, “Le Guide Culinaire” is used as a source of recipes for the Master Chef exam, for example. Maybe I should try approaching it again with some patience. Cheers! ------ pit Michael Ruhlman's _Ratio_ espouses a similar philosophy: that recipes can be looked at as patterns which you can build on. It's a great idea, especially because it encourages experimentation. ------ jpp I couldn't agree more. So much so that I wrote the O'Reilly book on the topic: [http://www.cookingforgeeks.com](http://www.cookingforgeeks.com) ~~~ ascorbic It's a great book! I use it often. ------ L_Rahman As someone who's recently started cooking as well, thanks for putting into words something that I've been struggling to do myself. Hoping to submit a pull request soon. My go-to pattern is stir-fry: \- Aromatics (ginger, garlic, onions) \- Crispy vegetable (red/green/yellow Peppers, snow Peas) \- Thin cuts of meat \- Absorbent starch (Vermicelli, egg noodles, steamed rice) \- Sauce (Cornstarch, soy sauce, oyster sauce, fish sauce) ------ dipanddough Nice article! I definitely think this is the right approach for absolute cooking novices who also happen to think like engineers. For those who haven't learned to think like engineers, this might seem... boring. I won't say that you're taking the discovery out of the equation, but I think you're distilling this process. Discovery is important as a novice because it inevitably helps build your palate. Patterns, in this particular viewpoint, seem to have a limit with regards to becoming a better cook. Sure, you're going to learn how to cook, but you won't really know why things come out a certain way. Rather than use the analogy of a pattern, I think it would be more advantageous to break meals down into flavor profiles. These are the building blocks AND personas of food. By learning how to make something taste salty, sweet, sour, bitter, spicy, or even French-y, Chinese-y, Mexican-y, Mediterranean-y, etc, etc, you can take very foundational dishes and produce countless variants. Anyways, I think it would be really helpful for you to check out how the French structure their mother sauces. 
They are very foundational and develop into so many different things. Not unlike what you're talking about, but allowing for unlimited creativity, engineering. ------ andy_wrote I'm also a coder who has recently started attacking his kitchen incompetency! (but on the order of months ago, not years ago...) I prefer baking because I can rigorously follow directions in the worst case and get something acceptable, and because it feels a little alchemical and magical. Something I wish recipes would discuss is "why do we do X?" or "what would happen if we did not do X here?" Like, say, the recipe calls for one teaspoon of salt. What if we added zero, or two? I think this is a little different from the pattern recognition discussed in the article. These explanations would help beginners understand what is essential and what can be omitted (if necessary) or substituted. It would also foster creativity in the learning process. I don't want to experiment blindly and fail and have spent lots of time and effort on something inedible, especially given that I'm a novice who needs all the encouragement he can get. But if I understood the reasoning behind a particular step in the recipe, I'd be more willing to mess with it. ~~~ mercer A friend of mine once remarked that I would probably love baking, and that I might want to start with that before I move on to cooking. Why? Because, at least according to him, baking is more like chemistry, where doing it right means doing things _exactly_ right, whereas cooking is generally more improvisational and free-form. For 'programmer types' he figured the former would be easier. I took his advice, and baked a really good cheesecake. I would like to say I was hooked and kept going, but I didn't. But it was the first time I started to see the fun of making food, and I'm sure I'll pick it up again soon. ~~~ andy_wrote I agree with that distinction. Also, making food for many people is much more rewarding than making food for just yourself. (When I do the latter, it's lots of work for 5-10 minutes of payoff, and I often find myself wishing I'd ordered instead.) Many baked goods are easy to toss in a box and bring to the office or to friends' places. You're not going to bring a tureen of polenta to work and tell your colleagues to dig in for a midday snack. ~~~ vonmoltke > Many baked goods are easy to toss in a box and bring to the office or to > friends' places. You're not going to bring a tureen of polenta to work and > tell your colleagues to dig in for a midday snack. A nine-pound pork butt on the other hand... ------ arafalov I am a beginner cook and was moving countries. I had to start the kitchen setup from scratch. So, I got a Thermomix. It's very expensive but was a great match for my use case. Usually, the target audience is mothers with multiple kids, especially when kids have allergies. But to me, it was a gadget that allowed to select temperature, time, and strength of pulverization/cutting/mincing. It also has built-in scales. And it came with recipes that were using absolute quantities for weight and all settings, so no guesswork required. So I could follow the recipe/algorithm to the letter and get perfect result. Then, I could slowly learn _why_ that happened in the repeatable conditions. Then, I could start change things and see what happened. And adapting non- Thermomix recipes based on understanding the temperature/time/cutting axis. So back in September 2014 I was looking up how to fry an egg (seriously! Not, apparently, at the highest heat). 
By now, I've made risottos, soups, breads, sweets, chocolate, smoothies, Indian Chai, some Russian specialties (hrenoder), etc. I am feeling a lot more comfortable in the kitchen. And, since I eat at home most of the time now, Thermomix - nearly - paid for itself already. So, the kitchen equipment is also about patterns, not just the ingredients/steps. Bad news: Thermomix is not available in the USA. Not yet anyway, maybe in a year. ------ jeffyee I saw "Ratio" by Ruhlman mentioned below, but also wanted to mention "The Flavor Bible". Technique is half of the battle, and flavor combination is the other. The Flavor Bible is basically a encyclopedia for food combinations. Look up a food and see what goes well with it. Eg apples go well with cinnamon, pork, and nuts. It's based on interviews with chefs, but I've also built a version myself with recipe ingredient analysis. There's scientific analysis that can be done on flavor compounds in foods as well to find complementary flavors, foodpairing.com is working on this. ~~~ tptacek The Flavor Bible is interesting (and so badly wants to be a web app). There are things it's _amazing_ at; for instance, if you have one or two base ingredients, The Flavor Bible will generate thousands of plausible soup ideas. I don't find that it informs my cooking all that much though; the most important combinations are also very well-known. ------ pjmorris Julia Child built a career out of using recipes to teach reusable techniques, starting here [1] and culminating with [2]. In each of these books, she embeds what a software person would call a pattern in each 'master recipe', illustrates with a handful of variations, and offers suggestions of applications. [1] 'Mastering the Art of French Cooking' [2] 'The Way to Cook' ------ noelwelsh Agree 100% with the approach described in the post. The most useful cookbook I own is The Modern Cooks Handbook[1] which follows the pattern approach. [1]: [http://www.amazon.co.uk/Modern-Cooks-Handbook-Lynda- Brown/dp...](http://www.amazon.co.uk/Modern-Cooks-Handbook-Lynda- Brown/dp/0718138155) ------ zwieback Good to see coders getting excited about cooking but if we now call technique and process "pattern" then I wonder what else is a pattern. Pretty much any creative process would end up being a pattern of some sort so the term ends up being meaningless. ------ Tiktaalik I believe this is actually how professional cooks discuss food. For example the various components of a soup all have their own names. * Stock * Mirepoix (flavour base eg. carrots, onion) * Bouquet Garni (more flavourful herbs eg. basil, pepper) * Protein Replace the various ingredients in these component categories and you get different soups. (I am not a professional cook) ------ v1p1n just putting it out there: this idea is not new. [http://www.huffingtonpost.com/linda-buzzell/pattern- recipes-...](http://www.huffingtonpost.com/linda-buzzell/pattern-recipes-the- cure-_b_811588.html) ------ jkscm But the recipe is the solution, the finely crafted source code, that we have to deploy. Cooking may be more similar to DevOps than software engineering. ~~~ pit Throwing spaghetti code over the wall? ------ pcthrowaway I would say the example with different kinds of fritters is more a demonstration different implementations on an interface than of cooking patterns ------ jorjordandan Nice! Now we just need to be able to deploy lunch with `git push table` ~~~ teh_klev And for that after dinner trick 'git pull tablecloth'. 
~~~ jschulenklopper Might try that once with "git checkout new-trick" ------ venomsnake Cooking is really simple. Let's take meat: Tender - keep the juices inside. Heat to the minimum possible safe temperature on the inside. Tough - nuke it till it's gelatinized. Brown generously because people love that taste. Salt is your friend in the 1-2% range. Just by knowing these four things you will be able to convert any cut of meat into something edible with whatever equipment you have on hand.
{ "pile_set_name": "HackerNews" }
Show HN: Iran's army prevents couchsurfers from hosting foreigners - randy_gilette https://www.couchsurfing.com/people/peace-gulf ====== f14ist Not a 'Show HN' but an (important) news item, should be re-titled
{ "pile_set_name": "HackerNews" }
Association of insulin resistance marker with severity and mortality of Covid-19 - sudoaza https://cardiab.biomedcentral.com/articles/10.1186/s12933-020-01035-2 ====== arkades Please note: The calculation of TyG is ln (fasting blood sugar) x triglycerides/2). Many studies make the error of calculating it as (ln (FBS x TG))/2\. The only online calculator I've found, I think, falls into the latter category, or has worse errors - I didn't go through it too rigorously, but I put in values beyond what human life can sustain and didn't get close to the cut-off for this paper's bottom risk tier. If you look at studies/calculator using the latter calculation, it looks like this study looks at _really, really_ severe diabetics. If you compare with the appropriate calculation, though, they're looking at more run-of-the-mill "not optimally treated and obese" diabetics. Hosseini 2017 did a paper analyzing a number of other TyG papers and calculating results under both calculation methodologies, for context. Please also note that this paper does not state _when_ the results were collected. Insulin resistance/hyperglycemia is a symptom of sepsis - if these labs were drawn on already-severe patients, it would be entirely unclear whether they reflect a cause or an effect (or, as is almost certainly the case, both!). ~~~ dreamcompiler > The calculation of TyG is ln (fasting blood sugar) x triglycerides/2). You have a extra right parenthesis, which makes it ambiguous. Do you mean ln (FBS) x triglycerides/2 Or ln (FBS x triglycerides/2) ? ~~~ tw000001 Doesn't matter since both multiplication and division are...commutative? Don't remember the terminology but (10x2)/4 == 10x(2/4) == 5 ~~~ rat9988 It matters here because he is using ln, and we don't know if we should stop at the first right parenthesis or the second. The word you are looking for is associative. ~~~ CydeWeys Ooof, thanks for that. I didn't realize that was the natural log function because of the space. My brain was parsing it as the word "In" and essentially ignoring it. ~~~ pbhjpbhj There shouldn't be a space IMO, reading it out of context I couldn't see what the issue was either as I too was not parsing the ln as log_e. ln x or ln(x) but not ln (x) ------ subsubzero Covid-19 is a very strange disease, it seems like it is amplified either way (severe vs. non-severe) depending on health. This seems different from the flu as the flu hits everyone very hard, the elderly/sickly especially hard. With covid some people who have it, do not and will not have any symptoms which cannot be said for the flu. I feel like the insulin marker data is an albatross, having high TyG means alot of systems in your body are not doing well, and the virus attacks weakness, it(covid-19) is also found to produce extreme clotting so that is probably why people with diabetes/heart disease and hypertension are all at high risk. ~~~ nradov Asymptomatic infections are about as common with influenza as with SARS-CoV-2. [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4586318/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4586318/) Some influenza strains hit young, healthy patients harder than the elderly by triggering a cytokine storm. This was particularly bad in the 1918 H1N1 pandemic. 
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4711683/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4711683/) ------ ashtonkem I've suspected for a long time that Insulin resistance is going to be one of the next big areas of focus for public health, but I thought that it was going to happen the moment Apple finally figured out blood glucose measurement through the skin. I did _not_ see a pandemic being part of it. ~~~ gumby I love my Apple watch but I am astonished at your faith in their ability to do noninvasive glucose measurement. People have broken their picks on that particular coal face for decades. For that matter DM2 is an area of quite active research (and, like diagnosis, has been quite active for decades) as the financial payoff for any success is enormous. The only downside for transdermal diagnosis is the lack of consumables, which makes it a tough market to enter and to be funded for. That _is_ an area that's good for Apple as they are already selling the platform, so this would be a feature that would add to sales. And one I'd use. ~~~ dreamcompiler O2sat measurement has been noninvasive without consumables for a long time and many phones can do it now. Why do consumables matter? ~~~ gumby Investors don't typically like diagnostics as the margins and volume tend to be quite low. They also tend to be more "vitamin" than "aspirin".* Consumables, at least, give you recurring revenue and even in some cases the opportunity for a razor-and-blades model. You'd be surprised how many med products are designed specifically to require consumables. * (funny analogy to use in a med context) ------ 49para Insulin Resistance is the start of (all?) metabolic disease. So easy to resolve using fasting, intermittent fasting, keto, carnivore etc diets. Unfortunately it slowly builds up over decades and only once disease has progressed do Drs move on to treat the resultant disease (and mainly with cholesterol lowering drugs). Instead of measuring fasting glucose levels (which indicate diabetes), insulin levels should be measured as they are the leading indicator. ~~~ conistonwater > _So easy to resolve using fasting, intermittent fasting, keto, carnivore etc > diets._ Or, you know, you could resolve it by eating a balanced diet too. ~~~ dghughes On my mother's side of the family they were practically vegetarian or normal as it was called back then. Vegetables all week and maybe on Sunday a roast probably fowl of some sort. This was during the 1940s. There were nine children and the parents on a farm in a small rural area. So no extravagant purchases no pop, candy, etc. My mom told me one Christmas her present was an apple and she was excited! The apple came from the tree out back. Sounds good? But all the women (grandmother and aunts) in the family developed diabetes, one male (uncle) too. Type 2 diabetes isn't always due to a poor diet. ~~~ 49para All carbohydrates become sugar, sugar elevates insulin, constantly elevated insulin leads to diabetes. I'm not sure of all the causes of Type 2 diabetes but I would wager that the current epidemic of Type 2 Diabetes is caused by too much sugar. ~~~ conistonwater It is not currently established what the exact relationship is between sugar in your diet, obesity, and diabetes. They are all basically risk factors for the next one, but they definitely don't _cause_ (in the strict scientific meaning of the term) one another---there is nowhere near enough scientific evidence that would support that.
There's enough uncertainty about the whole process that saying "avoid known risk factors" is about the best advice you can get. ------ danans Given that insulin resistance is strongly correlated with obesity [1], I'm surprised that wasn't a factor they controlled for, especially since the respiratory difficulties associated with obesity seem to be a significant risk factor for death with Covid19 cases. 1\. [https://obesitymedicine.org/obesity-and-insulin- resistance/](https://obesitymedicine.org/obesity-and-insulin-resistance/) ------ shakil Lets not confuse correlation with causation. All this study shows is people with insulin resistance are at significant risk of dying from Covid, it doesn't identify what actually kills them. However, if you look at the role Vitamin D plays [1] in suppressing cytokine storms, which is what actually pushes over an organism to the point beyond recovery from Covid, and then understand that Vitamin D deficiency is common [2] in Type 2 diabetes, you can begin to understand the fuller picture. 1\. [https://www.medrxiv.org/content/10.1101/2020.04.08.20058578v...](https://www.medrxiv.org/content/10.1101/2020.04.08.20058578v4) 2\. [https://pubmed.ncbi.nlm.nih.gov/26375925/](https://pubmed.ncbi.nlm.nih.gov/26375925/) ------ hirundo "[triglyceride and glucose] index was closely associated with the severity and morbidity in COVID-19" So perhaps part of the reason why COVID-19 morbidity is lower in Japan/Korea/Taiwan compared to the U.S. is due to lower prevalence of metabolic syndrome. I wonder if that's also true for Europe. ~~~ arkades > It suggests a protective effect of a keto/paleo diet. No, if it can be taken at face value, it suggests a protective effect of a healthy lifestyle. ~~~ floatingatoll If taken at face value, it suggests a protective effect of low insulin resistance as measured by the TyG marker. Everything else is interpretations and chains of logical reasoning. Neither of you are particularly wrong necessarily, but a third option is that someone could be sedentary and eating carbs every day and have low TyG. It's common to state that "activity XYZ will provide action-at-a-distance medical benefit ABC" because stating accurately what's going on takes more words that sound less certain: "Paleo and keto diets may weaken Covid-19 by lowering insulin resistance" "A healthy lifestyle may weaken Covid-19 by lowering insulin resistance" But it's really worth saying it like this, even though few do. (And yes, these aren't 'maybe' enough, but they're an improvement.) ~~~ arkades You replaced the word "suggests" with "may". You're seeing a distinction between these two words that I apparently do not, since I take them both to indicate "this is a possibility, though not the only one, and a far from proven certainty." I would add that, even there, my statement was predicated on making an inference about dietary habits - something I cast doubt on with the opening, "if taken at face value." Something I further opined on in my other post, where I pointed out that the paper may be reversing causality. I feel like you're criticizing a strong inference having been made, that I think pretty clearly wasn't.
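An aside on the calculation question raised at the top of this thread: the difference between the two readings of the TyG expression is easy to see with a couple of made-up numbers. The sketch below is purely illustrative (the glucose and triglyceride inputs are hypothetical, not values from the paper) and deliberately takes no position on which parenthesization is the correct TyG definition; it only shows that the two readings give wildly different numbers, which is why the Hosseini 2017 comparison mentioned above matters when reading cut-offs across studies.

    <?php
    // Hypothetical fasting values in mg/dL, chosen only for illustration.
    $glucose = 100.0;
    $triglycerides = 150.0;

    // Reading A: ln(glucose) multiplied by (triglycerides / 2)
    $readingA = log($glucose) * ($triglycerides / 2);

    // Reading B: ln(glucose * triglycerides), then divided by 2
    $readingB = log($glucose * $triglycerides) / 2;

    printf("Reading A: %.2f\n", $readingA); // about 345.4
    printf("Reading B: %.2f\n", $readingB); // about 4.8

With the same inputs the two readings differ by two orders of magnitude, so a cut-off quoted under one convention cannot be checked against a calculator or study that uses the other.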
{ "pile_set_name": "HackerNews" }
Google veterans head off on their own to work on self-driving trucks - lemiant http://www.theverge.com/2016/5/17/11686912/otto-self-driving-semi-truck-startup ====== cheriot Sending a truck down an interstate for 20 hours has to be the lowest hanging fruit for self driving vehicles. When was the last time an entire category of jobs became obsolete fast enough that laid off employees blamed the technology? When long haul truckers are out of work, how will municipal bus driver unions react? ~~~ wmeredith An actual long-haul trucker commented at length about this on a reddit thread. I can't seem to find it. The TL;DR was that the actual driving of the truck was a no-brainer to automate away, but that despite being a large portion of the job from a time perspective, it was a small portion of the job functionally speaking. The majority of a long haul trucker's job is interacting with legal authorities at weigh stations, dealing with loading crews, performing required truck maintenance on the road, etc... It was an interesting insight. I'm sure all of that could be automated away eventually, but nowhere near as quickly as the actual driving. I agree with other commenters in this thread that I see truck driving going the way of the train engineer quite soon, but that having truly driverless trucks is a bit further out. ~~~ calbear81 The train engineer analogy is a good one, you would imagine that a single "truckgineer" so to speak could be responsible for managing a convoy of let's say 10-15 trucks which would make it much more efficient. In regards to fueling, would you make that more efficient by adding a larger fuel tank or fuel car? ~~~ chefkoch > In regards to fueling, would you make that more efficient by adding a larger > fuel tank or fuel car. Or have someone at the gas station fill up the trucks like it was decades ago. ~~~ awesomerobot or you know, automate it ------ chefkoch >If you need to replace all of your trucks to get the technology on it, the rate of penetration you'll be able to have is pretty low. Trucks last ten years, a million miles. Actually I would find an adoption rate like this very fast. ~~~ danvoell Yea, seems extremely fast. They last 10 years now, with maintenance. But assume you have a moderate repair come up and you start running the calculations (driver labor + maintenance vs. maintenance only) and that 10 year life will shrink to bring in new vehicles. ------ B1FF_PSUVM _" Many of Otto's founders have done well for themselves over the years, and it shows: the company is entirely self-funded right now without any external investment. (In the wake of the reported $1 billion Cruise Automation sale to General Motors, I ask Ron if the plan is to get acquired, but he's insistent that they're focused on bringing a product to market.) Even George Hotz's scrappy upstart Comma.ai has recently taken on venture funding from Andreessen Horowitz."_ My crystal ball tells me that they'll soon find this course of action unwise. ~~~ xiphias I wanted to ask the same thing: did they quit because they didn't get enough money/stocks at the companies that they were at, or were not working on important enough stuff, or just wanted more risk for the sake of getting into more risk? Or do they think that their business model is so much better than Google's? (facilitating drivers to drive even longer without sleeping) The salary of a truck driver is about $50000, I guess he costs at least $80000 for the employer.
Buying a new self-driving truck for $200000 should pay for itself in at most 3 years. Retrofitting may work, but it's quite short term business. ~~~ luma You're forgetting that one driver can only drive 70 hours a week (with rules around breaks in the middle), while presumably the AI could run the truck much closer to 24/7\. In this case, installing a $200k system would allow the company to replace ~3 drivers with no interruptions in service. ~~~ icefox More valuable than replacing three drivers is the fact that because the AI can drive all of the time the total time to deliver goods goes down which means it is more valuable and a higher price can be charged for the delivery. ------ TY Reading this article brought back memories of this 1986 film about trucks going homicidal that Stephen King directed himself: [http://www.imdb.com/title/tt0091499/](http://www.imdb.com/title/tt0091499/) Looked ridiculous back then, does not seem so any more. Imagine a new class of malware that turns our vehicles into weapons for criminal and terrorism purposes. Imagine this conversation in a sitcom about the near future: _Honey, did you update the antivirus on the car? Some script kiddie just destroyed our neighbour's car - thankfully they weren't in it..._ Yeah, I know it won't really work this way - OTA updates and etc, but try to picture this from the layman perspective. ------ ndr The Simpsons predicted this: [https://www.youtube.com/watch?v=xq7CnsZzEEM](https://www.youtube.com/watch?v=xq7CnsZzEEM) ------ fwiwm2c This makes a lot of sense. Given how rule-based the trucking industry is (max 11hrs/day; 30min break after x amount of driving etc., strict speed limits, complete in lane driving with little overtaking etc.), self driving trucks could gain very strong adoption given they could follow the road rules in place predictably and will not have any time based restrictions wrt driving thus saving a lot of cost. ~~~ JoeDaDude Agreed. HN highlighted this article a few weeks ago. It does the numbers and estimates a 400% increase in productivity by having trucks drive (themselves) around the clock. [http://techcrunch.com/2016/04/25/the-driverless-truck-is-com...](http://techcrunch.com/2016/04/25/the-driverless-truck-is-coming-and-its-going-to-automate-millions-of-jobs/) ------ ju-st Company name not to be confused with German e-commerce company Otto group [https://en.wikipedia.org/wiki/Otto_GmbH](https://en.wikipedia.org/wiki/Otto_GmbH) ------ karussell Everyone who is wondering ... here is the website: [http://ot.to](http://ot.to) and the original post: [https://blog.ot.to/introducing-otto-the-startup-rethinking-c...](https://blog.ot.to/introducing-otto-the-startup-rethinking-commercial-trucking-cfdc502ef452#.j3gfbzfl9) ------ the-dude Already being tested in North West Europe : [https://www.youtube.com/watch?v=kANWFKKT1AA](https://www.youtube.com/watch?v=kANWFKKT1AA) ------ monk_e_boy It'd be interesting if they could sell the customer a little 'truck' so when I order something and that thing gets to a local warehouse my little truck could drive off and collect it. It could collect from various places (depending on how I prioritize my items) and then drive back home. I don't have a private parking space and I'm rarely in my house when deliveries are being made, so a mini 'collection truck' would hold my items in a secure way for me. I suppose it could be tiny, maybe only a couple of meters long. ~~~ reustle In theory, this sounds cool, but I feel like in practice it would be pretty wasteful.
Does a town of 500 people really need 500 trucks sitting around doing nothing most of the time? Isn't one of the main promises of autonomous vehicles that we can all share vehicles and we don't need to let them sit parked for days on end? Regarding the safe storage of your items, I think that can be solved with something similar to what Amazon Lockers attempted. ~~~ uola Some buildings have integrated lockers. In some countries you can order things to your nearest convenience store. It's a fairly solved problem. ------ parfamz Otto, really bad name regarding googling ~~~ coredog64 It's the name of the autopilot in "Airplane!". I'm hoping they whimsically include an inflatable entity to let people know the truck is under computer control. ~~~ cjslep What if Otto starts deflating? ------ nabla9 >there's nothing on the books banning self-driving cars as long as a human is in the vehicle (which Otto's product would always still require). This company basically wants to increase efficiency of long haul truckers. They can sleep at the wheel and move 24/7 without mandatory breaks. Turning an 11 hour drive into a 23 hour drive would bring huge savings for the company. If they make it happen, it sells like candy and after a few years all have one. ------ zipfle OT, but can someone explain why this won't instantly be sued into oblivion by the former employers? It seems like some of the Otto team will be using knowledge they gained at their last job on what seems like a competing product. ~~~ spacecowboy_lon Depends, CA is quite liberal in terms of non competes and I suspect they would argue that self driving trucks are sufficiently different from self driving cars. ~~~ exhilaration This is one reason why California is great for tech workers, the unenforceable non-competes. In much of the country you'd have to wait out a year before working in the same industry as your current employer. ~~~ spacecowboy_lon Agreed, the USA having 52 separate sets of employment laws does not make sense - really employment law should be done at federal level - think of the savings from the reduction in red tape. Though I suspect HR and Lawyers might lobby against that as a job protection scheme. ------ lgieron Who goes to jail when the self-driving truck kills someone? ~~~ true_religion Why must someone go to prison? If the vehicle is operating normally and regulation allows it to operate without a driver in command, then there is no crime. ~~~ lgieron I agree that if the regulation allows it, then the self-driving car is a device like any other. I wonder if startups such as Otto will actually wait till such legislation passes in the entire country, which is arguably going to take at least a decade (and they declare they don't want to be acquired, but sell to end customers). ------ codecamper refueling? ~~~ jon-wood All the out of work truck drivers can become pump attendants waiting to fill up their former vehicles as they pull in for more fuel. ~~~ stuxnet79 Funny comment, but it looks like you are fishing for downvotes here buddy. Assuming these trucks are electric, they could set it up like the Tesla recharging stations where you can recharge simply by driving over a mini-trapdoor that houses a fully charged battery pack and a robotic arm which can swap your expended pack for the new one. ~~~ jon-wood What's the point in karma if not to blow it on bad jokes now and then? I'd completely missed the electric trucks angle, but that could work well assuming they can master hot swappable battery packs.
------ amelius > Otto isn't alone in trying to automate big rigs. Daimler and Volvo Trucks > have both demonstrated self-driving systems in recent months, but > Levandowski doesn't sound worried about those efforts. "I think the trucking > folks are doing a great job, and eventually they would probably solve the > problem. But a company that is used to building trucks is not well > structured to solve a technology problem," he says. As opposed to what, an advertisement company? ~~~ wrsh07 As opposed to a company with top of the line machine learning systems. ~~~ vonmoltke A company with top of the line machine learning systems designed to improve advertising results is not better positioned to build an autonomous vehicle than a company with top of the line non-autonomous vehicles. ~~~ amelius Indeed. It has been said many times before: machine learning is all about the data, not about the algorithms.
{ "pile_set_name": "HackerNews" }
Introducing Syngr, my attempt at a Standard Library for PHP - niteshade https://github.com/hassankhan/Syngr ====== MattBearman I once started building a PHP library that was pretty much identical to this, I was really excited by the prospect of being able to write beautiful, object oriented, method chained code. Eventually I was overwhelmed by the sheer scope of what needed to be done and abandoned it. These days I work (almost) exclusively in Ruby, and life is good. I'm not trying to dump on this project, quite the opposite, I hope they achieve what I was never able to. The point I'm trying to make is that I could only defend PHP for so long before I had to admit that it's a lost cause, and other languages are simply a better fit for web development. Stuff like this should already be part of PHP, but it's not, and I'd bet good money that it never will be. But hey, we got namespaces and late static binding, things I've used maybe once each. ~~~ niteshade Totally agree with you, and just having done the String class, I am appalled at how many different kinds of return values you can get from methods. If that's not bad enough, then PHP will throw a curveball at you with its 'this method may return falsy values'. That's the kind of thinking I've avoided here, trying to explicitly return a boolean value, or results. ~~~ MattBearman Indeed, PHP desperately needs some standardisation. At least with your string class there'll be no more needle/haystack confusion :) If I find a bit of free time I'll see if I can contribute anything to this, as while I've all but abandoned PHP, I'd still like to help bring a library like this into existence :) ~~~ niteshade Your help would be highly appreciated - there's loads to cover yet, and God knows what other horrors PHP has in store. As a sidenote, I read somewhere recently that there is some method to the needle/haystack madness. Basically, for strings, it's needle/haystack, and for arrays it's haystack/needle. Thought I'd share that. ------ sandfox I know microbenchmarks are generally daft and horribly misapplied, but it might be worth showing/measuring the difference between using built-in procedural methods and your OO ones on some example use cases to stop people speculating and just to make sure you aren't majorly shooting yourself in the foot performance-wise for the benefit of a nice API. All that aside, kudos! ~~~ niteshade Funny you mention that, just started working on something now. It's slow, I'll tell you that much. Noticeably slow? Not so sure. ~~~ rorrr2 Show us the numbers. I'd expect an order of magnitude slower for most of string and number operations. ~~~ niteshade Someone on Reddit did the honours: [http://www.reddit.com/r/PHP/comments/1my5cv/introducing_syng...](http://www.reddit.com/r/PHP/comments/1my5cv/introducing_syngr_my_attempt_at_an_stl_for_php/) ~~~ rorrr2 So your code is 45.7 times slower. HAHAHAHA. ------ beberlei some feedback from me: 1\. the value types are not immutable, which will make it very hard to work with them on a larger scale. You should create new instances instead of modifying the original ones, keeping the originals intact. 2\. The object hierarchy is borked. You are exposing way too much information. Instead of generically allowing values and options to be saved in the Object, you should specialize the types much more and hide the actual value. 3\. You should convert the value when creating the type, using (int) for example, or throw an exception if not convertible.
The current converts the values at many places, which increases the complexity of the code unnecessarily. ~~~ niteshade 1\. I had thought about that, but then I thought I'd go sort of like the Javascript route where everything can be modified (well, mostly, anyway) 2\. Specialise the types more how? Why would I want to hide the actual value - although I suppose I'd want to make sure that no external object can modify its value directly. 3\. Like in the constructor? What about for Number, where you can pass it an int, double or float? ~~~ MattBearman If you're interested, when I was working on a lib like this, each method had two versions, to* and as*, e.g.: $string->to_upper_case() # change the original $string->as_upper_case() # keeps original, and returns a new upper case string ------ cabalamat AIUI, if I do: $x = new Number(-3.2); $y = $x->absolute(); Then not only is $y equal to 3.2, $x is now 3.2 as well. Is this correct? If it is, I think it is non-intuitive. ~~~ niteshade That is correct. What would you expect it to do? ~~~ cabalamat I would expect $x to remain the same. What's wrong with: $x = -3.2; $y = abs($x); ------ ivanhoe It looks like you don't do any checks if the object is really initialized with the value of the given type? However, if you did, then one could use this for a quick & elegant input validation and that would be extremely cool. It should check the value in the constructor and throw an exception if the type is wrong. And then you will also need more granular types like Integer and Float to make this more useful for everyday cases. ------ cstrat This came to mind when I saw what you've created - [http://xkcd.com/927/](http://xkcd.com/927/) Although I do believe that there is value in someone forking PHP and standardising everything in this fashion... just throwing out the legacy support entirely. ------ gh0zt I like the idea but I don't like the actual implementation. For example: Number::tan takes an array of flags as an argument but only one flag is ever used to determine which kind of tangent method is eventually executed. For me as a user this not only complicates the usage, it is also potentially (micro-optimization-wise) slower because of the necessary condition check. So instead of $number = new Number(4.2); $number->tan(array(Number::TRIG_ARC)) why not just implement it as a separate method? $number->atan(); ~~~ niteshade Well, PHP has four 'tan' functions, and in all fairness, I might just remove those functions altogether and leave the basic ones. I do agree that it's longer to type now, but at the expense of remembering WTF atan() does. ~~~ gh0zt Well, if you need the atan function you probably know what it should do and apart from that why does the Number::tan method take an array of arguments when only one is ever used? So at least it should be usable like tan(Number::TRIG_ARC). ------ eridal Hey, nice project. I had started something very similar some time ago but I realized that the oop approach doesn't work quite well as you can't force others to use your classes.
Then I decided to go functional style, where you receive "kind of" unexpected input but produce predictable output; the result is a very simple and small API that plays nice with others. Take a look at the code at [https://github.com/eridal/prelude](https://github.com/eridal/prelude) I'd really like to join forces :) ------ VMG STL stands for Standard Template Library -- I think you just mean "Standard Library" ~~~ agumonkey The right acronym would have been SPL ------ rnts08 This is great, too bad I left PHP for Python a year or so ago. I'll keep an eye on this though! ------ agumonkey Arrays could benefit from a little wrapper ~~~ niteshade Haha, I just made a commit where I'm going to start adding that. ------ dancecodes not bad, but where is getContent()? how about boxed objects? ~~~ niteshade getContent() is a magic method. How would I go about adding boxed objects in PHP? ~~~ dancecodes Hint: just use a container for any php term - but this can unify access I hope. ------ rorrr2 Why would you do this for numbers? How is $number = new Number(6.9); echo $number->ceiling() // 7 ->max(array(5, 9, 49.1)) // 49.1 ->floor() // 49 ->sqrt() // Value ->value(); // Get raw value rather than string better than writing it in actual functions? sqrt( floor( max( 5, 9, 49.1, ceil(6.9) ) ) )
{ "pile_set_name": "HackerNews" }
How do I find an entry-level software position? - homie Recent CS grad here.<p>Is it me or have entry level positions almost completely dried up in the software industry? Almost everything that I find requires 2+ years of experience. I apply anyway, but needless to say I&#x27;m not having any luck finding a job.<p>Am I approaching the job hunt the wrong way or something? I&#x27;m beginning to worry that I&#x27;ll never find any sort of desirable employment simply because there are hardly any reasonable positions for me to pursue.<p>P.S. I&#x27;m looking for jobs in and around Chicago (this may be contributing to my lack of luck finding a job, but I don&#x27;t think that&#x27;s the case). ====== segmondy I just hired 3 PAID interns that can turn full time, and they have 0 experience in the industry. These positions are out there, go knock on as many doors as you can. I've also hired many entry level developers. What do I look for? I look for "can you code" as an entry level developer. I don't expect you to understand design patterns and all that crap. Can you hack your way around and get the damn code running? Great, you are better than 50% of the candidates. Have you taught yourself any new tech recently? Great! Do you know more than one language? Great! Have you finished any online course? coursera, udacity, udemy, whatever, just something or a book? Awesome. Can you show me some of your shitty code for a silly side project? Awesome. Are you passionate and willing to learn? Great! Do you know other things that you need not know such as Unix, DB, RESTful API, git, etc? You already met everything I need in tech. The only other thing is to at least pretend to be a decent person during the interview be nice, polite, respectful, punctual, clean. Go knock on doors. ~~~ tudelo Not to be that guy, but emphasizing that it's paid? Is that really necessary? What are the chances that it wouldn't be... ~~~ camhenlin Well there are lots of unpaid internships out there, and the OP is looking for a paying job, so I think it was worth mentioning that the internships were paid ~~~ tudelo I guess it depends where you live. I have never heard of or known anyone doing an unpaid internship in real life. I have only heard of it in whispers on the internet. ------ 1ba9115454 If you're applying for jobs requiring 2 years experience and you don't have that on your CV then probably you don't make it past HR. The hiring manager never sees your CV. Also, most jobs ask for a particular programaming languages and usually an architecture. i.e. Ruby on Rails. If your CV isn't targetting the skills asked for in the job you won't get past HR. CV --> HR --> Hiring Manager --> Interview --> Hiring Manager --> 2nd Interview --> Offer You might be stuck at the 1st stage. When I was starting out I got on a free course that helped with my CV and got me an unpaid intern poisition. I worked for free for around 9 months, then I progressed rapidly after that. ------ protonimitate Unfortunately it's largely a numbers game. If possible, be willing to relocate for work. Most entry level seekers are applying to everything they can, all over the place. Work any and all connections you have. Anything you can do to get past the HR filter will help 10x your chances. If you are pinched for money, you can always look for temp to hire work. It's unstable and the pay is usually crap, but it will get you in the door at one or multiple places and will get you to broaden your network. Keep at it. 
It's discouraging and terrible, and the process is largely broken, but its entirely possible. ------ meric You can get some working experience doing freelance gigs. You could probably find better ones than upwork.com, but otherwise, do some jobs there, do bigger and bigger ones, until you've got a couple of 1-month long gigs and some good reviews. Then go after the 1-2 year work experience ones. The lower end you go, the less onerous the requirements are, and it's possible you can build up your work experience that way. ------ inertiatic I think that they're just not as advertised, but they're there. I'd consider applying for any non-senior position in your situation (and in fact I did when out of university). Are you sure your CV is on point? Without much working experience you get to use up the space to show that you're actually "passionate" about your craft by stating anything remotely interesting you've done. ------ saluki Keep learning while you are looking. I would recommend learning Rails or Laravel developing web applications. Those jobs are pretty interesting and they are out there. Usually when you find jobs it's through someone you know or meet so go to meet ups, contribute to an open source community. Pick a side project to build so you have some samples of your work. Landing your first job takes work, keep at it. Good luck. ------ meatbundragon Highly recommend Glassdoor. Also, I work with people who attended coding bootcamps and then used the connections and new skills (front-end development) to join an established startup. ------ akulbe In the meantime, find projects on GitHub that are written in languages you know and fix bugs. That may help give you cred, where you have a dearth of it, otherwise.
{ "pile_set_name": "HackerNews" }
Smaller is Faster - yusufaytas http://www.yusufaytas.com/smaller-is-faster/ ====== speedplane Smaller is faster is nothing new to software developers. Any embedded software engineer that has spent hours to avoid loading a few extra kb of code into RAM on a hardware limited system can tell you that.
{ "pile_set_name": "HackerNews" }
Two Congresswomen Not Allowed to Speak on House Floor in Michigan - clbrook http://www.npr.org/blogs/thetwo-way/2012/06/14/155059849/michigan-state-rep-barred-from-speaking-after-vagina-comments?ft=1&f=1001 ====== manglav being banned for mentioning "vagina" and "vasectomy" during a bill about abortions...how else would you discuss the topic?!
{ "pile_set_name": "HackerNews" }
Are you ready for America's data protection laws? - elorant https://venturebeat.com/2019/10/12/are-you-ready-for-americas-data-protection-laws/ ====== alexfromapex The government should be providing a free guide for how to be compliant. It’s stupid that we have all of these regulations and the layman won’t know how to follow them. ~~~ m-p-3 Considering how slow the government and legal system are to adapt to new technologies compared to the private sector, I do not think this is a good idea.
{ "pile_set_name": "HackerNews" }
How to beat the competition (Steve Pavlina) - tomjen http://www.stevepavlina.com/blog/2008/02/how-to-beat-the-competition/ ====== jgrahamc I'm always mystified when Steve Pavlina posts get voted up onto the front page. He does a great job of self-promotion, but I have a hard time reading his stuff because he goes on and on, and he has that whole 11:11 thing on his web page. If you are not familiar with 11:11 then read Uri Geller's web site: <http://www.uri-geller.com/articles/11.htm> Benford's Law would tell us to expect to see numbers beginning with 1 a lot and hence it's no surprise that the number 11 seems to turn up a lot. Ooh. Cue the spooky music. That's just another example of my law: "To idiots, any sufficiently simple explanation is indistinguishable from magic" ~~~ Xichekolas While I generally agree that Steve Pavlina has jumped the shark (when he started pitching that self-affirmation guy (whatever his name was)), he does have some good articles from back in the day. His 10 reasons you should never get a job is one of my favorite motivator articles: [http://www.stevepavlina.com/blog/2006/07/10-reasons-you- shou...](http://www.stevepavlina.com/blog/2006/07/10-reasons-you-should-never- get-a-job/) Like you say, if we can learn anything from Steve, it's how to promote ourselves. The guy is either really good at it (if he makes what he claims to make) or really good at faking it. (Is there a difference on the internet?)
{ "pile_set_name": "HackerNews" }
Question about Docker and supported ISAs - lch_ian I am new to computer programming and computer architecture, and have a stupid question on the relationship between Docker and underlying hardware instruction set architecture (ISA). My question is: if Docker (or Linux Container in general) is built on top of OS of Linux, and also if Linux runs on various ISAs (e.g. x86, PowerPC, MIPS, ARM, etc.), why Docker cannot run on those ISAs out-of-box? By reading the post (https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=6709517), my impression is Docker may be able to run on different ISAs in the future, but so far only supports x86_64 and ARM. I wonder why is that, why cannot Docker run on Linux unmodified which runs on many ISAs? I guess my real question is what is the relationship between the layers (hardware &lt;-&gt; OS &lt;-&gt; Docker). Thanks for your comments! ====== wmf Good question. In theory, Docker should be able to be compiled for all ISAs, but it's written in Go and Go does not have compilers for many ISAs. (gccgo is more portable but it's incompatible enough with real Go that it's a net negative IMO.) ~~~ lch_ian Thanks a lot for your answer.
{ "pile_set_name": "HackerNews" }
Amazing Stat: California Uses More Gas than China - sah http://blog.wired.com/wiredscience/2008/07/amazing-stat-ca.html ====== bilbo0s There is already a thread on this stat here: <http://news.ycombinator.com/item?id=249867>
{ "pile_set_name": "HackerNews" }
The Internet and the Third Estate - theNJR https://stratechery.com/2019/the-internet-and-the-third-estate/ ====== rayiner > “China is building its own internet focused on very different values, and is > now exporting their vision of the internet to other countries. Until > recently, the internet in almost every country outside China has been > defined by American platforms with strong free expression values. _There’s > no guarantee these values will win out._ A decade ago, almost all of the > major internet platforms were American. Today, six of the top ten are > Chinese.” (Quoting Zuckerberg) I’m pleasantly surprised to hear Zuckerberg articulate this thought. It’s something I’ve thought about a lot over the past decade or so. At one time it seemed investable that key American values like free speech would become universal. We thought our engagement with China and the Middle East would hasten adoption of our culture and values. That future is far from certain now. Americans need to think really hard about what kind of world they want their kids to grow up in. ~~~ vkou > I’m pleasantly surprised to hear Zuckerberg articulate this thought. I'm pleasantly surprised that people are waking up to this problem, but disappointed as hell that what they are waking up to is not the problem - but rather, the disruption of the status quo. To the rest of the world, American values, media, culture, etc, being the dominant shaping force on the internet (And before that, through literature, television, film, etc) was cultural imperialism, that has shifted discourse, starved local culture, and, in short, was Americanizing the world. [1] Americans now feel threatened by China's cultural weight being thrown around in this space. Okay. You don't like China using the same mechanisms that you used in the past, to broadcast and spread your mono-culture. But instead of taking a moment to self-reflect, about whether it is good for the world to have an 800-pound cultural gorilla warping discourse, culture, and media around the world... ... We are upset that _we_ stopped being that gorilla! It's more than a little hypocritical and peevish. [1] Talk to a Canadian sometime, and ask them about Canadian culture, versus American culture - in the media sense. You'll find there to be very little of the former left, despite the government's best efforts to promote, and develop it. ~~~ ivl The difference is that with American cultural values the freedom to criticize them remains. The option exists to say "I don't care for the US's culture or values". I fear the concept of an internet where posts are censored and police arrive at your door for the wrong opinion. ~~~ vkou The option also exists to say "I don't care for China's culture of values", you just have to not worry about selling your media there. You're speaking from an incredible position of privilege - from the point of view of a net exporter of culture. Countries that were net importers of culture never had this concern. Nobody in Germany would care about this sort of thing, for instance, because, I am sorry to say, nobody outside of Germany consumes German culture. Nobody in Germany needs to tailor their speech to not offend China, because nobody in China cares to listen to what they, their films, or their sports teams have to say. This whole thing is uniquely an American problem - and you're discovering what it feels like to have your culture be shaped by the orthodoxies of a foreign set of values. 
It sucks, but that's how the rest of the world has had to operate for a long, long time. ~~~ harryh _nobody outside of Germany consumes German culture_ Everyone in the world that listens to electronic music would like to disagree with you. ~~~ monocasa House and Techno are from Chicago and Detroit respectively. ~~~ harryh Florian Schneider and Ralf Hütter would like to have a word with you. ~~~ jpadkins electronic music != house or techno. kraftwerk early stuff does not sound like house or techno. ------ oflannabhra Thompson has recently been on a tear. Two of his recent articles (this and China Cultural Clash [0]) are some of his best writing yet. Actually, I'd say they are some of his best _thinking_ yet. Whether you agree with him or not, I think it is clear that he is asking the right questions, at a time when most people are not. More than even asking the right questions, I think he is seeing much more clearly, and forwardly, than is common. I'm not sure I agree with all of his framing. For example, I'm not sure I agree that the invention of the printing press directly lead to nation states 450 years later. However, there is definitely more than a nugget of truth in his framing of the future, here. I think hindsight will judge him quite well, in more than just technology. [0] - [https://stratechery.com/2019/the-china-cultural- clash/](https://stratechery.com/2019/the-china-cultural-clash/) ------ NotSammyHagar Another great piece. The world is changing, in the same way that the printing press caused change, people to break away from the control of the church. For many years it has been American views of freedom of the press that had a lot of sway in the world. > And then China is building its own internet focused on very different > values, and is now exporting their vision of the internet to other > countries. ... There’s no guarantee [American notions of free speech] ... > values will win out. A decade ago, almost all of the major internet > platforms were American. Today, six of the top ten are Chinese. > We’re beginning to see this in social media. While our services, like > WhatsApp, are used by protesters and activists everywhere due to strong > encryption and privacy protections, on TikTok, the Chinese app growing > quickly around the world, mentions of these protests are censored, even in > the US. Scary to think that China can force censorship here via TikTok. I'm not a tiktok user, but that's terrifying. Hidden, defacto commercial censorship. (edited to remove the > on the last thing above, that was my comment) ~~~ newfangle There is hidden and overt censorship. I actually find it absurd that we allow our children to use social media software developed by a hostile foreign power. ------ dmvinson Ben Thompson continues to amaze. Pointing out the pointlessness of controlling the impact of tech by limiting Facebook's influence is important. Authoritarian China's biggest advantage is its ability to pick winners and multiply their impact by leveraging how decisive their decision making can be. Where this fails is at finding local maximums. The winners China picks will be the best of what's available right now, which for a while has often been a Chinese clone of what's working in America, adapted to local preferences. America's startup culture and competitiveness is our biggest advantage, and so I wish the US and EU would do more to force tech. companies to fight it out instead of picking winners by regulating the industry and controlling what Facebook and Google can do. 
Instead of just putting cumbersome regulations like the GDPR around user data, also dictate companies above a certain size have to have open APIs and easily exportable/programmatically accessible user data. Obviously this must be balanced by granular controls, but how can upstarts be incumbents when the data moats are so large. The APIs of Google, Facebook, Instagram, Twitter, etc. are pretty abysmal and continue to become worse with no punishment. Yet, that programmatic access to data is probably the only way an upstart could compete outside of a complete paradigm shift. ~~~ aidenn0 > America's startup culture and competitiveness is our biggest advantage It's not an advantage against China though when they can do both of: \- Block US apps from the Chinese market \- Encourage wholesale copying of the best features from US companies If I were to start a FB clone of <insert any social app> in the US, it would fail miserably. China can ban the US version and simultaneously fund a clone for the Chinese market. Combined with the fact that Chinese networks (e.g. TikTok) have full access to the US market, this makes competition very asymmetric. ------ dredmorbius The "fifth estate" usage as referencing bloggers long predates any recent social media apologia. Wikipedia article history shows the reference already by 2009: [https://en.wikipedia.org/w/index.php?title=Fifth_Estate&oldi...](https://en.wikipedia.org/w/index.php?title=Fifth_Estate&oldid=308845816) It cites Stephen D Cooper (2006). _Watching the Watchdog: Bloggers as the Fifth Estate_. Marquette Books. ISBN 0922993475. It's possible that there are yet other estates to be discovered: [https://mastodon.cloud/@dredmorbius/102989723532565277](https://mastodon.cloud/@dredmorbius/102989723532565277) And of course there was the 2013 film of the same title: [https://en.wikipedia.org/wiki/The_Fifth_Estate_(film)](https://en.wikipedia.org/wiki/The_Fifth_Estate_\(film\)) ------ 40acres Generally I think Zuckerberg is correct, Facebook is very powerful no doubt. But that power is vested upon by it's users. There clearly was a lot of misinformation going around during the 2016 election and at some point we have to look back on ourselves and ask: "why are we so gullible"? There is a balance between Facebook moderating it's platform and controlling speech and I think we are near that line. Labeling articles as misleading or doing some fact checks along with making sure that there are no bots and extreme hate speech is near the limit of my expectations of what a platform should do in terms of moderation. ------ Tomte > Europe’s Three Estates Wikipedia says the same in the context of the press as the fourth estate, and I think it's wrong, at least from a German point of view. While it is correct that the medieval Estates (or "Stände") were the nobility, the clergy and the people, nobody has ever called the press the fourth "Stand". The press is the fourth "Gewalt" (or "Power"), and that is a clear reference to the three powers in the state: the legislature, the executive and the legislature. It's interesting how those two trinities mix with the press in different languages and societies. ~~~ Torwald > nobody has ever called the press the fourth "Stand". Because unlike in the UK, the press as a Gewalt in a political system with Gewaltenteilung (division of power), appeared only after the society divided in estates (Ständegesellschaft) ceased to be. It's a timeline issue. 
------ reilly3000 I find it incredibly ironic that Mark Zuckerberg claims that the 5th estate he helped create has no gatekeepers. In fact, Facebook finds itself the largest gatekeeper of speech the world has ever known. How our society deemed it fair and prudent to allow a private corporation to 'moderate' and prioritize billions of communications a day... that I'll never understand. ------ dredmorbius There's a glaring issue with Ben Thompson's essay, in that so far as I'm aware there is _no_ independent US tradition of "estates" independent of either Continental or British European formulation. Rather, there are the three _Constitutional_ branches of government, the legislative, executive, and judicial. One can find informal references to fourth (and occasionally higher) _branches of government_. But not "estates" as such, within the US. Otherwise, you _will_ find frequent usage of "fourth estate" in the traditional European sense, almost always referencing the press. See: [https://en.wikipedia.org/wiki/Fourth_branch_of_government](https://en.wikipedia.org/wiki/Fourth_branch_of_government) [https://duckduckgo.com/?q=%22united+states%22+%22fourth+esta...](https://duckduckgo.com/?q=%22united+states%22+%22fourth+estate%22&ia=about) I'm trying to decide if this is a major or minor flaw with Thompson's essay. Either way, the claim without reference suggests a sloppyness or lack of diligence, which calls into question his larger points. And there are certainly questions to be asked. The claim that the so-called Fifth Estate is free of gatekeepers is specious, as Jon Evans pointed out at TechCrunch (posted yesterday to HN: [https://news.ycombinator.com/item?id=21306086](https://news.ycombinator.com/item?id=21306086)): Facebook isn’t free speech, it’s algorithmic amplification optimized for outrage. Whether the gatekeeping is one of blocking specific types of content, or amplifying others to the point that unwanted messages are completely drowned out really doesn't much matter. Attention, individual or collective, is finite, and whatever means are used to deny it, the end effect is the same: a message is lost. ------ bovermyer That's an excellent read and gives me much to think about. In particular, I need to reassess my understanding of Facebook's role in political discourse. ------ juped >It’s also a framing that is, appropriately enough, uniquely American; in the United States, the first three estates are commonly thought to be the three branches of government: the executive, legislative, and judicial. The author of this article may have misunderstood it this way but I guarantee that this is not commonly thought in the United States. ~~~ dredmorbius In the US, the press is occasionally referred to as the _fourth branch of government_ (after the legislative, executive, and judiciary). Occasionally others are proposed: lobbyists, special interests, the intelligence services. [https://en.wikipedia.org/wiki/Fourth_branch_of_government](https://en.wikipedia.org/wiki/Fourth_branch_of_government) ~~~ juped That's a different term used to mean entirely different things. ~~~ dredmorbius My point is that Ben Thompson seems to have confused the terms "fourth branch" and "fourth estate" in his essay. Which is not among his better ones. The usage of "branches of government", the three Constitutional ones (leg, exec, judicial), and various others generally posited as a fourth, or occasionally higher, branch. 
See: [https://en.wikipedia.org/wiki/Fourth_branch_of_government](https://en.wikipedia.org/wiki/Fourth_branch_of_government) I'm entirely unfamiliar with a notion of _estates_ in the US independent of Continental or British traditions. That seems to be a novel creation, or mis- remembering, of Thompson. DDG finds nothing aside from the European usage under fourth/fifth estate, specific to the US: [https://duckduckgo.com/?q="united+states"+"fourth+estate"&ia...](https://duckduckgo.com/?q="united+states"+"fourth+estate"&ia=about) ~~~ juped Estates are real and they aren't branches of government - more like divisions of society. I agree regarding the confusion in the essay. [https://en.wikipedia.org/wiki/Estates_of_the_realm](https://en.wikipedia.org/wiki/Estates_of_the_realm) ------ ajudson The discussion of the press reminds me of the propaganda model of media [https://en.wikipedia.org/wiki/Propaganda_model](https://en.wikipedia.org/wiki/Propaganda_model) ------ cowpig > The third concern is what has dominated the news cycle as of late: > Facebook’s decision to not fact-check any posts or ads from politicians. > This is largely being framed as aiding President Trump in particular, which > is probably both true and also an unsurprising complaint from the Second > Estate used to having monopoly control over fact-checking. Does he mean the American second estate, i.e. the legislative branch? Or does he mean the Nobility? And what does he mean by Monopoly Control? And what is he trying to imply with the word "unsurprising" here? ~~~ naringas I think he means the European second state, the one after the church-backed monarch state, i.e. nation state. (I also think his use of the nth-state analogy is slightly is confusing). I think he is referring to the media establishment's (i.e. TV news) monopoly over fact checking. and it's unsurprising because of course they would like to keep that same power they had before the internet ~~~ cowpig Isn't the media the fourth estate? I can't find any way to interpret this excerpt such that it makes sense. How does anyone have "monopoly control over fact checking"? And I also can't think of a way to favorably interpret that statement, given that I consider fact-checking to be an integral component of any information dissemination system. Like many of Ben Thompson's articles, I find his ideas here compelling, but I also find myself feeling like he's an industry apologist and it clouds his thinking/makes him myopic. ~~~ dmvinson I think he made a logical leap based around the US political parties and wealthy elites' influence over the fourth estate - sanctioned political debates on CNN, sources of information with perceived authority such as Fox News, etc. While I also think it's explained poorly, there may be a connection here to his point about the largest advertisers (politician and large corporations) who fund the media which is meant to check them. ~~~ cowpig That seems like a lot of hand-waving to me. > this was the context for Edmund Burke’s remarks in 1787 that “There are > Three Estates in Parliament; but, in the Reporters’ Gallery yonder, there > sits a Fourth Estate more important far than they all.” There's this quote early in the article about the Fourth Estate being the most powerful, and yet by the end I'm supposed to assume all these things to accept this claim that "monopoly control" of the Second Estate exists on fact- checking? 
What I actually see is an article where someone started with a conclusion (that some sort of free-market/libertarianism/laissez-faire/whatever brand of "let tech companies do whatever they want" is The Way) and then spent a lot of time thinking about how to reach it.

------ icodestuff
> The third concern is what has dominated the news cycle as of late:
> Facebook’s decision to not fact-check any posts or ads from politicians.
> This is largely being framed as aiding President Trump in particular, which
> is probably both true and also an unsurprising complaint from the Second
> Estate used to having monopoly control over fact-checking.

> The broader issue is that the third concern and first concern are so clearly
> in direct opposition to each other. If Facebook has the potential for
> immense influence on politics, why on earth would anyone want the company
> policing political speech?

Maybe because they're fact-checking lots of other things to promote themselves as a platform you can find facts on? I don't know, I'd think if there was any ostensible non-partisan shared value in a democracy, it'd be a desire for factual information. Spun and biased, sure, but fundamentally factual. (In reality, I don't think this is true anymore, at least from the head of the executive branch and his sycophants, but this shared value is not irreparably broken nationwide.)

If Facebook isn't going to fact-check anything, then that's one thing, but if they're going to fact-check some things, it's far from unreasonable for them to fact-check political ads.
Programming Languages Aren't - BigZaphod http://blog.bigzaphod.org/2008/09/29/programming-languages-arent/ ====== jwilliams He's right that the API is important in the language debate - I'd argue the .NET class library is semantically more significant (e.g. to learn) than the individual .NET languages.
A Translation of Genesis I using word2vec - mdlincoln https://llamasandmystegosaurus.blogspot.com/2017/05/alpha.html ====== qbrass Emphasis on the 'A'
Ask YC: How do you get the word out about your product? - abstractwater
Even great products like the Macintosh had an evangelizing effort behind them, and from talking to people at Startup School it seems like the "TechCrunch" effect just gives you a peak that dies out quickly, with low conversion rate.

Does a compelling product just do the magic by itself, or do you need a marketing plan? What strategies do you have to "bring the good news"?

====== axod
I'd say this is a good recipe (Not speaking from a lot of experience, but the theory seems sound to me :/ ):

1. Make it so good people want to tell others about it. It can't be an "hmm, yeah I guess that might be useful later. I'll bookmark it". It has to be a "Woahhhhh I've been looking for this exact thing for years. So have my friends I bet. I need to use this now, and tell my friends what I found". (A hard ask, but it's nice to aim for that 'wow' factor).

2. Make the base product free, and politely refuse donations. Ask people to simply tell their friends about it. I think people kinda feel guilty about having a cool product for free, so they become evangelists for it, and help you sell it.

Having said that, it's good to get something to start off. Look at websites/domains that are related, and have been neglected... Send emails, make an offer, buy the domain, get traffic. It's not a surefire thing - some people still believe domain names to be worth millions whatever they are. But you can pick up some real gems. More likely to be sustainable traffic, unlike tc/reddit/etc mentions.

~~~ shiranaihito
> "Woahhhhh I've been looking for this exact thing for years. So have my
> friends I bet. I need to use this now, and tell my friends what I found"

- That sounds more like Facebook! Are you sure that the "hmm, that might be useful" reaction is not good enough?

~~~ axod
Just speaking from my own experience. If I say to myself "Yeah that might be useful", I bookmark it, and probably never get around to looking at it again.

------ pg
Make it so good that people tell their friends about it.

------ optimal
I'm sure there are people who can shout louder than everybody else, but this may be one of those cases where the best way to achieve something is to do something else first. [I hate stuff like that. ;)]

If you want to date a girl, you could just approach one after another until one finally says yes (e.g., "My name is George. I'm unemployed and I live with my parents."). Or you could improve yourself by learning how to cook or play music, getting in shape, etc., so you have something to actually bring to a relationship. At that point you'll either be turning them away, or you'll have a life and won't care (which can also be an attractor).

Same approach in the technical world. Do you have a blog? Or do you participate in open source projects, or attend conferences, or anything else to demonstrate how you bring value to others? Another common path is consulting, which basically accomplishes the same thing. Prove your worth to others, and your goal of introducing a new product becomes much easier.

I think sometimes we may care so much about the end product that we have trouble switching from the techie role to the business role. But you have to save some of that energy for marketing, and it may be better to take a more dispassionate view of the finished product (or risk never releasing anything).

------ geuis
Forget about marketing for a second. Is your product compelling?
The first thing I always look for when developing something new is whether or not it gives me an "ohhh" response once I see the initial concept in action. To be specific, as you are working on your project, have you had the moment where it does something unique and you suddenly see that your idea has real potential? Or conversely, have you spent a lot of time working on your idea but you haven't had that moment of realization? If so, then unfortunately your project might be dead on arrival.

I am working on something right now that started as an idea that popped into my head. I took 6 hours writing the core part of the app, then let it run for a couple days. As I started using it myself, I had that "Ohh!" experience. To me, that's what tells me this can be a very compelling experience and why I'm pursuing it further.

So if your product is compelling, then you've already gone 80% of the way there. The last 20% is the hard part. Find the audiences that will use what you're building. Demonstrate it for them. Free access, full-on support to your initial base of users, etc. Devote yourself to answering people's questions. You will find that word of mouth happens very easily when people are excited about what you're doing.

When you have introduced it to your core audience, which will be very small to begin with, then look around at media outlets. These days media outlets range from popular forums about the niche you are filling, to blogs and related mass-media publications. Getting on Techcrunch is cool. It's a good way to get some exposure. Hacker News itself is becoming a good place to launch, because the community is still very small and much more devoted to trying new things. I used to get hit occasionally by Slashdot and Digg, and that traffic was good for big spikes, but I would only see about 1-2 percent overall traffic boosts after that. However, that was an extra 2 percent I didn't have before.

Now, once you are established and have a small but active user base, then you might consider bigger venues for advertising. Buying ads on popular sites like Digg and Techcrunch can help. It's very targeted, which is what you want.

If you want some Google juice, oddly enough doing press releases can help. I started a company in the beginning of 2007 and we were paying about $150-$200 for press releases a few times a month. We didn't see huge amounts of direct web traffic, but it was really good for getting the word out. People in the financial industries (who pay more attention to press releases than the rest of us) were talking about us, and daily I was seeing our name being talked about in the niche circles we were targeting. The co-founder was invited to interview on a few radio shows, which helped after that. (Sadly the company is defunct now. That's how startups go. However that was not because of the advertising and word of mouth.)

Don't spend a ton of money on press releases. If your target audience is general web users, then releasing a press release 4x a month will do bollux for you. If you're targeting businesses, then press releases can be more helpful. But still be careful.

The last thing that I can suggest is to remember your users are "customers" and NOT "consumers". You can always tell a marketing shill versus a true entrepreneur by how they refer to people. Everyone forgets that before the 20th century, consumption meant you had tuberculosis. Then that term got morphed to refer to "the dirty poor masses that buy the rotten horse meat we sell them as hot dogs."
I am a customer, a user, not a consumer. Remember that if you treat your potential audience with respect, _your_ users, then they will respect your product, and that goes much further than any dollar spent on marketing.

~~~ cmm324
I agree 100%. We recently validated that there is a need for our product and it is an amazing thing. So much so that one of our best clients wants to start promoting us to collect a commission on every new signup once we launch.

It's amazing that if you put yourself in the right place, or talk to the right people about your valuable product, then it's like the virus from "28 Days Later": it spreads quickly.

Chris
Co-Founder, Property Stampede LLC

------ Jesin
The purpose of advertising is to get people to look at your product. The product itself is what determines who sticks around as a user and who wanders off to look at something else. I don't seem to have any advice beyond that.

------ shafqat
Make sure you have a great blog that exudes passion and excitement about your product. That's a start. Also, treat every user like how you would treat your mother.

------ iamelgringo
As Marc Andreessen said, make it so good that people can't ignore you.

~~~ mhartl
I was surprised he said that. People can always ignore you. Marc himself emphasizes that the greatest product in the world can still lose if the market isn't good.

Some might reply, "How many great products can you name that people ignored?" I can name very few, but there's massive selection bias at work; the great products that people ignore are _ipso facto_ ones that most people haven't heard of.

------ redorb
Be as natural as possible. Most natural (word of mouth), least natural (spam); most success is probably somewhere in between.
The Curious Case of Linux Containers - coloneltcb
https://medium.com/@sumbry/the-curious-case-of-linux-containers-328e2adc12a2#.ehqvuvenq

====== tinco
We solve all of those use cases in our cluster management system, of which Docker is a key enabling component, except for point 5, though we do have other components that require multiple steps to provision. I bet we could run Hadoop if we needed it.

It's easy to point at Docker and say it's a trivial wrapper around some technologies that have been around for years and call everyone crazy for buying into the 'Docker is going to change the (sysadmin) world' hype. Of course in essence it's pretty trivial; many things are.

Docker, and Docker-like systems, really are changing the sysadmin world. Easily formalised software environments are what this hype is about, and the fact that the Docker community isn't really looking at the stateful service problem right doesn't mean they're not enabled by Docker either.

To me, and to many 'devops' (i.e. developers rolling into sysadmin roles), Docker was eye-opening. We reduced our chef scripts by 80%, and thus our headaches by 95%. It's saving us months of ops work.

If you were a seasoned sysadmin and rolling your eyes at the fact that we were rolling out application level services in the same chef codebase we were setting up our system environments with, then yeah, you already were better than us. But as far as I could tell the way we used to do it really was the status quo, at least in the startup/small IT business world.

~~~ sumbry
Someone once said that the public cloud was never about virtualization but rather automation. I would contend that the container revolution we're going through now isn't really about containers but ultimately distributed system platforms. And if it changes the way many have traditionally approached problems, great!

My main point is that we're really just at the beginning of all of this. I'm not picking on Docker (I'm actually embracing containers) but rather pointing out that many of these next gen solutions are only tackling the basic use cases right now. While I'm excited, there is so much more work left to be done. Implementing many of these solutions still requires a bunch of engineering effort and I'd like to see more turnkey solutions. The developer end is definitely getting easier and more productive, but the operational end is getting more complex and still not solving some use cases everyone has. How many different ways can different companies implement HA MySQL, for example?!

There are tons of other platforms out there that have recognized and solved many of this class of problems for years, but the Cloud and microservices are actually starting to make this worse as adoption skyrockets. Platforms are not really a new thing; we went through this with JVM Platforms a ~decade ago, Heroku style PaaS ~five years ago, and now containers + cloud today. I guess this is what progress looks like :)

~~~ iheartmemcache
Ha, you weren't kidding about the number of HA MySQL drop-in engine replacements out there, damn. At least most people follow the ANSI-SQL standard, which is something.

I'm not being snarky, but what are the use-cases that everyone has? I'd argue authentication and authorization would be one of the few things, and it was already solved with JSR 196 like 15 years ago. It worked, had a well defined spec, and most importantly it had interop with PAM on UNIX, Kerberos, LDAP, raw db's, mixed mode, you name it.
Everyone complained it was too enterprisey and no one used it, so they re-invented the wheel in the early 00s with Rails and Devise and Authlogic and a billion other non-standards. Transactions? Persistence? Java's spec took care of that too, and fairly rigorously. So I'm 100% with you on this being a solved problem.

We're making gradual progress (i.e., the option to develop in a language where the correctness of our code can be formally verified thanks to more readily available, mathematically sound type-systems) but like any society there are trends, and where there are trends you'll have the recurrence of many old things (Interpol ripping off Joy Division) and the invention of a few new things (which are often the composition of two older ideas, or the implementation of a new idea which wasn't computationally feasible previously but now is, or a concept from another industry like signal analysis or three-phase road traffic theory applied to our code-monkey'ing domain).

RE: Overall progress - Microsoft is doing some fantastic things in PowerShell, effectively taking concepts like package management, man pages, and the shell, extracting the best elements from each of those, and implementing them in a consistent manner. No more choosing between systemd or init.d or other holy wars. If you want to do it differently, you effectively have a standardized interface to write your new implementation against within most of the platform. Don't like ASP.NET's templating system? No problem, it's all open source, and you can swap your own in, but it's all modular so nothing will break, and your co-workers can continue working in the traditional Razor templating.

------ tobbyb
What I find curious about all the container discussions and narrative is the strange lack of context. Sure, discuss Docker, but also discuss namespaces, cgroups, overlayfs, aufs, and all the other critical enabling technologies where a lot of major problems with containers exist and will be solved. For instance user namespaces, the fact that cgroups are not namespace aware, and how to integrate overlayfs or aufs so they can be mounted by user namespaces seamlessly. Surely these projects and developers need support and focus. Or else it becomes mere marketing for companies that have the funds or ability to market themselves.

Do we just talk about libvirt without context or understanding of kvm and xen? How would that be useful or meaningful?

An 'immutable container' is nothing but launching a copy of a container enabled by overlay file systems like aufs or overlayfs; a 'stateless' container is a bind mount to the host. Using words like stateless, immutable or idempotent just obscures simple underlying technologies and prevents wider understanding of core Linux technologies that need to be highlighted and supported. How is this a sustainable development model?

Docker chooses to run containers without an init. The big problem here is most if not all apps you want to run in a container are not designed to work in an init-less environment and require daemons, services, logging, cron and, when run beyond a single host, ssh and agents. This adds a boatload of additional complexity for users before you can even deploy your apps, and a lot of effort is expended just managing the basic process of running apps and managing their state in this context. Contrast that with LXC containers, which have a normal init and can manage multiple processes, enabling for instance your VM workloads to move seamlessly to containers without any extra engineering.
Any orchestration, networking, distributed storage you already use will work, obviating the need for reinventing. That's a huge win and a huge use case that makes deployment simple and slices all the complexity, but if you listen to the current container narrative and the folks pushing a monoculture and container standards, it would appear there are no alternatives and running init-less containers is the only 'proper' way to use containers, never mind that the additional complexity may only make sense for specific use cases.

~~~ icebraining
_Docker chooses to run containers without an init._

I don't think Docker chooses one way or the other, and people do run Docker with an init: [http://phusion.github.io/baseimage-docker/](http://phusion.github.io/baseimage-docker/)

~~~ shykes
Correct, Docker doesn't care what you run inside the container. You provide a command to run, and it runs it. That command may be your application server or a traditional init process which in turn will fork multiple children.

Docker _does_ make it easier to follow an "application container" pattern, and that pattern avoids (with good reason) booting an entire traditional init system inside the container. But following that pattern is not mandatory.

Not forcing too many patterns upon users all at once was part of the original Docker philosophy. Unfortunately that aspect was drowned in the cacophony as a few loud and passionate people interpreted Docker through the lens of their own favorite patterns.

In retrospect I wish we had been more assertive in reminding everyone to respect the philosophy behind Docker: that you can share tools with people without forcing everyone to use them in the same way as you.

------ anotherhue
Many of the author's desires are handled in SmartOS ([https://www.joyent.com/blog/triton-docker-and-the-best-of-al...](https://www.joyent.com/blog/triton-docker-and-the-best-of-all-worlds))

~~~ zwischenzug
And OpenShift: [https://enterprise.openshift.com/](https://enterprise.openshift.com/)

~~~ jacques_chester
I'll pile on! Cloud Foundry is another opensource PaaS: [https://www.cloudfoundry.org/learn/features/](https://www.cloudfoundry.org/learn/features/)

(Disclaimer: I work for Pivotal, which donates the majority of engineering effort to Cloud Foundry)

------ jondubois
I don't like containers for the purpose of deployment. All they do is hide complexity from DevOps people - but the complexity is still very much there. Hiding complexity from DevOps people is useful in a PaaS context, but it is an anti-pattern when you consider pretty much every other use case.

Docker encourages developers to keep adding more and more different technologies to a system. If you consider most popular applications today, they are usually made up of hundreds of different technologies - each of which requires their own bootstrap/setup logic.

Maybe if each micro-service was made up of fewer different technologies, deployment wouldn't be such a headache and you wouldn't need Docker to begin with.

------ vezzy-fnord
Author's terminology seems muddled. The first bullet is correct, but only on a technicality, and it fails to point out that strictly there is no "Linux container," only an emergent and weakly cohesive combination of features that as an artifact create a so-called Linux container (as opposed to being a well-defined atomic unit/OS resource).

The third bullet equivocates archive formats with application containers.
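A rough sketch of the point made by tobbyb and vezzy-fnord above: a "Linux container" is not a single kernel object but a combination of features, and an 'immutable' or 'stateless' container largely comes down to an overlay or bind mount. The C program below is a hypothetical illustration only, not how Docker or LXC actually implement things; the paths (/lower, /upper, /work, /merged, /srv/data) are made-up placeholders, it must run as root, and it skips cgroups, pivot_root, networking and real error recovery.

    /* Hypothetical sketch: a "container" assembled from a few kernel
       features. Paths are placeholders; run as root on a modern Linux. */
    #define _GNU_SOURCE
    #include <sched.h>      /* unshare() and the CLONE_NEW* flags */
    #include <sys/mount.h>  /* mount(), MS_BIND, MS_REC, MS_PRIVATE */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* 1. Isolation: give this process its own mount, UTS and IPC
           namespaces. (A new PID namespace would only apply to children
           forked afterwards, so it is left out here.) */
        if (unshare(CLONE_NEWNS | CLONE_NEWUTS | CLONE_NEWIPC) != 0) {
            perror("unshare");
            return 1;
        }

        /* Keep mount events private to this namespace so they do not
           propagate back to the host (needed where / is mounted shared). */
        if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL) != 0) {
            perror("make mounts private");
            return 1;
        }

        /* 2. "Immutable image": overlayfs stacks a writable upper directory
           over a read-only lower one, so the base filesystem is never
           modified. /lower, /upper, /work and /merged must already exist. */
        if (mount("overlay", "/merged", "overlay", 0,
                  "lowerdir=/lower,upperdir=/upper,workdir=/work") != 0) {
            perror("mount overlay");
            return 1;
        }

        /* 3. "Stateless container": persistent data is just a bind mount of
           a host directory into the container's tree. */
        if (mount("/srv/data", "/merged/data", NULL, MS_BIND, NULL) != 0) {
            perror("bind mount");
            return 1;
        }

        /* 4. Run something inside. A real runtime would also pivot_root into
           /merged, join cgroups, drop capabilities, set up networking, etc. */
        execl("/bin/sh", "sh", (char *)NULL);
        perror("execl");
        return 1;
    }

Launched as root, the resulting shell keeps its mounts inside its own namespace while the host stays untouched, which is roughly the foundation that both LXC and Docker build their higher-level tooling on.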
------ contingencies
The author's 'platform' concerns are in fact infrastructure automation concerns generic to any service-oriented architecture and _not_ specific to containers. My take as an early LXC adopter (way pre-Docker) and from-concept builder of two production clusters using custom automation:

1. Service state. The author essentially requests master-slave promotion. Corosync/pacemaker is an excellent place to look for well tested solutions here. The normal approach used there is superior to the author's, i.e. a floating IP used by all clients is switched to an already live replacement master, which is first jabbed into action, and which shares the same backing storage as the node which has failed. (A great solution for shared backing storage on standard hardware is DRBD.)

2. MySQL update. Change management with SOA has to be far more rigid due to complex interdependencies. Typically you version your entire service, exhaustively test it against peer services, and only then deploy the updated service. This implies a workflow process more formalized than the series of manual operations the author hand-waves about. That said, in a typical database scenario it is often possible to upgrade across minor versions simply by updating the binary, since it will read the old on-disk database store fine.

3. Heartbleed. With an appropriately automated approach to service builds, this should be trivial. Mask the affected packages or versions, rebuild all services, test and deploy. This really goes back again to overall workflow formalization and automation. (Solid infrastructure would have cryptographic signoff on stuff before it runs, e.g. production demands a signature from test infrastructure that all tests have passed before executing a new service.)

4. Service dependencies. This is far more complex than people assume. My advice is to use corosync/pacemaker, a well tested approach. (There are others.)

5. Hadoop. Same as any other service.

There are many other problems with SOA-type infrastructure automation; some ideas I wrote up ages ago can be seen at [http://stani.sh/walter/pfcts/](http://stani.sh/walter/pfcts/) as well as the sketch/documentation of the solution I built.

------ gingerlime
> Did you know that even with all the advances with technology the amount of
> time that a housewife spends maintaining a household today is the exact same
> as it was in the 1950s!

Unrelated to the core of the article, I know. But it seemed odd to me. Is this true? Or was it a metaphor?

EDIT: did a bit of googling and found a couple[0][1] of not-extremely-academic references that claim that it's less than half... but couldn't find anything more authoritative.

[0] [http://www.telegraph.co.uk/women/womens-life/9721147/Women-s...](http://www.telegraph.co.uk/women/womens-life/9721147/Women-spend-half-as-much-time-on-housework-today-compared-to-1960s.html)

[1] [https://www.anglianhome.co.uk/50years](https://www.anglianhome.co.uk/50years)

------ cbd1984
Are cgroups meant to be secure against malicious code trying to get out and install rootkits on the underlying system? AFAIK, hardware-level virtualization is. If code running under Xen can get out, that's considered a bug in Xen and it gets fixed, right?

OTOH, chroot is not considered to be secure. It isn't designed to be secure. It is not a security tool. Code in a chroot jail is expected to be able to leave without subverting the security mechanisms which, by and large, don't exist in the first place.

So, are cgroups Xen or chroot?
~~~ kijiki
Containers (which are more than just cgroups) are supposed to be secure. In practice, the attack surface is much bigger for containers than VMs, so there are far more "break out of container" vulnerabilities out there than "break out of VM" vulnerabilities.

Most containers are run in a VM, usually one being run for you by Amazon or Google.

------ powera
"Also be warned — I have no idea what I'm talking about. Take everything in this article with a grain of salt." - pretty much.

I'm not sure what the author thinks "platforms" even are. And the complaints that "linux cgroups aren't the entire platform so why call it containers" seem pedantic.

I don't see why "handling MySQL failover", "resource management", "upgrading OpenSSL" should all be handled by the same piece of software in any case.
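To put the "cgroups vs. Xen vs. chroot" question in concrete terms: cgroups by themselves only meter and limit resources for a group of processes; they hide nothing and are not the security boundary, which is why the answer above points at the rest of the container machinery (namespaces, capabilities, seccomp and so on). The sketch below is a hypothetical illustration, assumes a cgroup-v1 memory controller mounted at /sys/fs/cgroup/memory (the file names differ under cgroup v2, e.g. memory.max), and needs root.

    /* Hypothetical sketch: cgroups as a resource knob, not a sandbox.
       Assumes a cgroup-v1 memory controller at /sys/fs/cgroup/memory. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Write a single value into a cgroup control file. */
    static void write_file(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); exit(1); }
        fputs(value, f);
        fclose(f);
    }

    int main(void)
    {
        char pid[32];

        /* A cgroup is just a directory in the cgroup filesystem. */
        if (mkdir("/sys/fs/cgroup/memory/demo", 0755) != 0)
            perror("mkdir (the group may already exist)");

        /* Cap the group at roughly 100 MB of memory. */
        write_file("/sys/fs/cgroup/memory/demo/memory.limit_in_bytes",
                   "100000000");

        /* Move this process (and its future children) into the group. */
        snprintf(pid, sizeof pid, "%d", (int)getpid());
        write_file("/sys/fs/cgroup/memory/demo/cgroup.procs", pid);

        /* Nothing above hides the host's files, processes or network from
           this process; that separation comes from namespaces (and
           chroot/pivot_root), and hardening comes from the wider sandbox. */
        return 0;
    }

So on its own a cgroup behaves like neither Xen nor chroot: it is closer to a ulimit for a whole tree of processes, and the isolation and escape-resistance questions live elsewhere in the stack.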
New Software Products Tracker - sg_ltv
Curious if anyone knows a website that tracks all new and upcoming software products (e.g. productivity tools from different categories, CRMs, finance platforms, etc. - really can be anything). I see products and companies pop up here and there (e.g. on TechCrunch) but can't find anything in one, organized place. Please let me know.

====== antoineMoPa
You mean like Product Hunt [1]?

[1] [https://www.producthunt.com/](https://www.producthunt.com/)

~~~ sg_ltv
YES, thank you a lot! ProductHunt it is
The Totally Normal Town Where Everyone Worked on Weapons of Mass Destruction - Huhty https://motherboard.vice.com/en_us/article/the-totally-normal-town-where-everyone-worked-on-weapons-of-mass-destruction ====== schoen I thought this was going to be Amarillo, Texas, because of the Pantex plant. There was a book from the early 1980s focusing on the religious beliefs of some of the nuclear weapons workers there [https://www.amazon.com/Blessed-Assurance-Home-Amarillo- Texas...](https://www.amazon.com/Blessed-Assurance-Home-Amarillo- Texas/dp/0815605080) which tried to convey some of the cultural and psychological tensions about working on these weapons, and how people dealt with them.
Pascal Executable Parser - peter_d_sherman https://github.com/stievie/pesp ====== peter_d_sherman Excerpt: "A collection of classes and functions to parse executable files for the Pascal language, namely for Free Pascal and Delphi. Everything is implemented in Pascal, there are no external dependencies."
John McAfee AMA - Garbage
https://www.reddit.com/r/netsec/comments/3hr9f0/i_am_john_mcafee_ama/

====== jmkni
I thought this was good:

> OT, but do anyone know exactly what went wrong with McAfee after the founder
> left the company?

> It grew, it got big, like every company. When I started, there were 4 of us.
> Generating $10M/yr, we could have lived happily for our lives on that. VCs
> came and offered to make it bigger, we had to grow, we didn't have sales,
> marketing, etc. I gave it away, unless you were a government, corporation,
> etc. Once I went public, I had 1000 bosses, investors, FTC, SEC, all my time
> in meetings and interviews. I hired a programmer/day for over a year! I used
> to spend time taking apart viruses, not I was an accountant. Once a company
> gets big, it becomes slow, and cannot survive in its current form.

Nice cautionary tale there about ruining something good by trying to go too big.

~~~ derrickdirge
Going public is often the turning point where a 'Good Company' starts to become a 'Bad Company'. It becomes no longer good enough to make a good product.

~~~ yuhong
I didn't think it was that simple in the case of McAfee, or for that matter in most cases, however.

------ orthoganol
When asked what someone can do to be successful in software:

> That;s a tough one. get out of the box no1. REALLY out of the box. Abandon
> every social norm, esp those closest to you. Then look at the world with
> objective eyes. Look what is the thing to do? Every entrepreneur I know (I
> know Steve Jobs, and he was out of the box) went out of the box. If you
> can't go work for someone else.

Yeah, the older I get the more it becomes clear that there's an inherent conflict between following social norms (developing a cushy social life) and achieving something big. Like with being a startup founder, I think almost all have to go through intense anti-social periods; admittedly I judge founders as being average or worse if they don't seem to have some anti-social or obsessive streak to them.

Kobe is a cool example too, a guy who had a relatively bizarre social life for an NBA player, with a manic obsession about work and improvement. I think he even said "Friends come and go, but banners stay forever." Extreme, but it hits on a deeper truth about what it takes to make it big.

~~~ eli
I mean this in the nicest possible way, but that kinda sounds like a justification for being a workaholic.

~~~ prawn
Often, work pays off. The Lakers used to try and lock Kobe out of the gym to stop him getting shots up at all hours before dawn. But he persisted and found other ways in.

~~~ _RPM
Why would his team not want him to practice?

~~~ prawn
He would go in at 2am, 4am, etc and put up hundreds of shots. They changed the locks. He would wedge open service doors, etc. Risk was burnout or injury from sheer repetition. They might've underestimated the sort of character he is. There are some great stories out there about his training habits, the Lakers scouting him as a rookie pre-draft, etc. Contentious character, but hard to deny his focus.

There is one great story of him calling in a trainer to work with him after midnight. They take a break, the trainer understanding they were done for now and going home. The trainer came back in the morning for team training and made small talk with Kobe, hoping he'd had a good rest after their workout. Kobe was confused. He'd been there all night training and hadn't gone home.
------ nnq
This is a gem of wisdom:

_pair-programming between devs and hackers will allow for instant security feed-back [...] It will be the only possible way to develop ironclad software. Starting with the system architects, there need to be arcdhitectural hackers - all the way through the coding process._

...and reading the other stuff too, he still seems to know what he's talking about!

~~~ butwhy
Yes, he is very good at explaining good practice, such as precise instructions on removing software: [https://www.youtube.com/watch?v=bKgf5PaBzyg](https://www.youtube.com/watch?v=bKgf5PaBzyg)

~~~ Someone1234
The dude is very likeable and I don't care about him doing illegal drugs (I'm no moralist).

That being said, the accusations that he killed his next door neighbor in Belize and then fled the country are disconcerting...

[https://en.wikipedia.org/wiki/John_McAfee#Legal_issues](https://en.wikipedia.org/wiki/John_McAfee#Legal_issues)

~~~ speeder
As someone that lives in a third world country, I think that everything John said about the case (that it was about people wanting bribes) is true.

Also he said people hit him with a baseball bat after putting a helmet on his head. I know of other techniques too (a cop friend told me one of the things his corrupt colleagues do is hit people with soap bars inside towels; it also hurts a lot without leaving bruises if you know what you are doing).

~~~ rainer_muell
I don't think he said this had been done to him. He merely said that's a common 'questioning' technique used in Belize.

~~~ drzaiusapelord
In another reply he claims they did it to him post-arrest.

------ dr_zoidberg
What I liked the most is that they took the time to reply to more than the top-level question -- many AMAs I've seen* just answer the top-level question and move on to something else.

* disclaimer: I don't read so many AMAs, just the ones I find interesting.

~~~ raverbashing
Several AMAs lately have been a PR exercise (like showing up on a talk show).

~~~ jessriedel
I don't think there's a clean line between for-fun celebrity AMAs and PR exercises. They blend smoothly into each other, and the real division is just whether it trips your personal "authenticity" alarm.

~~~ nickysielicki
There's an easy way to tell: just look in the OP and see if they're plugging something. John McAfee is not. Woody Harrelson was.

~~~ jessriedel
Everyone is plugging something all the time. Just because they're plugging their general popularity rather than a particular movie does not make something unassailably authentic.
{ "pile_set_name": "HackerNews" }
Musings on JIT performance - luu https://github.com/burningmime/curves/blob/master/RyuJITPerf.md ====== Joky Context: this shows the improvement of the new JIT implementation for C# (RyuJIT is the codename for Microsoft’s project to improve the performance and functionality of the just-in-time compiler used by .NET).
{ "pile_set_name": "HackerNews" }