id | url | title | author | markdown | downloaded | meta_extracted | parsed | description | filedate | date | image | pagetype | hostname | sitename | tags | categories |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4,666,686 | http://www.meteor.com/blog/2012/10/17/meteor-050-authentication-user-accounts-new-screencast | Page Not Found - Meteor Software | null | Oh oh, lost in the universe?
Take Me Back Home | true | true | true | Oh oh, lost in the universe? | 2024-10-13 00:00:00 | null | website | meteor.com | Meteorjs | null | null |
|
6,325,065 | http://www.kickstarter.com/projects/563644496/fwd-powershotthe-first-sensor-for-hockey-players?ref=home_location | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
29,240,829 | https://planetscale.com/blog/ga | PlanetScale is now generally available | null | It has been an incredible six months since we released our product in beta. From the outset, we focused on creating a platform that would delight developers, built on top of the only open source database proven at hyper scale, Vitess. For years, our technology has allowed companies like Square, GitHub, and YouTube to scale their database and, in turn, their businesses. Our mission is to bring this power to everyone.
We believe databases should be powerful, easy to use, and have impeccable developer experience. This is why we chose to build our serverless platform so that developers can be productive without ever having to worry about scale.
The beginning of this journey was December 1st, 2020. This was when the first line of code was committed on PlanetScale’s cloud database platform. Everything you see and experience on our platform, apart from Vitess, has been created in less than a year. This incredible pace will not stop. We are on a mission to bring in a new era for databases.
So far, the journey has been wild. There are already major sites running on top of the platform with some of the world’s largest brands working with us to solve their database challenges. The response from the community has been almost overwhelming. We have felt your love and excitement and it’s incredibly energizing.
All of this energy and momentum has led us to our last announcement. I am excited to say that we have closed $50M in Series C funding led by Kleiner Perkins. We are proud to welcome Bucky Moore to our board and are delighted to be working with him. The round includes further investments from our existing investors a16z, SignalFire and Insight Partners, alongside new investments from Tom Preston-Werner, Max Mullen, and Jack Altman.
I am so incredibly proud of the PlanetScale team. They bring an unrelenting spirit and love for our mission. This is a group of people who are deeply unsatisfied with the state of databases and show up every day to build the future. To date, we have not even exposed 10% of Vitess’ power and functionality. We have truly only just begun.
Thank you team, and thank you to our community. We are doing this for you: the builders, the optimists, the creators, the scalers.
Sam Lambert
CEO | true | true | true | The PlanetScale database platform is ready for your production workloads. | 2024-10-13 00:00:00 | 2020-12-01 00:00:00 | website | planetscale.com | PlanetScale, Inc. | null | null |
|
14,139,273 | http://www.popsci.com/are-long-airplane-flights-bad-for-your-health | Are long flights safe for your health? | Kate Baggaley | Currently, the world’s longest non-stop commercial flight takes 18 hours and 50 minutes: It connects Singapore’s Changi Airport to New York City’s John F. Kennedy Airport. But is that trek necessary? With AI-assisted flight routes, electric planes, and other tech poised to change air travel, it’s only a matter of time before long-haul flights become more efficient. And more importantly, are long flights like that safe for your health?
There are a few health risks linked to flying (aside from being swarmed by mosquitoes or breathing in dog farts), but tacking on a few more hours probably won’t have much of an impact.
“If it’s one-seventeenth of the trip, it’s not that big of a deal,” says Fanancy Anzalone, an aerospace medicine physician and past president of the Aerospace Medical Association. Still, he says, “There’s a multitude of things that you need to be concerned about when you do go on a long-haul flight.”
## Cramped conditions
Sitting still in a cramped seat for hours isn’t just unpleasant—it can lead to deep vein thrombosis, when blood clots form in the legs because of poor blood flow. The longer you don’t move, the greater your risk. Worst-case scenario, the clot can break free and lodge in the lungs. Fortunately, this is rare. And you can cut down on your risk by getting up and walking around or flexing your legs.
Passengers “really need to think about getting up anywhere between three to four hours and walk around,” Anzalone says. “But by sitting on your chair and just pumping your legs—in essence pressing down on your heels and up with your toes—that little bit can make a big difference in whether somebody is going to have [deep vein thrombosis].”
## Dry air and germs
It also helps to focus on hydration—which means avoiding the very drinks you’re most likely to reach for on a flight. Soft drinks, booze, and coffee are all diuretics, meaning that they make you pee more. “If you are going on a long haul, it’s recommended that you start [hydrating] the day before,” Anzalone says. Keep a water bottle on hand in your carry-on bag.
The super dry air on a plane can make it easier to get dehydrated. It also dries out your mucus membranes, which keeps them from trapping germs. Which is unfortunate, because there is always chance you’ll catch a cold or worse from your fellow passengers. “As each hour goes by, you have a little more exposure, and so therefore the probability of catching a cold on a flight like that grows,” Anzalone says.
So you might be out of luck if you’re seated next to someone who is already ill. However, the idea that the recirculating air on a plane abets disease transmission is a myth. “Airflow and circulation of cabin air is quite sophisticated technically, so there is usually no high risk of getting infected even if you have someone [sick] sitting two rows before,” says Jochen Hinkelbein, a professor of anesthesiology at the University of Cologne in Germany and treasurer for the European Society of Aerospace Medicine.
You should be more concerned about the tray tables, bathrooms, and other germ-gathering surfaces you’re likely to come into contact with, even though they do get wiped down after flights. “The major airlines that are flying long-haul in my experience do extremely well in making sure that the airplane is as clean … as possible,” Anzalone says. But he does recommend traveling with disinfecting wipes or sanitizer. Really, it’s best to touch as little as possible.
## Radiation and air pressure
There’s not much you can do about the cosmic rays, though. Each time a passenger flies, they are exposed to a tiny amount of radiation from space. “The more time you’re on the plane, the more radiation exposure you’ll get,” says Steven Barrett, an aerospace engineer at MIT.
However, the radiation most travelers are exposed to in a given year falls comfortably within the recommended radiation exposure for a member of the public. “The very frequent travelers who are flying on long-haul flights could potentially go above the recommended limits of radiation exposure,” says Barrett, who has calculated how much radiation flyers are exposed to. “But that’s not within the region where you’d have any real health concerns.” It’s unclear how harmful these still-low levels of radiation exposure are, or if they are harmful at all, he says.
Pilots and other flight crewmembers do spend enough time in the air that the Centers for Disease Control and Prevention considers them radiation workers. The agency recommends they try to limit their time on flights that are very long, fly at high altitudes, or fly over the poles.
Another concern is that the air pressure is also lower on a plane than it is at sea level. This doesn’t bother most people. However, the thin air can cause problems for those who are old or have heart conditions or other pre-existing illnesses.
## Overall risk factors
Ultimately, the longer a flight is, the more time you have for something to go wrong. And planes have become larger in recent years, which also increases the probability of in-flight medical emergencies.
“Traveling itself is becoming more and more popular, more and more convenient even for the old ones with … pre-existing diseases,” Hinkelbein says. “So we have an unhappy triad which is the setting is not ideal for unhealthy persons, the persons are older and older and having more pre-existing diseases, and not moving within the aircraft cabin, drinking only a little bit.”
There’s no specific amount of time that is unsafe, and it depends on the individual traveler. “But my feeling is below 12 [or] 14 hours, you can nearly send everyone [on a plane]. If it’s longer, you should be a little careful,” Hinkelbein says.
**[Related: The best travel accessories you can buy]**
Many of the medical issues that do crop up on planes are cardiovascular troubles such as fainting or dizziness. Estimates for how often people have in-flight medical emergencies vary, but it roughly comes out to one in every 604 flights globally.
For these crises, airline staff are equipped with medical kits and equipment such as defibrillators. “Every one of the long-haul flights have a way by radio to connect to physicians that are available around the world to talk to them,” Anzalone says. “I have talked to pilots about medical issues that are on board and how to handle it, do you divert or not divert.”
However, very few airlines have forms to document when passengers do get sick, Hinkelbein says. He’d like to see standardized forms and an international registry where all in-flight medical problems are reported. “Then you can try to figure out what are really the most [frequent] causes of in-flight medical problems.”
For the vast majority of people, though, even the longest flights will pass uneventfully. “The flying public on major airlines is very safe,” Anzalone says.
## Plane emissions
In fact, a plane’s most profound influence probably isn’t on the passengers—it turns out that airplanes cruising miles above the Earth’s surface can cause problems down below.
“The main health impact is probably emissions that come from them and the health impacts for people for the ground,” Barrett says. He and his colleagues have estimated that 16,000 people globally die each year because of air pollution caused by planes. These emissions, which are linked to lung cancer and cardiopulmonary disease, came from planes at cruising height as well as those in the midst of takeoff and landing.
But ultra long-haul flights may actually spew less harmful pollution than routes that include stopovers. “From a human health perspective the direct flight would be better,” Barrett says. “Even though the high-altitude emissions do affect human health on the ground, the low-altitude emissions at airports when the airplanes take off and land and taxi are still more impactful because they’re closer to where people live.”
**[Related: All your burning questions about sustainable aviation fuel, answered]**
One of the more radical ideas to cut down on plane-related pollution is to use electric aircraft, which would release no emissions while flying. Unfortunately, however, the longest flights are unlikely to be good candidates for this technology.
“Electric aircraft might be possible for shorter ranges, maybe up to 1,000 or so miles, but it looks much less likely that electric aircraft could contribute in a meaningful way for ultra long-haul flights,” Barrett says. “That’s where there’s no obvious or no real solution on the horizon.”
*This post has been updated. It was originally published on April 18, 2017.* | true | true | true | There are a few health risks linked to long-haul flights. Here's how to prepare your body for cramped seats, germs, and other elements. | 2024-10-13 00:00:00 | 2017-04-18 00:00:00 | article | popsci.com | Popular Science | null | null |
|
2,221,170 | http://altdevblogaday.org/2011/02/15/a-fun-hack-ipad-to-fpga-data-passing/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
19,048,618 | https://nwn.blogs.com/nwn/2019/01/jesse-damiani-ar-vr-ai.html | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
20,411,658 | https://www.bloombergquint.com/business/banned-chinese-security-cameras-are-almost-impossible-to-remove | Banned Chinese Security Cameras Are Almost Impossible to Remove | Olivia Carville | # Banned Chinese Security Cameras Are Almost Impossible to Remove
## But thousands of the devices are still in place and chances are most won’t be removed before the Aug. 13 deadline.
(Bloomberg) -- U.S. federal agencies have five weeks to rip out Chinese-made surveillance cameras in order to comply with a ban imposed by Congress last year in an effort to thwart the threat of spying from Beijing.
But thousands of the devices are still in place and chances are most won’t be removed before the Aug. 13 deadline. A complex web of supply chain logistics and licensing agreements make it almost impossible to know whether a security camera is actually made in China or contains components that would violate U.S. rules.
The National Defense Authorization Act, or NDAA, which outlines the budget and spending for the Defense Department each year, included an amendment for fiscal 2019 that would ensure federal agencies do not purchase Chinese-made surveillance cameras. The amendment singles out Zhejiang Dahua Technology Co. and Hangzhou Hikvision Digital Technology Co., both of which have raised security concerns with the U.S. government and surveillance industry.
Hikvision is 42% controlled by the Chinese government. Dahua, in 2017, was found by cybersecurity company ReFirm Labs to have cameras with covert back doors that allowed unauthorized people to tap into them and send information to China. Dahua said at the time that it fixed the issue and published a public notice about the vulnerability. The U.S. government is considering imposing further restrictions by banning both companies from purchasing American technology, people familiar with the matter said in May.
“Video surveillance and security equipment sold by Chinese companies exposes the U.S. government to significant vulnerabilities,” said Representative Vicky Hartzler, a Republican from Missouri, who helped draft the amendment. Removing the cameras will “ensure that China cannot create a video surveillance network within federal agencies,” she said at the time.
Dahua declined to comment on the ban. In a company statement, Hikvision said it complies with all applicable laws and regulations and has made efforts to ensure its products are secure. A company spokesman added that the Chinese government is not involved in the day-to-day operations of Hikvision. "The company is independent in business, management, assets, organization and finance from its controlling shareholders," the spokesman said.
Despite the looming deadline to satisfy the NDAA, at least 1,700 Hikvision and Dahua cameras are still operating in places where they’ve been banned, according to San Jose, California-based Forescout Technologies, which has been hired by some federal agencies to determine what systems are running on their networks. The actual number is likely much higher, said Katherine Gronberg, vice president of government affairs at Forescout, because only a small percentage of government offices actually know what cameras they’re operating. The agencies that use software to track devices connected to their networks should be able to comply with the law and remove the cameras in time, Gronberg said. “The real issue is for organizations that don’t have the tools in place to detect the banned devices,” she added.
Several years ago the Department of Homeland Security tried to force all federal agencies to secure their networks by tracking every connected device. As of December, only 35% of required agencies had fully complied with this mandate, according to a 2018 report by the Government Accountability Office. As a result, most U.S. federal agencies still don’t know how many or what type of devices are connected to their networks and are now left trying to identify the cameras manually, one by one.
Those charged with complying with the ban have discovered it’s much more complicated than just switching off all Hikvision or Dahua-labeled cameras. Not only can Chinese cameras come with U.S. labels, but many of the devices, including those made by Hikvision, are likely to contain parts from Huawei Technologies Co., the target of a broad government crackdown and whose chips power about 60% of surveillance cameras.
“There are all kinds of shadowy licensing agreements that prevent us from knowing the true scope of China’s foothold in this market,” said Peter Kusnic, a technology writer at business research firm The Freedonia Group. “I’m not sure it will even be possible to ever fully identify all of these cameras, let alone remove them. The sheer number is insurmountable.”
Video surveillance is big business in the U.S. Sales of video cameras to the government are projected to climb to $705 million in 2021 from $570 million in 2016, according to The Freedonia Group. Hikvision is the world’s largest video-surveillance provider, with cameras installed in U.S. businesses, banks, airports, schools, Army bases and government offices. Its cameras can produce sharp, full-color images in fog and near-total darkness and use artificial intelligence and 3D imaging to power facial recognition systems on a vast scale.
Once they arrive in the country, some of Dahua and Hikvision’s cameras are sent to their U.S.-based warehouses. Others go to equipment manufacturers like Panasonic Corp. or Honeywell International Inc., and are sold under those brands, said John Honovich, founder of video surveillance site IPVM. Then the cameras are bought by intermediaries, such as security firms, which go on to sell them to government agencies and private businesses. The NDAA also covers Dahua and Hikvision’s extensive agreements with original equipment manufacturers, sweeping up any vendor who re-sells the devices or uses the companies’ equipment.
Effectively, two cameras running identical Hikvision firmware could carry completely different labels and packaging. This means it would be nearly impossible to tell if the thousands of video cameras installed across the country are actually re-labelled Chinese devices. A Honeywell spokeswoman said the company couldn’t track these re-labelled products, even if asked. Panasonic didn’t respond to emailed requests for comment.
This convoluted supply chain has left government agencies confused over how to actually obey the law. “We’ve been trying to get our arms around how big the problem is,” says a government worker at the Department of Energy, who asked to remain anonymous because he’s not authorized to speak publicly. “I don’t think we have the full picture on how many of these cameras are really out there,” he said.
The law itself is vague on whether it means agencies must remove the cameras or simply stop renewing existing contracts. A group of government officials and experts will meet next week in Washington to try to parse the legislation. Hikvision has about 50,000 installation companies and integrated partners that are all wondering how broadly the law is likely to be interpreted.
Many have contacted the company, asking how they could be affected, according to a person familiar with the discussions. Some security vendors are already refusing to purchase equipment from Hikvision and Dahua. Shares of both companies have tumbled since March amid speculation of U.S. sanctions. Last month U.S. President Donald Trump said he would allow U.S. companies to resume supplying some of their products to Huawei, if they apply for a license and if there is no threat to national security.
If someone is routinely tapping into cameras to spy on federal agencies, they could easily determine the identities of those who work in government departments and even CIA operatives, said Stephen Bryen, former deputy under-secretary of defense for trade security policy. “This is extremely dangerous,” he said. “It can’t be tolerated and quite frankly every agency should be writing its own directives to make sure the job gets done.”
To contact the editor responsible for this story: Molly Schuetz at mschuetz9@bloomberg.net, Andrew Pollack
©2019 Bloomberg L.P. | true | true | true | But thousands of the devices are still in place and chances are most won’t be removed before the Aug. 13 deadline. | 2024-10-13 00:00:00 | 2019-07-10 00:00:00 | /icons/apple-touch-icon.png | article | ndtvprofit.com | NDTV Profit | null | null |
1,891,654 | http://krugman.blogs.nytimes.com/2010/11/10/b-rating-america/ | B-Rating America | Paul Krugman | OK, actually A+ rating America. Via Calculated Risk, China has created a new, independent rating agency — and it has now sharply downgraded the United States:
Dagong has downgraded the local and foreign currency long term sovereign credit rating of the United States of America (hereinafter referred to as “United States” ) from “AA” to “A+“, which reflects its deteriorating debt repayment capability and drastic decline of the government’s intention of debt repayment.
The serious defects in the United States economic development and management model will lead to the long-term recession of its national economy, fundamentally lowering the national solvency. The new round of quantitative easing monetary policy adopted by the Federal Reserve has brought about an obvious trend of depreciation of the U.S. dollar, and the continuation and deepening of credit crisis in the U.S. Such a move entirely encroaches on the interests of the creditors, indicating the decline of the U.S. government’s intention of debt repayment. Analysis shows that the crisis confronting the U.S. cannot be ultimately resolved through currency depreciation. On the contrary, it is likely that an overall crisis might be triggered by the U.S. government’s policy to continuously depreciate the U.S. dollar against the will of creditors.
Way to build credibility, guys — just in case anyone wondered whether Dagong would be truly independent, or just a tool for Chinese policy …. | true | true | true | Up the Chinese downgrade. | 2024-10-13 00:00:00 | 2010-11-10 00:00:00 | article | nytimes.com | Paul Krugman Blog | null | null |
|
3,467,572 | http://www.waterforward.org/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
752,790 | http://antoniocangiano.com/2009/08/10/how-much-faster-is-ruby-on-linux/ | How much faster is Ruby on Linux? | Programming Zen | Antonio Cangiano | In a previous article I compared the performance of Ruby on Windows, built through Microsoft Visual C++ and GCC. The numbers for the MinGW version were very impressive. So the question now becomes, how does its performance compare to that of Ruby on Linux? To quote one person (Alex) who commented on the aforementioned post:
With the new mingw32 substantial speed improvements, think it makes sense now to also test at least the baseline (MRI) on Mac/Linux on the same battery of tests, so we Windows folks could get a better idea of how far behind are we yet and what the different Windows interpreters speed target shall be.
Any sort of performance improvement for something that is notoriously slow on Windows is more than welcome, but would this be enough to fill the gap between Ruby’s performance on Windows and on Linux? How much faster is Ruby on Linux? Let’s find out.
**Setup**
- As a reminder, the operating systems used were Windows XP SP3 32bit and Ubuntu 9.04 32 bit.
- The Ruby implementations tested were ruby 1.8.6 (2009-03-31 patchlevel 368) [i386-mingw32], ruby 1.9.1p129 (2009-05-12 revision 23412) [i386-mingw32], Ruby 1.8.6 (built from source on Linux) and Ruby 1.9.1 (built from source on Linux as well). The same identical versions of Ruby were used for both operating systems. I’m aware that these are not the latest available versions for Linux, but we are trying to compare apples to apples.
- I used the Ruby Benchmark Suite; the times reported are the best out of five runs, with a timeout of 300 seconds per each iteration.
**Benchmark results**
The following table/image compares the performance of Ruby 1.8.6 on Windows and Linux. A light green background indicates which of the two was faster. The total times exclude tests that raised an error or were not available (N/A) for any of the four implementations, but includes timeouts (they are counted as 300 seconds to provide a lower bound). The ratio column indicates how many times faster Ruby on Linux was:
The second table/image below compares Ruby 1.9.1 on Windows and on Linux, using the same criteria as above.
Note: The totals shown are different from the ones seen in other posts since the subset of benchmarks included in the totals is different.
**Conclusion**
According to the geometric mean of the ratios for these tests, it appears that on average **Ruby 1.8.6 on Linux is about twice as fast as Ruby 1.8.6 on Windows**. Conversely, **Ruby 1.9.1 on Linux is about 70% faster than the Windows version**.
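As a rough illustration of that summary statistic (not part of the original benchmark scripts), here is a minimal Python sketch of how the per-test ratios and their geometric mean could be computed, with timed-out runs capped at 300 seconds as described above; the test names and times are made up.

```python
import math

TIMEOUT = 300.0  # timed-out runs are counted as 300 s to give a lower bound

# Illustrative (made-up) best-of-five times in seconds: {test: (windows, linux)}
times = {
    "bm_app_factorial": (12.4, 5.9),
    "bm_so_matrix": (48.1, 22.7),
    "bm_micro_str_concat": (TIMEOUT, 131.0),  # Windows run timed out
}

ratios = []
for test, (win, linux) in times.items():
    win, linux = min(win, TIMEOUT), min(linux, TIMEOUT)
    ratio = win / linux  # how many times faster Linux was on this test
    ratios.append(ratio)
    print(f"{test}: {ratio:.2f}x")

# The geometric mean of the per-test ratios summarizes the overall speedup
geo_mean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
print(f"Geometric mean of ratios: {geo_mean:.2f}x")
```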
The Windows implementations use GCC 3.4.5 (a four year old compiler) at the moment, while I built the implementations on Ubuntu with GCC 4.3.3 (which is available by default). This helps, at least in part, to justify the performance gap. Luis Lavena, leader of the Windows port, confirmed to me that a switch to GCC 4.4.x is planned for the future. This should significantly increase performance on Windows yet again, and bump Ruby’s speed on Windows a bit closer to the speed that’s obtainable on Linux.
For the time being, switching to Ruby 1.9.1 on Windows will give you a performance that is better than what’s obtained by those who are still using Ruby 1.8.x on Linux. If it’s possible, switch.
Any comments about why Ruby on Linux appears, from these tests, to be 2x more buggy (1.8.6) or 1.5x more buggy (1.9.1) than the Windows version?
Does this mean that if a lower risk of error or pathological computation time (generating timeouts in the test codes) is a priority, then I should prefer Windows over Linux?
@ARJ: You raise an interesting point. Out of a demanding test suite, only 3 tests did significantly better on Windows than on Linux. Two timed out for certain parameters, and one raised an error.
Considering all the performance improvements shown in other tests, I’d say that for most mixed workloads, Ruby on Linux would still perform much faster.
Regarding the lower risk of error, the only difference appears to be related to the stack size. The Windows implementation of Ruby 1.8.6 managed to go through a very deep recursion with no errors. All the other implementations raised a SystemStackError: stack level too deep.
Chances are that you could tweak the Linux version to allocate more memory for the stack, anyway.
First, thanks for your efforts on performance comparisons and your RBS GitHub project!
I’ve just discovered RBS so I’m not familiar with what each test does. That said, I’m wondering how much IO the tests perform especially in light of the recent 24842, 24839, and 24837 posts on ruby-core. For example:
http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-core/24839
FWIW, I’ve been seeing noticeable wall clock differences between 1.9.1 and 1.8.6 on my Windows machines when running rspec. One spec that uses a lot of Kernel.system and Kernel.open takes ~20s on 1.9.1 and ~12s on 1.8.6. There’s also noticeable differences doing simple things like “gem list”.
It’s not clear what the outcome of the ruby-core posts will be, but I’m wondering if a key part of the differences you’re seeing in RBS results are due to an issue with the 1.9.1 MRI implementation of IO on Windows?
Any plans on doing still-too-early 1.9.2 comparisons?
Jon
You probably shouldn’t average the ratios. That makes no mathematical sense at all. Just calculate the average of the ratios at the end.
Steve, the approach taken is simple, but statistically sound. I don’t average the ratios. I calculate the ratio for each test, and then at the end I calculate the geometric mean of these ratios. It’s a valid way of summarizing the outcome of these benchmarks.
MinGW uses either crtdll.dll or msvcrt.dll that ships with windows. crtdll.dll is only available in 32Bit and is fairly old (mine is from 2004) and has problem with threading and is generally considered slow. msvcrt.dll is newer but lacking optimisation as well.
Maybe someone can create a Visual C++ build or an Intel C++ build and test it?
Thanks Antonio for the tests and for your kind citation. It is interesting indeed to see that Windows is becoming a worthy platform at least for the dev/stg phases.
(BTW, love your book and your writing style 🙂
Thanks Alex. I appreciate it. 🙂
I think the good news here is that ruby on windows is *only* 70% slower–I guess this means that 1.9 on windows is faster than 1.8 in Linux 🙂
If you compare jruby times on windows and linux, you see similar speed differences–I think so anyway.
And yes, as noted above, I/O reading tests are far slower in windows as currently the I/O is very slow for some reason in 1.9 + windows.
Thanks for the writeup.
-r
That would be great to have the comparison of the same Ruby version but on Windows host and virtualized Linux guest on the same host, could be helpful for those who develop on Windows machines in making decision to switch to Linux/run virtual machine, etc.
@Antonio: FYI, updated preview2 releases of the RubyInstaller for Windows have recently been released at http://rubyinstaller.org/ in case you’re not already aware.
Ruby 1.8.6 updated to p383 and 1.9.1 updated to p243 in addition to new CHM-based documentation for core and stdlib.
Jon
So the computer language benchmarks game has always been showing Ruby in a most favorable light 😉 | true | true | true | In a previous article I compared the performance of Ruby on Windows, built through Microsoft Visual C++ and GCC. The numbers for the MinGW version were very impressive. So the question now becomes, how does its performance compare to that of Ruby on Linux? To quote one person (Alex) who commented on the aforementioned post: With the new mingw32 substantial speed improvements, think it makes sense now to also test at least the baseline (MRI) on Mac/Linux on the same battery of tests, so we Windows folks could get a better idea of how far behind are we yet and | 2024-10-13 00:00:00 | 2009-08-10 00:00:00 | article | programmingzen.com | Programming Zen | null | null |
|
13,380,122 | http://www.uq.edu.au/news/article/2017/01/bioclay%E2%80%99-ground-breaking-discovery-world-food-security | ‘BioClay’ a ground-breaking discovery for world food security | null | A University of Queensland team has made a discovery that could help conquer the greatest threat to global food security – pests and diseases in plants.
Research leader Professor Neena Mitter said BioClay – an environmentally sustainable alternative to chemicals and pesticides – could be a game-changer for crop protection.
“In agriculture, the need for new control agents grows each year, driven by demand for greater production, the effects of climate change, community and regulatory demands, and toxicity and pesticide resistance,” she said.
“Our disruptive research involves a spray of nano-sized degradable clay used to release double-stranded RNA, that protects plants from specific disease-causing pathogens.”
The research, by scientists from the Queensland Alliance for Agriculture and Food Innovation (QAAFI) and UQ’s Australian Institute for Bioengineering and Nanotechnology (AIBN) is published in Nature Plants.
Professor Mitter said the technology reduced the use of pesticides without altering the genome of the plants.
“Once BioClay is applied, the plant ‘thinks’ it is being attacked by a disease or pest insect and responds by protecting itself from the targeted pest or disease.
“A single spray of BioClay protects the plant and then degrades, reducing the risk to the environment or human health.”
She said BioClay met consumer demands for sustainable crop protection and residue-free produce.
“The cleaner approach will value-add to the food and agri-business industry, contributing to global food security and to a cleaner, greener image of Queensland.”
AIBN’s Professor Zhiping Xu said BioClay combined nanotechnology and biotechnology.
“It will produce huge benefits for agriculture in the next several decades, and the applications will expand into a much wider field of primary agricultural production,” Professor Xu said.
The project has been supported by a Queensland Government Accelerate Partnership grant and a partnership with Nufarm Limited.
The Queensland Alliance for Agriculture and Food Innovation is a UQ institute jointly supported by the Queensland Government.
**Media: Professor Neena Mitter, ****n.mitter@uq.edu.au****, +61 434 628 094; Hannah Hardy, QAAFI Communications, ****h.hardy@uq.edu.au****, +61 7 3346 2092.**
**Twitter: @QAAFI** | true | true | true | A University of Queensland team has made a discovery that could help conquer the greatest threat to global food security – pests and diseases in plants. | 2024-10-13 00:00:00 | 2017-01-10 00:00:00 | article | uq.edu.au | UQ News | null | null |
|
37,992,930 | https://philaverse.substack.com/p/british-museum-announces-plans-to | British Museum announces plans to digitize its entire collection | Phil Siarri | # British Museum announces plans to digitize its entire collection
### The famous institution believes this effort could reduce theft
On October 18, the **British Museum** announced plans to **digitize its entire collection**.
Here are a few key points:
- This initiative aims to provide global access to the museum's extensive collection for the first time (approximately **2.4 million records** to upload or upgrade, with about half of the work already completed).
- The project's timeframe is estimated to be around **five years**.
- By increasing visibility, the institution **believes theft will decrease**, and any missing items will be promptly noticed (which is a significant concern for many museums these days).
| true | true | true | The famous institution believes this effort could reduce theft | 2024-10-13 00:00:00 | 2023-10-23 00:00:00 | article | substack.com | The PhilaVerse | null | null |
|
39,162,937 | https://v-fonts.com/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
40,674,332 | https://www.osnews.com/story/139943/exclusive-mozilla-reverses-course-re-lists-extensions-it-removed-in-russia/ | null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
5,317,764 | http://allthingsd.com/20130301/eu-may-fine-microsoft-over-browser-ballot-bungle/?refcat=news | EU May Fine Microsoft Over Browser Ballot Bungle | John Paczkowski | # EU May Fine Microsoft Over Browser Ballot Bungle
Looks like there could be legal consequences for Microsoft’s European Union browser ballot bungle — and soon.
Reuters reports that the European Commission plans to sanction Microsoft for failing to comply with a mandate to offer Windows users in Europe a choice of Web browsers beyond its own Internet Explorer. And sources familiar with the matter have confirmed to **AllThingsD** that this is indeed the case at this time. No word yet on the size of the fine, but given EU Competition Commissioner Joaquin Almunia’s public threats over the misstep, penalties could be severe. Whatever they are, sources say the EC will likely announce them sometime in March.
Under the terms of Microsoft’s 2009 antitrust settlement with the European Commission, the company was to present Windows users with a ballot screen offering them an opportunity to swap out Internet Explorer for one of 11 other browsers. And Microsoft did do that — at first, anyway. But when an update to Windows 7 rolled out in February of 2011, the company unwittingly eliminated the ballot screen, and didn’t realize it had done so until last summer.
In July of 2012, the commission opened an investigation into the matter, despite Microsoft’s apologies for what it claimed was a “technical error.” And by fall it had filed formal charges against Microsoft. “If companies enter into commitments, they must do what they have committed to do or face the consequences,” Almunia said during an October news conference. “[They] should be deterred from any temptation to renege on their promises or even to neglect their duties. This is why, when this happens, the commission has the power to impose fines.”
And in this case it seems the agency plans to exercise it. That’s potentially bad news for Microsoft, which has already been fined about $1.28 billion by the EU. If the commission follows through on its current plan to sanction Microsoft, it could slap the company with fines equivalent to 10 percent of its fiscal 2012 revenue. That’s about $7.4 billion.
Microsoft declined comment. | true | true | true | Sorry is never enough. | 2024-10-13 00:00:00 | 2013-03-01 00:00:00 | /wp-content/themes/atd-2.0/images/staff/john-paczkowski-170x170.jpg | article | allthingsd.com | AllThingsD | null | null |
38,634,558 | https://futuretimeline.net/timeline.htm | Future Timeline | Timeline | Technology | Singularity | 2020 | 2050 | 2100 | 2150 | 2200 | 21st century | 22nd century | 23rd century | Humanity | Predictions | Will Fox | The 21st century
An increasingly globalised humanity is faced with climate change, overpopulation, dwindling resources and technological upheaval.
Read more...
2000-2009
2000 2001 2002 2003 2004 2005 2006 2007 2008 2009
2010-2019
2010 2011 2012 2013 2014 2015 2016 2017 2018 2019
2020-2029
2020 2021 2022 2023 2024 2025 2026 2027 2028 2029
2030-2039
2030 2031 2032 2033 2034 2035 2036 2037 2038 2039
2040-2049
2040 2041 2042 2043 2044 2045 2046 2047 2048 2049
2050-2059
2050 2051 2052 2053 2054 2055 2056 2057 2058 2059
2060-2069
2060 2061 2062 2063 2064 2065 2066 2067 2068 2069
2070-2079
2070 2071 2072 2073 2074 2075 2076 2077 2078 2079
2080-2089
2080 2081 2082 2083 2084 2085 2086 2087 2088 2089
2090-2099
2090 2091 2092 2093 2094 2095 2096 2097 2098 2099
The 22nd century
Diverging paths for humans and transhumans, eco-technic societies dominate the globe, and colonisation of space begins in earnest.
2100-2149
2150-2199
The Far Future
Post-biological humanity begins to spread throughout the Galaxy, transforming dead worlds into computational substrates.
2200-2249
2250-2299
2300-2999
3000-9999
Beyond...
A future timeline of the Universe and its ultimate fate.
Beyond 10,000 AD
Beyond 1 million AD | true | true | true | Future Timeline | Latest Predictions | Technology | Singularity | 2020 | 2050 | 2100 | 2150 | 2200 | 21st century | 22nd century | 23rd century | Humanity | Predictions | Events | 2024-10-13 00:00:00 | 2008-01-01 00:00:00 | null | null | futuretimeline.net | futuretimeline.net | null | null |
40,560,759 | https://engineering.grab.com/data-observability | Ensuring data reliability and observability in risk systems | Yi Ni Ong · Kamesh Chandran · Jia Long Loh | # Ensuring data reliability and observability in risk systems
Grab has an in-house Risk Management platform called GrabDefence which relies on ingesting large amounts of data gathered from upstream services to power our heuristic risk rules and data science models in real time.
As Grab’s business grows, so does the amount of data. It becomes imperative that the data which fuels our risk systems is of reliable quality as any data discrepancy or missing data could impact fraud detection and prevention capabilities.
We need to quickly detect any data anomalies, which is where data observability comes in.
## Data observability as a solution
Data observability is a type of data operation (DataOps; similar to DevOps) where teams build visibility over the health and quality of their data pipelines. This enables teams to be notified of data quality issues, and allows teams to investigate and resolve these issues faster.
We needed a solution that addresses the following issues:
- Alerts for any data quality issues as soon as possible - so this means the observability tool had to work in real time.
- With hundreds of data points to observe, we needed a neat and scalable solution which allows users to quickly pinpoint which data points were having issues.
- A consistent way to compare, analyse, and compute data that might have different formats.
Hence, we decided to use Flink to standardise data transformations, compute, and observe data trends quickly (in real time) and scalably.
## Utilising Flink for real-time computations at scale
### What is Flink?
Flink SQL is a powerful, flexible tool for performing real-time analytics on streaming data. It allows users to query continuous data streams using standard SQL syntax, enabling complex event processing and data transformation within the Apache Flink ecosystem, which is particularly useful for scenarios requiring low-latency insights and decisions.
### How we used Flink to compute data output
In Grab, data comes from multiple sources and while most of the data is in JSON format, the actual JSON structure differs between services. Because of JSON’s nested and dynamic data structure, it is difficult to consistently analyse the data – posing a significant challenge for real-time analysis.
To help address this issue, Apache Flink SQL has the capability to manage such intricacies with ease. It offers specialised functions tailored for parsing and querying JSON data, ensuring efficient processing.
Another standout feature of Flink SQL is the use of custom table functions, such as JSONEXPLOAD, which serves to deconstruct and flatten nested JSON structures into tabular rows. This transformation is crucial as it enables subsequent aggregation operations. By implementing a 5-minute tumbling window, Flink SQL can easily aggregate these now-flattened data streams. This technique is pivotal for monitoring, observing, and analysing data patterns and metrics in near real-time.
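As an illustration only, the sketch below shows the general shape of such a query in PyFlink: pulling a field out of a JSON payload and counting events over a 5-minute tumbling window. The table name `risk_events`, the column names, and the use of Flink's built-in `JSON_VALUE` function (available in recent Flink releases) are assumptions for the example; the production pipeline relies on custom functions such as JSONEXPLOAD rather than this exact query.

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# Streaming table environment; a real job would register Kafka sources and sinks
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Hypothetical source table: raw events carrying a JSON payload, plus processing time
t_env.execute_sql("""
    CREATE TEMPORARY TABLE risk_events (
        payload STRING,
        proctime AS PROCTIME()
    ) WITH (
        'connector' = 'datagen'  -- placeholder connector for this sketch
    )
""")

# Count events per data point over 5-minute tumbling windows
result = t_env.execute_sql("""
    SELECT
        JSON_VALUE(payload, '$.data_point') AS data_point,
        TUMBLE_START(proctime, INTERVAL '5' MINUTE) AS window_start,
        COUNT(*) AS event_count
    FROM risk_events
    GROUP BY
        JSON_VALUE(payload, '$.data_point'),
        TUMBLE(proctime, INTERVAL '5' MINUTE)
""")
result.print()
```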
Now that data is aggregated by Flink for easy analysis, we still needed a way to incorporate comprehensive monitoring so that teams could be notified of any data anomalies or discrepancies in real time.
### How we interfaced the output with Datadog
Datadog is the observability tool of choice in Grab, with many teams using Datadog for their service reliability observations and alerts. By aggregating data from Apache Flink and integrating it with Datadog, we can harness the synergy of real-time analytics and comprehensive monitoring. Flink excels in processing and aggregating data streams, which, when pushed to Datadog, can be further analysed and visualised. Datadog also provides seamless integration with collaboration tools like Slack, which enables teams to receive instant notifications and alerts.
With Datadog’s out-of-the-box features such as anomaly detection, teams can identify and be alerted to unusual patterns or outliers in their data streams. Taking a proactive approach to monitoring is crucial in maintaining system health and performance as teams can be alerted, then collaborate quickly to diagnose and address anomalies.
This integrated pipeline—from Flink’s real-time data aggregation to Datadog’s monitoring and Slack’s communication capabilities—creates a robust framework for real-time data operations. It ensures that any potential issues are quickly traced and brought to the team’s attention, facilitating a rapid response. Such an ecosystem empowers organisations to maintain high levels of system reliability and performance, ultimately enhancing the overall user experience.
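For example, a minimal hand-off might look like the sketch below, which uses the open-source `datadog` Python client to submit one aggregated counter as a custom metric; the metric name, tags, and values are invented for illustration and are not Grab's actual integration.

```python
import time
from datadog import initialize, api

# API and application keys would come from a secrets manager in practice
initialize(api_key="<DD_API_KEY>", app_key="<DD_APP_KEY>")

# Hypothetical 5-minute aggregate emitted by the Flink job
window_end = int(time.time())
event_count = 1234

# One time series per (service stream, data point) so that monitors can be
# organised and filtered by tags on the Datadog side
api.Metric.send(
    metric="grabdefence.data_observability.event_count",
    points=[(window_end, event_count)],
    tags=["stream:payments", "data_point:txn_amount"],
    type="gauge",
)
```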
## Organising monitors and alerts using out-of-the-box solutions from Datadog
Once we integrated Flink data into Datadog, we realised that it could become unwieldy to try to identify the data point with issues from hundreds of other counters.
We decided to organise the counters according to the service stream it was coming from, and create individual monitors for each service stream. We used Datadog’s Monitor Summary tool to help visualise the total number of service streams we are reading from and the number of underlying data points within each stream.
Within each individual stream, we used Datadog’s Anomaly Detection feature to create an alert whenever a data point from the stream exceeds a predefined threshold. This can be configured by the service teams on Datadog.
These alerts are then sent to a Slack channel where the Data team is informed when a data point of interest starts throwing anomalous values.
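As a rough sketch of how such a monitor could be expressed through the API (the query, thresholds, metric name, and Slack channel are assumptions; in practice the service teams configure these on Datadog directly):

```python
from datadog import initialize, api

initialize(api_key="<DD_API_KEY>", app_key="<DD_APP_KEY>")

# Anomaly monitor: alert when a data point's event count deviates from its usual pattern
api.Monitor.create(
    type="query alert",
    query=(
        "avg(last_4h):anomalies("
        "avg:grabdefence.data_observability.event_count{stream:payments} by {data_point}, "
        "'agile', 2) >= 1"
    ),
    name="[GrabDefence] Anomalous event count on payments stream",
    message=(
        "Event counts for {{data_point.name}} look anomalous; "
        "please check the upstream pipeline. @slack-data-observability-alerts"
    ),
    tags=["team:risk-data", "stream:payments"],
)
```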
## Impact
Since the deployment of this data observability tool, we have seen significant improvement in the detection of anomalous values. If there are any anomalies or issues, we now get alerts within the same day (or hour) instead of days to weeks later.
Organising the alerts according to source streams have also helped simplify the monitoring load and allows users to quickly narrow down and identify which pipeline has failed.
## What’s next?
At the moment, this data observability tool is only implemented on selected checkpoints in GrabDefence. We plan to expand the observability tool’s coverage to include more checkpoints, and continue to refine the workflows to detect and resolve these data issues.
# Join us
Grab is the leading superapp platform in Southeast Asia, providing everyday services that matter to consumers. More than just a ride-hailing and food delivery app, Grab offers a wide range of on-demand services in the region, including mobility, food, package and grocery delivery services, mobile payments, and financial services across 428 cities in eight countries.
Powered by technology and driven by heart, our mission is to drive Southeast Asia forward by creating economic empowerment for everyone. If this mission speaks to you, join our team today! | true | true | true | As the amount of data Grab handles grows, there is an increased need for quick detections for data anomalies (incompleteness or inaccuracy), while keeping it secure. Read this to learn how the Risk Data team utilised Flink and Datadog to enhance data observability within Grab’s services. | 2024-10-13 00:00:00 | 2024-04-23 00:00:00 | article | grab.com | Grab Tech | null | null |
|
14,883,382 | http://nautil.us/issue/50/emergence/claude-shannon-the-las-vegas-cheat | Claude Shannon, the Las Vegas Shark | Jimmy Soni; Rob Goodman | Many of Claude Shannon’s off-the-clock creations were whimsical—a machine that made sarcastic remarks, for instance, or the Roman numeral calculator. Others created by the Massachusetts Institute of Technology professor and father of information theory showed a flair for the dramatic and dazzling: the trumpet that spit flames or the machine that solved Rubik’s cubes. Still other devices he built anticipated real technological innovations by more than a generation. One in particular stands out, not just because it was so far ahead of its time, but because of just how close it came to landing Shannon in trouble with the law—and the mob.
Long before the Apple Watch or the Fitbit, what was arguably the world’s first wearable computer was conceived by Ed Thorp, then a little-known graduate student in physics at the University of California, Los Angeles. Thorp was the rare physicist who felt at home with both Vegas bookies and bookish professors. He loved math, gambling, and the stock market, roughly in that order. The tables and the market he loved for the challenge: Could you create predictability out of seeming randomness? What could give one person an edge in games of chance? Thorp wasn’t content just pondering these questions; like Shannon, he set out to find and build answers.
In 1960, Thorp was a junior professor at MIT. He had been working on a theory for playing blackjack, the results of which he hoped to publish in the *Proceedings of the National Academy of Sciences*. Shannon was the only academy member in MIT’s mathematics department, so Thorp sought him out. “The secretary warned me that Shannon was only going to be in for a few minutes, not to expect more, and that he didn’t spend time on subjects (or people) that didn’t interest him. Feeling awed and lucky, I arrived at Shannon’s office to find a thinnish, alert man of middle height and build, somewhat sharp featured,” Thorp recalled.
Thorp had piqued Shannon’s interest with the blackjack paper, to which Shannon recommended only a change of title, from “A Winning Strategy for Blackjack” to the more mundane “A Favorable Strategy for Twenty-One,” the better to win over the academy’s staid reviewers. The two shared a love of putting math in unfamiliar territory in search of chance insights. After Shannon “cross-examined” Thorp about his blackjack paper, he asked, “Are you working on anything else in the gambling area?”
Thorp confessed. “I decided to spill my other big secret and told him about roulette. Ideas about the project flew between us. Several exciting hours later, as the wintery sky turned dusky, we finally broke off with plans to meet again on roulette.” As one writer, William Poundstone, put it, “Thorp had inadvertently set one of the century’s great minds on yet another tangent.”
Thorp was immediately invited to Shannon’s house. The basement, Thorp remembered, was “a gadgeteer’s paradise. … There were hundreds of mechanical and electrical categories, such as motors, transistors, switches, pulleys, gears, condensers, transformers, and on and on.” Thorp was in awe: “Now I had met the ultimate gadgeteer.”
What impressed him more than any of the gadgets was his host’s uncanny ability to “see” a solution to a problem rather than to muscle it out with unending work.
It was in this tinkerer’s laboratory that they set out to understand how roulette could be gamed, ordering “a regulation roulette wheel from Reno for $1,500,” a strobe light, and a clock whose hand revolved once per second. Thorp was given inside access to Shannon in all his tinkering glory:
Gadgets … were everywhere. He had a mechanical coin tosser which could be set to flip the coin through a set number of revolutions, producing a head or tail according to the setting. As a joke, he built a mechanical finger in the kitchen which was connected to the basement lab. A pull on the cable curled the finger in a summons. Claude also had a swing about 35 feet long attached to a huge tree, on a slope. We started the swing from uphill and the downhill end of the arc could be as much as 15 or 20 feet above the ground. … Claude’s neighbors on the Mystic lake were occasionally astounded to see a figure “walking on the water.” It was me using a pair of Claude’s huge styrofoam “shoes” designed just for this.
And yet, Thorp wrote, what impressed him more than any of the gadgets was his host’s uncanny ability to “see” a solution to a problem rather than to muscle it out with unending work. “Shannon seemed to think with ‘ideas’ more than with words or formulas. A new problem was like a sculptor’s block of stone and Shannon’s ideas chiseled away the obstacles until an approximate solution emerged like an image, which he proceeded to refine as desired with more ideas.”
For eight months, the pair dove into the challenge of developing a device that would predict the final resting spot of a roulette ball. For the device to beat the house, Thorp and Shannon didn’t have to predict the precise outcome every time: They just had to acquire any kind of slight edge over the odds. Over time, and with enough bets, even the smallest advantage would multiply into a meaningful return.
“Once a lady next to me looked over in horror,” Thorp recalled. “I left the table quickly and discovered the speaker peering from my ear canal like an alien insect.”
Picture a roulette wheel divided up into eight segments: By June 1961, Thorp and Shannon had a working version of a device that could determine which of those segments would end up holding the ball. As soon as they concluded that they had, in fact, found their edge, Shannon impressed upon Thorp the need for absolute secrecy. He invoked the work of social network theorists, who argued that two people chosen at random would be, at most, three degrees of separation from one another. In other words, the distance between Shannon, Thorp, and an enraged casino owner was slim.
The device that they created “was the size of a pack of cigarettes,” operated by Thorp’s and Shannon’s big toes, “with microswitches in our shoes,” and delivered gambling advice in the form of music. Thorp explained:
One switch initialized the computer and the other timed the rotor and the ball. Once the rotor was timed, the computer transmitted a musical scale whose eight tones marked the rotor octants passing the reference mark. … We each heard the musical output through a tiny loudspeaker in one ear canal. We painted the wires connecting the computer and the speaker to match our skin and hair and affixed them with “spirit gum.” The wires were the diameter of a hair to make them inconspicuous but even the hair thin steel wire we used was fragile.
They took it to the casinos, where Thorp and Shannon took turns placing bets. “The division of labor,” Thorp said, “was that Claude stood by the wheel and timed, while I sat at the far end of the layout, unable to see the spinning ball well, and placed bets.” Their wives served as lookouts, “checking to see whether the casino suspected anything and if we were inconspicuous.” Even so, they had some close calls: “Once a lady next to me looked over in horror,” Thorp recalled. “I left the table quickly and discovered the speaker peering from my ear canal like an alien insect.”
Mishaps aside, Thorp was confident that the duo could run the tables. Claude, Betty, and Thorp’s wife, Vivian, were less sure. Thorp would later concede that the others were probably on the right side of caution: The Nevada gaming industry was, notoriously, entangled with the mafia. Had Shannon and Thorp been caught, the odds were against two MIT professors talking their way out of it. The experiment was called off after its trial run, and the wearable computer was consigned to Shannon’s growing heap of curiosities.
*Jimmy Soni is an author, editor, and former speechwriter**.*
*Rob Goodman is a doctoral candidate at Columbia University and a former congressional speechwriter.*
*From *A Mind at Play *by Jimmy Soni and Rob Goodman. Copyright © 2017 by Jimmy Soni and Rob Goodman. Reprinted by permission of Simon & Schuster, Inc. * | true | true | true | The father of information theory built a machine to game roulette, then abandoned it. | 2024-10-13 00:00:00 | 2017-07-20 00:00:00 | article | nautil.us | Nautilus | null | null |
|
419,137 | http://www.nytimes.com/2009/01/04/business/04blind.html | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
12,431,260 | https://techcrunch.com/2016/09/05/snapchat-joints-the-bluetooth-sig-fueling-hardware-speculation/?ncid=rss&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29 | Snapchat joins the Bluetooth SIG, fueling hardware speculation | TechCrunch | Darrell Etherington | Snapchat has given observers another reason to suspect it might be working on hardware – the social network company joined the Bluetooth Special Interest Group (SIG) according to the Financial Times. The SIG industry association maintains the Bluetooth wireless standard, and membership is a necessary prerequisite for companies that want to employ Bluetooth in any hardware devices.
Snapchat is an “adopter” in the SIG, which is a free tier that provides member companies with a license to build Bluetooth-enabled products, and to work with other Bluetooth SIG makers on collaborative efforts. As the FT notes, the only reason for companies to join the SIG, generally speaking, is if they intend to actually launch a wireless device.
Other companies dealing primarily in software are members of the SIG, however, including Facebook. But Facebook has dabbled in hardware previously, including with a project last year where it offered free Bluetooth beacons to business owners as part of a wireless, location-based ambient advertising push.
Snapchat’s own hardware plans including Bluetooth tech might be more ambitious; the FT report notes that it has recently done a lot to suggest an interest in augmented reality hardware, including the acquisition of AR headset startup Vergence Lab, recent hires and a number of acquisitions in computer vision and AR software. A report earlier this year suggested Snapchat might already have begun developing a pair of smart glasses, potentially similar to Google Glass.
Snapchat has good reason to be interested in AR as a category; its smartphone app is seen by many as one of the most successful existing examples of consumer AR, mainly through the use of its image filters, which can be applied to live video captured by a user’s device camera.
|
1,885,063 | http://aws.typepad.com/aws/2010/11/amazon-cloudfront-support-for-custom-origins.html | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
32,384,147 | https://arxiv.org/abs/1512.08546 | When Coding Style Survives Compilation: De-anonymizing Programmers from Executable Binaries | Caliskan; Aylin; Yamaguchi; Fabian; Dauber; Edwin; Harang; Richard; Rieck; Konrad; Greenstadt; Rachel; Narayanan; Arvind | # Computer Science > Cryptography and Security
[Submitted on 28 Dec 2015 (v1), last revised 18 Dec 2017 (this version, v3)]
# Title:When Coding Style Survives Compilation: De-anonymizing Programmers from Executable Binaries
Abstract: The ability to identify authors of computer programs based on their coding style is a direct threat to the privacy and anonymity of programmers. While recent work found that source code can be attributed to authors with high accuracy, attribution of executable binaries appears to be much more difficult. Many distinguishing features present in source code, e.g. variable names, are removed in the compilation process, and compiler optimization may alter the structure of a program, further obscuring features that are known to be useful in determining authorship. We examine programmer de-anonymization from the standpoint of machine learning, using a novel set of features that include ones obtained by decompiling the executable binary to source code. We adapt a powerful set of techniques from the domain of source code authorship attribution along with stylistic representations embedded in assembly, resulting in successful de-anonymization of a large set of programmers.

We evaluate our approach on data from the Google Code Jam, obtaining attribution accuracy of up to 96% with 100 and 83% with 600 candidate programmers. We present an executable binary authorship attribution approach, for the first time, that is robust to basic obfuscations, a range of compiler optimization settings, and binaries that have been stripped of their symbol tables. We perform programmer de-anonymization using both obfuscated binaries, and real-world code found "in the wild" in single-author GitHub repositories and the recently leaked Nulled.IO hacker forum. We show that programmers who would like to remain anonymous need to take extreme countermeasures to protect their privacy.
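For illustration only, here is a hypothetical sketch of the general kind of pipeline the abstract describes (stylistic features extracted from disassembled or decompiled code, fed to a classifier). This is not the authors' implementation; the token n-gram features and parameter choices are stand-ins.

```python
# Hypothetical illustration only, not the paper's code: a generic authorship
# attribution pipeline over disassembly/decompiler text using token n-grams
# and a random forest classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

def train_attribution_model(code_texts, author_labels):
    """code_texts: disassembly or decompiled source per binary (stand-in data);
    author_labels: the known author of each binary."""
    model = make_pipeline(
        TfidfVectorizer(analyzer="word", ngram_range=(1, 3), max_features=5000),
        RandomForestClassifier(n_estimators=500, random_state=0),
    )
    model.fit(code_texts, author_labels)
    return model  # model.predict(new_texts) yields predicted authors
```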
## Submission history
From: Aylin Caliskan [view email]**[v1]**Mon, 28 Dec 2015 22:28:51 UTC (1,663 KB)
**[v2]**Tue, 1 Mar 2016 14:00:23 UTC (1,840 KB)
**[v3]**Mon, 18 Dec 2017 00:18:42 UTC (282 KB)
18,517,646 | https://www.vox.com/the-goods/2018/11/20/18103516/black-friday-cyber-monday-amazon-fulfillment-center | Inside an Amazon warehouse on Black Friday | Chavie Lieber | Last year, shoppers spent $7.9 billion on Thanksgiving Day and Black Friday. An additional $6.59 billion was dropped on Cyber Monday. This year, Black Friday sales alone are expected to jump to $9.1 billion. And one retailer dominates these sales: Amazon.
# The human costs of Black Friday, explained by a former Amazon warehouse manager
Last year, Amazon accounted for 45 percent of all Thanksgiving Day online purchases and 60 percent of purchases on Black Friday, according to Hitwise. The Seattle e-commerce giant is expecting similar numbers this year, and has been making an even more aggressive push, offering free shipping to all shoppers during the holiday season, with no minimum purchase.
While this may be great for Amazon’s bottom line, all this shopping has a real human cost. As plenty of Amazon employees have attested, working in the company’s warehouses is grueling. Earlier this summer, a former Amazon fulfillment center manager from California reached out to me after I wrote about Sen. Bernie Sanders (I-VT) sparring with Amazon over worker rights and pay rates. (The battle ended in Amazon raising the hourly rate to $15 an hour but cutting company stock grants and bonuses.) The former Amazon employee, a US Air Force veteran, requested anonymity for fear of professional repercussions.
I talked with the former Amazon employee a few times over the past several months to learn what it’s like to work inside an Amazon warehouse. We recently discussed the Black Friday shift. They spoke about long and punishing hours, how morale plummets as the holiday season goes on, and why the holidays still make them feel guilty.
(After this article published, an Amazon spokesperson emailed Vox with the following statement: “This is an unfortunate experience. But for many of the managers and associates in fulfillment centers, the holiday season is a fun, exciting place to work. All managers go through trainings that are intended to set their departments up for success and similar trainings are offered to hourly employees. All employees are encouraged to bring their questions, comments, or concerns directly to their managers as direct communication is an essential part of Amazon’s peculiar culture.”)
My conversation with the former Amazon employee is below, edited for length and clarity.
**How did you land a job at an Amazon warehouse?**
I’ve always wanted to live in Seattle, and it’s hard to move there without working for Amazon because they are the largest employer. I heard they pay corporate employees really well, and I have friends who work there. Some hate it; others have been there 10 years.
I applied to a job after I finished serving in the Air Force Reserves in 2014, but then decided to go to graduate school for business administration. After I finished school, I figured I’d take a warehouse job and transfer internally, which never happened. I was at a fulfillment center in California from September 2016 to August 2017.
**What was your job title and salary?**
I was an area manager, inside a new facility that had about 1,000 associates. I oversaw the packers, managing about 55 associates. My salary was $80,000, which is on the higher end. They gave me more because I said I wouldn’t accept anything less, since I had a master’s.
**Were there a lot of US vets working at your facility?**
Yes, it’s a pretty typical thing for Amazon. It’s easy for Amazon to hire us because they know vets are willing to shut up and cooperate. In my opinion, Amazon is preying on the work-life balance issue that the military has, and feeds off the rigid order the Army teaches. The military is known for being a bastion of sexism, but I had a worse experience at Amazon. It’s way more cutthroat.
**What was the assembly line like inside your facility?**
Basically, a Kiva robot will pick a pod that looks like a four-sided shelf and has the items people ordered. The robot will bring this pod to the picker, and its screen tells them what item to grab. The picker will grab that item, scan it, throw it into a tote, and that tote will go out to people packing packages. Associates at my facility put out about 60 packages an hour [per person].
**What’s it like working inside these fulfillment centers?**
During my first few weeks of training, when I was just observing, it was clear to me immediately how hard this job is. The associates have mandated breaks where they have to clock in and out by certain times, but the managers do not get any sort of break or lunch. Most of us just never ate. Everyone is on their feet for 12 hours a day.
**Are all those reports about timed breaks and surveillance true?**
Yes, it is all extremely accurate. We had to track how long someone hadn’t packed or picked something. If we saw that five minutes went by without any activity from an associate, we were supposed to go over and talk to that person. Everything is tracked inside Amazon because all the packages are being scanned and have time stamps. I would watch the internal system on my laptop and monitor all the packers to make sure there were no dead spots in packing rates that might be due to a system problem.
We didn’t specifically have timed bathroom breaks, but the break system was rigid. Associates got a 30-minute lunch break, two 15-minute breaks, and an additional 15 minutes of “time off tasks.” If they did nothing for a set period of time, the system would note they were off task. If they had an emergency they needed to leave the area for, I could scan their badge or go back and change their time off task later to an actual task, but bathroom breaks weren’t something we were supposed to change time off task for.
If someone was out for more than 30 minutes, it was a first writeup, unless they’d been written up for time off task before, then it would be a progressive writeup. If they were off task for more than an hour, it was an immediate firing.
It was really difficult for me because the firings were automatic in the system in general, and I had no control over helping out associates. I had to fire people multiple times, and they were devastated because they counted on the health insurance.
**Can you walk me through your time at the warehouse on Black Friday and Cyber Monday?**
The managers all showed up at 5 am. Typically, we’d arrive at 6:30 am, but we went early for an internal pep talk and to make sure all stations were perfect and ready for associates, that there were boxes at all stations, and that the floor was clear for the Kiva robots to run correctly. The associates showed up at 7 am.
Once they got to their stations, the chaos began. You’re just trying to make things stay as stable as possible. The volume of orders on Black Friday is like what happens when Amazon opens the floodgates; we were at full capacity, and we just never stopped. I remember looking at the backlog and watching the orders go from 10,000 to 300,000, and just thinking that we’d never be out of it. The backlog was even higher on Cyber Monday, because Cyber Monday is actually busier for Amazon than Black Friday.
Amazon is a very well-oiled machine, so it doesn’t look like chaos. But it’s all so frantic, you feel like you’re in chaos. It’s extremely busy, but it’s still organized.
**What was the messaging Amazon corporate tried to give to warehouse workers?**
They wanted us managers to give a lot of pep talks to the associates. They would tell us to say, “We’re doing something really special, we’re bringing Christmas people across the country! This is something you should be proud of!” It was pretty tone-deaf. They were trying to pump people up, but they ignored the long-term problems they had at the fulfillment centers, which is that their associates barely survived the shifts.
People who worked Black Friday got time and a half, so everyone was initially very happy, but by the time the Christmas season was over, people were so burnt out that we lost a significant amount of staff. Everyone was beyond exhausted.
**What were the hours like?**
On Black Friday and during the holiday season, everyone works six days a week. The associates work 10 hours a day, the managers 14 to 18. It’s mandatory overtime, the hours are not voluntary, and they are all on your feet. Amazon is also really strict about taking time off during Black Friday. You weren’t allowed to use your PTO for Black Friday, and if you missed it, you were at risk of getting fired.
**What was the age range of associates you worked with?**
It was 50-50 between young people and [middle-aged] people. Half were straight out of high school, and they didn’t take it very seriously. The other half were older and just trying to make a living for their families.
**What was the morale inside the fulfillment center during Black Friday and the holiday season?**
At the start of it all, it was fine. Everyone was happy. People even said, “Yes, I’m bringing Christmas to my family. I’m working these crazy long hours, but I’m getting time and a half.” But after five weeks, people were frustrated, tired, and they didn’t want to be there anymore.
Another part of my job that was really frustrating was that after the holiday season was when Amazon determined promotions and raises, which is really messed up because everyone is on a post-Christmas crash. You are coming back to work after your most stressful time on the job, and it’s so brutal because there’s all these built-up frustrations. And yet that’s when we had to talk about promotions and raises. I found it to be extremely manipulative and unfair.
**Was it tough to be a boss in this type of position?**
Yes, it was really hard. The way I handled it all is one of my biggest regrets till this day. Like, machines break, things happen, and I wish I would have been more reassuring to the associates I was managing. There was this little old lady who reminded me so much of my mother, and one time I had to give her a write-up because she wasn’t making her packing rate, and she was very, very close to getting fired. I wish that I had been more sympathetic to her. I really disliked being in that position.
**Do you think Amazon could have trained you better as a manager?**
Amazon never trained us in how to communicate with associates. We weren’t trained to be understanding of their struggles or communicate with them. It was all about mechanics. And Amazon has built the system so that managers and workers have entirely different incentives. Workers constantly feel like their jobs are on the line, because they are. We were supposed to be observing their [packing] rate and not be concerned with how hard it is to pack things. Managers were pressured to identify the weak links and get them out so that we can have a faster rate. It’s a pressure cooker environment, and that’s what you have to be to get to Amazon’s level of efficiency.
**Would Amazon be a better place to work if they took a more humane approach to their warehouses?**
It could help, but I also think that would be a Band-Aid for the real problem, which is that the workers need to unionize. It’s immoral at this point, considering we all basically got anti-union training. Some associates at Amazon do make good money, but that does not replace workers’ comp or fair work conditions.
**What are your thoughts on people who say workers on Black Friday benefit because they get time and half?**
I’d say that while Amazon associates did get overtime, they are still severely underpaid while being on their feet for 12 hours. There’s a lot of human cost here, a lot of time lost with family.
**What do you think about Amazon’s pay increase to $15 an hour? **
Honestly, they should be pushing for more. I think they should be paying for $20 an hour. Jeff Bezos is obscenely rich, and in any moral society, he would not have that much money.
**Do you shop during Black Friday or Cyber Monday today?**
I do. I boycott Prime Day, but I do shop on Amazon during Black Friday. I feel bad about it, but at the same time, I don’t think me not shopping will get Amazon to change its practices. And I shop on Amazon because my family lives far away from me, and it’s the easiest way to send presents to them. But I think about what I saw in the fulfillment centers a lot, and I have a lot of regret. I think this time of year is probably ruined for me.
**What do you want people to know about Amazon and Black Friday?**
People need to know that their free shipping comes at a human cost. They might be shopping for things that are cheaper and arrive faster, but those packages are artificially cheap. They are being paid for in other ways. People who are watching the expansion of Amazon need to know that it’s not necessarily a good thing. Sure, you’ll get cheaper and faster packages. But Amazon runs on a logistics system that’s based off working people to the bare bones.
**Update 11/20: **Updated to include a statement from Amazon.
**Clarification 11/21: **This post has been updated to clarify that the Amazon fulfillment center the former warehouse manager worked at put out 60 packages per hour, per associate.
|
13,522,852 | http://www.cbc.ca/news/canada/nova-scotia/offnet-app-developer-cellphones-wi-fi-data-fall-river-teens-text-messages-1.3951894 | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
9,464,543 | https://www.indiegogo.com/projects/night-terrors-augmented-reality-survival-horror | Night Terrors - Augmented Reality Survival Horror | null |
|
7,539,010 | http://motherboard.vice.com/blog/unemployment-is-the-future | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
13,329,398 | https://psiloveyou.xyz/i-used-tinder-happn-bumble-and-dine-for-2-weeks-results-were-interesting-a3988687661a#.57y55uks2 | Love Publishing Technology Blog Posts with HackerNoon | HackerNoon | null | 45,000+
CONTRIBUTING WRITERS
4,000,000+
MONTHLY READERS
1,000,000,000+
WORDS PUBLISHED
A VARIETY OF TOPICS.
YOUR CHOICE.
Explore some of the most published topics on HackerNoon.
ANYONE CAN PUBLISH WITH US.
As a leading publication in the technology space, HackerNoon provides a platform for writers to share their expertise and reach a global audience. Our readers are always on the lookout for informative and engaging content that helps them stay ahead of the curve in this rapidly-evolving field.
Human Editor Review
HackerNoon has a large audience of tech enthusiasts, entrepreneurs, and industry professionals. By publishing their content on HackerNoon, writers can reach a broader audience and gain exposure to potential employers, collaborators, and industry influencers.
Validation & Recognition
Being published on HackerNoon provides writers with validation and recognition from their peers and the tech community. It demonstrates their expertise, knowledge, and ability to articulate ideas effectively, which can enhance their professional reputation.
Career Growth & Advancement
HackerNoon is known for its high-quality content and trusted platform. Having published articles on HackerNoon can serve as a strong portfolio that showcases a writer’s skills, expertise, and thought leadership. This can lead to career advancement opportunities, such as speaking engagements, consulting gigs, and job promotions.
ELEVATE YOUR WRITING EXPERIENCE.
At HackerNoon, we are always striving to improve and enhance our publishing experience in order to offer writers a range of innovative and powerful tools. Our commitment to staying ahead of the curve means that we are continuously exploring new features and functionalities that will empower our writers to create and share their content with ease. Through our ongoing efforts, we aim to provide a seamless and user-friendly platform that enables writers to effectively communicate their ideas and engage with their audience.
Human Editor Review
At HackerNoon, every story goes through a thorough editorial review by a human editor. This ensures that your content is polished, well-structured, and adheres to the highest quality standards.
Audio Story Option
HackerNoon automatically converts your written story to an audio format. This allows readers to listen to your content, making it accessible to a wider audience and enhancing the user experience.
Story Translations
HackerNoon supports translations in 12 different languages, enabling your content to reach audience all over the world. This feature eliminates language barriers and expands the impact of your writing.
HackerNoon is where hackers start their afternoons.
We’re a global network of 45,000+ published diehards; lurkers; script kiddies; hats all the way through white, grey, green, blue, red and black; plus a few plain old Tim Ferriss fanboy #LifeHackers - the full spectrum, you’ll find ’em here.
HackerNoon is a close-knit community for cat people and single dads who code. A safe place for power women in tech and misunderstood millennials. For gross teenagers and curious retirees. For hodlrs, venture capitalists and anarchists. For entrepreneurs and engineers. For philosophers, product managers, and futurists.
In short, HackerNoon has space for everybody. There is only one rule of engagement: We treat our internet friends with respect.
HackerNoon is also on pretty much every social media platform ever invented—so find us where you like to be by searching @hackernoon. Or come hang out with us via slogging.
Read our Editorial Guideline
A thoroughly crafted guideline by HackerNoon’s editorial team on publishing with HackerNoon that covers a range of topics including: publishing philosophy, republishing of content, backlinks, headlines, grammar, & plagiarism checker.
Submit story for review
Make sure you’ve created an account beforehand. Click here to sign up. Then, click the “Write" button to start the writing process. The “Write” button is placed on top right of our navigation bar, next to the profile icon.
Write or upload a story
With HackerNoon’s WYSIWYG editor and an autosave technology, we hope to provide you a smooth and easy writing experience that captures all the essence of your brain. Oh, and we also have an AI-generated image software!
Submit story for review
Go to “Story Settings”, scroll down, and then select “Submit Story for Review”. Make sure you’ve added a featured image, meta description, a TL;DR, up to 8 tags per story, story indicators, and more. You can also leave a note to our editors!
You might not be the smartest person in this room, for a change.
**Anarchists**
Write with us
**Engineers**
Write with us
**Product Managers**
Write with us
**Aurora**
@auroraisnear
📣 Aurora Labs has been nominated for Startup of the Year in Gibraltar by HackerNoon! 🎉
We need your support to secure the win, so please cast your vote and help us reach the top! 🗳️ 🚀
🔗 https://startups.hackernoon.com /europe/europe-gibraltar-bvi-uk
May 30, 2023
Come join us over here in the future of independent tech storytelling!
Here's why you should submit your first tech story to HackerNoon's team of human editors today
**Reach millions of**
tech professionals
**4,000,000+**monthly readers
**64%**between 18-34 years old
**27%**in the United States
**Top demographic**classified by Cloudflare & Google Analytic's way too creepy consumer affinity tracking are 'technophiles'
**Get free editorial support**
from real humans
dig business & economics?
**@sheharyarkhan**is your man
world news & science fiction?
**@asherumerie**'s got you
general tech, gaming, & entertainment?
meet
**@joseh**
consumer tech & gaming?
we have
**@adrianmorales**
**Join a community**
who gaf about tech
Introduce Yourself, Stranger
**Emotional Support**
And there's our
**Help Section**
Remember, on HackerNoon, **you own your content and can remove it at any time!** Read full terms & conditions **here**.
What our leaders say
DAVID SMOOKE
FOUNDER & CEO
**Please treat your internet friends with respect.**There are many places on the internet where you can be mean. This is not one of them. You are welcome to be anonymous. You are welcome to speak your truth. Within reason. When agreeing or disagreeing, please be thoughtful, respectful and kind.
Linh Dao Smooke
Senior Data Analyst
Sidra Ijaz
Technical Research Analyst and Content Writer
David Smooke
Founder of Adora Foundation
Appreciation post for HackerNoon and especially for the CEO David Smooke!
I really wish to emphasise this, even outside of the post talking about my journey, @hackernoon has been central to my blog work. @DavidSmooke has been very kind to me throughout this and I’m really lucky to continue contributing to them.
I'm ecstatic. Completely out of breath.
I got featured on the homepage of HackerNoon, one of the top publishing sites for tech enthusiasts. It's really hard getting published on HackerNoon. 55% of the articles get rejected. Not only did I get published, I got featured on the homepage! This means potentially millions of people will read this article. Ah, what a day. Alhamdullilah.
Well, today is full of fantastic surprises!
An article I helped write with the amazing Green Software Foundation team just got published by HackerNoon! It was probably the most collaborative writing effort I've been part of, and I love the end result, capturing the best of all our voices. An interesting and rich experience as a writer, and a wonderful introduction to how software can (and should) intelligently adapt to how clean or dirty the electricity grid is. A complementary and essential way to reduce digital emissions.
**Zack Whittaker**
Security Editor @ TechCrunch
The site is well known...
... among the security researcher crowd for deep-dives into ethical hacking, cybersecurity, coding and more.
... among the security researcher crowd for deep-dives into ethical hacking, cybersecurity, coding and more.
**Brooks Lockett**
Content Strategist
The smoothest experience I've had with an online publication.
Been having a blast publishing articles on HackerNoon. My favorite part is the emphasis on total time readers spend actually reading the piece. Awesome contributor submission process of working with editors too.
The smoothest experience I've had with an online publication. 10/10 HN - you guys built a cool thing
**Vamshi**
Co-founder and CPO
@ HelloFynHQ
@hackernoon is helping its best to open up the system compared to a closed system being built by @Medium.
I appreciate their work! Also, their support has been awesome in getting a lot of migration queries resolved. Keep up the good work @utsav_jaiswal1 @DavidSmooke
**Lanze IV**
Founder @ OjiLabs
...one of the greatest free resources for programmers.
It's terrifying considering @hackernoon is one of the greatest free resources for programmers. This is akin to putting soldiers outside a public library
**Mayank Vikash**
Tech Writer
I truly want to express my gratitude. Writing for the readers of HackerNoon has become almost like an addiction, and this response brings me more joy than any payment for my writing.
My story is trending on HackerNoon again.
**Caroline Vrauwdeunt from Map Your City**
Founding Board
I am a follower and avid reader of HackerNoon since the early days on Medium.
When Medium pulled up a paywall, HackerNoon stepped up and took the community with which it helped Medium platform grow and started their own.
I loved the way you pulled that one! HackerNoon bootstrapped vs Medium very well VC funded. Maybe it hit home, as we are not VC-funded ourselves either.
**Pavel Shkliaev**
CEO @ LensAI
Highly intelligent people get together on this platform.
There is an old-school spirit. Highly intelligent people get together on this platform. There are different points of view. It allows me to keep track of the direction modern IT entrepreneurship is heading in.
|
4,475,759 | http://www.google.com/intl/en/chrome/timemachine/ | Google Chrome - The Fast & Secure Web Browser Built to be Yours | null | ## The
way to do things online-
### Prioritize performance
Chrome is built for performance. Optimize your experience with features like Energy Saver and Memory Saver.
-
### Stay on top of tabs
Chrome has tools to help you manage the tabs you’re not quite ready to close. Group, label, and color code your tabs to stay organized and work faster.
-
### Optimized for your device
Chrome is built to work with your device across platforms. That means a smooth experience on whatever you’re working with.
-
### Automatic updates
There’s a new Chrome update every four weeks, making it easy to have the newest features and a faster, safer browser.
## Supercharge your browser with AI built right in
## Generative themes
### Create a theme that’s uniquely yours.
Bring your imagination to life with a Chrome theme that’s unmistakably you. The power of AI lets you play with subject, color, art style, and mood for a one-of-a-kind browsing experience.
Learn more about AI in Chrome -
## Help me write
### Spark your
creativity.Whether you want to leave a well-written review for a restaurant or make a formal inquiry about an apartment rental, Chrome's AI-powered writing tool can help you write with more confidence on the web.
Learn more about AI in Chrome -
## Tab organizer
### Your tabs, sorted.
More open tabs than you can manage? AI-powered grouping suggestions help you sort and organize your tabs, so you can stay focused on your browsing flow. It even suggests group names and emojis. 💡
Learn more about AI in Chrome -
## Google Lens
### See anything, search anything.
Search, translate, identify, or shop with Google Lens in Chrome. You can ask questions about what you see, whether it’s something you come across on a website or a photo you take.
Learn more about AI in Chrome
## Stay safe while you browse
-
## PASSWORD MANAGER
### Use strong passwords on every site.
Chrome has Google Password Manager built in, which makes it simple to save, manage, and protect your passwords online. It also helps you create stronger passwords for every account you use.
-
## ENHANCED SAFE BROWSING
### Browse with the confidence that you're staying safer online.
Chrome's Safe Browsing warns you about malware or phishing attacks. Turn on Enhanced Safe Browsing for even more safety protections.
-
## SAFETY CHECK
### Check your safety level in real time with just one click.
Chrome's Safety Check confirms the overall security and privacy of your browsing experience, including your saved passwords, extensions, and settings. If something needs attention, Chrome will help you fix it.
-
## PRIVACY GUIDE
### Keep your privacy under your control with easy-to-use settings.
Chrome makes it easy to understand exactly what you’re sharing online and who you’re sharing it with. Simply use the Privacy Guide, a step-by-step tour of your privacy settings.
## Make it yours and take it with you
### Customize your Chrome
Personalize your web browser with themes, dark mode and other options built just for you.
### Browse across devices
Sign in to Chrome on any device to access your bookmarks, saved passwords, and more.
### Save time with autofill
Use Chrome to save addresses, passwords, and more to quickly autofill your details.
### Extend your experience
From shopping and entertainment to productivity, find extensions to improve your experience in the Chrome Web Store.
## The browser by Google
-
## GOOGLE SEARCH
### The search bar you love, built right in.
Access a world of knowledge at your fingertips. Check the weather, solve math equations, and get instant search results, all contained inside your browser's address bar.
-
## GOOGLE PAY
### Pay for things as quick as you click.
Google Pay makes it easy to pay online. When you securely store your payment info in your Google Account, you can stop typing your credit card and check out faster.
-
## GOOGLE WORKSPACE
### Get things done, with or without Wi-Fi.
Get things done in Gmail, Google Docs, Google Slides, Google Sheets, Google Translate and Google Drive, even without an internet connection. | true | true | true | Chrome is the official web browser from Google, built to be fast, secure, and customizable. Download now and make it yours. | 2024-10-13 00:00:00 | 2010-10-13 00:00:00 | website | google.com | Google Chrome | null | null |
|
19,658,004 | https://arstechnica.com/gaming/2018/06/george-lucas-reveals-his-plan-for-star-wars-7-through-9-and-it-was-awful/ | George Lucas reveals his plan for Star Wars 7 through 9—and it was awful | Jonathan M Gitlin | Friday morning was going pretty well, all things considered. I was at my desk, editing some photos and having breakfast. Then Ars editor Lee Hutchinson pinged me on Slack and ruined it all. "It’s even worse than we could have possibly imagined," said my boss. "And, as Han Solo said, I can imagine *a lot*." Accompanying this message was news about George Lucas' plans for Star Wars episodes 7-9, and my *god* would they have sucked. Forget the First Order or Porgs, forget BB-8 and Poe Dameron. Imagine, if you can, our heroes shrinking down like the *Fantastic Voyage* to go meet some midi-chlorians. There, now your breakfast is ruined, too.
The info comes from an interview between Lucas and another billionaire filmmaker, James Cameron. The latter made a series about science fiction, and the transcript of their interview was recently published in the series' companion book. The bombshell drops after a brief insight into Lucas' view of the environmental damage we're causing the Earth. "We're not going to save the planet," Lucas regularly tells people and follows up by saying we'll end up like Mars. But Mars is fine, he thinks, and he's sure we'll find life there. And in the rest of the Solar System, too.
This thought of microscopic alien life spurs a memory. "Everybody hated it in *Phantom Menace* [when] we started talking about midi-chlorians," Lucas says. Uh-huh, we sure did. *Because it was a really dumb idea*. What follows should make every Star Wars fan send a note of gratitude to whoever at Disney decided to buy the franchise and take it away and out from under Lucas' control:
[The next three ‘Star Wars’ films] were going to get into a microbiotic world. But there’s this world of creatures that operate differently than we do. I call them the Whills. And the Whills are the ones who actually control the universe. They feed off the Force.
I'm honestly at a bit of a loss when it comes to describing just how dumb I think that story would have been. Like *Robot Jox*—*Pacific Rim* if I'm being charitable—but with Jedi and Sith instead of giant mechs? A recreation of the underwater chase on Naboo but with macrophages and neutrophils? I just can't even, as the kids say. (By the way, if you were holding onto that fan theory that the Whills were Yoda's species, time to deal with it.) On the other hand, it might have displaced *Ewoks: The Battle for Endor*, *Caravan of Courage*, and the *Star Wars Christmas Special* as the three worst Star Wars productions ever made. | true | true | true | Compared to this, Rian Johnson saved your childhood. | 2024-10-13 00:00:00 | 2018-06-15 00:00:00 | article | arstechnica.com | Ars Technica | null | null |
|
33,774,469 | https://en.wikipedia.org/wiki/Demosaicing | Demosaicing - Wikipedia | null | # Demosaicing
**Demosaicing** (or **de-mosaicing**, **demosaicking**), also known as **color reconstruction**, is a digital image processing algorithm used to reconstruct a full color image from the incomplete color samples output from an image sensor overlaid with a color filter array (CFA) such as a Bayer filter. It is also known as **CFA interpolation** or **debayering**.
Most modern digital cameras acquire images using a single image sensor overlaid with a CFA, so demosaicing is part of the processing pipeline required to render these images into a viewable format.
Many modern digital cameras can save images in a raw format allowing the user to demosaic them using software, rather than using the camera's built-in firmware.
## Goal
The aim of a demosaicing algorithm is to reconstruct a full color image (i.e. a full set of color triples) from the spatially undersampled color channels output from the CFA. The algorithm should have the following traits:
- Avoidance of the introduction of false color artifacts, such as chromatic aliases, zippering (abrupt unnatural changes of intensity over a number of neighboring pixels) and purple fringing
- Maximum preservation of the image resolution
- Low computational complexity for fast processing or efficient in-camera hardware implementation
- Amenability to analysis for accurate noise reduction
## Background: color filter array
A color filter array is a mosaic of color filters in front of the image sensor. Commercially, the most commonly used CFA configuration is the Bayer filter illustrated here. This has alternating red (R) and green (G) filters for odd rows and alternating green (G) and blue (B) filters for even rows. There are twice as many green filters as red or blue ones, catering to the human eye's higher sensitivity to green light.
Since the color subsampling of a CFA by its nature results in aliasing, an optical anti-aliasing filter is typically placed in the optical path between the image sensor and the lens to reduce the false color artifacts (chromatic aliases) introduced by interpolation.[1]
Since each pixel of the sensor is behind a color filter, the output is an array of pixel values, each indicating a raw intensity of one of the three filter colors. Thus, an algorithm is needed to estimate for each pixel the color levels for all color components, rather than a single component.
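To make the sampling concrete, here is a short illustrative sketch (not part of the original article) that simulates the single-channel output of a Bayer sensor from a full-color image, assuming an RGGB layout:

```python
# Illustrative sketch (assumes an RGGB Bayer layout): each sensor pixel keeps
# only the one color component its filter passes.
import numpy as np

def bayer_mosaic(rgb):
    """rgb: H x W x 3 array. Returns the H x W single-channel CFA image."""
    h, w, _ = rgb.shape
    cfa = np.zeros((h, w), dtype=rgb.dtype)
    cfa[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    cfa[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites on red rows
    cfa[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites on blue rows
    cfa[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return cfa
```

Demosaicing is the inverse problem: recovering a plausible H x W x 3 image from this single-channel mosaic.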
## Illustration
To reconstruct a full color image from the data collected by the color filtering array, a form of interpolation is needed to fill in the blanks. The mathematics here is subject to individual implementation, and is called demosaicing.
In this example, we use Adobe Photoshop's bicubic interpolation to simulate the circuitry of a Bayer filter device such as a digital camera.
The image below simulates the output from a Bayer filtered image sensor; each pixel has only a red, green or blue component. The corresponding original image is shown alongside the demosaiced reconstruction at the end of this section.
[Image: Bayer filter samples, showing the red, green, and blue channels separately]
A digital camera typically has means to reconstruct a whole RGB image using the above information. The resulting image could be something like this:
[Image: the original image alongside the demosaiced reconstruction]
The reconstructed image is typically accurate in uniform-colored areas, but has a loss of resolution (detail and sharpness) and has edge artifacts (for example, the edges of letters have visible color fringes and some roughness).
## Algorithms
### Simple interpolation
These algorithms are examples of multivariate interpolation on a uniform grid, using relatively straightforward mathematical operations on nearby instances of the same color component. The simplest method is nearest-neighbor interpolation which simply copies an adjacent pixel of the same color channel. It is unsuitable for any application where quality matters, but can be useful for generating previews given limited computational resources. Another simple method is bilinear interpolation, whereby the red value of a non-red pixel is computed as the average of the two or four adjacent red pixels, and similarly for blue and green. More complex methods that interpolate independently within each color plane include bicubic interpolation, spline interpolation, and Lanczos resampling.
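As a concrete illustration of the bilinear case, here is a hedged sketch, assuming an RGGB Bayer layout and expressing the interpolation as convolution of the masked color planes with small averaging kernels (not tied to any particular camera or library implementation):

```python
# Bilinear demosaicing sketch for an RGGB Bayer mosaic. `cfa` is the
# single-channel mosaic as a float array; the masks mark where each color
# was actually sampled, and the kernels average the available neighbors.
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(cfa):
    h, w = cfa.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1.0
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1.0
    g_mask = 1.0 - r_mask - b_mask

    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0  # green: 4 axial neighbors
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0  # red/blue: 2 or 4 neighbors

    r = convolve(cfa * r_mask, k_rb, mode="mirror")
    g = convolve(cfa * g_mask, k_g,  mode="mirror")
    b = convolve(cfa * b_mask, k_rb, mode="mirror")
    return np.dstack([r, g, b])
```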
Although these methods can obtain good results in homogeneous image regions, they are prone to severe demosaicing artifacts in regions with edges and details when used with pure-color CFAs.[2] However, linear interpolation can obtain very good results when combined with a spatio-spectral (panchromatic) CFA.[3]
One could exploit simple formation models of images for demosaicing: in natural images, the ratio of colors should be preserved within the same segment. This fact was exploited in an image-sensitive interpolation for demosaicing.[4]
### Pixel correlation within an image
More sophisticated demosaicing algorithms exploit the spatial and/or spectral correlation of pixels within a color image.[5] Spatial correlation is the tendency of pixels to assume similar color values within a small homogeneous region of an image. Spectral correlation is the dependency between the pixel values of different color planes in a small image region.
These algorithms include:
- **Variable Number of Gradients (VNG)**[6] interpolation computes gradients near the pixel of interest and uses the lower gradients (representing smoother and more similar parts of the image) to make an estimate. It is used in first versions of dcraw, and suffers from color artifacts.
- **Pixel Grouping (PPG)**[7] uses assumptions about natural scenery in making estimates. It has fewer color artifacts on natural images than the Variable Number of Gradients method; it was introduced in dcraw from rel. 8.71 as "Patterned Pixel Grouping".
- **Adaptive Homogeneity-Directed (AHD)** is widely used in the industry. It selects the direction of interpolation so as to maximize a homogeneity metric, thus typically minimizing color artifacts.[8] It has been implemented in recent versions of dcraw.[9]
- **Aliasing Minimization and Zipper Elimination (AMaZE)**, designed by Emil J. Martinec, is slow but has great performance, especially on low noise captures. Implementations of AMaZE can be found in RawTherapee and darktable.
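To give a flavor of the edge-directed idea that several of these methods share, here is a deliberately simplified toy sketch (not any of the published algorithms above): at each red or blue site it interpolates green along the direction with the smaller gradient instead of blindly averaging all four neighbors.

```python
# Toy edge-directed green interpolation for an RGGB mosaic (float input).
# Real methods such as VNG or AHD are considerably more elaborate; this only
# illustrates the "interpolate along the smoother direction" principle.
import numpy as np

def edge_directed_green(cfa):
    h, w = cfa.shape
    g_mask = np.zeros((h, w), dtype=bool)
    g_mask[0::2, 1::2] = True  # green sites on red rows
    g_mask[1::2, 0::2] = True  # green sites on blue rows

    green = np.where(g_mask, cfa, 0.0)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if g_mask[y, x]:
                continue  # green was measured here
            dh = abs(cfa[y, x - 1] - cfa[y, x + 1])  # horizontal green difference
            dv = abs(cfa[y - 1, x] - cfa[y + 1, x])  # vertical green difference
            if dh < dv:    # smoother horizontally: average left/right greens
                green[y, x] = 0.5 * (cfa[y, x - 1] + cfa[y, x + 1])
            elif dv < dh:  # smoother vertically: average up/down greens
                green[y, x] = 0.5 * (cfa[y - 1, x] + cfa[y + 1, x])
            else:          # no preferred direction: plain four-neighbor average
                green[y, x] = 0.25 * (cfa[y, x - 1] + cfa[y, x + 1]
                                      + cfa[y - 1, x] + cfa[y + 1, x])
    return green
```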
### Video super-resolution/demosaicing
It has been shown that super-resolution and demosaicing are two faces of the same problem and it is reasonable to address them in a unified context.[10] Note that both these problems face the aliasing issue. Therefore, especially in the case of video (multi-frame) reconstruction, a joint super-resolution and demosaicing approach provides the optimal solution.
### Trade-offs
Some methods may produce better results for natural scenes, and some for printed material, for instance. This reflects the inherent problem of estimating pixels that are not definitively known. Naturally, there is also the ubiquitous trade-off of speed versus quality of estimation.
## Use in computer image processing software
When one has access to the raw image data from a digital camera, one can use computer software with a variety of different demosaicing algorithms instead of being limited to the one built into the camera. A few raw development programs, such as RawTherapee and darktable, give the user an option to choose which algorithm should be used. Most programs, however, are coded to use one particular method. The differences in rendering the finest detail (and grain texture) that come from the choice of demosaicing algorithm are among the main differences between various raw developers; often photographers will prefer a particular program for aesthetic reasons related to this effect.
The color artifacts due to demosaicing provide important clues for identifying photo forgeries.[11]
## References
1. Adrian Davies; Phil Fennessy (2001). *Digital imaging for photographers* (Fourth ed.). Focal Press. ISBN 978-0-240-51590-8.
2. Lanlan Chang; Yap-Peng Tan (2006). "Hybrid color filter array demosaicking for effective artifact suppression" (PDF). *Journal of Electronic Imaging*. 15: 2. doi:10.1117/1.2183325. Archived from the original (PDF) on 2009-12-29.
3. Hirakawa, K. & Wolfe, P. J. (2007, September). "Color Filter Array Design for Enhanced Image Fidelity". In *2007 IEEE International Conference on Image Processing* (Vol. 2, pp. II-81). IEEE.
4. Kimmel, R. (1999). "Demosaicing: Image reconstruction from color CCD samples". *IEEE Transactions on Image Processing*, 8(9), 1221–1228.
5. Lanlan Chang; Yap-Peng Tan (2006). "Hybrid color filter array demosaicking for effective artifact suppression" (PDF). *Journal of Electronic Imaging*. 15: 013003. doi:10.1117/1.2183325. Archived from the original (PDF) on 2009-12-29.
6. Ting Chen. "Interpolation using a Threshold-based variable number of gradients". Archived from the original on 2012-04-22.
7. Chuan-kai Lin, Portland State University (2004). "Pixel Grouping for Color Filter Array Demosaicing". Archived from the original on 2016-09-23.
8. Keigo Hirakawa; Thomas W. Parks (2005). "Adaptive homogeneity-directed demosaicing algorithm" (PDF). *IEEE Transactions on Image Processing*. 14 (3): 360–369. doi:10.1109/TIP.2004.838691.
9. Dave Coffin. "Decoding raw digital photos in Linux". Archived 2016-10-19 at the Wayback Machine.
10. Sina Farsiu; Michael Elad; Peyman Milanfar (2006). "Multi-Frame Demosaicing and Super-Resolution of Color Images" (PDF). *IEEE Transactions on Image Processing*. 15 (1): 141–159. doi:10.1109/TIP.2005.860336.
11. YiZhen Huang; YangJing Long (2008). "Demosaicking recognition with applications in digital photo authentication based on a quadratic pixel correlation model" (PDF). *Proc. IEEE Conference on Computer Vision and Pattern Recognition*: 1–8. Archived from the original (PDF) on 2010-06-17.
## External links
- HowStuffWorks: How Digital Cameras Work, More on Capturing Color, with a demosaicing algorithm at work animation
- Interpolation of RGB components in Bayer CFA images, by Eric Dubois
- Color Demosaicing Using Variance of Color Differences by King-Hong Chung and Yuk-Hee Chan
- Hybrid color filter array demosaicking for effective artifact suppression by Lanlan Chang and Yap-Peng Tan
- Image Demosaicing: A Systematic Survey by Xin Li, Bahadir Gunturk and Lei Zhang
- Demosaicking: Color Filter Array Interpolation in Single-Chip Digital Cameras, B. K. Gunturk, J. Glotzbach, Y. Altunbasak, R. W. Schafer, and R. M. Mersereau
- Spatio-Spectral Color Filter Array Design for Enhanced Image Fidelity, Keigo Hirakawa and Patrick J. Wolfe
- Effective Soft-Decision Demosaicking Using Directional Filtering and Embedded Artifact Refinement, Wen-Tsung Huang, Wen-Jan Chen and Shen-Chuan Tai
- Similarity-based Demosaicking by Antoni Buades, Bartomeu Coll, Jean-Michel Morel, Catalina Sbert, with source code and online demonstration
- A list of existing demosaicing techniques
- Interactive site simulating Bayer data and various demosaicing algorithms, allowing custom images(dead)
- Geometry-based Demosaicking by Sira Ferradans, Marcelo Bertamio and Vicent Caselles with source code and reference paper. (dead)
- A comprehensive list of demosaicing codes and binaries available online Archived 2016-04-21 at the Wayback Machine (dead) | true | true | true | null | 2024-10-13 00:00:00 | 2005-11-04 00:00:00 | null | website | wikipedia.org | Wikimedia Foundation, Inc. | null | null |
18,574,558 | http://facesofopensource.com/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
6,793,684 | http://trademarks.justia.com/860/97/bitcoin-86097068.html | BITCOIN Trademark - Serial Number 86097068 :: Justia Trademarks | null | **BITCOIN - Trademark Details**
*Status:*602 - Abandoned-Failure To Respond Or Late Response
**Serial Number**
86097068
**Word Mark**
BITCOIN
**Status**
**602**- Abandoned-Failure To Respond Or Late Response
**Status Date**
2014-09-06
**Filing Date**
2013-10-21
**Mark Drawing**
**4000**- Standard character mark Typeset
**Attorney Name**
**Law Office Assigned Location Code**
M90
**Employee Name**
MCMORROW, RONALD G
**Statements**
**Goods and Services**
A-shirts; Apparel for dancers, namely, tee shirts, sweatshirts, pants, leggings, shorts and jackets; Athletic apparel, namely, shirts, pants, jackets, footwear, hats and caps, athletic uniforms; Athletic shirts; Body shirts; Button down shirts; Button-front aloha shirts; Crop pants; Denims; Dress pants; Flood pants; Gym pants; Jeggings, namely, pants that are partially jeans and partially leggings; Jogging pants; Leather pants; Lounge pants; Pants; Stretch pants; Sweat pants; Yoga pants
**Pseudo Mark**
BIT COIN
**Classification Information**
**International Class**
**025**- Clothing, footwear, headgear. - Clothing, footwear, headgear.
**US Class Codes**
022, 039
**Class Status Code**
**6**- Active
**Class Status Date**
2013-10-26
**Primary Code**
025
**First Use Anywhere Date**
2013-09-28
**First Use In Commerce Date**
2013-09-28
**Correspondences**
**Name**
Alex Kagianaris
**Address**
**Trademark Events**
| Event Date | Event Description |
|---|---|
| 2013-10-24 | NEW APPLICATION ENTERED IN TRAM |
| 2013-10-26 | NEW APPLICATION OFFICE SUPPLIED DATA ENTERED IN TRAM |
| 2013-10-29 | NOTICE OF PSEUDO MARK E-MAILED |
| 2013-11-04 | TEAS CHANGE OF CORRESPONDENCE RECEIVED |
| 2013-11-04 | TEAS VOLUNTARY AMENDMENT RECEIVED |
| 2013-11-06 | ASSIGNED TO LIE |
| 2013-11-06 | TEAS WITHDRAWAL OF ATTORNEY RECEIVED |
| 2013-11-06 | WITHDRAWAL OF ATTORNEY GRANTED |
| 2013-11-07 | TEAS CHANGE OF CORRESPONDENCE RECEIVED |
| 2013-11-07 | TEAS VOLUNTARY AMENDMENT RECEIVED |
| 2013-11-15 | APPLICANT AMENDMENT PRIOR TO EXAMINATION - ENTERED |
| 2014-02-05 | ASSIGNED TO EXAMINER |
| 2014-02-07 | NON-FINAL ACTION WRITTEN |
| 2014-02-07 | NON-FINAL ACTION E-MAILED |
| 2014-02-07 | NOTIFICATION OF NON-FINAL ACTION E-MAILED |
| 2014-09-06 | ABANDONMENT - FAILURE TO RESPOND OR LATE RESPONSE |
| 2014-09-08 | ABANDONMENT NOTICE MAILED - FAILURE TO RESPOND |
5,056,089 | http://arstechnica.com/tech-policy/2013/01/government-formally-drops-charges-against-aaron-swartz/ | Government formally drops charges against Aaron Swartz | Cyrus Farivar | In the wake of the tragic suicide of Aaron Swartz, the United States Attorney decided to formally drop the pending charges against the 26-year-old information hacktivist.
"In support of this dismissal, the government states that Mr. Swartz died on January 11, 2013," wrote Carmen Ortiz, the United States Attorney for the District Court of Massachusetts.
Swartz faced legal charges after he infamously downloaded a huge cache of documents from JSTOR. Over the weekend, Swartz' family said the aggressive legal tactics of the US Attorney's office contributed to his suicide.
The outpouring of grief has continued well into Monday, as even those who didn't know Swartz well (myself included) are shocked that someone so young and talented could feel so much pressure from the justice system that he would be compelled to take his own life. One computer engineering student, John Atkinson, wrote an eloquent post on Sunday that Swartz' death has been on his mind over the last few days—even though he didn't know Swartz personally, nor had he *heard* of Swartz prior to the arrest.
Aaron Swartz is what I wish I was. I am a bright technologist, but I’ve never built anything of note. I have strong opinions about how to improve this world, but I’ve never acted to bring them to pass. I have thoughts every day that I would share with the world, but I allow my fears to convince me to keep them to myself. If I were able to stop being afraid of what the world would think of me, I could see myself making every decision that Aaron made that ultimately led to his untimely death. This upsets me immensely. I am upset that we have a justice system that would persecute me the way it did Aaron. I am upset that I have spent 27 years of my life having made no discernible difference to the world around me. Most of all I am upset that Aaron’s work here is done when there is so much more he could have accomplished.
Swartz's funeral will be held in Highland Park, Illinois on Tuesday, January 15, 2013 at 10am CST—many tech luminaries, family, and friends are expected to be in attendance. | true | true | true | US Attorney: “The government states that Mr. Swartz died on January 11, 2013.”… | 2024-10-13 00:00:00 | 2013-01-14 00:00:00 | article | arstechnica.com | Ars Technica | null | null |
10,229,632 | http://techcrunch.com/2015/09/16/y-combinator-launches-open-office-hours-to-give-diverse-founders-access-to-its-partners/ | Y Combinator Launches "Open Office Hours" To Give Diverse Founders Access To Its Partners | TechCrunch | Megan Rose Dickey | Y Combinator is piloting a new program called “Open Office Hours” to connect with founders from underserved communities and give them direct access to YC partners either in-person or via Skype, depending on where the founders are based.
YC’s office hours effort is led by Michael Seibel, the founder whose startup Twitch sold to Amazon for $970 million and whose startup Socialcam sold to Autodesk for $60 million. The idea for the office hours came from Seibel’s trip around the world to places like Morocco and Southeast Asia.
“It’s weird how you have to leave the states to understand what happens in the states,” Seibel told me. “There’s a ton of interest in tech but the ecosystem isn’t set up right for people to succeed. There’s not a lack of talent or drive.”
When he returned to the U.S., he decided that he wanted to make YC more accessible.
“I’ve definitely heard that [founders from underserved groups] felt like YC wasn’t for them — they felt like it was on the ivory tower, which kills me,” Seibel said.
For now, the office hours are only open to black and Hispanic startup founders. Down the road, YC will host open office hours for women, veterans and founders who live outside of the U.S.
YC isn’t particularly known for having the most diverse group of founders. In YC’s Winter 2015 batch, 7 percent of the companies had a black founder, 5.26 percent of the startups had a Hispanic founder and 21 percent of the companies had a female founder. But the firm has made efforts over the last couple of years to be more inclusive through events like the Female Founders Conference, the YC Fellowship and now, Open Office Hours.
People tried to convince Seibel to host a conference or some other kind of event, Seibel said, but that wasn’t the approach he wanted to take.
“I wanted to do something personal,” Seibel said. “I’ve always felt like if you’re on the outside looking in, a half-hour phone call could mean the world to you. I started putting my email out there and started getting tons of people reaching out to me. I now do office hours with as many people outside of YC as those who are in YC.”
Ideally, YC is looking to chat with founders who have just started their company, are already working on an MVP, and have a team with a technical co-founder.
“That’s absolutely ideal,” Seibel said. “But I’m also imagining we’ll be talking to people who are just thinking about starting a company or are wondering how to recruit a technical co-founder.”
Most of the YC partners will be participating in the open office hours, Seibel said. After the office hours, there’s no guarantee that the startups will have a better chance of getting accepted into YC, but having connections with YC partners certainly won’t hurt their chances.
The first office hours will be next week, September 24-25, and will have space for 50 startup founders. Sign-ups begin today. | true | true | true | Y Combinator is piloting a new program called "Open Office Hours" The aim is to connect with founders from underserved communities and give them direct access to YC partners either in-person or via Skype, depending on where the founders are based. YC's office hours effort is led Michael Seibel, the founder whose startup Twitch sold to Amazon for $970 million and whose startup Socialcam sold to Autodesk for $60 million. | 2024-10-13 00:00:00 | 2015-09-16 00:00:00 | article | techcrunch.com | TechCrunch | null | null |
3,590,040 | http://parseintimate.com | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
29,640,858 | https://ariadne.space/2021/12/21/stop-defining-feature-test-macros-in-your-code/ | stop defining feature-test macros in your code | Ariadne Conill | # stop defining feature-test macros in your code
If there is any change in the C world I would like to see in 2022, it would be the abolition of `#define _GNU_SOURCE`. In many cases, defining this macro in C code can have harmful side effects ranging from subtle breakage to miscompilation, because of how feature-test macros work.
When writing or studying code, you’ve likely encountered something like this:
```
#define _GNU_SOURCE
#include <string.h>
```
Or worse:
```
#include <stdlib.h>
#include <unistd.h>
#define _XOPEN_SOURCE
#include <string.h>
```
The `#define _XOPEN_SOURCE`
and `#define _GNU_SOURCE`
in those examples are defining something known as a *feature-test macro*, which is used to selectively expose function declarations in the headers. These macros are necessary because some standards have conflicting definitions of functions and thus are aliased to other symbols, allowing co-existence of the conflicting functions, but only one version of that function may be defined at a time, so the feature-test macros allow the user to select which definitions they want.
The correct way to use these macros is by defining them at compile time with compiler flags, e.g. `-D_XOPEN_SOURCE` or `-std=gnu11`. This ensures that the declared feature-test macros are consistently defined while compiling the project.
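
As a concrete illustration (a minimal sketch, not taken from the original post), the hypothetical program below keeps its source free of feature-test macros and relies on the build line to request the POSIX.1-2008 profile that declares `strdup()`:

```c
/* strdup_demo.c -- hypothetical example; the feature-test macro lives on the
 * compiler command line, not in the source:
 *
 *     cc -std=c11 -D_POSIX_C_SOURCE=200809L -o strdup_demo strdup_demo.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* strdup() is only declared when an appropriate feature-test macro is in
     * effect; here that comes from -D_POSIX_C_SOURCE on the build line. */
    char *copy = strdup("feature-test macros belong in the build system");
    if (copy == NULL)
        return EXIT_FAILURE;

    puts(copy);
    free(copy);
    return EXIT_SUCCESS;
}
```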
As for the reason why `#define _GNU_SOURCE`
is a thing? It’s because we have documentation which does not correctly explain the role of feature-test macros. Instead, in a given manual page, you might see language like “this function is only enabled if the `_GNU_SOURCE`
macro is defined.”
To find out the actual way to use those macros, you would have to read feature_test_macros(7), which is usually not referenced from individual manual pages, and while that manual page shows the incorrect examples above as bad practice, it understates how much of a bad practice it actually is, and it is one of the first code examples you see on that manual page.
In conclusion, never use `#define _GNU_SOURCE`; always use compiler flags for this.
39,287,645 | https://medium.com/vortechsa/whats-the-point-in-polygon-8e67ded323a2 | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
41,333,905 | https://surfingcomplexity.blog/2021/08/28/contempt-for-the-glue-people/ | Contempt for the glue people | Lorin Hochstein | The clip below is from a lecture from 2015 that then-Google CEO Eric Schmidt gave to a Stanford class.
Here’s a transcript, emphasis mine.
When I was at Novell, I had learned that
there were people who I call “glue people”. The glue people are incredibly nice people who sit at interstitial boundaries between groups, and they assist in activity. And they are very, very loyal, and people love them, andyou don’t need them at all.At Novell, I kept trying to get rid of these glue people, because they were getting in the way, because they slowed everything down. And every time I get rid of them in one group, they’d show up in another group, and they’d transfer, and get rehired and all that.
I was telling Larry [Page] and Sergey [Brin] this one day, and Larry said, “I don’t understand what your problem is. Why don’t we just review all of the hiring?” And I said, “What?” And Larry said, “Yeah, let’s just review all the hiring.” And I said, “Really?” He said, “Yes”.
So, guess what? From that moment on, we reviewed every offer packet, literally every one.
And anybody who smelled or looked like a glue person, plus the people that Larry and Sergey thought had backgrounds that I liked that they didn’t,would all be flagged.
*[Edit (2023-04-02): I originally incorrectly transcribed the word “flagged” as “fired”.]*
I first watched this lecture years ago, but Schmidt’s expressed contempt for the *nice and loyal* *but useless glue people* just got lodged in my brain, and I’ve never forgotten it. For some reason, this tweet about Google’s various messaging services sparked my memory about it, hence this post.
I’ve also been thinking for years about this expression. Removing glue people means fostering silos. I have the impression managers like silos and employees hate it (and employees help themselves getting things done by working with and supporting glue people). So why silos? Aren’t managers interested in getting things done? Does a silo-less org come with less controls and power for management? I don’t really get it. Would love a discussion about it.
I was confused reading Schmidt’s description (or maybe just trying to be generous) whether the “glue” referred to how these people bind different groups (“sit at interstitial boundaries”), or their bureaucratic skills at the expense of the organization (difficult to get rid of them), or their tar-like characteristics (they slow everything down). Three very different things!
But I fear he meant the first kind of glue, which is to my mind an absolutely positive attribute to have and to cultivate, and it baffles me that Google would look for everybody who even smells like that kind of person and fire them without further cause. | true | true | true | The clip below is from a lecture from 2015 that then-Google CEO Eric Schmidt gave to a Stanford class. Here’s a transcript, emphasis mine. When I was at Novell, I had learned that there were … | 2024-10-13 00:00:00 | 2021-08-28 00:00:00 | article | surfingcomplexity.blog | Surfing Complexity | null | null |
39,829,528 | https://techcrunch.com/2024/03/16/linkedin-wants-to-add-gaming-to-its-platform/ | LinkedIn plans to add gaming to its platform | TechCrunch | Ingrid Lunden | LinkedIn, the Microsoft-owned social platform, has made a name for itself primarily as a platform for people looking to network and pick up knowledge for professional purposes, and for recruitment — a business that now has more 1 billion users. Now, to boost the time people are spending on the platform, the company is breaking into a totally new area: gaming.
TechCrunch has learned and confirmed that LinkedIn is working on a new games experience. It will be doing so by tapping into the same wave of puzzle-mania that helped simple games like Wordle find viral success and millions of players. Three early efforts are games called “Queens”, “Inference” and “Crossclimb.”
App researchers have started to find code that points to the work LinkedIn is doing. One of them, Nima Owji, said that one idea LinkedIn appears to be experimenting with involves player scores being organised by places of work, with companies getting “ranked” by those scores.
BREAKING: #LinkedIn is working on IN-APP GAMES!
There are going to be a few different games and companies will be ranked in the games based on the scores of their employees!
Pretty cool and fun, in my opinion! pic.twitter.com/hLITqc8aqw
— Nima Owji (@nima_owji) March 16, 2024
A spokesperson for LinkedIn has confirmed that it is working on gaming, but said there is as yet no launch date.
“We’re playing with adding puzzle-based games within the LinkedIn experience to unlock a bit of fun, deepen relationships, and hopefully spark the opportunity for conversations,” the spokesperson said in a message to TechCrunch. “Stay tuned for more!”
The spokesperson added that the images shared by the researcher on X are not the latest versions.
(Update: some updated pictures have now been supplied, which we’re embedding below.)
LinkedIn’s owner Microsoft is a gaming behemoth. Its games business — which includes Xbox, Activision Blizzard and ZeniMax — brought in $7.1 billion in revenues last quarter, passing Windows revenues for the first time.
The LinkedIn spokesperson declined to say how and if Microsoft is involved in the gaming project at LinkedIn.
Games are regularly among the most popular apps for mobile phones and PCs — both in terms of revenues and engagement — and puzzle-based casual games have been one of the most popular categories in the space among mobile users. Non-gaming platforms have long tapped into these facts to boost their own traffic — arguably a trend that preceded the internet, if you think about the popularity of crosswords and other puzzles in newspapers and magazines.
The New York Times, which acquired the viral hit Wordle in 2022, said at the end of last year that that millions of people continue to play the game, which is now part of a bigger platform of online puzzles and games developed by the newspaper.
Others that have doubled down on gaming have seen mixed results. Facebook, the world’s biggest social network, has been a major driver of social gaming over the years. But in 2022 it shut down its standalone gaming app amid a decline in usage: it’s putting significantly more focus these days on mixed reality experiences and its Meta Quest business.
Over the years, LinkedIn has tried out a number of different new features to boost how and how much people use its platform, with the strategy possibly best described as: “how can we take the most popular tools people are using right now and make them relevant to LinkedIn’s audience and focus on the world of work?”
Those have ranged from efforts in online education and professional development, through to a publishing and news operation, bringing in more video tools and courting creators and influencers. | true | true | true | LinkedIn, the Microsoft-owned social platform, has made a name for itself primarily as a platform for people looking to network and pick up knowledge for | 2024-10-13 00:00:00 | 2024-03-16 00:00:00 | article | techcrunch.com | TechCrunch | null | null |
35,938,529 | https://www.tabletmag.com/sections/news/articles/woke-university-servant-class | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
5,417,756 | http://9to5mac.com/2013/03/21/apple-beefs-up-icloud-apple-id-security-with-two-step-verification/ | Apple beefs up iCloud, Apple ID security with two-step verification - 9to5Mac | Mark Gurman | Today, Apple has rolled out a new two-step verification service for iCloud and Apple ID users. This functionality greatly enhances the security of Apple accounts because it requires users to use a trusted device and an extra security code.
This security code can be sent via SMS or via the Find my iPhone iOS app (if it is installed). Users can now set up two-step authentication on their devices via the Apple ID website. Users need to access the security tab on this website to conduct the setup process.
During the setup process for two-step verification, users can choose which of their iOS devices they want to be “trusted.” This new service will allow only you to be able to reset your password. All of the details below:
Apple’s two-step verification is available in the U.S., U.K., Australia, Ireland, and New Zealand.
Last year, the security of Apple’s online services came into question when technology writer Mat Honan’s digital life was hacked via social engineering. His iCloud account was hacked and accessed. His computer ended up being completely erased via Apple’s Find my Mac restore feature.
Apple requires users to print out a recovery key. This key is the only way to access your iCloud or Apple ID account if you cannot access your iOS device. Apple’s phone support will no longer be able to reset your Apple account password.
Notably, Google’s online services have offered two-step verification for years.
Earlier today, Apple began training its AppleCare phone support employees on the new system. Details of Apple’s training materials are directly below. Additionally, several more details are below.
Below are all of the details about two-step verification:
**What is two-step verification for Apple ID?**
Two-step verification is an optional security feature for your Apple ID. It requires you to verify your identity using one of your devices before you can:
- Sign in to My Apple ID to manage your account.
- Make an iTunes, App Store, or iBookstore purchase from a new device.
- Get Apple ID-related support from Apple.
Turning on two-step verification reduces the possibility of someone accessing or making unauthorized changes to your account information at My Apple ID or making purchases using your account.
**Why should I use two-step verification with my Apple ID?**
Your Apple ID is the key to many important things you do with Apple, such as purchasing from the iTunes and App Stores, keeping personal information up-to-date across your devices with iCloud, and locating, locking, or wiping your devices. Two-step verification is a feature you can use to keep your Apple ID as secure as possible.
**How do I set up two-step verification?**
Set up two-step verification at My Apple ID (appleid.apple.com):
- Select “Manage your Apple ID” and sign in.
- Select “Password and Security.”
- Under Two-Step Verification, select Get Started
**How does it work?**
When you set up two-step verification, you register one or more trusted devices. A trusted device is a device you control that can receive 4-digit verification codes using either Find My iPhone notifications or SMS to verify your identity.
Then, any time you sign in to manage your Apple ID at My Apple ID or make an iTunes, App Store, or iBookstore purchase from a new device, you will need to enter both your password and a 4-digit verification code as shown below.
After you sign in, you can manage your account or make purchases as usual. Without both your password and the verification code, access to your account will be denied.
You will also get a 14-digit Recovery Key for you to print and keep in a safe place. You will use your Recovery Key to regain access to your account if you ever lose access to your devices or forget your password.
**Do I still need to remember any security questions?**
With two-step verification, you do not need to create or remember any security questions. Your identity is verified exclusively via your password, verification codes sent to your trusted devices, and your Recovery Key.
**How do I use Find My iPhone notifications to receive verification codes?**
Find My iPhone notifications can be used to receive verification codes on any iOS device with Find My iPhone turned on. Learn how to set up Find My iPhone.
**Which SMS numbers should I verify for my account?**
You should verify all SMS-enabled phone numbers that you normally use with your iPhone or other mobile phone. You should also consider verifying an SMS-enabled phone number used by someone close to you, such as a spouse or other family member. You can use this number if you are temporarily without access to your own devices.
**Note:** You cannot use landline or web-based (VOIP) phone services for two-step verification.
**Where should I keep my Recovery Key?**
Keep your Recovery Key in a secure place in your home, office, or other location. You should consider printing more than one copy so that you can keep your key in more than one place. This will make it easier to find if you ever need it and ensure that you have a spare copy if one is ever lost or destroyed.
You should not store your Recovery Key on your device or computer since that could give an unauthorized user instant access to it.
**Can I turn off two-step verification after I turn it on?**
Yes. Learn how to turn off two-step verification in this article.
**What do I need to remember when I use two-step verification?**
Two-step verification simplifies and strengthens the security of your account. After you turn it on, there will be no way for anyone to access and manage your account at My Apple ID other than by using your password, verification codes sent your trusted devices, or your Recovery Key. You must be responsible for:
- Remembering your password.
- Keeping your trusted devices physically secure.
- Keeping your Recovery Key in a safe place.
If you lose access to two of these three items at the same time, you could be locked out of your Apple ID account permanently.
In addition, with two-step verification turned on, only you can reset your password, manage your trusted devices, or create a new recovery key.
Apple Support can help you with other aspects of your service, but they will not be able to update or recover these three things on your behalf.
**What if I lose my Recovery Key?**
You can replace your Recovery Key any time by visiting My Apple ID:
- Select “Manage your Apple ID” and sign in with your password and trusted device.
- Select “Password and Security.”
- Under Recovery Key, select Replace Lost Key.
**Note**: When you create a new key, your old Recovery Key is no longer usable. See this article for more information.
**What if I forget my Apple ID password?**
You can reset your password at My Apple ID by using your Recovery Key and one of your trusted devices.
**Note**: Apple Support can **not** reset your password on your behalf. To reset your password, you must have your Recovery Key and access to at least one of your trusted devices. See this article for more information.
**What if I lose or give away one of my trusted devices?**
If you no longer have access to one of your devices, go to My Apple ID to remove that device from your list of trusted devices as soon as possible so that it can no longer be used to help verify your identity.
**What if I no longer have access to any of my trusted devices?**
If you cannot access any of your trusted devices, you can still access your account at My Apple ID using your password and Recovery Key. You should verify a new trusted device as soon as possible. See this article for more information.
**Why was I asked to wait before setting up two-step verification?**
As a basic security measure, Apple does not allow two-step verification setup to proceed if any significant changes have recently been made to your account information. Significant changes can include a password reset or new security questions. This waiting period helps Apple ensure that you are the only person accessing or modifying your account. While you are in this waiting period, you can continue using your account as usual with all Apple services and stores.
Apple will send an email to all the addresses you have on file notifying you of the waiting period and encouraging you to contact Apple Support if you think that someone else has unauthorized access to your account. You will be able to return to set up two-step verification after the date listed on your Apple ID account page and in the email that you receive.
**In which countries is two-step verification available?**
Initially, two-step verification is being offered in the U.S., UK, Australia, Ireland, and New Zealand. Additional countries will be added over time. When your country is added, two-step verification will automatically appear in the Password and Security section of Manage My Apple ID when you sign in to My Apple ID.
I can’t change my email adress
[users can choose which of their iOS devices they want to be “trusted.”]
~
I only have ONE “iOS device. How does THAT work? | true | true | true | Today, Apple has rolled out a new two-step verification service for iCloud and Apple ID users. This functionality greatly enhances... | 2024-10-13 00:00:00 | 2013-03-21 00:00:00 | http://9to5mac.com/wp-content/uploads/sites/6/2013/03/screen-shot-2013-03-21-at-2-31-03-pm.png?w=704 | article | 9to5mac.com | 9To5Mac | null | null |
7,911,618 | http://www.amazon.com/fire-phone | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
4,794,377 | http://arstechnica.com/gadgets/2012/11/samsungs-840-series-ssd-reviewed/ | Samsung’s 840 Series SSD reviewed | Techreport | This story was brought to you by our friends at The Tech Report. You can visit the original story here.
Samsung is the biggest producer of flash memory in the world—by a fair margin. The latest numbers from IHS iSuppli peg Samsung's share of the NAND market at over 42 percent, well ahead of Toshiba's at 25 percent and Micron's at 21 percent. Given those figures, it's no surprise Samsung is also one of the biggest players in the SSD business.
What may surprise you is just how long Samsung has been one of the top dogs in that realm. Everyone's familiar with the 830 Series, which debuted a little more than a year ago and quickly became one of the most desired SSDs among PC enthusiasts. The 830 Series was a follow-up to the 470 Series, which didn't make as big of a splash in hobbyist circles. Samsung had other SSDs before that, but I bet you can't name any of them.
I'm sure you can list a few major PC makers, though. Samsung claims it's been the number-one supplier of SSDs to the big-name PC brands since 2006, a full two years before Intel's first SSD even hit the market.
With the PC establishment seemingly sewn up, Samsung has increasingly targeted customers who buy drives one at a time rather than in lots of a thousand. The size of this market is growing as SSDs become more affordable for upgraders and system builders. Right now, the sweet spot is around $200, where there are numerous drives in the 240-256GB range. One of those options is Samsung's next-generation 840 Series SSD.
While the name suggests this drive is a successor to the 830 Series, the new model isn't quite a direct heir. Yes, the controller has been updated and the NAND is built using a smaller fabrication process. However, Samsung has traded the 830 Series' MLC flash for TLC chips that squeeze an extra bit into each cell. This NAND costs less per gigabyte, which is probably why the 840 Series 250GB rings in at just $180 right now. The implications of that extra bit go beyond pricing, affecting not only the drive's performance, but also its longevity. Let's take a closer look.
## The skinny on TLC
TLC NAND is the defining characteristic of the Samsung 840 Series. Thankfully, I can assure you it has nothing to do with the television network responsible for spawning Here Comes Honey Boo Boo. TLC stands for "triple-level cell" and describes the number of bits (three) stored in each flash cell. The MLC NAND commonly found in consumer-grade SSDs packs two bits per cell, while the SLC flash reserved for uber-expensive server drives has only one bit.
More bits per cell translate to more gigabytes per die, which in turn means more gigabytes per wafer. MLC doubles the storage capacity of SLC, and TLC adds 50 percent on top of that. By adding bits, the capacity of each wafer can be increased without shrinking the fabrication process.
TLC NAND isn't a new technology; flash makers have been cranking it out for years. The triple-stuffed NAND has mostly been confined to devices like thumb drives because its endurance and performance haven't measured up to MLC NAND. For a sense of why that is, it helps to have an understanding of how data is stored in flash memory. Behold my crudely drawn flash cell diagram:
Each flash cell consists of insulated control and floating gates situated above a silicon substrate. If enough voltage is applied to the control gate, electrons will rise up from the substrate and into the floating gate through a process called tunneling. When the voltage to the control gate is cut, the oxide insulator traps the migratory electrons in the floating gate. The presence of those electrons creates a negative charge that changes the threshold voltage required to activate the cell, effectively writing data to it. Applying a sufficiently strong negative charge to the substrate reverses the process, causing electrons in the floating gate to return to the substrate. This mass exodus erases the cell and returns the threshold voltage to its lowest state.
Because there's some variation in the characteristics of individual cells, any data that's written needs to be read for verification. Data is read by asserting a voltage at the control gate and checking for current flow between the source and the drain. Current will flow if the control voltage is higher than the threshold voltage of the cell.
Reading and writing SLC NAND is fairly quick because there are only two values to consider: 0 and 1. Additional control voltages must be applied to account for the 00, 01, 10, and 11 values supported by MLC NAND, and that takes more time. The process is even longer with TLC flash, which can store eight different values between 000 and 111, requiring more control-voltage levels.
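
To put rough numbers on that, here is a back-of-the-envelope sketch (not anything from Samsung) of how each extra bit per cell doubles the number of charge states that must fit inside the same usable voltage window, while capacity only grows linearly:

```c
/* Hypothetical illustration: distinct threshold-voltage states per cell and
 * the relative capacity gain for SLC, MLC, and TLC NAND. */
#include <stdio.h>

int main(void)
{
    const char *names[] = { "SLC", "MLC", "TLC" };

    for (int bits = 1; bits <= 3; bits++) {
        int states = 1 << bits;  /* 2^bits distinguishable charge states */
        /* Capacity per die scales with bits per cell: MLC stores 2x the data
         * of SLC, and TLC adds another 50% on top of MLC. */
        printf("%s: %d bit(s)/cell, %d states, %.1fx SLC capacity\n",
               names[bits - 1], bits, states, (double)bits);
    }
    return 0;
}
```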
Dealing with TLC flash becomes even more challenging as the NAND starts to wear out. Electrons tunneling through the oxide layer can break down the bonds in the insulator and become trapped. The negative charge created by these stranded electrons raises the minimum threshold voltage required to activate the cell, narrowing the voltage range that can be used for programming. The more values that are crammed within that shrinking voltage range, the more difficult it is to distinguish between them. That's why TLC NAND typically tolerates fewer write-erase cycles than MLC flash, which is itself less durable than SLC NAND. Eventually, the tunneling oxide degrades to the point where the cell is no longer viable and has to be retired.
Transitioning to finer fabrication nodes reduces NAND longevity even further because the layer of tunneling oxide gets thinner as the cell geometry shrinks. That detail is particularly important for the Samsung 840 Series, whose TLC chips are fabbed on a next-gen 21-nm process. The 830 Series uses 27-nm NAND.
Should you be concerned? Maybe. Unlike some SSD makers, Samsung doesn't quote endurance ratings on its website or in the official Reviewer's Guide attached to the 840 Series. We've asked the firm on multiple occasions to characterize the drive's endurance, in terms of either the number of write-erase cycles the NAND can survive or the total volume of writes the drive can withstand as a whole, and we're still waiting for a response.
To its credit, Samsung covers the 840 Series with the same three-year warranty that applies to the 830 Series. The firm also says the NAND rolling off its production lines is sorted, and that only the highest quality chips are used in its SSDs. We've already seen Intel cherry pick high-endurance MLC NAND for enterprise-oriented drives that would have otherwise used SLC memory. Samsung appears to be doing something similar with TLC memory and consumer SSDs. Given how popular MLC-based offerings have become with the server crowd, we're not inclined to write off the 840 Series due solely to its use of TLC NAND.
## The rest of the 840 Series SSD
There's more to Samsung's 840 Series SSD than its 21-nm TLC NAND. Heck, there's more to the NAND than the fab process and the number of bits per cell. Like the last two generations of Samsung SSDs, the 840 Series' flash memory conforms to the Toggle DDR NAND specification. This standard was jointly developed with Toshiba and is an alternative to the ONFI flash spec backed by Intel, Micron, and Hynix.
Toggle DDR is capable of executing reads and writes on both the rising and falling edges of a data strobe, hence the DDR moniker. Synchronous ONFI NAND has a similar double data rate driven by an external clock cycle. That external clock is always running, while Toggle DDR turns on its data strobe only when transfers are taking place. As a result, Toggle DDR chips should be more power-efficient at idle than their ONFI counterparts. We'll put that notion to the test a little later in the review when we probe the 840 Series' power consumption.
Although Samsung has equipped its SSDs with Toggle DDR NAND for a couple of years, the 840 Series is the first to use flash chips compliant with version 2.0 of the Toggle standard. The initial spec allowed for transfer rates up to 133 MT/s, a ceiling that's been bumped up to 400 MT/s for Toggle DDR2. That 400 MT/s limit nicely matches the maximum data rate of the ONFI 3.0 specification, by the way.
To support the 840 Series' faster NAND, Samsung has cooked up a new revision of its proprietary, triple-core SSD controller. Like the 830 Series' MCX controller, this new MDX chip has eight memory channels and ARM9-based processor cores. Multiple cores are used to prevent background tasks like garbage collection from slowing the drive's performance with user workloads. The firmware controls which tasks are assigned to the individual cores, and there's some flexibility to shuffle the load around.
Samsung always makes a point of highlighting the fact that its controller technology doesn't use SandForce-style write compression. A little compression mojo might not be a bad idea given the inherent endurance handicap associated with 21-nm TLC NAND, though. Compression allows fewer blocks to be written to the NAND, reducing the write amplification factor and leaving more write-erase cycles in reserve. This approach biases performance toward easily compressible data, but there are ways to reduce the write amplification factor without resorting to compression. SSDs can also wring more life out of their NAND with smarter error correction and signal processing algorithms. Alas, we couldn't pry any specifics from Samsung regarding its use of those techniques.
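
For reference, the write amplification factor mentioned above is simply the ratio of data written to the NAND versus data written by the host; the figures in this sketch are made up, but they show why halving NAND writes (through compression or anything else) stretches the available write-erase cycles:

```c
/* Hypothetical sketch of the write amplification factor (WAF):
 * WAF = bytes written to NAND / bytes written by the host.
 * Lower WAF means fewer program/erase cycles consumed per host write. */
#include <stdio.h>

static double waf(double nand_bytes, double host_bytes)
{
    return nand_bytes / host_bytes;
}

int main(void)
{
    double host = 100.0;  /* GB written by the host (made-up figure) */

    /* Garbage collection overhead alone can push NAND writes above host
     * writes; compression or smarter block management pulls them back down. */
    printf("No mitigation:   WAF = %.2f\n", waf(130.0, host));
    printf("2:1 compression: WAF = %.2f\n", waf(65.0, host));
    return 0;
}
```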
We do, however, know that the 840 Series attempts to extend its lifespan by reserving more NAND capacity as spare area available only to the controller. The drive's 120, 250, and 500GB capacities are a bit lower than the 128, 256, and 512GB you might recall from the 830 Series.
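
Using only the nominal figures quoted here, and ignoring the extra gigabyte-versus-gibibyte headroom every SSD already enjoys, a rough sketch of the additional spare area implied by those lower capacities looks like this:

```c
/* Rough sketch: how much capacity the 840 Series holds back compared with
 * what the 830 Series exposed. Real overprovisioning is higher still once
 * the GiB-vs-GB gap of the raw NAND is taken into account. */
#include <stdio.h>

int main(void)
{
    double user_840[] = { 120.0, 250.0, 500.0 }; /* 840 Series capacities, GB */
    double user_830[] = { 128.0, 256.0, 512.0 }; /* 830 Series capacities, GB */

    for (int i = 0; i < 3; i++) {
        double extra_spare = 100.0 * (user_830[i] - user_840[i]) / user_830[i];
        printf("%.0fGB model: about %.1f%% more capacity reserved as spare area\n",
               user_840[i], extra_spare);
    }
    return 0;
}
```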
Lower capacities usually hint at the presence of RAID-like schemes that protect against physical flash failures. The 840 Series doesn't arrange its NAND in a redundant array, though. Samsung restricts NAND-level redundancy to its enterprise-grade drives. It does, however, endow the 840 Series with full-disk encryption support.
Despite the fact that the 840 Series is aimed at the mainstream market, the family doesn't include anything below 120GB. Perhaps that's because the base model's street price is already quite low, at just $100.
If you think that's low, check out the sequential write speed ratings for the various members of the 840 Series lineup.
Ouch. The 250GB model's 240MB/s maximum sequential write speed doesn't inspire confidence. To put it in perspective, consider that the old Samsung 830 Series 256GB is rated for 400MB/s sequential writes. That drive has the same write speed rating as its 512GB sibling, but there's a sizable gap between the 250 and 500GB versions of the 840 Series. I suspect the larger 840 Series drive uses a greater number of NAND dies, allowing it to exploit more parallelism within the controller. We've asked Samsung to reveal the size and number of NAND dies for each member of the 840 Series but are still awaiting an answer.
For what it's worth, the performance of contemporary MLC SSDs tends to plateau at around 256GB. TLC NAND can serve the same capacity with fewer dies, but it looks like the 840 Series 250GB doesn't have enough dies to take full advantage of the controller.
Of course, the NAND dies aren't the only memory chips onboard the 840 Series. The drive features DRAM cache memory used primarily to store lookup tables. This low-power DDR2 cache also serves as a landing pad for some incoming host writes. The 120GB model has a 256MB cache, while the higher-capacity flavors sport 512MB.
We'd love to show you a gratuitous close-up of the 840 Series' DRAM cache and other chips, but Samsung has locked down the case with pentalobe screws straight out of Apple's playbook. (We're in the process of obtaining the tool required to crack open the case.) In some ways, the drive bears an eerie resemblance to the iPhone 5. The case is black; it's rectangular with rounded corners; and a chamfered edge runs around the rim. At just 7 mm thick, the 840 Series should be slim enough for Apple, too—and for folks looking to upgrade thinner notebooks that can't accommodate the 9.5-mm cases that house most SSDs.
Speaking of upgrades, the SSD Magician utility that accompanies Samsung SSDs is one of the best around. There's secure-erase functionality, of course, plus optimization routines and a built-in benchmark. The integrated flashing utility can reach out and grab updates directly from Samsung, and the overprovisioning can be tweaked to allocate more NAND capacity as spare area. Don't get too excited about that cloning icon, though. It refers to a copy of Norton Ghost 15 that's included only with drives sold as part of upgrade kits.
If you want to keep tabs on the health of the 840 Series' TLC NAND, drilling down in the System Information section of the interface reveals an estimated lifetime display based on SMART data. After hammering our drive with several days of testing, the needle remains close to 100 percent. Speaking of testing, it's time to see how this puppy performs.
## Our testing methods
If you're familiar with our test methods and hardware, the rest of this page is filled with nerdy details you already know; feel free to skip ahead to the benchmark results. For the rest of you, we've summarized the essential characteristics of all the drives we've tested in the table below. Our collection of SSDs includes representatives based on the most popular SSD configurations on the market right now.
| Drive | Interface | Cache | Flash controller | NAND |
|---|---|---|---|---|
| Corsair Force Series 3 240GB | 6Gbps | NA | SandForce SF-2281 | 25nm Micron async MLC |
| Corsair Force Series GT 240GB | 6Gbps | NA | SandForce SF-2281 | 25nm Intel sync MLC |
| Corsair Neutron 240GB | 6Gbps | 256MB | LAMD LM87800 | 25nm Micron sync MLC |
| Corsair Neutron GTX 240GB | 6Gbps | 256MB | LAMD LM87800 | 26nm Toshiba Toggle DDR |
| Crucial m4 256GB | 6Gbps | 256MB | Marvell 88SS9174 | 25nm Micron sync MLC |
| Intel 320 Series 300GB | 3Gbps | 64MB | Intel PC29AS21BA0 | 25nm Intel MLC |
| Intel 335 Series 240GB | 6Gbps | NA | SandForce SF-2281 | 20nm Intel sync MLC |
| Intel 520 Series 240GB | 6Gbps | NA | SandForce SF-2281 | 25nm Intel sync MLC |
| OCZ Agility 4 256GB | 6Gbps | 512MB | Indilinx Everest 2 | 25nm Micron async MLC |
| OCZ Vertex 4 256GB | 6Gbps | 1GB | Indilinx Everest 2 | 25nm Intel sync MLC |
| Samsung 830 Series 256GB | 6Gbps | 256MB | Samsung MCX | 27nm Samsung Toggle MLC |
| Samsung 840 Series 250GB | 6Gbps | 512MB | Samsung MDX | 21nm Samsung Toggle TLC |
| WD Caviar Black 1TB | 6Gbps | 64MB | NA | NA |
We used the following system configuration for testing:
- Corsair Force 3 Series 240GB with 1.3.2 firmware
- Corsair Force Series GT 240GB with 1.3.2 firmware
- Crucial m4 256GB with 010G firmware
- Intel 320 Series 300GB with 4PC10362 firmware
- WD Caviar Black 1TB with 05.01D05 firmware
- OCZ Agility 4 256GB with 1.5.2 firmware
- Samsung 830 Series 256GB with CXM03B1Q firmware
- Intel 520 Series 240GB with 400i firmware
- OCZ Vertex 4 256GB with 1.5 firmware
- Corsair Neutron 240GB with M206 firmware
- Corsair Neutron GTX 240GB with M206 firmware
- Intel 335 Series 240GB with 335s firmware
- Samsung 840 Series 250GB with DXT06B0Q firmware
Thanks to Asus for providing the systems' motherboards and graphics cards, Intel for the CPUs, Corsair for the memory and PSUs, Thermaltake for the CPU coolers, and Western Digital for the Caviar Black 1TB system drives.
We used the following versions of our test applications:
To ensure consistent and repeatable results, the SSDs were secure-erased before almost every component of our test suite. Some of our tests then put the SSDs into a used state before the workload begins, which better exposes each drive's long-term performance characteristics. In other tests, like DriveBench and FileBench, we induce a used state before testing. In all cases, the SSDs were in the same state before each test, ensuring an even playing field. The performance of mechanical hard drives is much more consistent between factory fresh and used states, so we skipped wiping the HDDs before each test—mechanical drives take forever to secure erase.
We run all our tests at least three times and report the median of the results. We've found IOMeter performance can fall off with SSDs after the first couple of runs, so we use five runs for solid-state drives and throw out the first two.
Steps have been taken to ensure that Sandy Bridge's power-saving features don't taint any of our results. All of the CPU's low-power states have been disabled, effectively pegging the 2500K at 3.3GHz. Transitioning in and out of different power states can affect the performance of storage benchmarks, especially when dealing with short burst transfers.
The test systems' Windows desktop was set at 1280x1024 in 32-bit color at a 75Hz screen refresh rate. Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
## HD Tune—Transfer rates
HD Tune lets us present transfer rates in a couple of different ways. Using the benchmark's "full test" setting gives us a good look at performance across the entire drive rather than extrapolating based on a handful of sample points. The data created by the full test also gives us fodder for line graphs, which we've split up by drive maker. You can click the buttons below each line graph to see how the Samsung 840 Series and our mechanical hard drive compare to different SSDs.
To make the graphs easier to interpret, we've greyed out the mechanical drive. The SSD results have been colored by drive maker, as well.
The Samsung 840 Series gets off to a good start, posting the highest sustained read speeds we've ever observed in this test. Samsung takes the top two spots, with the 840 Series besting its predecessor by 20MB/s. Impressive.
Well, we knew this was coming. Samsung's 840 Series is way behind the leaders in HD Tune's write speed test. The old 830 Series is nearly 150MB/s faster here. That said, the 840 Series does beat a handful of other SSDs, including the popular Crucial m4.
HD Tune runs on unpartitioned drives, a setup that isn't always ideal for SSDs. For another perspective, we ran CrystalDiskMark's sequential transfer rate tests, which call for partitioned drives. We used the app's default settings: a 1GB transfer size with randomized data.
Again, the 840 Series leads in the read speed test but gets dominated when we switch to writes. Samsung's new hotness manages to stay ahead of a couple of the other SSDs in the write speed test, including the asynchronous SandForce configuration represented by the Corsair Force Series 3. Beating the Intel 320 Series is hardly worth bragging about, though. That drive's old enough to have a 3Gbps SATA controller.
## HD Tune—Random access times
In addition to letting us test transfer rates, HD Tune can measure random access times. We've tested with four transfer sizes and presented all the results in a couple of line graphs. We've also busted out the 4KB and 1MB transfers sizes into bar graphs that should be easier to read without the presence of the mechanical drive.
TLC memory doesn't appear to slow down the Samsung 840 Series' random read performance. The drive has quicker access times than the 830 Series in the 4KB and 1MB tests. The 840 Series is up among the leaders in the 1MB test but not as competitive in the 4KB one.
Surprisingly, the Samsung 840 Series' random 4KB write performance is pretty decent, just a few microseconds off the quickest access times we've measured in that test. However, the 840 Series fades toward the back of the field in the 1MB test. With the larger transfer size, the drive's write access times are more than a millisecond behind those of the 830 Series.
## TR FileBench—Real-world copy speeds
Concocted by resident developer Bruno "morphine" Ferreira, FileBench runs through a series of file copy operations using Windows 7's xcopy command. Using xcopy produces nearly identical copy speeds to dragging and dropping files using the Windows GUI, so our results should be representative of typical real-world performance. We tested using the following five file sets—note the differences in average file sizes and their compressibility. We evaluated the compressibility of each file set by comparing its size before and after being run through 7-Zip's "ultra" compression scheme.
| File set | Number of files | Average file size | Total size | Compressibility |
|---|---|---|---|---|
| Movie | 6 | 701MB | 4.1GB | 0.5 percent |
| RAW | 101 | 23.6MB | 2.32GB | 3.2 percent |
| MP3 | 549 | 6.48MB | 3.47GB | 0.5 percent |
| TR | 26,767 | 64.6KB | 1.7GB | 53 percent |
| Mozilla | 22,696 | 39.4KB | 923MB | 91 percent |
The names of most of the file sets are self-explanatory. The Mozilla set is made up of all the files necessary to compile the browser, while the TR set includes years worth of the images, HTML files, and spreadsheets behind my reviews. Those two sets contain much larger numbers of smaller files than the other three. They're also the most amenable to compression.
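
If you want to reproduce the compressibility column for your own file sets, the calculation described above (size before and after a 7-Zip pass) is trivial; the sizes in this sketch are made up to roughly match the Mozilla and movie sets:

```c
/* Hypothetical sketch of the compressibility metric described above:
 * percentage reduction in size after an archiver pass. */
#include <stdio.h>

static double compressibility(double original_mb, double compressed_mb)
{
    return 100.0 * (1.0 - compressed_mb / original_mb);
}

int main(void)
{
    /* Made-up sizes: a 923MB source tree that squeezes down to ~83MB. */
    printf("Mozilla-like set: %.0f%% compressible\n",
           compressibility(923.0, 83.0));
    /* A 4.1GB movie that barely budges. */
    printf("Movie-like set:   %.1f%% compressible\n",
           compressibility(4100.0, 4079.5));
    return 0;
}
```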
To get a sense of how aggressively each SSD reclaims flash pages tagged by the TRIM command, we run FileBench with the solid-state drives in two states. We first test the SSDs in a fresh state after a secure erase. They're then subjected to a 30-minute IOMeter workload, generating a tortured used state ahead of another batch of copy tests. We haven't found a substantial difference in the performance of mechanical drives between these two states. Let's start with the fresh-state results.
Although the Samsung 830 Series does reasonably well in our first wave of FileBench tests, the 840 Series can't keep up. The TLC-based drive is slower across the board, often by sizable margins.
Relegated to the bottom half of the pack, the 840 Series trades blows with the Corsair Force Series 3, Crucial m4, and OCZ Agility 4. The Samsung SSD comes out ahead of those drives in most of the tests. Check out the Mozilla and TR results, though. Those file sets are filled with easily compressed data, allowing the Force Series 3 to achieve better performance through write compression.
The Samsung 840 Series' copy speeds slow down when the drive is put into a simulated used state. There isn't much change in its position relative to the competition, but performance drops by as much as 19 percent versus our fresh-state results. The biggest drops come in the Mozilla, MP3, and RAW tests.
Some of the other SSDs experience similar performance drops, so the 840 Series isn't alone. However, its behavior doesn't match the more consistent copy speeds of the 830 Series. I suspect the new drive attempts to preserve precious write-erase cycles by taking a more conservative approach to reclaiming trimmed flash pages.
## TR DriveBench 1.0—Disk-intensive multitasking
TR DriveBench allows us to record the individual IO requests associated with a Windows session and then play those results back as fast as possible on different drives. We've used this app to create a set of multitasking workloads that combine common desktop tasks with disk-intensive background operations like compiling code, copying files, downloading via BitTorrent, transcoding video, and scanning for viruses. The individual workloads are explained in more detail here.
Below, you'll find an overall average followed by scores for each of our individual workloads. The overall score is an average of the mean performance score for each multitasking workload.
Neither of the Samsung SSDs fares particularly well in this test, and the 840 Series is the slower of the two. It has a huge lead over the OCZ Agility 4 but still languishes behind the rest of the solid-state field.
The individual test results are somewhat more encouraging, with the Samsung 840 Series climbing up the standings in a couple of instances. However, it's really slow in the compiling test, which involves writing a lot of small files to the drive.
As much as we like DriveBench 1.0's individual workloads, the traces cover only slices of disk activity. Because we fire the recorded I/Os at the disks as fast as possible, solid-state drives also have no downtime during which to engage background garbage collection or other optimization algorithms. DriveBench 2.0 addresses both of those issues with a much larger trace that spans two weeks of typical desktop activity peppered with multitasking loads similar to those in DriveBench 1.0. We've also adjusted our testing methods to give solid-state drives enough idle time to tidy up after themselves. More details on DriveBench 2.0 are available on this page of our last major SSD round-up.
Instead of looking at a raw IOps rate, we're going to switch gears and explore service times—the amount of time it takes drives to complete an I/O request. We'll start with an overall mean service time before slicing and dicing the results.
The Samsung 840 Series sits near the middle of the field overall, a fair bit behind the lead group of six. Still, the 840 Series is more responsive overall than a number of its peers, including the Crucial m4 and OCZ Agility 4. Care to hazard a guess about what happens when we split service times between reads and writes?
I thought so. With reads, the 840 Series' mean service time is nearly as quick as that of its predecessor. Writes are a different story entirely. The new model's write service time is more than twice that of the old 830 Series. Only the Crucial m4 and Intel 320 Series fare worse, and the Intel drive is pretty close.
There are millions of I/O requests in this trace, so we can't easily graph service times to look at the variance. However, our analysis tools do report the standard deviation, which can give us a sense of how much service times vary from the mean.
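
For readers who want to run a similar analysis on their own traces, this generic sketch (not TR's actual tooling) shows how the mean, the standard deviation, and the under-threshold percentages boil down to a single pass over the recorded service times:

```c
/* Generic sketch: mean, standard deviation, and the share of I/O service
 * times under a threshold, computed from a recorded trace.
 * Build with: cc -O2 service_times.c -lm */
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Made-up service times in milliseconds; a real trace has millions. */
    double ms[] = { 0.08, 0.12, 0.25, 0.31, 0.09, 0.62, 0.11, 105.0 };
    int n = sizeof(ms) / sizeof(ms[0]);

    double sum = 0.0, sumsq = 0.0;
    int under_1ms = 0;
    for (int i = 0; i < n; i++) {
        sum += ms[i];
        sumsq += ms[i] * ms[i];
        if (ms[i] < 1.0)
            under_1ms++;
    }

    double mean = sum / n;
    double stddev = sqrt(sumsq / n - mean * mean);
    printf("mean = %.2f ms, stddev = %.2f ms, %.1f%% under 1 ms\n",
           mean, stddev, 100.0 * under_1ms / n);
    return 0;
}
```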
Once again, there's a notable difference between the 840 Series' competitive position with reads and writes. Only the OCZ Agility 4 and Crucial m4 exhibit more variance in their write service times. With reads, the 840 Series has more consistent results than not only those drives, but also the Corsair Force Series 3, Intel 320 Series, and OCZ Vertex 4.
While we can't easily graph all the service times recorded by DriveBench 2.0, we can sort them. The graphs below plot the percentage of service times that fall below various thresholds. You can click the buttons below the graphs to see how the Samsung 840 Series compares to SSDs from other drive makers.
The distribution of write service times is surprisingly consistent for all the SSDs, and the Samsung 840 Series gets lost in the crowd. Click through the read results, though. The 840 Series has fewer service times under 0.1 milliseconds than most of the other SSDs, yet it's ahead of all its peers at the 0.2-ms threshold and beyond. The majority of the drives start to converge around the 0.6-ms threshold. However, the Crucial m4, OCZ Agility 4, and Samsung 830 series don't catch up until we approach the one-millisecond threshold.
As the distribution plots illustrate, service times over 100 milliseconds make up a tiny fraction of the overall results. Those extremely long service times have the potential to cause the sort of hitching that a user might notice, so we've graphed the individual percentages for each drive.
No problems here. The Samsung 840 Series doesn't have the lowest percentages of extremely long service times, but it comes pretty close with both reads and writes.
Don't let the tiny fractions throw you. Our DriveBench 2.0 trace covers over 10 million I/O requests spread over two weeks of activity. Even a small percentage represents a sizable number of sluggish service times.
IOMeter
Our IOMeter workloads feature a ramping number of concurrent I/O requests. Most desktop systems will only have a few requests in flight at any given time (87 percent of DriveBench 2.0 requests have a queue depth of four or less). We've extended our scaling up to 32 concurrent requests to reach the depth of the Native Command Queuing pipeline associated with the Serial ATA specification. Ramping up the number of requests also gives us a sense of how the drives might perform in more demanding enterprise environments.
We run our IOMeter tests using the fully randomized data pattern, which presents a particular challenge for SandForce's write compression scheme. We'd rather measure SSD performance in this worst-case scenario than using easily compressible data.
Adding the Samsung 840 Series to our catch-all IOMeter graphs made them too difficult to read, so we've split the results by drive maker. You can compare the 840 Series' performance to that of the competition by clicking the buttons below each graph.
The web server test is made up exclusively of read requests, so we'll deal with it separately. Here, the Samsung 840 Series fares exceptionally well, beating its forebear and everything else we've tested. Unlike some of the other SSDs, the 840 Series' transaction rate doesn't fall off appreciably as the number of concurrent I/O requests scales beyond eight.
All of the other IOMeter tests mix read and write requests, and the 840 Series suffers as a result. It's slower than the 830 Series almost across the board and only comes out ahead of the SandForce-based SSDs at higher queue depths. That said, the 840 Series sticks pretty close to the Corsair Force Series 3 at lower queue depths and beats the Crucial m4, Intel 320 Series, and OCZ Agility 4 handily overall.
Boot duration
Before timing a couple of real-world applications, we first have to load the OS. We can measure how long that takes by checking the Windows 7 boot duration using the operating system's performance-monitoring tools. This is actually the first test in which we're booting Windows 7 off each drive; up until this point, our testing has been hosted by an OS housed on a separate system drive.
Level load times
Modern games lack built-in timing tests to measure level loads, so we busted out a stopwatch with a couple of reasonably recent titles.
There's only about a one-second difference between most of the SSDs in our load time tests. The Samsung 840 Series is never more than a fraction of a second out of first place and even takes the lead in Portal 2, albeit by a hair. As we've seen throughout our results, the lone mechanical hard drive doesn't even come close to matching the performance of the SSDs.
Power consumption
We tested power consumption under load with IOMeter's workstation access pattern chewing through 32 concurrent I/O requests. Idle power consumption was probed one minute after processing Windows 7's idle tasks on an empty desktop.
The Samsung 840 Series' idle power consumption is lower than any other SSD we've tested. In fact, the drive draws less than a third of the power required by the 830 Series in this state.
Our IOMeter load is considerably more demanding, as the higher power consumption numbers attest. Here, the Crucial m4 and Intel 320 Series jump to the top of the standings. The Samsung 840 Series still draws about 2W less than its predecessor, though, and its power consumption is comparable to the collection of SandForce-based SSDs in the 2.8-3.1W range.
The value perspective
Welcome to another one of our famous value analyses, which adds capacity and pricing to the performance data we've explored over the preceding pages. We used Newegg prices to even the playing field, and we didn't take mail-in rebates into account when performing our calculations.
First, we'll look at the all-important cost per gigabyte, which we've obtained using the amount of storage capacity accessible to users in Windows.
Unlike Intel's recently introduced 335 Series, which is selling for much more than the suggested retail price, the Samsung 840 Series costs a little bit less than the MSRP. $180 for 250GB translates to a lower cost per gigabyte than any of the other SSDs. Samsung's own 830 Series comes the closest, but it's being phased out and won't be in stock for much longer.
Our remaining value calculation uses a single performance score that we've derived by comparing how each drive stacks up against a common baseline provided by the Momentus 5400.4, a 2.5" notebook drive with a painfully slow 5,400-RPM spindle speed. This index uses a subset of our performance data described on this page of our last SSD round-up.
Well, this isn't surprising. The Samsung 840 Series' comparatively weak write performance keeps it out of the top tier in our overall performance metric. Still, the drive scores higher than a handful of SSDs that includes the Corsair Force Series 3, Crucial m4, Intel 320 Series, and OCZ Agility 4.
Now for the real magic. We can plot this overall score on one axis and each drive's cost per gigabyte on the other to create a scatter plot of performance per dollar per gigabyte. The best place on the plot is the upper-left corner, which combines high performance with a low price.
The scatter plot nicely puts things into perspective. There are lots of drives that offer better performance than the Samsung 840 Series, but they all cost more. In fact, with the exception of the 830 Series, those faster drives cost a fair bit more.
If we consider the 840 Series versus its more direct competition, the drive looks like a pretty sweet deal. It's cheaper than drives in the second tier but scores higher on our performance scale.
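If you'd like to rebuild this kind of chart with current street prices, the recipe is easy to replicate: divide price by formatted capacity, pair that with an overall performance score, and plot. The Python sketch below shows the general idea; the drive entries are placeholders for your own prices and scores, not figures from our data tables.

```
import matplotlib.pyplot as plt

# name: (street price in dollars, Windows-visible capacity in GB, overall performance score)
# These rows are placeholders -- plug in your own numbers.
drives = {
    "Drive A": (180.0, 232.9, 900.0),
    "Drive B": (230.0, 238.5, 1050.0),
}

for name, (price, capacity_gb, score) in drives.items():
    cost_per_gb = price / capacity_gb
    plt.scatter(cost_per_gb, score)
    plt.annotate(name, (cost_per_gb, score))

plt.xlabel("Cost per gigabyte ($)")
plt.ylabel("Overall performance score")
plt.title("Performance per dollar per gigabyte")  # the best values sit toward the upper left
plt.show()
```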
Conclusions
Those expecting the Samsung 840 Series to be a true successor to the 830 Series will no doubt be disappointed. The new drive is slower than its predecessor overall and quite sluggish with both sequential and random writes. Samsung's use of TLC memory is almost certainly to blame for the slower write performance, but the firm hasn't bet the farm on triple-stuffed NAND. The 840 Pro is the true follow-up to the 830 Series, and it features MLC chips with two bits per cell. Of course, the Pro also costs 50% more than the 840 Series. Clearly, it's a different class of drive.
The 840 Series is really a budget SSD. When you look at its performance versus other offerings in that arena—the Crucial m4, OCZ Agility 4, and asynchronous SandForce drives like the Corsair Force Series 3—the 840 Series is very competitive. Even its plodding write speeds don't seem so bad when compared with those direct rivals. Factor in the drive's low idle power consumption and slick utility software, and it starts to look pretty desirable. And don't forget that the 840 Series costs less than all of its peers.
Of course, we still have questions surrounding the 840 Series' endurance; the useful life of the drive is kind of a big deal. Endurance is particularly important considering the 840 Series' NAND, whose finer geometry and extra bit per cell both conspire to shorten the life of the flash. Given those obvious red flags, we wish Samsung were more forthcoming about the 840 Series' expected lifespan.
Samsung's ability to sort the NAND it produces selectively certainly leaves room for optimism. The fact that the company has been dealing with the stringent validation requirements of big-name PC makers for years lends some credibility to the final product, too. So does the three-year warranty, even if it doesn't tell us how much data can be written to the drive during that time.
As long as its NAND holds up, the 840 Series looks like a good option for anyone seeking an inexpensive path to solid-state storage. The slim case and low idle power consumption are especially appealing for notebooks, which may be the perfect environment for this drive. Thing is, the shadow of the 830 Series looms large even as its supply fades. The 256GB model costs 10 bucks more than the equivalent 840 Series but offers a little more storage and a lot more performance.
The 830 Series is still the best option for desktop use... while it lasts. Notebook users more concerned with NAND endurance than battery life are probably better off taking that route, too, but road warriors should give the 840 Series a closer look. Its combination of low power consumption and a low cost per gigabyte is a potent cocktail for mobile systems. | true | true | true | Samsung’s latest is a good drive for the price, but not fantastic. | 2024-10-13 00:00:00 | 2012-11-16 00:00:00 | article | techreport.com | Ars Technica | null | null |
|
21,384,397 | https://en.wikipedia.org/wiki/Bicycle_infantry | Bicycle infantry - Wikipedia | Authority control databases National Germany | # Bicycle infantry
**Bicycle infantry** are infantry soldiers who maneuver on (or, more often, between) battlefields using military bicycles. The term dates from the late 19th century, when the "safety bicycle" became popular in Europe, the United States, and Australia. Historically, bicycles lessened the need for horses, fuel and vehicle maintenance. Though their use has waned over the years in many armies, they continue to be used in unconventional armies such as militias.
## History
### Origins
The development of pneumatic tires coupled with shorter, sturdier frames during the late 19th century led to the investigation of possible military uses for bicycles.[1] To some extent, bicyclists took over the functions of dragoons, especially as messengers and scouts, substituting for horses in warfare.[2] Bicycle units or detachments were in existence by the end of the 19th century in most armies. The United Kingdom employed bicycle troops in militia or territorial units, rather than in regular units. Essentially this reflected the popularity of cycling amongst the civilian population and the perceived value of bicycles in providing increased mobility for home defence units.[3]
In 1887 the first of a series of cyclist maneuvers involving British volunteer units was held.[4][5] In France, several experimental units were created, starting in 1886.[6] They developed folding bicycles, that could be collapsed and carried slung across the backs of their riders, from an early date. By 1900 each French line infantry and chasseur battalion had a cyclist detachment, intended for skirmishing, scouting and dispatch carrying. In the years prior to World War I the availability of an extensive network of paved or gravel roads in western Europe made military cyclists appear a feasible alternative to horse-mounted troops; on the grounds of economy, simplicity of training, relative silence when on the move and ease of logistical support. The Dutch and Belgian armies, with extensive flat terrain within their national boundaries, maintained battalion or company sized units of cyclists. The Italian Bersaglieri expanded their established role as fast-moving light infantry through the extensive use of bicycles from the 1890s onwards. Even the Swiss Army found bicycles to be a useful means of mobility in rough terrain where horse cavalry could not be used. The Imperial Russian Gendarmerie used bicycles with outrigger wheels, to mount patrols along the Siberian Railway before and during the Russo-Japanese War of 1905.(see illustration opposite).
Late in the 19th century the United States Army tested the bicycle's suitability for cross-country troop transport. The most extensive experimentation on bicycle units was carried out by 1st Lieutenant James A. Moss, of the 25th United States Infantry (Colored) (an African American infantry regiment with European American officers). Using a variety of cycle models, Moss and his troops, accompanied by an assistant surgeon, carried out extensive bicycle journeys covering between 800 and 1,900 miles (1287 to 3058 km). In 1896, Moss' Buffalo Soldiers stationed in Montana rode bicycles across roadless landscapes for hundreds of miles at high speed. The "wheelmen" traveled the 1,900 miles to St. Louis, Missouri, in 40 days at an average speed of over 6 mph. A proposed ride from Missoula to San Francisco was not approved and the experiments terminated.[7]
The first known use of the bicycle in combat occurred during the 1895 Jameson Raid, in which cyclists carried messages. In the Second Boer War military cyclists were used primarily as scouts and messengers. One unit patrolled railroad lines on specially constructed tandem bicycles that were fixed to the rails. Several raids were conducted by cycle-mounted infantry on both sides; the most famous unit was the *Theron se Verkenningskorps* (Theron Reconnaissance Corps) or TVK, a Boer unit led by the scout Daniel Theron, whom British commander Lord Roberts described as "the hardest thorn in the flesh of the British advance." Roberts placed a reward of £1,000 on Theron's head—dead or alive—and dispatched 4,000 soldiers to find and eliminate the TVK.[8]
### World Wars
During World War I cycle-mounted infantry, scouts, messengers and ambulance carriers were extensively used by all combatants. Italy used bicycles with the *Bersaglieri* (light infantry units) until the end of the war. German Army *Jäger* (light infantry) battalions each had a bicycle company (*Radfahr-Kompanie*) at the outbreak of the war, and additional units were raised during the war, bringing the total to 80 companies. A number of these were formed into eight *Radfahr-Bataillonen* (bicycle battalions). The British Army had cyclist companies in its divisions, and later two whole divisions became cyclists: 1st and 2nd Cyclist Divisions.
Prior to the start of trench warfare the level terrain in Belgium was well used by military cyclists. Each of the four Belgian carabinier battalions included a company of cyclists, equipped with a brand of folding bicycle named the *Belgica*. A regimental cyclist school gave training in map reading, reconnaissance, reporting and the carrying of verbal messages. Attention was paid to the maintenance and repair of the machine itself.[9]
In its 1937 invasion of China, Japan employed some 50,000 bicycle troops. Early in World War II their southern campaign through Malaya en route to capturing Singapore in 1941 was largely dependent on bicycle-riding soldiers. In both efforts bicycles allowed quiet and flexible transport of thousands of troops who were then able to surprise and confuse the defenders. Bicycles also made few demands on the Japanese war machine, needing neither trucks nor ships to transport them, nor precious petroleum. Although the Japanese were under orders not to embark for Malaya with bicycles, for fear of slowing up amphibious landings, they knew from intelligence that bicycles were plentiful in Malaya and moved to systematically confiscate bicycles from civilians and retailers as soon as they landed. Using bicycles, the Japanese troops were able to move faster than the withdrawing Allied Forces, often successfully cutting off their retreat. The speed of Japanese advance, usually along plantation roads, native paths and over improvised bridges, also caught Allied Forces defending the main roads and river crossings by surprise, by attacking them from the rear. However, there were one or two cases of Australian troops turning the tables on the Japanese by isolating cycle troops from their accompanying motorized forces after blowing up bridges over rivers.
During the Invasion of Poland of 1939, most Polish infantry divisions included a company of bicycle-riding scouts. The equipment of each bicycle company included 196 bicycles, one motorcycle with sidecar, and nine horse-drawn supply carts, plus three to six anti-tank rifles and standard infantry equipment such as machine guns, rifles, pistols, and hand grenades.[10][ circular reference]
The Finnish Army utilized bicycles extensively during the Continuation War and Lapland War. Bicycles were used as a means of transportation in Jaeger Battalions, divisional Light Detachments and regimental organic Jaeger Companies. Bicycle units spearheaded the advances of 1941 against the Soviet Union. Especially successful was the 1st Jaeger Brigade which was reinforced with a tank battalion and an anti-tank battalion, providing rapid movement through limited road network. During winter time these units, like the rest of the infantry, switched to skis. Within 1942–1944 bicycles were also added to regimental equipment pools. During the Summer 1944 battles against the Soviet Union, bicycles provided quick mobility for reserves and counter-attacks. In Autumn 1944 bicycle troops of the Jaeger Brigade spearheaded the Finnish advance through Lapland against the Germans; tanks had to be left behind due to the German destruction of the Finnish road network.
The hastily assembled German Volksgrenadier divisions each had a battalion of bicycle infantry, to provide a mobile reserve.
Allied use of the bicycle in World War II was limited, but included supplying folding bicycles to paratroopers and to messengers behind friendly lines. The term "bomber bikes" came into use during this period, as US forces dropped bicycles out of planes to reach troops behind enemy lines.
By 1939, the Swedish Army operated six bicycle infantry regiments. They were equipped with domestically produced Swedish military bicycles. Most common was the m/42, an upright, one-speed roadster produced by several large Swedish bicycle manufacturers. These regiments were decommissioned between 1948 and 1952, and the bicycles remained for general use in the Army, or were transferred to the Home Guard. Beginning in the 1970s, the Army began to sell these as military surplus. They became very popular as cheap and low-maintenance transportation, especially among students. Responding to its popularity and limited supply, an unrelated company, Kronan, began to produce a modernized version of the m/42 in 1997.[ citation needed]
Gallery (image captions only; images not reproduced):
- Italian Bersaglieri before World War I with folding bicycles strapped to their backs
- German bicycle infantry during World War I
- Most common bicycle used by Polish scout companies assigned to infantry divisions during the German invasion of Poland
- Danish soldiers cycling to the front to fight the Germans during the German invasion of Denmark in 1940
- Wehrmacht troops advancing on bicycles in 1944
### Later uses
Although much used in World War I, bicycles were largely superseded by motorized transport in more modern armies. In the past few decades, however, they have taken on a new life as a "weapon of the people" in guerrilla conflicts and unconventional warfare, where the cycle's ability to carry large loads of supplies (about 400 lb or 180 kg) at the speed of a pedestrian makes it useful for lightly equipped forces. For many years the Viet Cong and North Vietnamese Army used bicycles to ferry supplies down the "Ho Chi Minh trail", avoiding the repeated attacks of United States and Allied bombing raids. When heavily loaded with supplies such as sacks of rice, these bicycles were seldom rideable, but were pushed by a *tender* walking alongside. With especially bulky cargo, tenders sometimes attached bamboo poles to the bike for tiller-like steering (this method can still be seen practiced in China today). Vietnamese "cargo bikes" were rebuilt in jungle workshops with reinforced frames to carry heavy loads over all terrain.[citation needed]
### 21st century
The use of the cycle as an infantry transport tool continued into the 21st century with the Swiss Army's Bicycle Regiment, which maintained drills for infantry movement and attack until 2001, when the decision was made to phase the unit out.[11]
Although the impact of bicycles is limited in modern warfare, the Finnish Defence Forces still trains all conscripts to use bicycles and skis.[12] American paratroopers have jumped folding mountain bikes in several airborne operations.[13]
The Liberation Tigers of Tamil Eelam made use of bicycle mobility during the Sri Lankan Civil War.[ citation needed]
During the Russian invasion of Ukraine, electric bicycles were used for anti-tank purposes.[14]
## See also
- *April 9th* (film)—a 2015 Danish film about a group of Danish bicycle infantry during the German invasion of Denmark (1940).
- Army Cyclist Corps
- Australian Cycling Corps
- Reich Labour Service
- Swiss army bicycle
## Citations
1. Leiser 1991, p. 10.
2. Leiser 1991, pp. 11–16.
3. R. Wilson, page 32, "East York Volunteer Infantry 1859-1908", Fineprint Hull 1982.
4. Chisholm, Hugh, ed. (1911). *Encyclopædia Britannica*. Vol. 07 (11th ed.). Cambridge University Press. pp. 682–685; see page 685, final para.: "Military.... The cycle has also been taken up for military purposes. For this idea the British army is indebted to Colonel A. R. Savile, who in 1887 organized the first series of cycle manœvres in England. Since then military cycling has undergone a great development, not only in the country of its origin but in most others."
5. An article written by Sir Arthur Conan Doyle, published in the Daily Express of 8 February 1910, argued the case for Yeomanry cyclists replacing mounted troops. Prime arguments given were numbers available, tactical advantage, rapidity, and relative cheapness.
6. Leiser 1991, p. 11.
7. The Bicycling Buffalo Soldiers.
8. "Danie Theron". Archived from the original on 8 February 2009. Retrieved 7 October 2007.
9. Pages 21-22, "Handbook of the Belgian Army 1914", prepared by the General Staff, British War Office, ISBN 978-1-78331-094-4.
10. pl:Kompania kolarzy w 1939.
11. Doole, Claire (11 May 2001). "End of road for Swiss army cyclists". BBC News. Retrieved 5 February 2008.
12. "Puolustusvoimien polkupyörillä ei enää ole merkitystä sodan aikana" [The Defence Forces' bicycles no longer matter in wartime]. 15 April 2009.
13. "1st Tactical Studies Group (Airborne), Operation: Dark Claw". *combatreform.org*. 22 April 2011. Retrieved 28 June 2023.
14. Parker, Connor (19 May 2022). "Ukrainians using e-bikes mounted with missiles to blow up Russian tanks". Yahoo! News. Retrieved 20 May 2022.
## General and cited references
- Leiser, Rolf (1991). *Hundert Jahre Radfahrer-Truppe, 1891–1991* [*Bicycle Troops, 1891–1991*] (in German). Bern, Switzerland: Bundesamt für Mechanisierte u. Leichte Truppen. OCLC 885590986.
- Fitzpatrick, Jim (1998). *The Bicycle in Wartime: An Illustrated History*. Washington, D.C.: Brassey's Inc. ISBN 1-57488-157-4.
- Ekström, Gert; Husberg, Ola (2001). *Älskade cykel* [*Beloved Bicycle*] (in Swedish) (1st ed.). Bokförlaget Prisma. ISBN 91-518-3906-7.
- Koelle, Alexandra V. (2010). "Pedaling on the Periphery: The African American Twenty-fifth Infantry Bicycle Corps and the Roads of the American Expansion". *The Western Historical Quarterly*. **41** (3): 305–26. doi:10.2307/westhistquar.41.3.0305.
- Fletcher, Marvin E. (Autumn 1974). "The Black Bicycle Corps". *Arizona and the West*. **16** (3): 219–32. JSTOR 40168452.
## External links
[edit]- Media related to Bicycle troops at Wikimedia Commons | true | true | true | null | 2024-10-13 00:00:00 | 2004-11-04 00:00:00 | website | wikipedia.org | Wikimedia Foundation, Inc. | null | null |
|
1,521,256 | http://money.cnn.com/2010/07/16/autos/toyota_tesla_rav4/index.htm | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
10,395,838 | https://www.facebook.com/topic/Facebook/107885072567744 | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
40,740,396 | https://arstechnica.com/information-technology/2024/06/cyberattacks-have-forced-thousands-of-car-dealerships-to-paper-for-a-second-day/ | Single point of software failure could hamstring 15K car dealerships for days | Kevin Purdy | CDK Global touts itself as an all-in-one software-as-a-service solution that is "trusted by nearly 15,000 dealer locations." One connection, over an always-on VPN to CDK's data centers, gives a dealership customer relationship management (CRM) software, financing, inventory, and more back-office tools.
That all-in-one nature explains why people trying to buy cars, and especially those trying to sell them, have had a rough couple of days. CDK's services have been down, due to what the firm describes as a "cyber incident." CDK shut down most of its systems Wednesday, June 19, then told dealerships that afternoon that it restored some services. CDK told dealers today, June 20, that it had "experienced an additional cyber incident late in the evening on June 19," and shut down systems again.
"At this time, we do not have an estimated time frame for resolution and therefore our dealers' systems will not be available at a minimum on Thursday, June 20th," CDK told customers.
As of 2 pm Eastern on June 20, an automated message on CDK's updates hotline said that, "At this time, we do not have an estimated time frame for resolution and therefore our dealers’ systems will not be available likely for several days." The message added that support lines would remain down due to security precautions. Getting retail dealership services back up was "our highest priority," the message said.
On Reddit, car dealership owners and workers have met the news with some combination of anger and "What's wrong with paper and Excel?" Some dealerships report not being able to do more than oil changes or write down customer names and numbers, while others have sought to make do with documenting orders they plan to enter in once their systems come back online.
"We lost 4 deals at my store because of this," wrote one user Thursday morning on r/askcarsales. "Our whole auto group uses CDK for just about everything and we are completely dead. 30+ stores in our auto group."
Dealerships also get their toner and paper from CDK... | true | true | true | “Cyber incident” affecting 15K dealers could mean outages “for several days.”… | 2024-10-13 00:00:00 | 2024-06-20 00:00:00 | article | arstechnica.com | Ars Technica | null | null |
|
12,286,387 | http://users.metu.edu.tr/utuba/Huntington.pdf | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
9,602,323 | http://haskellbook.com/ | null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
10,978,782 | http://stackoverflow.com/q/35031150/2404470 | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
10,971,691 | https://github.com/remind101/assume-role | GitHub - remind101/assume-role: Easily assume AWS roles in your terminal. | Remind | This tool will request and set temporary credentials in your shell environment variables for a given role.
On OS X, the best way to get it is to use homebrew:
`brew install remind101/formulae/assume-role`
If you have a working Go 1.6/1.7 environment:
`$ go get -u github.com/remind101/assume-role`
On Windows with PowerShell, you can use scoop.sh
```
$ scoop bucket add extras
$ scoop install assume-role
```
Set up a profile for each role you would like to assume in `~/.aws/config`. For example:
`~/.aws/config`:
```
[profile usermgt]
region = us-east-1
[profile stage]
# Stage AWS Account.
region = us-east-1
role_arn = arn:aws:iam::1234:role/SuperUser
source_profile = usermgt
[profile prod]
# Production AWS Account.
region = us-east-1
role_arn = arn:aws:iam::9012:role/SuperUser
mfa_serial = arn:aws:iam::5678:mfa/eric-holmes
source_profile = usermgt
```
`~/.aws/credentials`:
```
[usermgt]
aws_access_key_id = AKIMYFAKEEXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/MYxFAKEYEXAMPLEKEY
```
Reference: https://docs.aws.amazon.com/cli/latest/userguide/cli-roles.html
In this example, we have three AWS Account profiles:
- usermgt
- stage
- prod
Each member of the org has their own IAM user and access/secret key for the `usermgt` AWS Account. The keys are stored in the `~/.aws/credentials` file. The `stage` and `prod` AWS Accounts have an IAM role named `SuperUser`. The `assume-role` tool helps a user authenticate (using their keys) and then assume the privilege of the `SuperUser` role, even across AWS accounts!
Perform an action as the given IAM role:
`$ assume-role stage aws iam get-user`
The `assume-role` tool sets `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` and `AWS_SESSION_TOKEN` environment variables and then executes the command provided.
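Under the hood, this is a single STS `AssumeRole` call. The snippet below is an illustrative Python/boto3 equivalent of the `prod` example, not part of this project; the role ARN and MFA serial are the placeholder values from the sample config above.

```
import boto3

# Rough equivalent of `assume-role prod`, shown with boto3 for illustration only.
session = boto3.Session(profile_name="usermgt")
sts = session.client("sts")

response = sts.assume_role(
    RoleArn="arn:aws:iam::9012:role/SuperUser",
    RoleSessionName="assume-role-example",
    SerialNumber="arn:aws:iam::5678:mfa/eric-holmes",  # only needed when the role requires MFA
    TokenCode=input("MFA code: "),
)

creds = response["Credentials"]
print('export AWS_ACCESS_KEY_ID="%s"' % creds["AccessKeyId"])
print('export AWS_SECRET_ACCESS_KEY="%s"' % creds["SecretAccessKey"])
print('export AWS_SESSION_TOKEN="%s"' % creds["SessionToken"])
```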
If the role requires MFA, you will be asked for the token first:
```
$ assume-role prod aws iam get-user
MFA code: 123456
```
If no command is provided, `assume-role` will output the temporary security credentials:
```
$ assume-role prod
export AWS_ACCESS_KEY_ID="ASIAI....UOCA"
export AWS_SECRET_ACCESS_KEY="DuH...G1d"
export AWS_SESSION_TOKEN="AQ...1BQ=="
export AWS_SECURITY_TOKEN="AQ...1BQ=="
export ASSUMED_ROLE="prod"
# Run this to configure your shell:
# eval $(assume-role prod)
```
Or windows PowerShell:
```
$env:AWS_ACCESS_KEY_ID="ASIAI....UOCA"
$env:AWS_SECRET_ACCESS_KEY="DuH...G1d"
$env:AWS_SESSION_TOKEN="AQ...1BQ=="
$env:AWS_SECURITY_TOKEN="AQ...1BQ=="
$env:ASSUMED_ROLE="prod"
# Run this to configure your shell:
# assume-role.exe prod | Invoke-Expression
```
If you use `eval $(assume-role)` frequently, you may want to create an alias for it:
- zsh
`alias assume-role='function(){eval $(command assume-role $@);}'`
- bash
`function assume-role { eval $( $(which assume-role) $@); }`
- Cache credentials. | true | true | true | Easily assume AWS roles in your terminal. Contribute to remind101/assume-role development by creating an account on GitHub. | 2024-10-13 00:00:00 | 2016-01-26 00:00:00 | https://opengraph.githubassets.com/586f4c4b52cb449c6bd6d368b5b8295f018456df84c6523f1e40f74c53b59e81/remind101/assume-role | object | github.com | GitHub | null | null |
28,498,058 | https://theevilbit.github.io/beyond/ | Beyonds | Csaba Fitzl | -
October 10, 2024
Beyond the good ol' LaunchAgents - 34 - launchd boot tasks
-
June 12, 2024
Beyond the good ol' LaunchAgents - 33 - Widgets
-
September 29, 2023
Beyond the good ol' LaunchAgents - 32 - Dock Tile Plugins
-
May 26, 2023
Beyond the good ol' LaunchAgents - 31 - BSM audit framework
-
May 10, 2023
Beyond the good ol' LaunchAgents - 30 - The man config file - man.conf
-
March 8, 2022
Beyond the good ol' LaunchAgents - 29 - amstoold
-
February 9, 2022
Beyond the good ol' LaunchAgents - 28 - Authorization Plugins
-
February 8, 2022
Beyond the good ol' LaunchAgents - 27 - Dock shortcuts
-
February 5, 2022
Beyond the good ol' LaunchAgents - 26 - Finder Sync Plugins
-
December 15, 2021
Beyond the good ol' LaunchAgents - 25 - Apache2 modules
-
December 2, 2021
Beyond the good ol' LaunchAgents - 24 - Folder Actions
-
November 27, 2021
Beyond the good ol' LaunchAgents - 23 - emond, The Event Monitor Daemon
-
November 24, 2021
Beyond the good ol' LaunchAgents - 22 - LoginHook and LogoutHook
-
October 12, 2021
Beyond the good ol' LaunchAgents - 21 - Re-opened Applications
-
September 22, 2021
Beyond the good ol' LaunchAgents - 20 - Terminal Preferences
-
August 6, 2021
Beyond the good ol' LaunchAgents - 19 - Periodic Scripts
-
June 28, 2021
Beyond the good ol' LaunchAgents - 18 - X11 and XQuartz
-
May 31, 2021
Beyond the good ol' LaunchAgents - 17 - Color Pickers
-
May 30, 2021
Beyond the good ol' LaunchAgents - 16 - Screen Saver
-
May 12, 2021
Beyond the good ol' LaunchAgents - 15 - xsanctl | true | true | true | null | 2024-10-13 00:00:00 | 2024-10-10 00:00:00 | null | website | github.io | Theevilbit Blog | null | null |
10,000,124 | http://www.hopesandfears.com/hopes/future/technology/215907-google-live-streetview | Mindblown: a blog about philosophy. | null | # Mindblown: a blog about philosophy.
-
## Hello world!
Welcome to WordPress. This is your first post. Edit or delete it, then start writing!
Got any book recommendations? | true | true | true | null | 2024-10-13 00:00:00 | 2023-06-30 00:00:00 | null | null | null | null | null | null |
21,850,452 | https://medium.com/1-minute-reads/asking-this-simple-question-will-make-you-understand-your-hidden-motives-6cfc6093ae25 | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
21,079,848 | https://www.autocar.co.uk/slideshow/what-has-mercedes-benz-ever-done-us#1 | Slideshow | null | Small, basic steps can keep your car running longer -- and save you money
A close look into the enticing machines that the Blue Oval keeps hidden away for special occasions
We take a look at what we think are the most important cars launched each year since 1945
The cars that prove you don't need rear-drive balance to have fun
Too many concepts nowadays are just watered-down previews of production cars. But not these – they were all far too mad to ever be sold
We take a look at all the cars that changed the world – one new technology at a time
Need to make a quick getaway? These are some of the greatest heist cars ever used - by fictional and real life criminals
Everyone loves a fast wagon - and in this story we're rounding up the very best you've ever been able to buy
Founded in 1964, and still family-run today, Windy Hill Auto Parts is a treasure trove of rare and interesting nuggets
It doesn't matter how great the rest of the car is, if the engine is a dud, the car will always be a lemon…
View all car reviews | true | true | true | All the latest car news slideshows, direct from Autocar's team of automotive experts | 2024-10-13 00:00:00 | 2024-10-12 00:00:00 | null | null | autocar.co.uk | autocar.co.uk | null | null |
3,737,440 | http://www.4007167187.com | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
13,566,026 | http://www.usatoday.com/story/opinion/2017/01/29/health-care-surgery-india-america-disruption-column/97056938/ | U.S. health care needs a wakeup call from India: Column | Robert Pearl | # U.S. health care needs a wakeup call from India: Column
## Global innovators are doing high-quality $1,800 heart surgery. Why aren't we paying attention?
In Bangalore, India, heart surgeons perform daily state-of-the-art heart surgery on adults and children at an average cost of $1,800. For the record, that’s about 2% of the $90,000 that the average heart surgery costs in the United States. And when it comes to the quality of the heart surgery, the patient outcomes are among the best in the world.
I visited India during Thanksgiving week to meet with Dr. Devi Shetty. He’s the heart surgeon who served as personal physician to Mother Teresa and now runs Narayana Health, with 20 hospitals in India. I wanted see what the United States could learn from medical innovations undertaken halfway around the world, and how he achieves these impressive clinical results at such a low price.
The high quality and low cost represent the type of disruptive innovation that has impacted nearly all industries in the United States, and should serve as a wake-up call for American doctors and hospitals. Disruption is what we call it when lower-priced alternatives to current products and services are introduced. Each time, the incumbents in the field scoff at them and dismiss their long-term impact, only to be blindsided when after a decade these companies dominate.
Ask most Americans about obtaining their health care outside of the United States, and they respond with disdain and negativity. In their mind, the quality and medical expertise available elsewhere is second-rate. Of course, that's exactly what Yellow Cab thought about Uber, Kodak thought about digital photography, General Motors thought about Toyota and Borders thought about Amazon.
To date, doctors and hospitals have been spared the pain of disruption. But that day is ending, and I predict that even people looking for it to happen are gazing in the wrong direction. They expect disruption to be led by companies like Google and Apple or maybe entrepreneurial start-ups. Based on my time in India, they should be looking globally.
Most families in India have no health insurance, and often need to borrow the money to pay for surgery. When it costs $1,800 for heart surgery, Shetty can offer it to only so many children. But if it costs less, he can save more lives. He and the people working in his hospitals are driven to transform health care to save lives.
The day I was there, the teams of surgeons performed 37 heart surgeries on adults and children, including one heart transplant. That translates into about 900 procedures a month, or about what most U.S. university hospitals do a year. His success results from a combination of high volume, advanced technology and a focus on people and performance.
In surgery, the experience of the surgeon and the team are the best predictors of superior clinical outcomes. As you might imagine, given the huge volume of procedures his team performs each day, his hospital’s results are exceptional. And contrary to what Americans may assume, the entire surgical experience is cutting edge, and beyond what is available almost anywhere in the United States.
For example, clinicians use a sophisticated electronic health record they developed, with the information stored on an iPad. Unlike nearly all U.S. EHR systems, the application is so intuitive that minimal physician or nurse training is required. The operating rooms themselves have huge windows leading to protected gardens designed to allow natural sunlight to enter and spur creativity.
The bedside monitoring equipment links with a central computer system, allowing clinical leaders like Devi to measure each day how long it took a physician to intervene for a potentially urgent medical problem. In the United States this often exceeds an hour at night and on the weekend. In India it was eight minutes. The disruptive innovation he has implemented isn’t just lower cost, it's also higher quality.
The hospital's focus on people was widely evident. Embroidered on the white coats of doctors, nurses and staff was the question, "How can I help you?"
Its research facility matched the best university hospitals, with tens of thousands of fully sequenced DNA specimens, and a place to store cancer tissue from today, to be tested against the discoveries of tomorrow. Unlike many in the U.S., it was fully integrated with clinical practice, focusing on the questions and opportunities raised by the treating physicians. Next door was the lab focused on nanotechnology, and down the hall, one working on a vaccine to prevent heart attacks.
Devi took me to Biocon, a drug manufacturing company with its visionary chairman and managing director, Dr. Kiran Mazumdar-Shaw. The manufacturing machines were identical to those used by companies in the U.S. and the cleanliness matched the best in the world. The company’s products were some of the most sophisticated in medicine, including biosimilar drugs for the treatment of cancer, inflammation and arthritis. But one stood out in particular, and that was insulin.
The medication had identical action to what's sold in the U.S. And its preloaded syringes, with a sophisticated calibrating mechanism, were more accurate in dose than any I've seen. What was most remarkable was the price — less than 10% of what it costs Americans with diabetes today. The combination of massive scale and appropriate pricing accounted for the 10-fold difference.
The U.S. can continue for a few more years to provide inefficient medical care and tolerate exorbitantly priced drugs. But at some point, insurance companies will start offering patients and their families all-expense-paid trips around the globe, and maybe even $5,000 to use in the duty-free shop after their surgery and medical treatment are complete.
Most American doctors and hospitals see India as far away, and as a result, underestimate the dangers they face from global disruption. They assume that people won’t be willing to travel halfway around the world for surgery. And they may be right. But before they become too complacent, they should look to the Grand Cayman Island, with its seven-mile white sand beach and tourist culture. There Shetty is building a 2,000 bed hospital for a population of more than 50,000 citizens. Maybe it’s just a coincidence that by airplane Florida is less than an hour away.
*Robert Pearl is CEO of The Permanente Medical Group and a clinical professor of surgery at Stanford University.*
*You can read diverse opinions from our Board of Contributors and other writers on the Opinion front page, on Twitter @USATOpinion and in our daily Opinion newsletter. To submit a letter, comment or column, check our submission guidelines.* | true | true | true | Global innovators are doing high-quality $1,800 heart surgery. Why aren't we paying attention? | 2024-10-13 00:00:00 | 2017-01-29 00:00:00 | article | usatoday.com | USA TODAY | null | null |
|
10,669,207 | http://snooze.getutter.com/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
27,242,868 | https://www.economist.com/books-and-arts/2021/05/22/a-century-ago-ludwig-wittgenstein-changed-philosophy-for-ever | A century ago Ludwig Wittgenstein changed philosophy for ever | null | # A century ago Ludwig Wittgenstein changed philosophy for ever
## Written in the trenches, his “Tractatus Logico-Philosophicus” still baffles and inspires
OF ALL THE innovations that sprang from the trenches of the first world war—the zip, the tea bag, the tank—the “Tractatus Logico-Philosophicus” must be among the most elegant and humane. When the conflict began, this short treatise was a jumble of ideas in the head of a young Austrian soldier and erstwhile philosophy student called Ludwig Wittgenstein. By the time he was released from a prisoner-of-war camp during the Versailles peace conference, it had taken rough shape over a few dozen mud-splattered pages in his knapsack. In 1921 Wittgenstein found a publisher, and philosophy was changed for ever.
This article appeared in the Culture section of the print edition under the headline “The rest is silence”
The glitzy new thriller is both | true | true | true | Written in the trenches, his “Tractatus Logico-Philosophicus” still baffles and inspires | 2024-10-13 00:00:00 | 2021-05-20 00:00:00 | Article | economist.com | The Economist | null | null |
|
18,281,372 | https://medium.com/@tanaygahlot/https-medium-com-tanaygahlot-moving-beyond-the-distributional-model-for-word-representation-b0823f1769f8 | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
39,534,797 | https://github.com/luryus/light-operator | GitHub - luryus/light-operator: Control smart lights with Kubernetes | Luryus | "Let there be light". And there was light.
Clearly, even the Bible recognized the importance of managing lights declaratively. Why not do that in our homes, too?
Light-operator allows managing smart lights with Kubernetes custom resources. This guarantees unparalleled scalability and reliability for switching lights on and off. You just need to run and maintain a Kubernetes cluster and define a few lines of YAML, and you have complete declarative control over your bulbs! Why would anyone use physical light switches or talk to a voice assistant anymore?
With light-operator, you can finally solve configuration drift issues with your lights! Light-operator continuously monitors your light status and reconciles them to achieve the declared state. No more other family members sabotaging your carefully crafted lighting configuration!
- Light states defined as Kubernetes custom resources
- Continuous reconciliation: light-operator will try to keep lights in the declared state
- Switch lights on
- Switch lights off
- Control brightness and color
- Have SmartThings-compatible light bulbs installed and integrated to your SmartThings account
- Generate a SmartThings API token here: https://account.smartthings.com/tokens. Make sure to add all the devices scopes (list all devices, see all devices, manage all devices & control all devices).
- Install light-operator with Helm:
`helm install my-light-operator oci://ghcr.io/luryus/charts/light-operator --set-string "smarthome.smartthings.apiToken=<your api token>"`
- Define the state of your bulb in a YAML file:
```
apiVersion: light-operator.lkoskela.com/v1alpha1
kind: Light
metadata:
  name: living-room-ceiling-1
spec:
  # Grab the device ID from the SmartThings CLI (run `smartthings devices`)
  deviceId: <your device id>
  state: 'SwitchedOn'
  brightness: 80
```
- Apply the definition:
`kubectl apply -f light.yaml`
- See the bulb light up!
light-operator will now continuously monitor the light. If any changes are made to its state, it will be reconciled to match the definition. You can try this by turning the light off via the SmartThings app - in a moment, light-operator will turn it back on!
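The declared state can also be changed programmatically through the Kubernetes API, like any other custom resource. Here's a minimal sketch using the official Python client (not part of light-operator); it assumes the Light above lives in the `default` namespace and that the CRD's plural name is `lights`.

```
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside the cluster
api = client.CustomObjectsApi()

# Declare a new desired state; the operator reconciles the bulb to match it.
api.patch_namespaced_custom_object(
    group="light-operator.lkoskela.com",
    version="v1alpha1",
    namespace="default",        # assumption: the Light was created in "default"
    plural="lights",            # assumption: plural name of the CRD
    name="living-room-ceiling-1",
    body={"spec": {"state": "SwitchedOff"}},
)
```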
Here are all the options available to configure a light. Usable features depend on the capabilities of each light.
```
apiVersion: light-operator.lkoskela.com/v1alpha1
kind: Light
metadata:
  name: living-room-ceiling-1
spec:
  # Device ID: This is generated by SmartThings and identifies the device
  deviceId:
  # Is the light on (SwitchedOn) or (SwitchedOff)
  state: 'SwitchedOn'
  # The brightness level of the bulb (0-100)
  brightness: 80
  color:
    # Either colorTemperature or hueSaturation must be specified
    # Light color temperature, 1-60000 K
    colorTemperature: 2700
    # Color hue+saturation
    hueSaturation:
      # Hue 0-100
      hue: 30
      # Saturation 0-100
      saturation: 60
```
It does! (But tested only with a single device)
Unfortunately, light-operator does not currently support crafting physical light bulbs (PRs welcome)
Sure!*
*With compatible RGB bulbs
It was the easiest way to control the single smart bulb I have (the cheapest non-shady looking one I could find). Support for other smart home platforms can be added (PRs welcome)
This felt like a fun little project to do and play around with kube-rs.
I wouldn't | true | true | true | Control smart lights with Kubernetes. Contribute to luryus/light-operator development by creating an account on GitHub. | 2024-10-13 00:00:00 | 2023-09-09 00:00:00 | https://opengraph.githubassets.com/cf7f45c7d1fd8def7f32258f4ee0d938836a03711f96c79c0daccaea9266a820/luryus/light-operator | object | github.com | GitHub | null | null |
11,596,572 | http://thenextweb.com/google/2016/04/28/google-may-testing-new-app/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
22,245,676 | https://github.com/oilshell/oil/wiki/Compact-AST-Representation | Compact AST Representation | Oils-For-Unix | -
# Compact AST Representation
- Reddit Thread: Representing ASTs as byte strings with small integers rather than pointers
-
sajson:
*Last week, I described a JSON parse tree data structure that, worst case, requires N words for N characters of JSON text.*Single allocation JSON? -
Compiling tree transforms to operate on packed representations at ECOOPLDI 2017
- Conference Presentation on YouTube
- This is much more ambitious, but also more limited. Not only are there no pointers, but the tree traversal is equivalent to a linear scan through memory. It compiles a custom programming language and rearranges the operations so this is true.
- See limitations at the end of the paper.
- Gibbon compiler
- Good references in that paper:
- The Lattner/Adve paper below
- Cache-Conscious Data Structure Layout
- Compact Normal Form in Glasgow Haskell Compiler -- reduces both space and garbage collection pressure.
- Ken Thompson's postfix encoding for regexes (no pointers) is mentioned here: https://swtch.com/~rsc/regexp/regexp1.html
As opposed to array of pointers to heterogeneous structs
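Roughly, the idea in one tiny hypothetical Python sketch (not taken from any of the linked projects): nodes live in parallel arrays, and "pointers" are just small integer indices into them.

```
# Toy illustration: an expression AST kept in parallel arrays ("struct of arrays"),
# with small integer indices standing in for pointers.
import array

kinds  = array.array('B', [0])   # node kind: 0 = unused sentinel, 1 = int literal, 2 = add
values = array.array('q', [0])   # literal payload (unused for interior nodes)
lefts  = array.array('I', [0])   # child "refs" are 32-bit indices; 0 means "none"
rights = array.array('I', [0])

def new_node(kind, value=0, left=0, right=0):
    kinds.append(kind); values.append(value)
    lefts.append(left); rights.append(right)
    return len(kinds) - 1        # the returned "pointer" is just an array index

def evaluate(ref):
    if kinds[ref] == 1:          # literal
        return values[ref]
    if kinds[ref] == 2:          # add
        return evaluate(lefts[ref]) + evaluate(rights[ref])
    raise ValueError("unknown node kind %d" % kinds[ref])

# (1 + 2) + 40
root = new_node(2,
                left=new_node(2, left=new_node(1, value=1), right=new_node(1, value=2)),
                right=new_node(1, value=40))
print(evaluate(root))            # -> 43
```

Note that with 4-byte refs and 8-byte-aligned cells you get the same effect as the JVM's compressed oops described below: a 32-bit ref can still span a 32 GiB heap.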
-
Vertical object layout and compression for fixed heaps
- Implemented in the Virgil Language by Ben Titzer
- Vertical Object Layout basically means transposing objects and fields. Analogous to arrays of structs -> struct of arrays. Except the "array" is every instance of a given type on the heap.
-
Co-dfns APL Compiler Thesis -- Uses an array-based representation of an AST. Section 3.2 is
*The AST and Its Representation*. What's novel is that this is a**parallel**compiler hosted on the CPU. So it's not just compact, but laid out in a way amenable to parallel computation. -
Zig PR to use Struct-of-Arrays and Indices in AST. Proof of concept for other IRs: https://github.com/ziglang/zig/pull/7920
- Video Explanation: Practical Data-Oriented Design - Handmade Seattle 2021
- Downside: you lose type info since you have integers rather than pointers
-
Modernizing Compiler Design for Carbon's Toolchain - CppNow 2023
- Primary Homogeneous Dense Array packed with most ubiquitous data
- Indexed access allows most connections to use adjacency (implicit pointer, no storage required)
- Side arrays for secondary data, referred to with compressed index
- Flyweight Handle wrappers
This is related: more general, but more complex.
-
Compressed Ordinary Object Pointers (Oops) in 64-bit JVMs mean that 32-bit integers can address 4 GiB * 8 = 32 GiB of memory (since objects are at least 8-byte aligned)
- StackOverflow: Trick behind JVM's compressed Oops
- Apparently this is the default since Java 7 circa 2011, when the max heap size is < 32 GiB
- https://shipilev.net/jvm/anatomy-quarks/23-compressed-references/
-
Relative Pointers. Good survey of 4 kinds of relative pointers, including two that have the
`memcpy()`
property:- Virtual Memory Pointers. What we all use on modern OSes.
- Global-based pointers (relative to global variable). There are compiler extensions I didn't know about.
- Offset pointers -- relative to an explicitly defined base, such as the beginning of a file
*Offset pointers can also be replicated in C++ through the use of templates. Similar to the previous user-defined based pointers above, the size of the underlying integer can be any size.*
- Self-Relative/Auto-Relative pointers—relative to its own memory address
- TODO: explore this for Oil. Might be easier to put multiple interpreters in the same address space. Will complicated debugging a lot!
- Reddit: Transparent Pointer Compression for Linked Data Structures by Lattner, Adve -- divide address in to 4 GiB pools so that internal pointers are 32-bit.
- Jai Demo: Relative Pointers
- Object-Relative Addressing: Compressed Pointers in 64-bit Java Virtual Machines
- Reddit: Idea for Tiny Relative Pointers. Using the LSB for external vs. internal might be a good idea. I also wanted a bit for const vs. non-const.
-
CppCon 2019: Steven Pigeon “Small is beautiful: Techniques to minimise memory footprint” -- a lot of C++ metaprogramming tricks with
`constexpr`
, etc. Goes to the extreme on size but says little about speed. No benchmarks and no concrete applications, just small pieces of C++. -
Why Sorbet Is Fast -- Fast static type checker for Ruby written in nice C++. 32-bit IDs for interned strings for fast comparison.
`GlobalState`
and`Ref`
. - Flat Buffers don't need to be deserialized before using them. Also there is possible precursor here with flat_file_memory_management (no code)
-
Pointer Compression in V8 (March 2020). Chrome switched to being a 64-bit process in 2014, so they got this problem. Each v8 instance is limited to around 4 GB, although they still want to have multiple instances in a single address space (I guess for web workers, where all the workers have the same privileges).
*In order to simplify integration of Pointer Compression into existing code, we decided to decompress values on every load and compress them on every store. Thus changing only the storage format of tagged values while keeping the execution format unchanged.**we observed that Pointer Compression reduces V8 heap size up to 43%! In turn, it reduces Chrome’s renderer process memory up to 20% on Desktop.*- More comments on Hacker News
- [smol_world: Compact garbage-collected heap and JSON-like object model] comments on lobste.rs. Library in C++ 20 using relative 32-bit "pointers".
- You Can Have It All: Abstraction and Good Cache Performance. (2017) Divide objects into pools and can do SoA/AoS on individual sets of fields. It seems like this is done manually, which might be too onerous for big apps? The work hasn't been integrated into a compiler yet. Good related work section, citing a lot of the work above.
- https://news.ycombinator.com/item?id=24740889
-
*Miniphases: compilation using modular and efficient tree transformations, Odersky et al.*(2017) -
*TreeFuser: a framework for analyzing and fusing general recursive tree traversals.*(2017)- https://dl.acm.org/doi/abs/10.1145/3133900
- This one said they prototyped it in Clang.
-
*Deforestation: Transforming programs to eliminate trees, Wadler*(1988)
*I think the key point is that some of the frameworks introduce restrictions that make it hard to write a big program like a compiler. There is a hard tradeoff between the amount of fusing that can be done and how expressive the metalanguage is.*
-
Efficient Communication and Collection with Compact Normal Forms - Glasgow Haskell Compiler
- Paper: http://ezyang.com/papers/ezyang15-cnf.pdf
- Slides: http://ezyang.com/slides/ezyang15-cnf-slides.pdf
- Reduces GC pressure
- No outgoing pointers
- doesn't seem to have identity/sharing? Duplication is the default
- Oil Blog Post: What is oheap?
- Oheap V1 was a read-only format for transporting an ASDL tree across the Python/C++ boundary. Though it turned out there wasn't a single clean boundary in the shell's architecture.
-
OHeap2 was a read-write format, for OVM2.
- At first I used it to represent .pyc files (Python's `CodeObject`)
- 4-16-N model: 4 byte refs, 16 byte cells, and N byte slabs.
- See
`ovm2/oheap2.py`
- OHeap2 might still be a good idea, but I judged OVM2 not to be a good idea
- At first I used it to represent .pyc files (Python's | true | true | true | Oils is our upgrade path from bash to a better language and runtime. It's also for Python and JavaScript users who avoid shell! - oils-for-unix/oils | 2024-10-13 00:00:00 | 2024-06-28 00:00:00 | https://opengraph.githubassets.com/15d83ce47b1a08542cc7b44a416ceb37d0af70520d86e5f243fd9dc241746c51/oils-for-unix/oils | object | github.com | GitHub | null | null |
1,506,030 | http://techcrunch.com/2010/07/11/this-mobile-payments-company-may-self-destruct-in-15-minutes/ | This Mobile Payments Company May Self Destruct In 15 Minutes | TechCrunch | Aria Alamalhodaei | C$ cMoney, a mobile payments startup based in Houston, is having quite a week. On Friday, the company announced an impressive funding round of $100 million from private equity firm AGS Capital Group. In total, cMoney has secured $115 million in funding commitments from AGS and a NY-based firm called Kodiak Capital Group.
The latest announcement was picked up by a few news outlets, including the WSJ’s Venture Capital Dispatch which touted cMoney as “a start-up with a funky name [that] has an ambitious plan for replacing consumers’ debit and credit cards with a mobile-payment system.”
Sounds like another promising company in the red-hot mobile payments sector—a market that Nokia, PayPal, and a host of startups like Zong are trying to crack. Except, I don't buy it. There is too much hype, and too little actual product coming from cMoney.
That highly-touted $100 million funding, for instance, turns out not to be a venture round at all, but rather an equity line of credit (a lot more on that later). And the company is being sued by a CEO it recruited. Read on for a tale of a bizarre reverse merger, promises of million-dollar fees, and a young, 28-year-old founder who lists among her accomplishments and qualifications her childhood sports activities, including "dance, gymnastics, soccer, softball and tennis."
**What is cMoney?**
The company, which launched in March 2009, is developing a mobile application that will let users send or receive money via temporary connections. According to reports, the application will be linked to a users’ credit cards and accounts. When s/he wants to make a purchase, the user logs-in to the app, submits information on the transaction, punches in a password and retrieves a code. The user will be able to give that code to a cashier for payment and after 15 minutes the code will expire.
Like many startups, the company’s product is not ready for market, it’s in the “demonstration” phase. However, cMoney has struggled to meet its own deadlines. According to an April press release the launch was scheduled for this summer, then a May SEC filing predicted a fall release, and now, in its latest press release, the company is predicting a 2011 launch.
Beyond the delays, the structure of the startup itself, is odd: cMoney acquired a company called Bonfire in May 2010 in a reverse merger transaction. According to an S-1 filing, Bonfire had no active operations and was previously a business based on “producing, marketing and selling audio recordings of folk tales, fairy tales and other children’s stories under the brand name ‘Bonfire Tales.’”
Bonfire also disclosed in late March, just before the merger, that it had “no cash and a working capital deficit of $28,845.”
In fact, there was only one employee, the director, Tim DeHerrera, who was scheduled to resign after the deal.
Unless cMoney wanted to tap Bonfire’s rich trove of children’s fairytales, why would a mobile payments company purchase a seemingly defunct company with only debt, no synergies and virtually no employees? The only real interest here, seems to be Bonfire’s status as a publicly traded company, trading over the counter as a penny stock (under ticker symbol PINK:BNFR). Reverse mergers are typically done by companies who cannot go public in a more straightforward fashion.
**The Case Of The Missing CEO**
On May 6, 2010, cMoney named Lawrence Krasner to the position of CEO. Krasner is a fairly seasoned veteran of Wall Street: a former Senior Vice President of Lehman Brothers, a Senior Manager of Ernst & Young, and a VP of JPMorgan. Sounds like a solid coup for a fledgling startup, except Krasner never spent a single day as cMoney’s CEO, according to the company’s SEC filing.
Instead, Krasner is building a legal case against the company, suing them for damages in excess of $700,000 and 1,950,000 in fully vested shares of the company’s common stock, according to a filing. I reached out to Krasner on Friday, but he said he could not comment at the time. The company says the claims “are without merit.”
The founder of the company, Jennifer Pharris, has assumed the role of CEO and Chairman of cMoney. With the exception of a thin LinkedIn profile and her official bio on cMoney’s website, there’s very little information available on Pharris online. However, that official bio is worth a read— if you can make it through the syntactical quagmire:
Jennifer Lynn Pharris, 27 , is a College Girl who turned to the Corporate World with her quest to save time and money in the new age revolution of instant gratification, she designed a revolutionary product called C$ cMoney !
Jennifer was born in Dallas, Texas lived there for (12) twelve years with her family and was very active child in dance, gymnastics, soccer, softball and tennis winning top awards and always known as a leader.
Her high school career outside Memphis, TN graduated her with Top Honors and was recognized as Who’s Who in American High School Students and active member of SADD, FTA, FHA, and DECA and received a scholarship awarded through FTA.
While attending college at Middle Tennessee State University outside of Nashville, TN, she concentrated her degree plan in business, marketing and economics. Jennifer was a Kappa Delta legacy member wherein her mother and grandmother were former members as well. Jennifer was active in her community and various social affairs.
Jennifer while attending college began work on dream concept which was to eliminate the need for Mom and Dad to wire transfer money to her account when she needed money at College, but Jennifer created a better way, just send the money to me on my cell phone. That small step back in college and some (4) four years later has evolved a new company, C$ cMoney and worldwide patents.
Jennifer and her love for Houston moved back home to the Houston Area after college with the one plan in mind which was to follow her dream and build her company. Jennifer gives all the credit to her relationship with the “Lord” and her active time with her church, Fellowship of The Woodlands which helped her to reach her goal. Jennifer’s goal was to make Houston the New Top Technology City with her new revolutionary product called C$ cMoney !
Highlights include the random capitalization of words like “College Girl,” “Mom” and “Corporate World,” and my favorite (for its casual, mid-sentence transition from the third to the first person):
Jennifer while attending college began work on dream concept which was to eliminate the need for Mom and Dad to wire transfer money to her account when she needed money at College, but Jennifer created a better way,
just send the money to me on my cell phone.
In fact there are typos and run-on sentences all over the website, with words like “trough” and “prividing” popping up on the investor relations and FAQ pages. While it is a bit unfair to judge them in this manner, this is supposed to be a startup valued north of $115 million. It’s hard to comprehend how any investor would look at this website and agree to plunk down $100 million— or even a million dollars. Which brings us to that $100 million question.
**Why did AGS and Kodiak Capital Group invest in cMoney? **
At the time of this post, the firms had not responded to my requests for comment. With no ready-for-market product, six employees, a poorly managed website, a lawsuit from an almost-CEO, and apparently more than $2.5 million in accumulated losses on the books and just $47,831 cash on hand (according to an S-1 May 27th filing), cMoney doesn’t seem like a prime candidate for a $100 million cash infusion.
Then again, cMoney may not see a lot of that cash after all.
AGS’s funding commitment is contingent on several conditions, and the structure of the deal itself is bizarre. Under the agreement, cMoney has the right to compel AGS to purchase its common stock shares, to “put” shares to AGS. Depending on the price of the share at the time, AGS will get a discount— for example, with cMoney currently trading at 3 cents a share, AGS would be eligible for a 50% discount, paying 1.5 cents a share. However, the key part of this agreement is that cMoney “can not put shares to AGS if such put would cause AGS to own in excess of 4.99% of our outstanding shares of common stock.” In other words, once AGS hits that 4.99% bar, cMoney can no longer compel them to purchase the common stock, and with the company currently trading at a market cap of $1.36 million — AGS is only on the hook for *maybe* a couple hundred thousand dollars. Of course, AGS could elect to buy more shares, but I’m going to make a bold prediction here that the final investment will be laughable compared to the promise of $100 million.
Here’s the pertinent excerpt from cMoney’s July 7th filing:
On July 7, 2010, the Corporation has entered into a Reserve Equity Financing Agreement (the “AGS Equity Line of Credit”) and Registration Rights Agreement with AGS Capital Group, LLC. (“AGS”). Pursuant to the AGS Equity Line of Credit, we have the right to “put” to AGS (the “AGS Put Right”) up to $100 million in shares of our common stock (i.e., we can compel AGS to purchase our common stock at a pre-determined formula) for a purchase price equal to from 50% to 95% of the lowest closing bid price of our common stock during the five consecutive trading days immediately following the date of our notice to AGS of our election to put shares pursuant to the AGS Equity Line of Credit. That percent discount is: 50% if the price of our common stock is less than $0.11 per share, 25% if the price of our common stock is between $0.11 and $0.19, 85% if the price of our common stock is between $0.20 and $0.74 per share, 10% if the price of our common stock is between $0.75 and $0.99 per share, and 5% the price of our common stock is above $1.00 per share. The maximum dollar value that we will be permitted to put at any one time will be, at our option, either: (a) $100,000 or (b) 250% of the product of the average daily volume in the U.S. market of our common stock for the ten (10) trading days prior to the notice of our put, multiplied by the average of the ten (10) daily closing bid prices immediately preceding the date of the put notice. However, we must withdraw our put if the lowest closing bid price of our common stock during the five consecutive trading days immediately following the date of our notice to AGS of our election to put shares pursuant to the AGS Equity Line of Credit is less than 98% of the average closing price of our common stock for the ten trading days prior to the date of such notice. We can not put shares to AGS if such put would cause AGS to own in excess of 4.99% of our outstanding shares of common stock.
And why does a young startup, with such a small staff, need a huge cash infusion? It’s unclear how the money will be used to build the actual mobile application, but at the very least, we know they are spending a large chunk of change on salaries for the executive team. The chief operating officer, chief marketing officer and chief financial officer have a base salary of $250K per year, the chief technology officer is promised $215K.
Meanwhile, cMoney directly pays Pharris $500 a week, but (and here is where it gets interesting) her other company Global 1— which is not mentioned in her profile—receives $41,543 per year for 5 years of management services. As explained in a recent filing, these “management services are expect to be provided by Ms. Harris.”
More significantly, in April of this year, cMoney entered into a “technology license agreement with Global 1 Enterprises, Inc. Under the agreement, C$ cMoney will pay Global 1 $1,500,000 per year for an exclusive and non-transferable license to certain intellectual property included trademarks and patents. Global 1 is owned by Jennifer Pharris.” Thus, to spell it out slowly, Pharris is charging cMoney hefty fees for the rights to license the technology from her other company. It’s hard to estimate what her cut is from this deal, but I imagine it’s a substantial amount (there was no reference online of any other executives at Global 1). For a company that is still bleeding cash, it’s strange for Pharris to charge such a high premium to access technology.
Who knows, cMoney could have a brilliant project underneath the jumble, but at the very least, this strange constellation of facts and questions paints a dubious picture. I have asked several people in the mobile payments space, including those who cover the sector for investment firms, whether they have ever heard of cMoney or AGS— no one said yes. As always, buyer beware.
|
2,419,548 | http://www.pcmag.com/article2/0,2817,2383212,00.asp | Google's Andy Rubin Defends Openness of Android Platform | PCMag Staff April 7 | # Google's Andy Rubin Defends Openness of Android Platform
Android fragmentation has been in the news again lately, prompting Andy Rubin, Google's vice president of engineering, to pen a Wednesday blog post in which he defended Android and insisted that the company remains committed to developing an open platform.
Reports to the contrary are "misinformation," Rubin wrote.
"We don't believe in a 'one size fits all' solution," he continued. "Miraculously, we are seeing the platform take on new use cases, features and form factors as it's being introduced in new categories and regions while still remaining consistent and compatible for third party applications."
Rubin's comments come after a report this week found that 55 percent of Android developers find OS fragmentation to be a meaningful or huge problem. But Android isn't exactly suffering. This week also saw the release of reports that said Android-based smartphones now lead the U.S. market, and could command nearly half of worldwide smartphone OS market share by the end of 2012.
The ability to customize the Android OS "enables device makers to support the unique and differentiating functionality of their products," Rubin wrote.
Google has basic compatibility requirements if handset makers want to include Google apps on their devices, and Google has an anti-fragmentation program in place, but "there are not, and never have been, any efforts to standardize the platform on any single chipset architecture," he said.
Google made headlines last month when it said it would delay providing the Honeycomb source code to smaller phone manufacturers for an undisclosed period of time. Google said that Honeycomb was designed for tablets, not phones, and that it had more work to do before Honeycomb was released in an open-source format.
PCMag's mobile analyst has argued that this forces the question of whether anyone can create a great open-source mobile UI.
On Wednesday, Rubin denied that that announcement represents a change in strategy, and insisted that Google "remain[s] firmly committed to providing Android as an open source platform across many device types."
"As I write this the Android team is still hard at work to bring all the new Honeycomb features to phones," he wrote. "As soon as this work is completed, we'll publish the code."
For PCMag Editor Lance Ulanoff's take on the OS, see . | true | true | true | Android fragmentation has been in the news again lately, prompting Andy Rubin, Google's vice president of engineering, to pen a Wednesday blog post in which he defended Android and insisted that the company remains committed to developing an open platform. | 2024-10-13 00:00:00 | 2011-04-07 00:00:00 | website | pcmag.com | PCMAG | null | null |
|
22,756,393 | https://briarproject.org/ | Secure messaging anywhere | null | # Secure messaging anywhere
Censorship-resistant peer-to-peer messaging that bypasses centralized servers. Connect via Bluetooth, Wi-Fi or Tor, with privacy built-in.
**Latest Release:** Briar 1.5.13 (August 31, 2024)
End-to-end encryption and decentralized technology ensure conversations stay private and available even without Internet access.
Peer-to-peer encrypted messaging and forums
Messages are stored securely on your device, not in the cloud
Connect directly with nearby contacts, even without Internet
Free and open source software | true | true | true | Secure messaging, anywhere | 2024-10-13 00:00:00 | 2013-05-01 00:00:00 | null | null | null | null | null | null |
3,528,727 | http://www.open-electronics.org/how-to-find-the-location-with-gsm-cells/ | How to find the location with GSM cells - Open Electronics | Boris Landoni | - How to Adjust X and Y Axis Scale in Arduino Serial Plotter (No Extra Software Needed)Posted 2 weeks ago
- Elettronici Entusiasti: Inspiring Makers at Maker Faire Rome 2024Posted 2 weeks ago
- makeITcircular 2024 content launched – Part of Maker Faire Rome 2024Posted 3 months ago
- Application For Maker Faire Rome 2024: Deadline June 20thPosted 4 months ago
- Building a 3D Digital Clock with ArduinoPosted 9 months ago
- Creating a controller for Minecraft with realistic body movements using ArduinoPosted 10 months ago
- Snowflake with ArduinoPosted 10 months ago
- Holographic Christmas TreePosted 10 months ago
- Segstick: Build Your Own Self-Balancing Vehicle in Just 2 Days with ArduinoPosted 11 months ago
- ZSWatch: An Open-Source Smartwatch Project Based on the Zephyr Operating SystemPosted 12 months ago
# How to find the location with GSM cells
Discover how to find the coordinates from GSM cells!
[iframe_loader src=”http://www.open-electronics.org/celltrack/” height=”2150″ width=”100%”]
**The PHP code to find the coordinates from GSM cells**
<html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="content-type" content="text/html; charset=utf-8"/> <title>Tracking cell by Boris Landoni Example</title> <?php function geturl() { if ($_REQUEST["myl"] != "") { $temp = split(":", $_REQUEST["myl"]); $mcc = substr("00000000".($temp[0]),-8); $mnc = substr("00000000".($temp[1]),-8); $lac = substr("00000000".($temp[2]),-8); $cid = substr("00000000".($temp[3]),-8); } else { $hex = $_REQUEST["hex"]; //echo "hex $hex"; if ($hex=="1"){ //echo "da hex to dec"; $mcc=substr("00000000".hexdec($_REQUEST["mcc"]),-8); $mnc=substr("00000000".hexdec($_REQUEST["mnc"]),-8); $lac=substr("00000000".hexdec($_REQUEST["lac"]),-8); $cid=substr("00000000".hexdec($_REQUEST["cid"]),-8); $nlac[0]=substr("00000000".hexdec($_REQUEST["lac0"]),-8); $ncid[0]=substr("00000000".hexdec($_REQUEST["cid0"]),-8); $nlac[1]=substr("00000000".hexdec($_REQUEST["lac1"]),-8); $ncid[1]=substr("00000000".hexdec($_REQUEST["cid1"]),-8); $nlac[2]=substr("00000000".hexdec($_REQUEST["lac2"]),-8); $ncid[2]=substr("00000000".hexdec($_REQUEST["cid2"]),-8); $nlac[3]=substr("00000000".hexdec($_REQUEST["lac3"]),-8); $ncid[3]=substr("00000000".hexdec($_REQUEST["cid3"]),-8); $nlac[4]=substr("00000000".hexdec($_REQUEST["lac4"]),-8); $ncid[4]=substr("00000000".hexdec($_REQUEST["cid4"]),-8); $nlac[5]=substr("00000000".hexdec($_REQUEST["lac5"]),-8); $ncid[5]=substr("00000000".hexdec($_REQUEST["cid5"]),-8); }else{ //echo "lascio dec"; $mcc = substr("00000000".$_REQUEST["mcc"],-8); $mnc = substr("00000000".$_REQUEST["mnc"],-8); $lac = substr("00000000".$_REQUEST["lac"],-8); $cid = substr("00000000".$_REQUEST["cid"],-8); $nlac[0]=substr("00000000".($_REQUEST["lac0"]),-8); $ncid[0]=substr("00000000".($_REQUEST["cid0"]),-8); $nlac[1]=substr("00000000".($_REQUEST["lac1"]),-8); $ncid[1]=substr("00000000".($_REQUEST["cid1"]),-8); $nlac[2]=substr("00000000".($_REQUEST["lac2"]),-8); $ncid[2]=substr("00000000".($_REQUEST["cid2"]),-8); $nlac[3]=substr("00000000".($_REQUEST["lac3"]),-8); $ncid[3]=substr("00000000".($_REQUEST["cid3"]),-8); $nlac[4]=substr("00000000".($_REQUEST["lac4"]),-8); $ncid[4]=substr("00000000".($_REQUEST["cid4"]),-8); $nlac[5]=substr("00000000".($_REQUEST["lac5"]),-8); $ncid[5]=substr("00000000".($_REQUEST["cid5"]),-8); } } //echo "MCC : $mcc <br> MNC : $mnc <br>LAC : $lac <br>CID : $cid <br>"; return array ($mcc, $mnc, $lac, $cid, $nlac, $ncid); } function decodegoogle($mcc,$mnc,$lac,$cid) { $mcch=substr("00000000".dechex($mcc),-8); $mnch=substr("00000000".dechex($mnc),-8); $lach=substr("00000000".dechex($lac),-8); $cidh=substr("00000000".dechex($cid),-8); echo "<tr><td>Hex </td><td>MCC: $mcch </td><td>MNC: $mnch </td><td>LAC: $lach </td><td>CID: $cidh </td></tr></table>"; $data = "\x00\x0e". // Function Code? "\x00\x00\x00\x00\x00\x00\x00\x00". //Session ID? "\x00\x00". // Contry Code "\x00\x00". // Client descriptor "\x00\x00". // Version "\x1b". // Op Code? "\x00\x00\x00\x00". // MNC "\x00\x00\x00\x00". // MCC "\x00\x00\x00\x03". "\x00\x00". "\x00\x00\x00\x00". //CID "\x00\x00\x00\x00". //LAC "\x00\x00\x00\x00". //MNC "\x00\x00\x00\x00". //MCC "\xff\xff\xff\xff". // ?? "\x00\x00\x00\x00" // Rx Level? 
; $init_pos = strlen($data); $data[$init_pos - 38]= pack("H*",substr($mnch,0,2)); $data[$init_pos - 37]= pack("H*",substr($mnch,2,2)); $data[$init_pos - 36]= pack("H*",substr($mnch,4,2)); $data[$init_pos - 35]= pack("H*",substr($mnch,6,2)); $data[$init_pos - 34]= pack("H*",substr($mcch,0,2)); $data[$init_pos - 33]= pack("H*",substr($mcch,2,2)); $data[$init_pos - 32]= pack("H*",substr($mcch,4,2)); $data[$init_pos - 31]= pack("H*",substr($mcch,6,2)); $data[$init_pos - 24]= pack("H*",substr($cidh,0,2)); $data[$init_pos - 23]= pack("H*",substr($cidh,2,2)); $data[$init_pos - 22]= pack("H*",substr($cidh,4,2)); $data[$init_pos - 21]= pack("H*",substr($cidh,6,2)); $data[$init_pos - 20]= pack("H*",substr($lach,0,2)); $data[$init_pos - 19]= pack("H*",substr($lach,2,2)); $data[$init_pos - 18]= pack("H*",substr($lach,4,2)); $data[$init_pos - 17]= pack("H*",substr($lach,6,2)); $data[$init_pos - 16]= pack("H*",substr($mnch,0,2)); $data[$init_pos - 15]= pack("H*",substr($mnch,2,2)); $data[$init_pos - 14]= pack("H*",substr($mnch,4,2)); $data[$init_pos - 13]= pack("H*",substr($mnch,6,2)); $data[$init_pos - 12]= pack("H*",substr($mcch,0,2)); $data[$init_pos - 11]= pack("H*",substr($mcch,2,2)); $data[$init_pos - 10]= pack("H*",substr($mcch,4,2)); $data[$init_pos - 9]= pack("H*",substr($mcch,6,2)); if ((hexdec($cid) > 0xffff) && ($mcch != "00000000") && ($mnch != "00000000")) { $data[$init_pos - 27] = chr(5); } else { $data[$init_pos - 24]= chr(0); $data[$init_pos - 23]= chr(0); } $context = array ( 'http' => array ( 'method' => 'POST', 'header'=> "Content-type: application/binary\r\n" . "Content-Length: " . strlen($data) . "\r\n", 'content' => $data ) ); $xcontext = stream_context_create($context); $str=file_get_contents("http://www.google.com/glm/mmap",FALSE,$xcontext); if (strlen($str) > 10) { $lat_tmp = unpack("l",$str[10].$str[9].$str[8].$str[7]); $lat = $lat_tmp[1]/1000000; $lon_tmp = unpack("l",$str[14].$str[13].$str[12].$str[11]); $lon = $lon_tmp[1]/1000000; $raggio_tmp = unpack("l",$str[18].$str[17].$str[16].$str[15]); $raggio = $raggio_tmp[1]/1; } else { echo "Not found!"; $lat = 0; $lon = 0; } return array($lat,$lon,$raggio); } list($mcc,$mnc,$lac,$cid, $nlac, $ncid)=geturl(); echo "<table cellspacing=30><tr><td>Dec</td><td>MCC: $mcc </td><td>MNC: $mnc </td><td>LAC: $lac </td><td>CID: $cid </td></tr>"; list ($lat,$lon,$raggio)=decodegoogle($mcc,$mnc,$lac,$cid); echo "<br>Google result for the main Cell<br>"; echo "Lat=$lat <br> Lon=$lon <br> Range=$raggio m<br>"; echo "<a href=\"http://maps.google.it/maps?f=q&source=s_q&hl=it&geocode=&q=$lat,$lon&z=14\" TARGET=\"_blank\" >See on Google maps</a> <BR> <br>"; for ($contatore=0; $contatore < (count($nlac)); $contatore++) { if ($nlac[$contatore]==0) { //echo "trovato campo vuoto al contatore $contatore<BR>"; $ncelle=$contatore; break; } } for ($contatore=0; $contatore < ($ncelle); $contatore++) { echo "LAC: $nlac[$contatore]\t CID: $ncid[$contatore]<BR>"; list ($nlat[$contatore],$nlon[$contatore],$nraggio[$contatore])=decodegoogle($mcc,$mnc,$nlac[$contatore],$ncid[$contatore]); echo "<br>Google result for the Neighbor Cell $contatore <br>"; echo "nLat=$nlat[$contatore] <br> nLon=$nlon[$contatore] <br> nRaggio=$nraggio[$contatore] m<br><br>"; } echo "<div id=\"map\" style=\"width: 100%; height: 700px\"></div>"; echo "<script type=\"text/javascript\">"; echo "var latgoogle=$lat;"; echo "var longoogle=$lon;"; echo "var raggio=$raggio;"; //creo un file contenente le coordinate delle celle **** $stringa_xml_doc = " <markers>\n\t"; $stringa_xml_doc 
=$stringa_xml_doc. "<marker lat=\"$lat\" lng=\"$lon\" rag=\"$raggio\" html=\"Main cell\" ico=\"antred\" label=\"Main\" />"; for($contatore= 0; $contatore < $ncelle; $contatore++) { if ($nlat[$contatore]!=0) { $stringa_xml_doc =$stringa_xml_doc. "<marker lat=\"$nlat[$contatore]\" lng=\"$nlon[$contatore]\" rag=\"$nraggio[$contatore]\" html=\"Cell $contatore\" ico=\"antbrown\" label=\"Marker $contatore\" />"; } } $stringa_xml_doc =$stringa_xml_doc."\n </markers>"; echo ($stringa_xml_doc); //$stringa_xml = $stringa_xml_dtd.$stringa_xml_doc; $stringa_xml = $stringa_xml_doc; $file_name = "celle_xml.xml"; $file = fopen ($file_name,"w"); $num = fwrite ($file, $stringa_xml); fclose($file); echo("File XML creato con successo!!"); //*** echo "nLat=new Array();"; echo "nLon=new Array();"; echo "nraggio=new Array();"; for ($contatore=0; $contatore < ($ncelle); $contatore++) { echo " nLat [$contatore] =$nlat[$contatore]; nLon [$contatore] =$nlon[$contatore]; nraggio [$contatore]=$nraggio[$contatore];"; } echo "</script>"; ?> <center> <br> <br> <center> <br> <br> <iframe width='100%' height='540' marginwidth='0' marginheight='0' scrolling='no' frameborder='0' src='http://open-electronics.org/fb/' ></iframe> </center> <br> <br> </center> <script src="http://maps.google.com/maps?file=api&v=2&key=ABQIAAAAIJFMStkxCl4SCny-4ljyrBRkrgiUOwoahV4KonZmGOdSmhVVVBTizYtL9IQMT4sND3EJvMdlOrIA8g" type="text/javascript"></script> </head> <body onunload="GUnload()"> <center> <br> <br> <script type="text/javascript"><!-- google_ad_client = "ca-pub-5248152858136551"; /* OE Link Banner post */ google_ad_slot = "3312240372"; google_ad_width = 468; google_ad_height = 60; //--> </script> <script type="text/javascript" src="http://pagead2.googlesyndication.com/pagead/show_ads.js"> </script> <br> <br> </center> <script type="text/javascript"> //function initialize() { if (GBrowserIsCompatible()) { //alert("boris2"); //} // This function picks up the click and opens the corresponding info window function myclick(i) { GEvent.trigger(gmarkers[i], "click"); } var map = new GMap2(document.getElementById("map")); //map.setCenter(new GLatLng(37.4419, -122.1419), 13); map.setUIToDefault(); function cursore() { alert("boris"); var pointg = new GLatLng(latgoogle,longoogle); alert(pointg); } // A function to create the marker and set up the event window function createMarker(point,name,html,icona) { var marker = new GMarker(point,markerOptions); GEvent.addListener(marker, "click", function() { marker.openInfoWindowHtml(html); }); return marker; } // Read the data from example.xml GDownloadUrl("celle_xml.xml", function(doc) { var xmlDoc = GXml.parse(doc); var markers = xmlDoc.documentElement.getElementsByTagName("marker"); for (var i = 0; i < markers.length; i++) { // obtain the attribues of each marker var lat = parseFloat(markers[i].getAttribute("lat")); var lng = parseFloat(markers[i].getAttribute("lng")); var raggio = parseFloat(markers[i].getAttribute("rag")); var icona = markers[i].getAttribute("ico"); var Icon = new GIcon(G_DEFAULT_ICON); Icon.image = icona + ".png"; // Set up our GMarkerOptions object markerOptions = { icon:Icon }; var point = new GLatLng(lat,lng); var html = markers[i].getAttribute("html"); var label = markers[i].getAttribute("label"); // create the marker if (icona=="antred") { var color="#FF0000"; var fillColor="#FF6600"; } else { var color="#FF6699"; var fillColor="#FF99CC"; } var thickness=4; var opacity=0.4; var fillOpacity=0.2; var polyCircle=drawCircle(point, raggio, color, thickness, opacity, fillColor, 
fillOpacity); map.addOverlay(polyCircle); var marker = createMarker(point,label,html,markerOptions); map.addOverlay(marker); } map.setCenter(point, 13); // put the assembled side_bar_html contents into the side_bar div //document.getElementById("side_bar").innerHTML = side_bar_html; }); function drawCircle(center, radius, color, thickness, opacity, fillColor, fillOpacity) { radiusMiles=radius*0.000621371192; //alert(radiusMiles); var degreesPerPoint = 8; var radiusLat = radiusMiles * (1/69.046767); // there are >> >> > 69.046767 miles per degree latitude var radiusLon = radiusMiles * (1/(69.046767 * Math.cos(parseFloat(center.lat()) * Math.PI / 180))); var points = new Array(); //Loop through all degrees from 0 to 360 for(var i = 0; i < 360; i += degreesPerPoint) { var point = new GLatLng(parseFloat(center.lat()) + (radiusLat * Math.sin(i * Math.PI / 180)), parseFloat(center.lng()) + (radiusLon * Math.cos(i * Math.PI / 180))); points.push(point); } points.push(points[0]); // close the circle polyCircle = new GPolygon(points, color, thickness, opacity, fillColor, fillOpacity) return polyCircle; } } </script> <script type="text/javascript"> //cursore(); //carica il marcatore </script> </body> </html>
|
39,553,430 | https://www.wsj.com/business/energy-oil/frackers-are-now-drilling-for-clean-power-fb613dbf | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
27,315,064 | https://github.com/systemd/systemd/pull/19598 | PoC: add Rust submodule as libbasic_rust by bluca · Pull Request #19598 · systemd/systemd | Systemd | -
# PoC: add Rust submodule as libbasic_rust #19598
## Conversation
**poettering** reviewed
the meson integration doesn't look too bad.
one thing that makes me wonder: does it make sense to build mixed C+rust projects still with gcc at all? i.e. during the fedora default compiler discussion people suggested gcc and llvm still generate slightly incompatible interfaces. but otoh it sounds weird using llvm for the rust stuff but then still use gcc for the C part. what's the story there?
@mbiebl on a scale from 1 to 10 how much would debian hate if we actually would adopt this?
## src/basic/string-util.rs (Outdated)
```
pub extern "C" fn string_extract_line(s: *const c_char, i: usize, ret: *mut *mut c_char) -> i32 {
    if ret.is_null() {
        panic!("string_extract_line(): 'ret' cannot be NULL");
    }
```
i figure the 4ch indenting is rust-native styling?
hmm, not too much clue about rust here, but there must be a way to encode this in the function call signature, no?
> i figure the 4ch indenting is rust-native styling?

https://github.com/rust-dev-tools/fmt-rfcs/blob/master/guide/guide.md#indentation-and-line-width

> hmm, not too much clue about rust here, but there must be a way to encode this in the function call signature, no?

Quite possibly, but I don't know. First time ever I wrote Rust :-)
Rust person here; can I offer some help? 😀
> i figure the 4ch indenting is rust-native styling?

Yes

> hmm, not too much clue about rust here, but there must be a way to encode this in the function call signature, no?
Not really. An actual Rust-y version of this function would look somewhat like this:
```
fn string_extract_line(s: &str, line_index: usize) -> Option<&str> {
s.lines().nth(line_index)
}
```
(Note that this always returns a slice of the original string and never does a copy.)
But this function is instead written to use C types (and C ABI) and be callable from external C. So it's rather pointless to try to enforce anything in its Rust-side signature, as C is still going to see its own definition. That, and also raw pointers (as opposed to references) can be null, dangling and unaligned.
On the other hand, since this function is already unsafe to call (it dereferences raw pointers! — and by the way, it should be marked `unsafe` for this reason), and since the (old) C version of this function doesn't appear to check the pointer for being null, it's OK to just say this is similarly a precondition here.
> Rust person here; can I offer some help?
Yes, please! Pushed a new version, with some improvements
> I think Rust code should be tested thoroughly as well.

Sure.

> I'm talking about various analyzers (like Coverity and LGTM for C), fuzzers and so on.

Well, I again admit I don't have any experience with using those tools with Rust, but Rust should not be much different from C in this regard. You pass the appropriate flags to the compiler (and perhaps Meson handles this for you similarly to how Cargo does), and then you invoke whatever analyzer tool. As discovered below, you'd likely need libstd built with the same flags, but that should be it.
But this is not my area of expertise (I do use Cargo for all my Rust projects...); better ask Meson developers for what they recommend.
So there are 2 ways that cargo runs commands
- The subcommand is built into cargo. In this case it is intended that anything you can do with cargo you can also do by calling rustc or other binaries like rustdoc directly.
- Cargo also provides an extension mechanism so arbitrary commands can masquerade as cargo subcommands. Basically, if some executable with a name like `cargo-mytool` or `cargo-mytool.exe` on Windows exists, then running `cargo mytool` will run your tool and pass command line parameters in.
It should be possible to run those programs simply by calling them directly (e.g. `cargo-fuzz --my --args`).
A cursory search of `cargo-fuzz` suggests that there's no requirement for `cargo` when running the tool (although building it does require cargo).
In the second case the command very likely uses cargo internally. Depending on the tool, the cargo subcommand either has a hard dependency on cargo or just uses cargo to build everything with the necessary flags and for example overrides the rustc used by cargo or gets the necessary information about the target crate after building all dependencies and then invokes the main tool. `cargo clippy` for example uses `RUSTC_WORKSPACE_WRAPPER` to run `clippy-driver` instead of rustc for all crates inside the current workspace.
I think I more or less figured out how to extract the part of "cargo fuzz" dealing with the compiler options like `-Cpasses=sancov`, `-Cllvm-args=-sanitizer-coverage-level=4` and so on I need. It kind of even works in the sense that I was able to run the systemd fuzz targets built with both clang and rustc. So I agree that it's possible to kind of bring cargo plugins to the systemd build system. The problem is that borrowing code like that isn't maintainable. Someone would have to keep track of what changed where and update `systemd` accordingly.
i wonder if one approach to the rustification would be to start with the stuff in src/basic/ and convert it one by one, always making sure our tests for this still pass, and keeping the C version in the codebase, verifying they match the rust version in behaviour by providing careful tests. Then one day, when we are confident about it we could just remove the C code and then make rust a requirement. This would allow us to gradually move things over, and more forward looking distros could build the rust versions of stuff while more traditional distros (where compat with legacy archs like sparc or alpha is important) could hold out a bit longer with the C versions of stuff. The price for such an approach would of course be that we'd have to maintain two versions of much of the stuff in src/basic for a while, but maybe that's a good thing, while we are still learning (and by "we" I mean in particular myself, since at this point my superficial understanding of Rust is very superficial) |
https://buildd.debian.org/status/package.php?p=rustc |
I know @smcv was involved there, maybe he can share his experience. |
But how did that play out? Is Debian still stuck with the old C version of librsvg for all archs? or did they finally make the jump? does debian ship the rust version on modern archs and the C version on legacy archs? |
Would it make more sense to start with a leaf/optional component, like say homed, which can be disabled if necessary to start getting familiar with rust? |
**poettering** reviewed
## src/basic/string-util.rs (Outdated)
```
};
let mut found: bool = false;

for (j, item) in s.lines().enumerate() {
```
what's a "line" in rust btw? i.e. what separators are accepted? \n? \r? \0? combinations thereof?
that's quite some difference to our current logic... i'd prefer if we didn't introduce that silently
Well, the tests are passing :-) Apart from the `\r\n`, what else is different?
FWIW using `\n` unconditionally is simple enough:

`for (j, item) in s.split('\n').enumerate() {`
It's using rustc though, not llvm? Or are they the same thing? The rustc debian package pulls in gcc for example, not llvm |
rustc is LLVM-based. Currently GCC doesn't have a rust frontend, although one is currently being built and there is also the mrustc project not related to either LLVM or GCC. |
I guess rustc uses llvm via libllvm? Not sure why it has the gcc dependency tbh. |
Yeah, that's pretty much what I was thinking too, it would make sense. The double implementation wouldn't probably be too bad - in the end when we add helpers to src/basic/ they are always pretty small and self-contained, so writing them twice shouldn't take too much time. And tree-wide refactors are often due to style or C-specific stuff, so wouldn't apply anyway. The main issue I see is the one you already clocked - there are many architectures where rustc is not available. This is a known problem, and unfortunately I do not have an answer there. In Debian, the affected architectures are not "release" archs - so by the letter of the law, one is allowed to break things there. But it is not very nice :-) If we do the double-implementation thing, then I can cook up the meson-ry required to make it work, so that if there's no rustc things work gracefully. |
FWIW, a Rust frontend for GCC is currently work-in-progress with the developers being commercially supported, so it might not be necessary to use the original Rust compiler in the near future. |
Ah I see, thank you - it shows that I'm a total Rust n00b :-) |
Yes, we're still stuck with these old librsvg versions on the architectures which don't have Rust support yet. m68k is gaining Rust support in the foreseeable future since LLVM has now an M68k backend as well and I have already added support to m68k in Rust in a forked repository. However, I think in the long-term, the better approach will be to use the upcoming Rust frontend in GCC. This way, there also won't be any need for additional compilers or tools to be able to build the codebase. |
There is on snapshot.debian.org.
It can. But the API has changed in the meantime a bit so some packages using librsvg currently FTBFS.
librsvg will be buildable on the other architectures in the foreseeable future when GCC has merged the upcoming Rust frontend. From my discussion with GCC developers I know that they are looking forward to the frontend being part of the official sources. I already tested it myself and while it doesn't support macros yet which are required for stuff like |
That's really promising! I had heard a frontend was under development, but didn't know it was that far ahead, as I haven't been following closely. |
so maybe we can do as i proposed and the time where we drop the C versions would be the moment where gcc rust is merged? |
I would probably wait for that particular GCC version to be released first, but yes, that sounds like a reasonable approach. |
I assume you are suggesting here, that we build all of systemd with clang/llvm? |
Yeah, that's what I was wondering. i.e. to me it sounds like the best option to go full-llvm or full-gcc, but mix and match not sure.
No idea, this was pretty much the question i was asking. But i guess if gcc-rust is a thing soon, then maybe the question doesn't matter that much, as then one can easily do a pure-gcc build |
Do we have any idea how this might affect binary sizes? |
`b499177` to `a52e4ed` (Compare)
Proof-of-concept to test integrating Rust submodules. This adds a (very naive) rewrite of string_extract_line from string-util.h in Rust, and compiles it in so that the rest of our C code, including the unit tests, call into it instead. Given we are only using the standard library, there is no need for a (complicated) integration with Cargo, and the only additional requirement is the Rust compiler. Meson abstracts it quite nicely.
Property-based testing (basically fuzzing of unittests with different inputs) could be a tool to check for differential behaviour of the Rust and C implementations. This should decrease the maintenance overhead of having two implementations. |
**keszybz** reviewed
Looking at this, I can't shake the feeling that systemd is probably not the best project for exploration of integration methods. It'd be better to develop a working solution somewhere else first and then apply it cleanly here. Our CI is a monster with a thousand heads already, and we would make it doubly hard by mixing Rust into it.
```
@@ -1071,6 +1071,7 @@ int string_truncate_lines(const char *s, size_t n_lines, char **ret) {
        return truncation_applied;
}

#ifdef BUILD_LEGACY_HELPERS
```
Please don't use `ifdef`, but change the define to be 0 or 1, and use `#if` here.
```
- { COMPILER: "gcc", COMPILER_VERSION: "11", LINKER: "bfd", CRYPTOLIB: "gcrypt" }
- { COMPILER: "gcc", COMPILER_VERSION: "11", LINKER: "bfd", CRYPTOLIB: "gcrypt", RUST: "yes" }
- { COMPILER: "gcc", COMPILER_VERSION: "12", LINKER: "gold", CRYPTOLIB: "openssl" }
- { COMPILER: "clang", COMPILER_VERSION: "13", LINKER: "mold", CRYPTOLIB: "gcrypt" }
- { COMPILER: "clang", COMPILER_VERSION: "14", LINKER: "lld", CRYPTOLIB: "openssl" }
- { COMPILER: "clang", COMPILER_VERSION: "14", LINKER: "lld", CRYPTOLIB: "openssl", RUST: "yes" }
```
I think we'd generally want to try rust with the latest compilers, not oldest. More likely to work ;)
```
}
}

*ret = CString::new("").expect("CString::new failed").into_raw();
```
I think we'd want to use `Result` and `?` everywhere, and then just have a wrapper at the Rust↔C interface which converts it into a negative-errno.
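As an illustration of that pattern only (this is not code from the PR; the errno constants are just Linux values picked for the example), the shim could look something like this:

```rust
use std::ffi::{CStr, CString};
use std::os::raw::{c_char, c_int};

const EINVAL: c_int = 22; // illustrative Linux errno values
const ENXIO: c_int = 6;

// Core logic stays in ordinary Rust and reports failures via Result.
fn extract_line(s: &str, i: usize) -> Result<&str, c_int> {
    s.lines().nth(i).ok_or(ENXIO)
}

// Only the extern "C" boundary knows about raw pointers and negative errnos.
#[no_mangle]
pub unsafe extern "C" fn string_extract_line(
    s: *const c_char,
    i: usize,
    ret: *mut *mut c_char,
) -> c_int {
    let s = match CStr::from_ptr(s).to_str() {
        Ok(s) => s,
        Err(_) => return -EINVAL, // input was not valid UTF-8
    };
    match extract_line(s, i) {
        Ok(line) => {
            // Hand an owned copy to the C side; it must later be freed via CString::from_raw.
            *ret = CString::new(line).expect("no interior NUL").into_raw();
            0
        }
        Err(errno) => -errno,
    }
}
```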
I agree with this stance. Luckily, the Rust frontend for GCC is already progressing nicely and I think projects will be able to integrate Rust code in the foreseeable future much easier. |
I have a pretty good idea about how to do the integration, and it relies on rust-lang/rust#106560 landing. Once that's happened, I'll update the Meson PR. I wouldn't consider this anywhere ready until well after there are stable versions of both rustc and meson available with everything buttoned up and ready. |
rust-lang/rust#106560 landed yesterday and is available on the latest nightly behind a feature flag. |
Nice, thank you! I'll resume work on the meson side as soon as I can |
@bluca what's the status of this? I notice the meson PR has been open for a while, so I'm curious if Rust integration is still planned for systemd. |
Haven't had time yet, sadly |
Yeah, we should get on it since the kernel now has more mature Rust support (it would be cool to rewrite memory-safety-critical parts of systemd in Rust) |
Sorry if any of these have already been answered and I forgot to pull them back off my TODO pile while reading through the thread, but I noticed a few spots where I felt I may be able to add some useful information:
Rustc's POSIX platform, which assumes BSD/UNIX-style "libc is the point of ABI stability for syscalls", is also used on Linux, where it treats delegating to the system C compiler for the final link as the point of API stability for identifying what linker flags are needed for libc.
The abi_stable crate may be relevant. It's sort of a Rust-to-C-to-Rust FFI framework intended to provide a stable way to encode Rust types in C FFI.
I'd suggest starting by seeing what cargo-bloat (sort of a clone of Bloaty McBloatface) says about something that got much bigger in Rust. In my experience, it tends to be a mixture of the following:
...and possibly pathological expressions of LTO not being used, though I haven't investigated how likely or severe that would be. Though it's much older than min-sized-rust, Why is a Rust executable large? is also good, since it does detailed comparisons, including test builds, between C, C++, and Rust, to explore
In case it's relevant, a According to this comment use of it requires that calling C code be compiled with There's currently an RFC to close the soundness hole relating to
Bear in mind that Rust makes no affordances for C++-style
There's ongoing discussion about a stable ABI, but it's not intended to be the internal ABI used for libstd but, rather, an easier-to-design-and-bikeshed external ABI for things like plugins. Rust's liberal use of generics like However, if the concern is symbol collisions between different copies of the same libraries, I'd focus on the binary sizes, not the effects on whether it will build successfully. Rust's whole "fearless upgrades" mantra stems in part from an attitude that symbol collisions should only be possible when dependency resolution fails to prevent two different versions of the same C library from getting pulled in. As such, it's designed to default to not exporting symbols and to use name-mangling unless asked not to, with a hash getting incorporated into the symbol to allow non-conflicting use of different versions of the same dependency in the same artifact if that's what it takes to get a dependency tree to resolve. (eg. the
If it helps, here's the documentation page that covers all the crate types ( |
When using cargo this will likely be fixed in the near future by having cargo passing
I don't think this is the case at all. While the ABI itself is not stable, if you use the same libstd when running as when compiling, you shouldn't ever have any issues. And because the filename of libstd is unique for every rustc version and every mangled symbol mangled differently depending on the rustc version string, so even the official builds and distro builds of the same rust release will use different filenames and mangled symbol names and thus don't risk accidentally mixing different abi's. The only case I can imagine where mixing would be possible is if a distro does a patch release of rust without changing the version reported by rustc using |
That does sound sensible, but, especially for a piece of important infrastructure, I'd want to see if anyone from the Rust team can point out something we're missing. |
I'm part of the compiler contributors team. This means that I have merge permission, but don't have decision making power like full compiler team members do. (See https://forge.rust-lang.org/compiler/membership.html for more details) I have a fair amount of knowledge about the current state of the compilation/linking model of rust. I can tell you that it should currently work without getting UB and will at least for a long time as rustc itself depends on it. I don't have the decision making power to block any changes that would break this, but I don't think anyone wants to break it and I did personally oppose breaking it. Be aware however that dynamic linking in rust has a couple of restrictions and rough edges:
If on the other hand you choose to use the staticlib crate type, be aware of rust-lang/rust#104707. You will almost certainly want to use a version script to avoid exporting the entire rust standard library from the dynamic library into which you are linking the staticlib. Not doing so increases the size of the dynamic library (as |
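For illustration of what such a version script might look like (not taken from the PR; the version node and symbol names are made up):

```
LIBBASIC_RUST_1 {
global:
        string_extract_line;    /* only the intended C API is exported */
local:
        *;                      /* everything else, including Rust's libstd, stays hidden */
};
```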
Proof-of-concept to test integrating Rust submodules.
This adds a (very naive) rewrite of string_extract_line from string-util.h
in Rust, and compiles it in so that the rest of our C code, including the
unit tests, call into it instead.
Given we are only using the standard library, there is no need for a
(complicated) integration with Cargo, and the only additional
requirement is the Rust compiler. Meson abstracts it quite nicely.
Note that this is the first time I've written Rust - so it's most likely horrible and completely wrong. Learning was the main idea of the exercise :-)
Opening a PR as I think there were questions in the past about how easily it would be to add Rust modules in the code base. | true | true | true | Proof-of-concept to test integrating Rust submodules. This adds a (very naive) rewrite of string_extract_line from string-util.h in Rust, and compiles it in so that the rest of our C code, includin... | 2024-10-13 00:00:00 | 2021-05-13 00:00:00 | https://opengraph.githubassets.com/14c894d6a3d405de85cd5e81eddaa0eb46dd4af78f6365a38b2086d8ef9f7225/systemd/systemd/pull/19598 | object | github.com | GitHub | null | null |
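For readers unfamiliar with the meson side being referred to, a build definition along these lines is one way such a Rust submodule can be wired in. This is a hedged sketch, not the PR's actual meson.build; `basic_sources` and the file path are placeholders.

```meson
# Build the Rust source into a static archive...
libbasic_rust = static_library(
        'basic_rust',
        files('src/basic/string-util.rs'),
        rust_crate_type : 'staticlib')

# ...and link it into the existing C library so C callers see the extern "C" symbols.
libbasic = static_library(
        'basic',
        basic_sources,
        link_with : libbasic_rust)
```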
21,681,985 | https://techgrabyte.com/oculuss-john-carmack-resigns-ai-project/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
23,151,120 | https://github.com/firmai/datagene | GitHub - firmai/datagene: DataGene - Identify How Similar TS Datasets Are to One Another (by @firmai) | Firmai | **DataGene** is developed to detect and compare dataset similarity between real and synthetic datasets as well as train, test, and validation datasets. You can read the report on SSRN for additional details. Datasets can largely be compared using quantitative and visual methods. Generated data can take on many formats, it can consist of multiple dimensions of various widths and heights. Original and generated datasets have to be transformed into an acceptable format before they can be compared, these transformation sometimes leads to a reduction in array dimensions. There are two reasons why we might want to reduce array dimensions, the first is to establish an acceptable format to perform distance calculations; the second is the preference for comparing like with like. You can use the **MTSS-GAN** to generate diverse multivariate time series data using stacked generative adversarial networks in combination with embedding and recurrent neural network models.
https://ssrn.com/abstract=3619626
Installation and import modules:
```
pip install datagene
```
As of now, you would also have to install the following package, until we find an alternative
```
pip install git+git://github.com/FirmAI-Research/ecopy.git
```
```
from datagene import distance as dist # Distance Functions
from datagene import transform as tran # Transformation Functions
from datagene import mod_utilities as mod # Model Development Utilities
from datagene import dist_utilities as distu # Distance Utilities
from datagene import vis_utilities as visu # Visualisation Utility Functions
```
**(A) Transformations (Colab):**
- From Tesseract
  - To Tensor & Matrix
    - Matrix Product State
  - To Tensor & Matrix
- From Tensor
  - To Tesseract
    - Multivariate Gramian Angular Encoding
    - Multivariate Recurrence Plot
    - Multivariate Markov Transition Fields
  - To Tensor
    - Matrix Product State
    - Recurrence Plot
  - To Matrix
    - Aggregates
    - Tucker
    - CANDECOMP
    - Sample PCA
- From Matrix
  - To Tensor
    - Recurrence Plot
    - Gramian Angular Field
    - Markov Transition Field
  - To Matrix
    - PCA
    - SVD
    - QR
    - Feature Kernels
    - Covariance
    - Correlation Matrix
    - 2D Histogram
    - Pairwise Distance
    - Pairwise Recurrence Plot
  - To Vector
    - PCA Single Component
    - Histogram Filter
- From Vector
  - To Matrix
    - Signatures Method
  - To Vector
    - Extraction
    - Autocorrelation
  - To Matrix
**(B) Visualisations (Colab):**
- Convert Arrays to Images
- Histogram
- Signature
- Gramian
- Recurrence
- Markov Transition Fields
- Correlation Matrix
- Pairplot
- Cord Length
**(C) Distance Measures (Colab):**
- Tensor/Matrix
- Contribution Values
- Predictions
- Feature Ordering
- Direction Divergence
- Effect Size
- Contribution Values
- Matrix
- Structural Similarity
- Similarity Histogram
- Hash Similarity
- Distance Matrix Hypothesis Test
- Dissimilarity Measures
- Statistical and Geometric Measures
- Vectors
- PCA Extracted Variance Explained
- Statistical and Geometrics Distances
- Geometric Distance Feature Map
- Curve Metrics
- Curve Metrics Feature Map
- Hypotheses Distance
In this example, the first thing we want to do is generate various datasets and load them into a list. See this notebook for an example of generating synthetic datasets by Turing Fellow Mihaela van der Schaar and researchers Jinsung Yoon and Daniel Jarrett. As soon as we have these datasets, we load them into a list, starting with the original data.
As of now, this package caters to time-series regression tasks, and more specifically to input arrays with a three-dimensional structure. The hope is that it will be extended to time-series classification and cross-sectional regression and classification tasks. This package can still be used for other tasks, but some functions won't apply. To run the package interactively, use this notebook.
`datasets = [org, gen_1, gen_2]`
Citation:
```
@software{datagene,
title = {{DataGene}: Data Transformation and Similarity Statistics},
author = {Snow, Derek},
url = {https://github.com/firmai/datagene},
version = {0.0.4},
date = {2020-05-11},
}
```
*You can work with 2D and 3D generated data. The notebook excerpted in this document uses a 3D time series array. Data has to be organised as samples, time steps, features, [i,s,f]. If you are working with a 2D array, the data has to be organised as samples, features [i,f].*
This first recipe uses six arbitrary transformations to identify the similarity of datasets. As an analogy, imagine you're importing similar-looking oranges from two different countries, and you want to see whether there is a difference in the constitution of these oranges compared to the local variety your customers are used to. To do that you might follow a six-step process: first you press the oranges for pulp, then you boil the pulp, sift it out and drain the juice, add apple juice to the pulp, add an orange concentrate back to the pulp, and finally dry the concoction on a translucent petri dish and shine light through it to identify differences in patterns between the oranges using various distance metrics. You might want to repeat the process multiple times to establish an average and possibly even a significance score. The transformation part is the process we put the data through to be ready for similarity calculations.
`tran.mps_decomp_4_to_2()`
- Matrix-product states are the de facto standard for the representation of one-dimensional quantum many-body states.
`tran.gaf_encode_3_to_4()`
- A Gramian Angular Field is an image obtained from a time series, representing some temporal correlation between each time point.
`tran.mrp_encode_3_to_4()`
- Recurrence Plots are a way to visualize the behavior of a trajectory of a dynamical system in phase space.
`tran.mtf_encode_3_to_4()`
- A Markov Transition Field is an image obtained from a time series, representing a field of transition probabilities for a discretized time series.
`tran.jrp_encode_3_to_3()`
- A joint recurrence plot (JRP) is a graph which shows all those times at which a recurrence in one dynamical system occurs simultaneously with a recurrence in a second dynamical system.
`tran.mean_3_to_2()`
- Mean aggregation at the sample level.
`tran.sum_3_to_2()`
- Sum aggregation at the sample level.
`tran.min_3_to_2()`
- Minimum aggregation at the sample level.
`tran.var_3_to_2()`
- Variance aggregation at the sample level.
`tran.mps_decomp_3_to_2()`
- Matrix-product states are the de facto standard for the representation of one-dimensional quantum many-body states.
`tran.tucker_decomp_3_to_2()`
- Tucker decomposition decomposes a tensor into a set of matrices and one small core tensor
`tran.parafac_decomp_3_to_2()`
- The PARAFAC decomposition may be regarded as a generalization of the matrix singular value decomposition, but for tensors.
`tran.pca_decomp_3_to_2()`
- Long to wide array conversion with a PCA Decomposition.
`tran.rp_encode_2_to_3()`
- Recurrence Plots are a way to visualize the behavior of a trajectory of a dynamical system in phase space.
`tran.gaf_encode_2_to_3()`
- A Gramian Angular Field is an image obtained from a time series, representing some temporal correlation between each time point.
`tran.mtf_encode_2_to_3()`
- A Markov Transition Field is an image obtained from a time series, representing a field of transition probabilities for a discretized time series.
`tran.pca_decomp_2_to_2()`
- Principal component analysis (PCA) is a mathematical algorithm that reduces the dimensionality of the data while retaining most of the variation in the data set.
`tran.svd_decomp_2_to_2()`
- Singular value decomposition (SVD) is a factorization of a real or complex matrix that generalizes the eigendecomposition of a square normal matrix.
`tran.qr_decomp_2_to_2()`
- QR decomposition (also called the QR factorization) of a matrix is a decomposition of the matrix into an orthogonal matrix and a triangular matrix.
`tran.lik_kernel_2_to_2()`
- A special case of polynomial_kernel with `degree=1` and `coef0=0`.
`tran.cos_kernel_2_to_2()`
- The cosine kernel computes similarity as the normalised dot product between two vectors.
`tran.pok_kernel_2_to_2()`
- The function polynomial_kernel computes the degree-d polynomial kernel between two vectors.
`tran.lak_kernel_2_to_2()`
- The function laplacian_kernel is a variant on the radial basis function kernel.
`tran.cov_2_to_2()`
- A covariance matrix is a square matrix giving the covariance between each pair of elements of a given random vector.
`tran.corr_2_to_2()`
- A correlation matrix is a table showing correlation coefficients between sets of variables.
`tran.hist_2d_2_to_2()`
- 2D histograms are useful when you need to analyse the relationship between 2 numerical variables that have a large number of values.
`tran.pwd_2_to_2()`
- Computes the distance matrix from a vector array X and optional Y.
`tran.prp_encode_2_to_2()`
- Recurrence Plots are a way to visualize the behavior of a trajectory of a dynamical system in phase space.
`tran.pca_decomp_2_to_1()`
- Principal component analysis (PCA) is a mathematical algorithm that reduces the dimensionality of the data while retaining most of the variation in the data set.
`tran.sig_encode_1_to_2()`
- The signature method is a transformation of a path into a sequence that encapsulates summaries of the path.
`tran.vect_extract_1_to_1()`
- The vector extraction function calculates a large number of time series characteristics.
`tran.autocorr_1_to_1()`
- Autocorrelation is the correlation of a signal with a delayed copy of itself as a function of delay.
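For intuition, the sketch below shows what a single matrix-to-tensor encoding of the kind listed above does, using the pyts library for the Gramian Angular Field and Recurrence Plot encodings. This is an illustration only, not datagene's internal implementation, and the datagene wrappers may use different defaults.
```
# Illustrative sketch (assumes pyts is installed); not datagene's code.
import numpy as np
from pyts.image import GramianAngularField, RecurrencePlot

rng = np.random.default_rng(0)
X = rng.random((5, 23))                      # 5 series, 23 time steps [i, s]

gaf = GramianAngularField(image_size=23, method="summation")
X_gaf = gaf.fit_transform(X)                 # (5, 23, 23): one image per series

rp = RecurrencePlot(threshold=None)
X_rp = rp.fit_transform(X)                   # (5, 23, 23)

print(X_gaf.shape, X_rp.shape)
```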
There are an infinite number of ways in which you can pipe transformations. Sometimes it is better to just use one transformation at a time. Your architecture should be empirically driven: that generally means developing a knowingly bad and a knowingly good synthetic dataset, and comparing them using a range of transformations and distance metrics to identify which methods best capture their difference. We have developed a very simple pipeline that can take in many datasets and perform multiple operations, resulting in a range of encoded decompositions on which various similarity statistics can be calculated. In the future, we will add more transformations to this pipeline to help with array operations like swapping axes, transpositions, and others. Researchers would also be able to share which transformation pipelines were best suited to their problem; such a database could further enhance data similarity research.
```
def transf_recipe_1(arr):
return (tran.pipe(arr)[tran.mrp_encode_3_to_4]()
[tran.mps_decomp_4_to_2]()
[tran.gaf_encode_2_to_3]()
[tran.tucker_decomp_3_to_2]()
[tran.qr_decomp_2_to_2]()
[tran.pca_decomp_2_to_1]()
[tran.sig_encode_1_to_2]()).value
recipe_1_org,recipe_1_gen_1,recipe_1_gen_2 = transf_recipe_1(datasets)
```
In the example above, Multivariate Recurrence Plots (mrp) are a way to visualise the behavior of a trajectory of a dynamical system in phase space. Matrix-product states (mps) are the de facto standard for the representation of one-dimensional quantum many-body states. A Gramian Angular Field (gaf) is an image obtained from a time series, representing some temporal correlation between each time point. Tucker (tucker) decomposition decomposes a tensor into a set of matrices and one small core tensor. The QR decomposition (qr) of a matrix is a decomposition of the matrix into an orthogonal matrix and a triangular matrix. Principal component analysis (pca) is a mathematical algorithm that reduces the dimensionality of the data while retaining most of the variation in the data set. The signature method (sig) is a transformation of a path into a sequence that encapsulates summaries of the path.
Here we just reorder the transformations performed in Pipeline 1, naturally leading to a different matrix output.
```
def transf_recipe_2(arr):
return (tran.pipe(arr)[tran.mrp_encode_3_to_4]()
[tran.mps_decomp_4_to_2]()
[tran.qr_decomp_2_to_2]()
[tran.pca_decomp_2_to_1]()
[tran.sig_encode_1_to_2]()
[tran.gaf_encode_2_to_3]()
[tran.tucker_decomp_3_to_2]()).value
recipe_2_org,recipe_2_gen_1,recipe_2_gen_2 = transf_recipe_2(datasets)
```
```
import numpy as np
borg = np.random.rand(2,10,4)
bgen_1 = np.random.rand(2,10,4)
bgen_2 = np.random.rand(2,10,4)
recipe_1_org,recipe_1_gen_1,recipe_1_gen_2 = transf_recipe_1([borg,bgen_1,bgen_2])
```
```
array([[[0.52622142, 0.60183253, 0.90520897, 0.87681428],
[0.49643314, 0.22163889, 0.08297965, 0.05521378],
[0.12673893, 0.30928161, 0.15584585, 0.2604721 ],
[0.46941103, 0.44269038, 0.20596992, 0.96443987],
[0.78553311, 0.99440565, 0.6572868 , 0.61169361],
[0.2566714 , 0.0945539 , 0.2089194 , 0.48753717],
[0.17280917, 0.38463175, 0.90092389, 0.68599007],
[0.60802798, 0.02979119, 0.9303079 , 0.45431196],
[0.34087785, 0.21292418, 0.99554781, 0.44958344],
[0.53300737, 0.02957087, 0.93316219, 0.47669416]],
[[0.136937 , 0.18934288, 0.01034264, 0.24355441],
[0.5230321 , 0.58856421, 0.90271319, 0.33731024],
[0.2146031 , 0.53520014, 0.20652055, 0.60681106],
[0.73932746, 0.62229586, 0.74544928, 0.45184852],
[0.29048791, 0.81789998, 0.1519025 , 0.33354673],
[0.16954388, 0.77435085, 0.84737376, 0.32730576],
[0.51777727, 0.5785573 , 0.39350156, 0.05570645],
[0.45253767, 0.84138019, 0.41662209, 0.82696786],
[0.37107322, 0.56108578, 0.80767508, 0.84657908],
[0.48295465, 0.12923933, 0.2101367 , 0.71983682]]])
```
```
array([[ 1. , 0.5 , -2.98601104, -2.98601104, 0.125 ,
-0.74650276, -1.49300552, -0.74650276, 4.45813096, 8.91626193,
0. , 0. , 4.45813096],
[ 1. , 0.5 , 0.38700531, 0.38700531, 0.125 ,
0.09675133, 0.19350265, 0.09675133, 0.07488655, 0.14977311,
0. , 0. , 0.07488655]])
```
A range of distance measures have been developed to calculate differences between 1D, 2D, and 3D arrays. A few of these methods are novel and new to academia and would require some benchmarking in the future; they have been tagged (NV). In the future, this package will also branch out into privacy measurements.
The model includes a transformation from the tensor/matrix (the input data) to local Shapley values of the same shape, as well as transformations to prediction vectors and feature rank vectors.
`dist.regression_metrics()`
- Prediction errors metrics.
`mod.shapley_rank()`
+ `dist.boot_stat()`
- Statistical feature rank correlation.
`mod.shapley_rank()`
- Feature direction divergence. (NV)
`mod.shapley_rank()`
+ `dist.stat_pval()`
- Statistical feature divergence significance. (NV)
Transformations like the Gramian Angular Field, Recurrence Plot, Joint Recurrence Plot, and Markov Transition Field return an image from a time series. This makes them perfect candidates for image similarity measures. From this matrix section, only the first three measures take in images; they have been tagged (IMG). From what I know, image similarity metrics have not yet been used on 3D time series data. Furthermore, correlation heatmaps, 2D KDE plots, and a few others also work fairly well with image similarity metrics.
`dist.ssim_grey()`
- Structural grey image similarity index. (IMG)
`dist.image_histogram_similarity()`
- Histogram image similarity. (IMG)
`dist.hash_simmilarity()`
- Hash image similarity. (IMG)
`dist.distance_matrix_tests()`
- Distance matrix hypothesis tests. (NV)
`dist.entropy_dissimilarity()`
- Non-parametric entropy multiples. (NV)
`dist.matrix_distance()`
- Statistical and geometric distance measures.
`dist.pca_extract_explain()`
- PCA extraction variance explained. (NV)
`dist.vector_distance()`
- Statistical and geometric distance measures.
`dist.distribution_distance_map()`
- Geometric distribution distances feature map.
`dist.curve_metrics()`
- Curve comparison metrics. (NV)
`dist.curve_kde_map()`
- dist.curve_metrics kde feature map. (NV)
`dist.vector_hypotheses()`
- Vector statistical tests.
Model prediction errors can be used as a distance metric to compare datasets. We have to control for the prediction problem, which in this example is a next-day closing stock price prediction task. A model is trained on each respective dataset, and the model is tested on a real hold-out set, to identify the differences in generalised performance. For interest's sake, I have also included a simple previous-day prediction. This and other benchmarks help you consider whether the regression prediction task is at all worthy for comparison purposes. This method would ordinarily be considered a utility metric, as it has a supervised learning component, but it can also be indicative of similarity across datasets. The prima facie results indicate that the generated-1 (gen_1) data are more prediction-worthy than the generated-2 (gen_2) data.
`pred_dict = dist.regression_metrics(pred_list=[y_pred_org, y_pred_gen_1,y_pred_gen_2,org_y_vl_m1 ],name_list=["original","generated_1","generated_2","previous day"],valid=org_y_vl)`
```
original generated_1 generated_2 previous day
explained_variance_score 0.988543 0.991786 0.989290 0.997897
max_error 0.125190 0.115178 0.128092 0.100051
mean_absolute_error 0.030326 0.026712 0.036038 0.010663
mean_squared_error 0.001326 0.001017 0.001825 0.000251
mean_squared_log_error 0.000521 0.000382 0.000592 0.000087
median_absolute_error 0.027985 0.025136 0.033403 0.006978
r2_score 0.987251 0.990348 0.982379 0.997897
```
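The table above can be assembled conceptually with standard scikit-learn metrics; the sketch below is an assumed re-implementation, and the variable names (`preds`, `y_valid`) are illustrative rather than part of the datagene API.
```
# Hedged sketch: score each model's hold-out predictions per dataset.
import pandas as pd
from sklearn import metrics

def regression_report(preds: dict, y_valid):
    rows = {}
    for name, y_pred in preds.items():
        rows[name] = {
            "explained_variance_score": metrics.explained_variance_score(y_valid, y_pred),
            "max_error": metrics.max_error(y_valid, y_pred),
            "mean_absolute_error": metrics.mean_absolute_error(y_valid, y_pred),
            "mean_squared_error": metrics.mean_squared_error(y_valid, y_pred),
            "median_absolute_error": metrics.median_absolute_error(y_valid, y_pred),
            "r2_score": metrics.r2_score(y_valid, y_pred),
        }
    return pd.DataFrame(rows)

# report = regression_report({"original": y_pred_org, "generated_1": y_pred_gen_1}, org_y_vl)
```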
This measure is based on the belief that if datasets are similar and are used to predict the same value or outcome using the same model, they will have the same or similar feature rank ordering. Because there is some inherent randomness in the model development process, the rank correlations are taken multiple times; the original-original rank correlations are then compared against the original-generated rank correlations. A t-stat and p-value are also derived from this comparison. A low p-value would signify a true difference. The p-values of generated datasets can be compared against each other.
`dist.boot_stat(gen_org_arr,org_org_arr)`
```
t-stat and p-value:
Original: 0.30857142857142855, Generated: 0.15428571428571428, Difference: 0.15428571428571428
(0.8877545314489291, 0.3863818038855802)
```
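A minimal sketch of the underlying idea, not the package's exact code: bootstrap Spearman correlations between feature-importance vectors from repeated fits, then t-test the original-original group against the original-generated group. The importance matrices here are assumed inputs.
```
import numpy as np
from scipy.stats import spearmanr, ttest_ind

def boot_rank_corr(imp_a, imp_b, n_boot=50, seed=0):
    """imp_a, imp_b: (n_runs, n_features) importance scores from repeated fits."""
    rng = np.random.default_rng(seed)
    corrs = []
    for _ in range(n_boot):
        i, j = rng.integers(len(imp_a)), rng.integers(len(imp_b))
        corrs.append(spearmanr(imp_a[i], imp_b[j]).correlation)
    return np.array(corrs)

# org_org = boot_rank_corr(imps_org, imps_org)   # baseline noise level
# org_gen = boot_rank_corr(imps_org, imps_gen)   # original vs generated
# t_stat, p_value = ttest_ind(org_org, org_gen)  # low p => real rank difference
```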
Following on from the previous method, this method trains a model on each of the generated datasets. The difference is that for each real hold-out instance (row) used to obtain local Shapley values, the generated-data and real-data models are compared against each other, and the case where the generated-data model gives the higher contribution value is recorded as a 1. In aggregate, each feature should as a result not deviate too far from 0.5, as that would be indicative of non-random biases.
`divergence_total.mean(axis=0)`
```
Open 0.469565
High 0.378261
Low 0.447826
Close 0.604348
Adj_Close 0.504348
Volume 0.600000
```
Because we are generating a 3D array, another axis can also be investigated; for us this is the time-step axis. Again, any divergence away from 0.5 would be initial evidence of differences in dataset construction. Here we can see that time steps 15 to 20 (i.e., lags 8 to 3) seem to be diverging from the original model. This would call for further investigation.
`divergence_total.mean(axis=1)`
```
0 0.516667
1 0.533333
2 0.516667
3 0.500000
4 0.500000
5 0.516667
6 0.500000
7 0.466667
8 0.516667
9 0.516667
10 0.516667
11 0.450000
12 0.500000
13 0.483333
14 0.483333
15 0.633333
16 0.700000
17 0.616667
18 0.383333
19 0.283333
20 0.383333
21 0.500000
22 0.500000
```
This function looks at the actual overall effect-size differences. The method is iterated multiple times, and as a result of the random component we can obtain t-stats and p-values. We can see that, here, there are no statistically significant element-wise differences in local feature contributions across any of the features on the third axis. As before, we can also investigate the time-step axis, or even a matrix looking at both dimensions; these methods will be made available in future iterations.
`un_var_t, df_pval = dist.stat_pval(single_org_total,single_gen_total)`
```
Open 0.159681
High 0.941508
Low 1.134092
Close -1.335381
Adj_Close 1.351427
```
```
Open 0.87386
High 0.35159
Low 0.26290
Close 0.18862
Adj_Close 0.18347
```
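Conceptually this is an element-wise two-sample t-test over bootstrapped contribution runs. A hedged sketch with synthetic data (the real inputs would be the bootstrapped contribution matrices):
```
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
single_org = rng.normal(0.5, 0.1, size=(30, 6))   # 30 bootstrap runs, 6 features
single_gen = rng.normal(0.5, 0.1, size=(30, 6))

t_stats, p_vals = ttest_ind(single_org, single_gen, axis=0)  # per-feature tests
print(np.round(t_stats, 3))
print(np.round(p_vals, 3))
```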
The Structural Similarity Index (SSIM) is a perceptual metric that quantifies image quality degradation caused by processing such as data compression or by losses in data transmission. If, after processing, one dataset is more similar to the original data, then that dataset is more likely to capture the characteristics of the original data.
```
dist.ssim_grey(gray_org,gray_gen_1)
dist.ssim_grey(gray_org,gray_gen_2)
```
```
Image similarity: 0.3092467224082394
Image similarity: 0.21369506433133445
```
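A minimal grey-image SSIM sketch with scikit-image; this is assumed to be close in spirit to `dist.ssim_grey`, though the package may preprocess the images differently.
```
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(2)
gray_a = rng.random((64, 64))
gray_b = np.clip(gray_a + rng.normal(0, 0.1, (64, 64)), 0, 1)

score = ssim(gray_a, gray_b, data_range=1.0)   # 1.0 means identical images
print(f"Image similarity: {score:.4f}")
```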
Returns a histogram for the image. The histogram is returned as a list of pixel counts, one for each pixel value in the source image. By looking at the histogram of an image, you get intuition about its contrast, brightness, intensity distribution, etc. It is therefore worth comparing the image histograms of different datasets.
```
dist.image_histogram_similarity(visu.array_3d_to_rgb_image(rp_sff_3d_org), visu.array_3d_to_rgb_image(rp_sff_3d_gen_1))
dist.image_histogram_similarity(visu.array_3d_to_rgb_image(rp_sff_3d_org), visu.array_3d_to_rgb_image(rp_sff_3d_gen_2))
```
```
Recurrence
25.758089344255847
17.455374649851166
```
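For intuition, a rough sketch of a histogram-based similarity using a simple histogram intersection; datagene's exact formula may differ, and the arrays here are placeholders.
```
import numpy as np

def hist_intersection(a, b, bins=64):
    h1, edges = np.histogram(a, bins=bins, range=(0, 1), density=True)
    h2, _ = np.histogram(b, bins=bins, range=(0, 1), density=True)
    width = np.diff(edges)
    return np.sum(np.minimum(h1, h2) * width)   # 1.0 = identical distributions

rng = np.random.default_rng(3)
img_a, img_b = rng.random((64, 64)), rng.random((64, 64))
print(hist_intersection(img_a, img_b))
```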
Perceptual hash (pHash) acts as an image fingerprint. This mathematical algorithm analyzes an image's content and represents it using a 64-bit number fingerprint. Two images' pHash values are "close" to one another if the images' content features are similar. The differences in pHash similarity can be used to measure the similarity between datasets.
```
print(dist.hash_simmilarity(visu.array_4d_to_rgba_image(mtf_fsdd_4d_org), visu.array_4d_to_rgba_image(mtf_fsdd_4d_gen_1)))
print(dist.hash_simmilarity(visu.array_4d_to_rgba_image(mtf_fsdd_4d_org), visu.array_4d_to_rgba_image(mtf_fsdd_4d_gen_2)))
```
```
51.5625
40.625
```
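A sketch of the perceptual-hash comparison using the imagehash library; the percentage convention and the synthetic images below are assumptions, not datagene's implementation.
```
import numpy as np
from PIL import Image
import imagehash

rng = np.random.default_rng(4)
img_a = Image.fromarray((rng.random((64, 64)) * 255).astype("uint8"))
img_b = Image.fromarray((rng.random((64, 64)) * 255).astype("uint8"))

h_a, h_b = imagehash.phash(img_a), imagehash.phash(img_b)
hamming = h_a - h_b                       # number of differing bits (0..64)
print(100 * (1 - hamming / 64.0))         # rough similarity percentage
```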
The Mantel test provides a means to test the association between distance matrices and has been widely used in ecological and evolutionary studies. Another permutation test, based on a Procrustes (shape-finding) statistic, PROTEST, was developed to compare multivariate data sets. Tests show that PROTEST is likely more powerful than the Mantel test for testing matrix association, and it can also assess the match for individual observations (not available with the Mantel test). The Procrustes statistic is higher for generated 1, but the Mantel statistic is higher for generated 2, so the result is not conclusive. If we are forced to say something, we might say that generated-2 is more correlated with the original data, and that generated-1's distribution is more similar to the original data.
```
pvalue, stat = dist.distance_matrix_tests(pwd_ss_2d_org,pwd_ss_2d_gen_1)
pvalue_2, stat_2 = dist.distance_matrix_tests(pwd_ss_2d_org,pwd_ss_2d_gen_2)
```
```
{'mantel': 0.0, 'procrustes': 0.0, 'rda': -0.0}
{'mantel': 0.5995869421294606, 'procrustes': 0.4925792204150222, 'rda': 0.9999999999802409}
{'mantel': 0.0, 'procrustes': 0.0, 'rda': -0.0}
{'mantel': 0.862124217928853, 'procrustes': 0.13215320039660494, 'rda': 0.9999999999636482}
```
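For intuition, here is a small permutation-based Mantel test written from scratch; `dist.distance_matrix_tests` uses its own implementations, so this is only a conceptual stand-in.
```
import numpy as np

def mantel(d1, d2, n_perm=999, seed=0):
    """Correlation between two square distance matrices, with a permutation p-value."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)
    obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(d1.shape[0])
        perm = d1[np.ix_(p, p)]            # permute rows and columns together
        if abs(np.corrcoef(perm[iu], d2[iu])[0, 1]) >= abs(obs):
            count += 1
    return obs, (count + 1) / (n_perm + 1)

# stat, pval = mantel(pwd_ss_2d_org, pwd_ss_2d_gen_1)
```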
Various non-parametric entropy estimation methods can be used to compute the difference between matrices, such as the K-L k-nearest neighbour continuous entropy estimator (centropy), correlation explanation (corex), and mutual information (MI). These scores might best be presented as multiples of relationships.
`diss_np_one = dist.entropy_dissimilarity(org.var(axis=0),gen_1.var(axis=0)); print(diss_np_one)`
```
OrderedDict([('incept_multi', 0.00864), ('cent_multi', 0.25087), ('ctc_multi', 28.56361), ('corexdc_multi', 0.14649), ('ctcdc_mult', 0.15839), ('mutual_mult', 0.32102), ('minfo', 0.91559)])
```
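A crude, assumed stand-in for the mutual-information "multiple": compare the average pairwise MI inside the original matrix with the same quantity inside the generated matrix after binning the columns. This is not the estimator the package uses.
```
import numpy as np
from sklearn.metrics import mutual_info_score

def mean_pairwise_mi(mat, bins=10):
    n = mat.shape[1]
    scores = []
    for i in range(n):
        for j in range(i + 1, n):
            xi = np.digitize(mat[:, i], np.histogram_bin_edges(mat[:, i], bins))
            xj = np.digitize(mat[:, j], np.histogram_bin_edges(mat[:, j], bins))
            scores.append(mutual_info_score(xi, xj))
    return np.mean(scores)

# mi_multiple = mean_pairwise_mi(org.var(axis=0)) / mean_pairwise_mi(gen_1.var(axis=0))
```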
Next, the distance can simply be taken between the two matrices using various statistical and geometric distance metrics like correlation distance, intersection distance, Rényi divergence, Jensen-Shannon divergence, Dice, Kulsinski, Russell-Rao, and many others. In essence, the differences between matrices are collapsed down to a scalar value. These distance metrics are purposefully designed to be applied to matrices. Here I have extended the list to include not just numeric but also Boolean data. In dataset comparison, bootstrapped hypothesis testing might be useful to give the user some clarity on the statistical significance of the indicated divergence. Some methods are invalid for the data used and display a nan; for these values the input data can be fixed or the method can be dropped. Some of these measures are only applicable to binary data, but they execute regardless with all numeric datatypes. Note that these are all distance measures; all similarity measures are converted into distance measures, so it is not a "correlation" metric but a "correlation distance" metric.
`dist.matrix_distance(recipe_2_org,recipe_2_gen_1)`
```
OrderedDict([('correlation', 0.00039),
('intersection', 0.0),
('renyi_divergence', nan),
('pearson_rho', 0.0),
('jensen_shannon_divergence', nan),
('ks_statistic_kde', 0.09268),
('js_metric', 0.12354),
('dice', 1.75803),
('kulsinski', 0.00031),
('rogerstanimoto', 0.15769),
('russellrao', 5.46193),
('sokalmichener', 0.15769),
('sokalsneath', 0.00472),
('yule', 0.0372),
('braycurtis', 0.19269),
('directed_hausdorff', 5.38616),
('manhattan', 7.19403),
('chi2', 0.62979),
('euclidean', 5.64465),
('variational', 7.19403),
('kulczynski', nan),
('bray', 0.1941),
('gower', 0.33268),
('hellinger', 0.02802),
('czekanowski', 0.55339),
('whittaker', 0.00501),
('canberra', 4.44534)])
```
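Several of these distances can be computed directly with scipy on flattened matrices; the sketch below is illustrative, and datagene's definitions and normalisations may differ.
```
import numpy as np
from scipy.spatial import distance

def matrix_distances(a, b):
    x, y = np.ravel(a), np.ravel(b)
    return {
        "braycurtis": distance.braycurtis(x, y),
        "canberra": distance.canberra(x, y),
        "correlation": distance.correlation(x, y),
        "cosine": distance.cosine(x, y),
        "euclidean": distance.euclidean(x, y),
        "manhattan": distance.cityblock(x, y),
    }

# print(matrix_distances(recipe_2_org, recipe_2_gen_1))
```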
The vector extraction function calculates a large number of time series characteristics. These are calculated over multiple bootstrapped iterations to form a matrix; this matrix is then decomposed into X PCA components, from which the PCA error, correlation, and p-value are calculated. Currently the transformation includes about 30 different time series characteristics like `abs_energy`, `mean_abs_change`, `mean_second_derivative_central`, `partial_autocorrelation`, `augmented_dickey_fuller`, `gskew`, and `stetson_mean`.
```
dist.pca_extract_explain(np.sort(y_pred_org.mean(axis=1)),np.sort(y_pred_gen_1.mean(axis=1)))
dist.pca_extract_explain(np.sort(y_pred_org.mean(axis=1)),np.sort(y_pred_gen_2.mean(axis=1)))
```
```
PCA Error: 0.07666231511948172, PCA Correlation: 0.9996278922766885, p-value: 8.384146445855097e-14
(0.07666231511948172, 0.9996278922766885, 8.384146445855097e-14)
PCA Error: 0.028902437880890735, PCA Correlation: 0.9999364499384681, p-value: 7.135244167278149e-17
(0.028902437880890735, 0.9999364499384681, 7.135244167278149e-17)
```
Similar to the matrix functions, but applied to vectors.
```
braycurtis canberra correlation cosine dice euclidean ...
Iteration_0 0.101946 318.692930 0.030885 0.019464 0.571581 ...
Iteration_1 0.097229 306.932707 0.028121 0.017263 0.556409 ...
Iteration_2 0.102882 314.205121 0.031078 0.019340 0.602853 ...
Iteration_3 0.094278 304.127560 0.028063 0.017154 0.535805 ...
Iteration_4 0.097794 325.415987 0.029636 0.018002 0.566395 ...
```
A method to calculate the difference in distribution using vector distance metrics.
`vect_gen_dens_dist, vect_org_dens_dist = dist.distribution_distance_map(pd.DataFrame(org.mean(axis=(1)),columns=f_names),pd.DataFrame(gen_1.mean(axis=(1)),columns=f_names),f_names)`
```
Open High Low Close Adj_Close Volume
braycurtis 0.584038 0.586344 0.591567 0.582749 0.587926 0.725454
canberra 9.810338 9.941922 10.033852 9.815635 9.960998 14.140223
correlation 0.877240 0.823857 0.823024 0.826746 0.813448 1.145181
... ... ... ... ... ... ...
```
Another technique is to take a vector and calculate the probability density function using univariate Kernel Density Estimation (KDE), and then compare the curves. Multiple curve metrics can be used to look at the difference, such as curve length difference, partial curve mapping, discrete Fréchet distance, dynamic time warping, and the area between curves. This technique would work with any other curves too, like ROC-AUC curves or cumulative sum curves.
`dist.curve_metrics(matrix_org_s, matrix_gen_s_1)`
```
{'Area Between Curves': 0.60957,
'Curve Length Difference': 25.60853,
'Discrete Frechet Distance': 2.05938,
'Dynamic Time Warping': 217.50606,
'Mean Absolute Difference': 0.53275,
'Partial Curve Mapping': 159.14488}
```
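A sketch of the KDE-curve idea: estimate both densities on a shared grid and integrate the absolute gap, which is one of several possible curve metrics (the other metrics above come from dedicated curve-comparison routines).
```
import numpy as np
from scipy.stats import gaussian_kde

def area_between_kdes(x, y, grid_size=200):
    grid = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), grid_size)
    kde_x, kde_y = gaussian_kde(x)(grid), gaussian_kde(y)(grid)
    step = grid[1] - grid[0]
    return np.sum(np.abs(kde_x - kde_y)) * step   # approximate integral of |gap|

rng = np.random.default_rng(5)
a, b = rng.normal(0, 1, 500), rng.normal(0.3, 1.2, 500)
print(area_between_kdes(a, b))
```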
Curve metrics across flattened dataframes for all third-axis features, transformed through a kernel density estimation.
`vect_org_dens_curve = dist.curve_kde_map(df_org_2d_flat.sample(frac=frac).astype('double'),df_org_2d_flat.sample(frac=frac).astype('double'), f_names, 0.01)`
```
Open High Low Close Adj_Close Volume
Curve Length Difference 0.499444 0.513556 0.518112 0.526037 0.527647 0.351608
Partial Curve Mapping 0.366652 0.362188 0.359239 0.373632 0.366966 0.296968
Discrete Frechet Distance 0.090328 0.092736 0.090900 0.093791 0.093466 0.073793
Dynamic Time Warping 1.898949 2.055921 1.914067 2.013428 1.969417 1.789365
Area Between Curves 0.035566 0.036917 0.035882 0.036786 0.036718 0.031578
```
Vectors also have their own tests to look at similarity, such as the Pearson correlation, Wilcoxon rank-sum, Mood's two-sample, Fligner-Killeen, Ansari-Bradley, Bartlett's, Levene, and Mann-Whitney rank tests.
` dict_sta, dict_pval = dist.vector_hypotheses(matrix_org[:, 1],matrix_gen_1[:, 1])`
```
Statistic
{'pearsonr': 0.6489227957382259, 'ranksums': -267.40109998538, 'mood': 74.66159732420131, 'fligner': 18979.312108773225, 'ansari': 547045501353.0, 'bartlett': 299084.5868101086, 'levene': 15724.282328938525, 'mannwhitneyu': 432539640953.0}
P-Value
{'pearsonr': 0.0, 'ranksums': 0.0, 'mood': 0.0, 'fligner': 0.0, 'ansari': 3.880810985159465e-35, 'bartlett': 0.0, 'levene': 0.0, 'mannwhitneyu': 0.0}
```
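The same kind of tests can be run directly with scipy.stats; the short sketch below covers a subset of the tests listed above, on synthetic vectors.
```
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
v1, v2 = rng.normal(0, 1, 1000), rng.normal(0.1, 1.1, 1000)

results = {
    "pearsonr": stats.pearsonr(v1, v2),
    "ranksums": stats.ranksums(v1, v2),
    "levene": stats.levene(v1, v2),
    "bartlett": stats.bartlett(v1, v2),
    "mannwhitneyu": stats.mannwhitneyu(v1, v2),
}
for name, res in results.items():
    print(name, res)
```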
**Methods**
The purpose of this package is to compare datasets for similarity. Why would we be interested in dataset similarity and not just the utility or predictive quality of the data? The most important reason is to preserve the interpretability of the results. If the sole purpose of the generated data is to be used in black-box machine learning models, then similarity is not a prerequisite, but for just about any other reason, data similarity is a must. Think along the lines of feature importance scores, data exploration, causal and associative analysis, decision-making, anomaly detection, scenario analysis, and software development.
There is a slight difference between testing dataset quality and testing model performance using data: for dataset comparison we test one dataset versus many; for models it is many datasets versus many datasets, in which case you might move into higher-order tensors like tesseracts. Whether you want to compare a few datasets or series of datasets, this package enables you to move into the appropriate dimension.
Datasets can largely be compared using quantitative and visual methods. Generated data can take on many formats: it can consist of multiple dimensions of various widths and heights. Original and generated datasets have to be transformed into an acceptable format before they can be compared, and these transformations sometimes lead to a reduction in array dimensions. There are two reasons why we might want to reduce array dimensions: the first is to establish an acceptable format to perform distance calculations; the second is the preference for comparing like with like. The concatenated samples in a generated array are assumed independent from those of the original data, and an aggregation across all samples could lead to more accurate and interpretable distance statistics. For that reason, data similarity is a function not just of distance calculations and statistics, but also of data transformations.
**Motivation**
The reason why one would want to emphasise data relationships is the importance of data integrity in the data science process. Generating data that preserves the predictive signal but increases the relationship noise might be beneficial for a black-box prediction task, but that is about it.
This method is predicated on two ideas; the first being that certain distance and statistical metrics are only available, or are best tested, on data-structures of specific dimensions; the second, that lower and higher dimensional representations of data might lead to a better understanding of non-linear relationships within the data.
Input data therefore generally requires a transformation (e.g., a covariance matrix) plus a distance metric between the two transformed datasets in question (e.g., average element-wise Euclidean distance). In this way, one can develop transformation-distance recipes that best capture differences in your data.
The benefit of an approach that catalogues various methods is that the effectiveness of various transformation-plus-distance recipes can be tested against data generated with procedures known to be optimal versus non-optimal, by comparing the learning curves of discriminator and generator losses over time. One would then be able to empirically validate the performance of every distinct recipe.
This package would eventually carry two streams of data statistics, those for time-series and those from cross-sectional data.
**Transformations**
For most distance measures, we would prefer non-sample-specific comparisons. Real-versus-generated sample-specific distance could be useful as a measure of the overall bias of the generated data. Generally we also want to focus on the relationships and not just feature (columnar) bias, in which case it is important that the transformations blend the samples into the lower dimension for a row-agnostic comparison. Decomposition helps to decrease the data-structure dimensionality, and encoding increases it. A recipe could theoretically transform the data structure up a dimension and bring it down again, and by virtue of this process could help to expose non-linear relationships.
Multivariate time series data are generally generated as chunks of two dimensional arrays. These chunks can be captured in an additional dimension to create a rank three tensor. In such a scenario we might face a problem because of a lack of tensor comparison techniques. In other circumstances, one might start with a lower dimensional array but have the need to identify higher dimensional relationships and therefore perform encoding functions that lead to high-dimensional data structures.
Either way, there is a need to transform data to lower, acceptable dimensions to perform similarity calculations. For that reason we might want to transform a tesseract to a cubed tensor, a tensor to a matrix, and a matrix to a vector. With each additional transformation, data similarity techniques become easier to perform. To do this we can use factorization, aggregation, and other customised techniques. Some distance metrics have been adapted into statistical tests; the benefit of statistical tests is that we can set thresholds for what we will be comfortable with. We can also set thresholds with standardised data.
**Types of Transformations:**
- Data transformations to blend in samples*. (preferred)
- Transformations to decrease feature dimensions. (sometimes preferred)
- Additional transformations for distance functions. (sometimes needed)
- Additional transformations for hypotheses tests. (sometimes needed)
Example of Blend Operations:
- KDE lines.
- Sorting of data.
- Cumulative sum.
- PCA on Features.
- 2D Histogram.
To blend in samples means to move away from element-wise sample comparison towards structured comparisons.
**Visualisations**
As a sanity test, I have also provided a few plots of the data. See the Colab notebook for examples.
*This package draws inspiration from a range of methods developed or expounded on by researchers outside and inside the Turing (signitures, sktime and quipp). The data has been generated in the following Colab; the model has been developed by Turing Fellow, Mihaela van der Schaar.* | true | true | true | DataGene - Identify How Similar TS Datasets Are to One Another (by @firmai) - firmai/datagene | 2024-10-13 00:00:00 | 2020-05-09 00:00:00 | https://opengraph.githubassets.com/1992d624e052944aff05b615f22acef3ca1dd8e41e1d9a63154eb49b87d12ce0/firmai/datagene | object | github.com | GitHub | null | null |
39,126,736 | https://www.nytimes.com/2024/01/17/us/politics/atomic-bomb-secret-funding-congress.html | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
17,102,133 | https://vim8.org/vim-8.1-released.php | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
6,284,879 | http://blog.semilshah.com/2013/08/27/listen-to-fred-wilsons-blog-avc-on-swell-radio/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
1,075,225 | http://www.seobook.com/google-adwords-tax-calculator | Google AdWords Tax Calculator | Giovanna | # Google AdWords Tax Calculator
Many experienced advertisers realize that there are many gotchas in the AdWords system...optimization tools and default settings which optimize to boost Google's yield at the expense of unsuspecting advertisers, who don't yet know what match types are or that their ads are syndicated to content sites by default.
To help new advertisers get past many of the gotchas we created the Google AdWords tax calculator - a free utility which highlights many stumbling blocks that catch new AdWords advertisers.
Given that each keyword market is unique it would be impossible to make a tool that was 100% accurate in every situation, but the goal of this tool was to simply highlight common issues, and help new advertisers address them. Individual efficiency gains may be greater or smaller than the rough initial estimates the tool provides.
Please let us know what you think, as we will gladly iterate this calculator to make it better if you have some great ideas you think we should include in it. Like all of Google's products, our calculator is starting out in beta :D
## Comments
Hi,
I have spent $1500 already to promote my new site cheekugames.com, but I am still not making much money through AdSense. Now I know how I was running my ad campaign in the wrong manner; I will try to rectify this and hopefully make some money.
cheeku
It is quite hard to arbitrage buying Google traffic and selling it back out to Google while still having any profit margins left over. | true | true | true | null | 2024-10-13 00:00:00 | 2010-01-27 00:00:00 | null | null | seobook.com | SEO Book blog | null | null
1,526,104 | http://latestatic.com/what-a-programmer-sees-when-he-watches-incept | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
19,153,800 | https://www.influxdata.com/blog/influxdata-secures-60-million-in-series-d-funding-to-bring-the-value-of-time-series-to-the-enterprise-mainstream/ | InfluxData Secures $60 Million in Series D Funding | InfluxData | Mark Herring | # InfluxData Secures $60 Million in Series D Funding to Bring the Value of Time Series to the Enterprise Mainstream
By Mark Herring / Company, Press Releases / Feb 13, 2019
*Norwest leads round, emphasizing strong growth and market potential; former MongoDB CEO joins board of directors*
**SAN FRANCISCO** – InfluxData, creator of the leading time series database InfluxDB, has raised $60 million in Series D funding, led by Norwest Venture Partners and joined by Sorenson Capital and existing investors Sapphire Ventures, Battery Ventures, Mayfield Fund, Trinity Ventures and Harmony Partners.
The proliferation of data continues to grow, prompted mainly by the instrumentation of the virtual (e.g., Kubernetes, microservices) and physical (e.g., IoT sensors) worlds. As enterprises look for ways to achieve better results, gain new insights and grow their customer base, analysis of time series data is critical and is driving the need for purpose-built solutions. InfluxDB is the leading time series database – which has been the fastest growing database category over the last two years, according to DB-Engines, the industry ranking authority. InfluxData more than doubled revenue, new customer logos, and employees in 2018 and is positioned to accelerate momentum through 2019 and beyond. To date, most of the company’s growth can be attributed to great product-market fit, a developer-focused approach and demand from the open source community.
“InfluxData’s time series vision is the future of data management,” said Rama Sekhar, partner at Norwest Venture Partners and new InfluxData board member. “Enterprises are experiencing a data revolution with massive increases in volumes of data. InfluxData is empowering enterprises to discover insights about their data by organizing it in the time dimension, a critical approach for use cases such as DevOps observability and IoT analytics. Developer support is also key in this space and the cult-following from the open source community signals a unique vision and value.”
The new infusion of funds will support further investment in product innovation, with increased focus on the cloud, and building out sales and marketing programs to meet growing product demand, as well as customer support needs. InfluxData will begin to market its solutions for specific uses, including industrial IoT and network monitoring, and target industries such as e-commerce, gaming and financial services.
“A majority of data is best understood in the time dimension, which is why time series data is the foundation of so many of today’s critical technology investments, including IoT, observability, machine learning and predictive analytics,” said Evan Kaplan, CEO of InfluxData. “The new funding will enhance our ability to deliver versatile solutions to countless companies looking to harvest insights from time and support our aggressive plans to scale the company. We’re also honored to have such esteemed individuals joining our board of directors to enhance our strategic direction.”
“InfluxData has demonstrated consistent growth in a market with tremendous potential, which is why we’re strengthening our commitment in this round of funding,” said Anders Ranum, managing director at Sapphire Ventures and member of the InfluxData board of directors. “The company continues to innovate its technology, pushing the boundaries when it comes to time series data solutions and is transforming its business model to support current and future demand. There are great things coming from InfluxData.”
InfluxData is also proud to welcome Max Schireson, entrepreneur in residence at Battery Ventures and former CEO of MongoDB, to its board of directors. Schireson brings a wealth of database industry experience, having led and advised cloud, big data and open source technology companies through hyper-growth transformational phases.
InfluxData closed $35 million in Series C funding in January 2018, led by Sapphire Ventures and joined by Harmony Partners and existing investors Battery Ventures, Mayfield Fund and Trinity Ventures. Other past investors include Bloomberg Beta and Y Combinator.
**About InfluxData**
InfluxData, the creator of InfluxDB, delivers a modern open source platform, built from the ground up, for analyzing metrics and events (time series data) for DevOps and IoT applications. Whether the data comes from humans, sensors, or machines, InfluxData empowers developers to build next-generation monitoring, analytics, and IoT applications faster, easier, and to scale delivering real business value quickly. Based in San Francisco, InfluxData has more than 450 customers including Cisco, eBay, IBM and Siemens. For more information, visit www.influxdata.com. Twitter: @influxdb.
**About Norwest Venture Partners**
Norwest is a leading growth equity and venture investment firm managing more than $7.5 billion in capital. Since its inception, it has invested in more than 600 companies. The firm invests in early- to late-stage companies across a wide range of sectors with a focus on consumer, enterprise, and healthcare. It offers a deep network of connections, operating experience, and a wide range of impactful services to help CEOs and founders advance on their journey. Norwest has offices in Palo Alto and San Francisco, with subsidiaries in India and Israel. For more information, visit Norwest and follow on Twitter @NorwestVP. | true | true | true | Press Release: InfluxData Secures $60 Million in Series D Funding to Bring the Value of Time Series to the Enterprise Mainstream | 2024-10-13 00:00:00 | 2019-02-13 00:00:00 | article | influxdata.com | InfluxData | null | null |
11,325,603 | http://owlorbit.com/ | Owlorbit Speed Gun | null | We work closely with coaches to create software that matters the most to you and your teams. Continuously improving. Just like your athletes. | true | true | true | null | 2024-10-13 00:00:00 | null | null | null | null | null | null | null |
1,786,653 | http://librivox.org | Everyday Science For Every Day | Eduardo Ladislao Holmberg | ## Free public domain audiobooks
Read by volunteers from around the world.
Technology is ubiquitous in our world, and it’s all based on scientific discoveries. Let’s look at the beginnings of what we consider normal these days with 10 gems from our catalog. One of the best-known families of scientists with three Nobel Prizes to their name are the Curies. The biography of Pierre Curie, written by […]
LibriVox will turn 19 in a few days and thus enter the final one of its teenage years. Let’s take a look back at the last year with 10 newly arrived gems from our catalog. A young man just arrived at River Hall, The Uninhabited House in a fashionable neighborhood. He doesn’t know that its […]
Summer time means vacation time! But how about instead of going to a different place, you could go to a different time? Take a trip through history with 10 gems from our catalog. Henry Kenton is traveling from Kentucky to South Carolina on the eve of the American civil war. He is anxious to complete […]
There is something in the air that makes June a perfect month to get married. Be inspired – or reminisce – with 10 gems from our catalog. As far as inspiration goes, Edward J. Wood has done plenty of research already. Read about different ceremonies for The Wedding Day in All Ages and Countries as […] | true | true | true | null | 2024-10-13 00:00:00 | 2024-09-01 00:00:00 | null | null | null | null | null | null |
5,680,778 | http://skift.com/2013/05/09/who-needs-another-flight-search-startup/ | Does anyone really need another flight-search startup? | Dennis Schaal | ## Skift Take
To paraphrase a certain baseball philosopher, it's getting late very early for flight-search startups. For any chance of success, they'd better offer something unique, have a lot of funding, and figure out how to attract substantial revenue elsewhere.
Here’s some unsolicited advice for entrepreneurs: If you want to launch a travel startup, avoid building a flight-search startup, and focus on hotels or something else.
OK, the hotel sector is over-crowded, too, but you’ll have more of a chance of surviving than trying to subsist on flight commissions (they only exist for huge OTAs or mega travel agencies), flight-search referral fees, or flight-related advertising revenue.
This is hardly revolutionary advice, but consider all of the flight-related startups out there, including, Routehappy, Pintrips, GetGoing, Superfly, MileWise, Darjeelin, Flights With Friends, SocialFlights, Flightfox, and many more.
Actually, strike one flight-search startup, MileWise, from the list as the company, founded in 2010, announced today that Yahoo is acquiring the MileWise team and its service is shutting down.
Still, despite all the hurdles, there are probably more flight-search startups in stealth mode, and on the way.
To be sure, these flight-search startups have divergent business models, with some commercing in airline miles, data licensing, opaque offerings, subscription revenue from contests, and other tweaks, but virtually all have to contend with the low margins in flight search.
By one estimate, hotel referral fees and advertising can be five times more lucrative than flight-search revenue, and the only way to even subsist on flight-search revenue, if that’s your only business line, is to have tons of volume and to be a fairly large company.
Consider some of these tidbits:
- One of the Hipmunk co-founders stated a year or so after the company’s 2010 launch that if they had known more about the travel industry when they got into it, then they would have focused on hotels instead of flights from the outset. Today, Hipmunk debuted a “Book on Hipmunk” option for hotel bookings as a way to up its revenue take, and to increase hotel transactions.
- Serge Faguet, CEO of Russian OTA Ostrovok, says one of the best pieces of advice he received from two angel investors was for the company to abandon its initial focus on packaged travel and to pursue hotels instead.
- Kayak is overly dependent on its low-margin flight-search business, and is doing its best to expand on the hotel side. Somewhat counter-intuitively, though, consumers tend to start their travel planning with flights, and Kayak’s new global ad campaign emphasizes flight search.
- And Room 77 doesn’t bother with flights at all; it is hotel-only.
Companies such as Kayak, Hipmunk and Fly.com have built-in ways to navigate around their over-dependence on flights — at least temporarily.
Kayak’s margins on flights are much lower than on hotels, but Kayak has the volumes to make a business out of flights.
Hipmunk has more than $20 million in funding and a relatively small number of employees so it can afford to hang on for awhile to see if it can build its hotel business.
And, flight-only Fly.com is part of the larger Travelzoo so Fly.com benefits from Travelzoo’s subscriber base of travel-deal seekers, as well as cross-selling and mutual promotional activities.
But, one travel-startup veteran recently mused about what he considers to be the futility of flight-search startups, arguing that the margins they achieve are so low that most are destined to fail as they won’t be able to secure the resources to achieve any meaningful scale.
## Flightfox’s human-powered search
That sort of talk doesn’t deter startups such as Flightfox, which debuted its “human-powered” flight-search business in 2012. Flightfox, with office in Australia and the U.S., charges travelers a finders’ fee starting at $24 for the most simple itineraries. And, flight experts compete for travelers’ business.
Todd Sullivan, Flightfox co-founder, says the company hasn’t perfected its business yet, but the company is on its way in “proving a flight startup in 2013 can still make it.”
“The economics of the flight industry are only difficult if a startup can’t create unique value,” Sullivan says. “If a startup innovates and creates enough value, they can readily charge for it. This makes them much less reliant on industry economics or commissions paid.”
The economics of it all, depends on your “worldview,” Sullivan contends.
“Also, the notion of poor economics is based on a particular worldview,” he says. “To some, travel means commuting SFO to JFK. To others, it involves multiple continents and many months away. In the case of complex and premium travel, the economics are still great.”
A couple of other flight-search startups, Routehappy and Superfly, separately indicate that they know flight search by itself is insufficient to build a sustainable business, and they have plans to diversify revenue beyond referral and affiliate fees.
## Routehappy’s “Happiness Scores”
“We know most OTAs and metas think it’s all about hotel, and we get it — we’ve been there,” says Bob Albert, Routehappy founder and CEO, who previously was general manager of vacation-package seller Site59.
Routehappy recently redesigned its site, emphasizing flight Happiness Scores, taking into account amenities such as seat comfort and Wi-Fi, along with airfares.
“But it’s not metasearch business as usual for Routehappy,” Albert says. “We see a huge under-valued market in flight search.”
Albert believes Routehappy will make it based on its “unique content” because flyers need new ways to decipher “ever-changing flight products and services,” and “airlines need a meaningfully more modern distribution channel.”
“As a result, Routehappy will reach scale because of our unique content,” Albert says. “Happiness Factors that flyers care about personally and Happiness Scores that make selecting a flight easier than anywhere else.”
But, unique flight-search information, however long Routehappy maintains that advantage, will only go so far.
Albert says Routehappy is developing “unique targeting capabilities for airlines and others to upsell their products, which is an entirely different economic discussion from selling a low-fare ticket with airlines. We also have shopping and purchase analytics that are valuable to the industry.”
## Superfly looking for a win
In that regard, Routehappy has some parallels with Superfly as Jonathan Meiri, founder and CEO of Superfly, points out that the company reasoned “that if we win flight search with a more personalized model, then we can win the higher margin business further down the value chain.”
With so many competitors, though, it will be an uphill battle for Superfly to “win” in flight search.
Superfly’s angle is to inform frequent flyers about the value of their airline ticket purchases based on the airfare itself, coupled with mileage accruals, and the company has plans to work with airlines on targeting big-spending passengers.
“If there is no strategy to create value further down the value chain, the company will fail,” Meiri says.
So will lots of others.
Tags: search
Photo credit: One of the latest crop of flight-search startups, Flightfox charges travelers a finders' fee starting at $24 for the most simple itineraries. And, flight experts compete for travelers' business. Flightfox
| true | true | true | Despite all the financial hurdles, there are probably more flight-search startups in stealth mode, and on the way. They keep coming. | 2024-10-13 00:00:00 | 2013-05-09 00:00:00 | article | skift.com | Skift | null | null
3,622,375 | http://www.ekathimerini.com/4dcgi/_w_articles_wsite2_1_21/02/2012_429208 | 50 Years Ago Today | eKathimerini.com | Newsroom | # 50 Years Ago Today
NOVEMBER 22, 1951 PLASTIRAS TAKEN ILL: According to a medical bulletin issued from the prime minister’s political bureau yesterday (November 15), Prime Minister Nikolaos Plastiras was troubled by some chest pains last Thursday which forced him to remain in bed. The bulletin, signed by his doctors Professor N. Tsaboulas, K. Maroulis head of the Red Cross Hospital and K. Samaras, a cardiologist, also said that the prime minister has now fully recovered and that his general state of health is excellent. KING INFORMED: The general director of the prime minister’s political bureau Mr Moatsos went to the Palace yesterday morning to inform the king of the prime minister’s illness. He also visited the US Ambassador John Purifoy to brief him. ON MACEDONIA – New York, 21 (from our correspondent): The magazine Newsweek has reported that a group of Greek communists in Bulgaria clashed with a group of Bulgarians because the latter claimed that Macedonia should be annexed to Bulgaria. NO EGGS: Fresh eggs have completely disappeared from the market. They are being sold under the table for 1,700-1,800 each. PRICE OF SOVEREIGN: The price of the British gold sovereign on the stock market is 226,200 drachmas. Given the current state of affairs, it is more than certain that the USA and the EU will take no stabilizing initiatives. The approaching Balkan tribulations will be another test of the EU’s and USA’s mediatory and interventionist credibility. | true | true | true | NOVEMBER 22, 1951 PLASTIRAS TAKEN ILL: According to a medical bulletin issued from the prime minister’s political bureau yesterday (November 15), Prime Minister Nikolaos Plastiras was troubled by some chest pains last Thursday which forced him to remain in bed. The bulletin, signed by his doctors Professor N. Tsaboulas, K. Maroulis head of the Red […] | 2024-10-13 00:00:00 | 2001-11-22 00:00:00 | article | ekathimerini.com | ΚΑΘΗΜΕΡΙΝΕΣ ΕΚΔΟΣΕΙΣ ΜΟΝΟΠΡΟΣΩΠΗ Α.Ε. Εθν.Μακαρίου & Φαληρέως 2 | null | null |
|
11,018,384 | https://en.wikipedia.org/wiki/Wainfan_Facetmobile_FMX-4 | Wainfan Facetmobile - Wikipedia | null | # Wainfan Facetmobile
FMX-4 Facetmobile | |
---|---|
General information | |
Type | Homebuilt Aircraft |
National origin | United States |
Designer | Barnaby Wainfan |
Number built | 1 |
History | |
First flight | April 22, 1993 |
The **Wainfan FMX-4 Facetmobile** is an American homebuilt aircraft designed by Barnaby Wainfan, a Northrop Grumman aerodynamicist and homebuilt aircraft engineer.
The FMX-4 Facetmobile prototype was built by Lynne Wainfan, Barnaby Wainfan, and Rick Dean in Chino, California. Designer Barnaby Wainfan flew the plane to the Experimental Aircraft Association's Oshkosh fly-in in July 1994. That debut along with media coverage has sparked interest in its unique design and gentle flying qualities.[1] The aircraft is unusual in that it is a lifting body – the whole aircraft acts as a low aspect ratio wing: a flat, angular lifting shape, unlike traditional aircraft which use distinct lift-generating wings attached to a non-lifting fuselage. Also notably the aircraft's shape is formed of a series of 11 flat surfaces, somewhat similar to the body of the F-117 Nighthawk jet strike aircraft in using flat plates, but without separate wing structures. Although aerodynamic efficiency is reduced due to the simplistic shaping, that shaping reduces structural weight, improving payload mass fraction.[2]
## Design and development
### Shape
The FMX-4 Facetmobile shape forms 11 flat planes, plus two wingtip rudders. Three flat shapes form the bottom of the aircraft (slightly inclined front, flat middle, and sharply raised back), and eight form the top (one large downwards-sloping rear section, one thin nose section, and three inclined side panels per side). The wing section is an 18% thickness ratio, much thicker than the typical 12-15% thickness of normal light aircraft wings. At least one commercial model airplane kit of the Facetmobile is in production.[3]
The prototype FMX-4 Facetmobile crashed on October 13, 1994, after an in-flight engine failure. The aircraft landed at low speed into a barbed wire fence, which caused extensive skin, engine, and some structural damage, though there was no injury to the pilot, Barnaby Wainfan.[4] As of 2006, the aircraft has been partially repaired but not flown again.
### Structure
The Facetmobile structure is composed of 6061 aluminum tubing fastened with Cherrymax rivets. The fuselage uses conventional fabric covering. The aircraft uses elevons and rudders for control. The landing gear is a fixed tricycle type. The large windshield sections are augmented by two floor-mounted windows. The aircraft is boarded through a bottom-mounted hatch. The aircraft has a BRS parachute system installed.
## Variants
Wainfan has proposed two derivative aircraft based on the FMX-4 Facetmobile.
- FMX-5 Facetmobile, a larger 2-seat design using the same aluminum-tube-and-fabric construction.
- An unnamed similar 2-seat design using advanced flat composite panel construction.[2]
## Specifications (Facetmobile FMX-4)
*Data from* [1]
**General characteristics**
- **Crew:** 1
- **Length:** 19 ft 6 in (5.94 m)
- **Wingspan:** 15 ft (4.6 m)
- **Wing area:** 214 sq ft (19.9 m2)
- **Empty weight:** 370 lb (168 kg)
- **Gross weight:** 740 lb (336 kg)
- **Fuel capacity:** 10-13 gallons
- **Powerplant:** 1 × Rotax 503 DC, 50 hp (37 kW)
- **Propellers:** 3-bladed GSC ground adjustable
**Performance**
- **Maximum speed:** 96 kn (110 mph, 178 km/h)
- **Cruise speed:** 80 kn (92 mph, 150 km/h)
- **Rate of climb:** 750 ft/min (3.8 m/s)
- **Wing loading:** 3.45 lb/sq ft (16.8 kg/m2)
## See also
**Aircraft of comparable role, configuration, and era**
- Dean Delt-Air 250
- Dyke Delta
- Rohr 2-175
- Verhees D-Plane 1
## References
1. Jack Cox (October 1994). "The Facetmobile". *Sport Aviation*.
2. NASA LARC NAG-1-03054, "Feasibility Study of the Low Aspect Ratio All-Lifting Configuration as a Low-Cost Personal Aircraft", Barnaby Wainfan and Hans Neiubert, February 2004, accessed October 24, 2006.
3. Incredible Facetmobile, accessed October 24, 2006.
4. Wise, Jeff (January 2005). "The Daring Visionaries of Crackpot Aviation -- Barnaby Wainfan: Aero Ace Piecing it Together". *Popular Science*.
## External links
- Barnaby Wainfan's website, accessed October 17, 2022.
- Quarter scale Facetmobile Youtube clip of a radio-controlled model. | true | true | true | null | 2024-10-13 00:00:00 | 2006-10-25 00:00:00 | website | wikipedia.org | Wikimedia Foundation, Inc. | null | null |
|
13,522,680 | https://lmalmanza.wordpress.com/2017/01/30/understanding-startup-valuation-methods/ | Understanding Startup Valuation Methods | View all posts by Luis Almanza | It is difficult to understand **startup valuation**: since almost everything is intangible, there are no clear paths to defining a clear **value** for a **startup**.
There are many ways to project the **value** of a **company** for purposes of **pricing** an **investment**, but all of them rely on **entrepreneurs' projections** or **comparable companies** as a **starting point**.

You can find different **valuation methods**, but the question is: which model considers all the relevant factors? Or which is better for my company? There is no single answer, and there is no precise **valuation** for **early-stage companies**. In most cases professionals calculate a number of **valuation models** and scenarios and take a weighted average of them.

In the end, that number is a **starting point** or an instrument for a **negotiation**. A deal will happen at a **certain price**, which is best known as the **market valuation**.
**Think on value and risk.**
Every single thing that can add **value**, like a **patent**, a **great team**, **traction** or **product development**, is important for the **value**.

At the same time, those factors decrease the **risk**: for example, a **strong team** decreases the **execution risk** compared with a new and inexperienced team, and good **traction numbers** decrease the **customer acquisition risk**.

When you are ready, or think you are ready, to ask **investors** for **money**, you need to be clear about the **value** of your **startup**.

Last week we launched the **Startups Valuation Lab** at **Orion Technology Park** at **Tec de Monterrey**. The project is called **OFFICE** and it was launched in collaboration with the **Numbelt** company, **Finance students** from the **Business School**, and **Orion Startups**.
The goal is to provide **startups** with a **formal valuation**, **finance structure** and **risk analysis**.
You need a **valuation** if you want…
- To look for **investment**.
- Know the **company value**.
- Have better **finance control**.
- Avoid Legal conflicts.
- Avoid structure mistakes.
- Compare with competitors.
- Curiosity... the opportunities come from curiosity – Victor Lau.
According to Venionaire Capital, you can find three standard approaches for **Enterprise Values** (EV):
**1. Fair Market value** – the **value** of an **enterprise** determined by a willing buyer and a willing seller – both conscious of all relevant facts – to close a transaction.
**2. Strategic value** – the **value the company** has for a particular (strategic) **investor**. The positive effects like synergy, opportunistic costs and marketing-effects are considered and calculated for this **valuation** approach.
**3. Intrinsic value** – the **measure of business** value that reflects the **entrepreneur’s** in-depth understanding of the company’s economic potential.
At **OFFICE – Valuation Lab** we will be using four **valuation models**:
- **Berkus:** Valuation based on the assessment of key success factors.
- **Discounted Cash Flow:** Valuation based on the sum of all the future cash flows generated.
- **Comparable Transactions Method (Valuation multiples):** Valuation based on a rule of three with a KPI from a similar company.
- **Real options:** Valuation based on the probability of future cash flows.
There are other **valuation methods**:
- **Risk Factor Summation:** Valuation based on a base value adjusted for 12 standard risk factors.
- **Scorecard:** Valuation based on a weighted average value adjusted for a similar company.
- **Book Value:** Valuation based on the tangible assets of the company.
- **Liquidation value:** Valuation based on the scrap value of the tangible assets.
- **First Chicago:** Valuation based on the weighted average of 3 valuation scenarios.
- **Venture Capital:** Valuation based on the ROI expected by the investor.
# Berkus Method.
This is a **pre-revenue** method created by **Dave Berkus**. First you need the **value** of a similar company; then, based on it, you define **five key criteria** for **similar companies** and the **expected value** of each to make an **estimate**.

For example, imagine a **startup** that has the potential of reaching over $20 million in **revenues** within five years, and the key criteria are: Sound Idea, **prototype**, Quality Management Team, Strategic relationship, Product Rollout or Sales:

These numbers are maximum amounts that can be earned by the **startup** for each criterion; added together they form the **pre-money valuation**.
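As an illustration, here is a minimal sketch of the arithmetic; the per-criterion caps and the achieved fractions below are invented assumptions, not figures from the original worksheet:

```
# Berkus method: cap each success factor, sum what the startup has "earned".
# The caps and scores here are illustrative assumptions only.
CAPS = {
    "sound_idea": 500_000,
    "prototype": 500_000,
    "quality_management_team": 500_000,
    "strategic_relationships": 500_000,
    "product_rollout_or_sales": 500_000,
}

# Fraction of each cap the startup is judged to have earned (0.0 - 1.0).
scores = {
    "sound_idea": 1.0,
    "prototype": 0.6,
    "quality_management_team": 0.8,
    "strategic_relationships": 0.4,
    "product_rollout_or_sales": 0.0,
}

pre_money = sum(CAPS[k] * scores[k] for k in CAPS)
print(f"Berkus pre-money valuation: ${pre_money:,.0f}")  # $1,400,000 with these assumptions
```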
# Discounted Cash Flow Method
This **method** is useful if your **startup** has been generating **revenue** for some years. You estimate the **present value** of your company as the sum of all the **future cash flows** *(DCF method)*.
*PV= DCF1 + DCF2 +…+DCFn + TV*
The most important part of this method, when you are using it for **startups**, is to estimate the number of periods of **future cash flows** and the **Terminal Value (TV)**. You have two options:

1) Consider your **business** continuing to grow and generate **cash flows** indefinitely after *n* years. The formula for the **Terminal Value** is:

**TV = CFn+1 / (k - g)**

2) Consider an **exit** after *n* years. You need to estimate the **value** of the future acquisition and discount this future value to get its **net present value**:

**TV = exit value / (1 + k)^n**
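To tie the pieces together, here is a small numeric sketch of both variants; every number (the cash flows, the discount rate k, the growth rate g and the exit value) is an invented assumption:

```
# Discounted Cash Flow: PV = sum of discounted yearly cash flows + discounted terminal value.
# All numbers are illustrative assumptions.
cash_flows = [100_000, 150_000, 210_000, 280_000, 350_000]  # years 1..n
k = 0.30   # discount rate
g = 0.05   # perpetual growth rate after year n

pv_flows = sum(cf / (1 + k) ** t for t, cf in enumerate(cash_flows, start=1))
n = len(cash_flows)

# Option 1: growing perpetuity after year n
tv_perpetuity = cash_flows[-1] * (1 + g) / (k - g)
pv_option1 = pv_flows + tv_perpetuity / (1 + k) ** n

# Option 2: assumed exit (acquisition) value after year n
exit_value = 3_000_000
pv_option2 = pv_flows + exit_value / (1 + k) ** n

print(round(pv_option1), round(pv_option2))
```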
# Comparable Transactions Method (multiples)
It is a simple rule of three comparing another **company's value** and **key metrics** with yours (a small sketch follows the KPI list below):
- Monthly Recurring Revenue
- HR headcount
- Number of outlets
- Patent filed
- Weekly Active Users or WAU
- Sales
- Gross margin
- EBITDA
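As a rough sketch of the rule of three (the comparable company's value and both MRR figures are invented):

```
# Comparable transactions: value / KPI of a similar company gives a multiple,
# which is then applied to your own KPI. Numbers are illustrative assumptions.
comparable_value = 5_000_000   # known valuation of a similar company
comparable_mrr = 50_000        # its Monthly Recurring Revenue

multiple = comparable_value / comparable_mrr  # 100x MRR

my_mrr = 12_000
my_valuation = multiple * my_mrr
print(f"Estimated valuation: ${my_valuation:,.0f}")  # $1,200,000 with these assumptions
```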
# Risk Factor Summation Methods
Similar to the **Berkus Method**, this is a **pre-revenue** method described by Ohio TechAngels. You determine an **initial value** for your **startup** and adjust that **value** for the following **risk** factors:
- Management
- Stage of the business
- Legislation/Political risk
- Manufacturing risk
- Sales and marketing risk
- Funding/capital raising risk
- Competition risk
- Technology risk
- Litigation risk
- International risk
- Reputation risk
- Potential lucrative exit
Each **risk** is assessed as follows: +2 very positive, +1 positive, 0 neutral, -1 negative and -2 very negative.

The initial **pre-money valuation** is adjusted by +$250,000 for every +1 and by -$250,000 for every -1.
“Reflecting the premise that the higher the number of risk factors, then the higher the overall risk, this method forces investors to think about the various types of risks which a particular venture must manage in order to achieve a lucrative exit…” – Ohio TechAngels
# Scorecard Valuation Method
Also named the Bill Payne Method. Similar to **RFS** and Berkus, in this **pre-revenue** method you define an **initial value** and adjust that **value** based on a certain set of criteria. The difference is that those criteria are weighted based on their impact on startup success (a small sketch follows the worksheet link below).
- Strength of the Management Team (30%)
- Size of opportunity (25%)
- Product/Technology or Service (15%)
- Competitive Environment (10%)
- Marketing/Sales Channels/Partnerships (10%)
- Need for Additional Investment (5%)
- Other factors (5%).
**Scorecard Valuation Method WorkSheet: http://bit.ly/ScorecardValuationMethod**
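As a rough sketch of the weighting (the average comparable pre-money valuation and the comparison ratings are invented assumptions):

```
# Scorecard method: start from the average pre-money valuation of comparable
# pre-revenue startups, then scale it by a weighted comparison factor.
# All figures below are illustrative assumptions.
average_pre_money = 2_000_000

# (weight, how your startup compares to the average: 1.0 = average)
criteria = {
    "management_team":         (0.30, 1.25),
    "size_of_opportunity":     (0.25, 1.00),
    "product_or_technology":   (0.15, 1.10),
    "competitive_environment": (0.10, 0.80),
    "marketing_and_partners":  (0.10, 1.00),
    "need_for_investment":     (0.05, 1.00),
    "other_factors":           (0.05, 1.00),
}

factor = sum(weight * rating for weight, rating in criteria.values())
pre_money = average_pre_money * factor
print(round(factor, 3), round(pre_money))  # 1.07 and 2,140,000 with these assumptions
```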
# First Chicago Method
This method, named after the late First Chicago Bank, is based on **probabilities** with three scenarios (a worst case, a normal case and a best case). It is a **post-revenue** method since you need **financial information** including revenues, earnings, cash flows, exit horizon, etc.

This model combines elements of market-oriented and fundamental analytical methods. It is mainly used in the valuation of dynamic growth companies.

Each scenario is valued with the **DCF Method** and weighted by a percentage reflecting the probability of that scenario happening; the weighted values are then summed.
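A minimal sketch of that weighting; the scenario values and probabilities below are invented, and each scenario value would itself come from a DCF:

```
# First Chicago method: probability-weighted average of three DCF scenarios.
# Scenario valuations and probabilities are illustrative assumptions.
scenarios = {
    "worst":  (500_000,   0.25),
    "normal": (2_000_000, 0.50),
    "best":   (6_000_000, 0.25),
}

valuation = sum(value * prob for value, prob in scenarios.values())
print(f"First Chicago valuation: ${valuation:,.0f}")  # $2,625,000 with these assumptions
```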
# Venture Capital Method
The method is based on the future returns the **investor** expects. It was first described by Professor Bill Sahlman at Harvard Business School in 1987.

According to industry standards, the **investor** sets the price at which your company could be sold in *n* years (the terminal value) and the return they expect. Based on those two numbers, the **investor** can calculate the price to pay today for the company, adjusting for dilution and future rounds between now and the company sale.
*Return on Investment (ROI) = Terminal (or Harvest) Value ÷ Post-money Valuation*
*(in the case of one investment round, no subsequent investment and therefore no dilution)*
Then: **Post-money Valuation = Terminal Value ÷ Anticipated ROI**
Anticipated ROI: based on the **Wiltbank Study**, **investors** should expect a **27% IRR** in **six years**. Most **angels** understand that half of new ventures fail.
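A minimal sketch of the formulas above, with an invented terminal value, target return and investment amount:

```
# Venture Capital method: post-money = terminal value / anticipated ROI,
# pre-money = post-money - investment. Numbers are illustrative assumptions.
terminal_value = 20_000_000   # expected sale price in n years
anticipated_roi = 10          # investor targets a 10x return
investment = 500_000

post_money = terminal_value / anticipated_roi   # 2,000,000
pre_money = post_money - investment             # 1,500,000
print(post_money, pre_money)
```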
# Conclusions
Best practice is to use **multiple methods** for establishing the **pre-money valuation** for **seed/startup companies**.
Valuations never show the true value of your company. – Stéphane Nasser
They just show two things:
- How bad the market is willing to invest in your company.
- How bad you are willing to accept it.
The optimal amount raised is the maximal amount which, in a given period, allows the last dollar raised to be more useful to the company than it is harmful to the entrepreneur. – Pierre Entremont, Otium Capital
**Notes:**
*Post based on Stéphane Nasser's original post at Medium*
Useful online tool to make your valuation: https://www.equidam.com/
or contact OFFICE – **Valuation Lab services**. (erika.ramirez@itesm.mx)
Read more about: | true | true | true | It is difficult to understand startups valuation, since almost everything is intangible there are no clear paths to define clear value for a startup. There are many ways to project the value of a c… | 2024-10-13 00:00:00 | 2017-01-30 00:00:00 | article | wordpress.com | Luis Almanza's Blog | null | null |
|
26,449,115 | https://www.buyforlife.com/blog/017nEbtoG86hMtQqKPxAcd/why-product-research-is-broken-and-how-we-can-fix-it | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
37,245,449 | https://www.indiehackers.com/post/fine-tuning-gpt-3-5-to-benefit-and-grow-your-saas-business-fc5c77c588 | Fine-tuning GPT-3.5 to benefit and grow your Saas business | Herman Schutte | Hi everyone!
Like a lot of people I've been using ChatGPT to come up with initial drafts of emails and web copy. This has been working well, as long as you prime ChatGPT with context before every task.
Yesterday, OpenAI announced that businesses and developers now have the ability to fine-tune GPT-3.5 Turbo with their personal data. This means you can benefit from creating a custom-trained and tuned GPT model for your company, allowing for more precise responses and a better match to your specific tone.
I posted about how to use this in the context of a virtual assistant: https://sitespeak.ai/blog/gpt-3-5-turbo-fine-tuning-custom-model-training, but any Saas can use this to easily create a custom model that they can use instead of ChatGPT to create email and website copy (or draft blog posts).
To get started you'll need an OpenAI API key. Once you have your key, you'll need to prepare some data, upload it to OpenAI and create your fine-tune job.
**Step 1: Prepare your data**
You'll have to get a dataset ready and send it to the OpenAI API for fine-tuning your model. At least 10 examples are needed, but you'll likely see more improvements with more than 50.
Each example in the dataset should be in line with the Chat Completions API format, meaning a list of messages with each containing a role, content, and maybe a name. Make sure some of the training samples target situations where the prompted model doesn't behave right. The assistant messages in the data should be the ideal answers you want your model to provide.
**Example Dataset**
```
{"messages": [{"role": "system", "content": "SiteSpeak AI Assistant is a customer support virtual assistant that is helpful and friendly."}, {"role": "user", "content": "What is SiteSpeakAI?"}, {"role": "assistant", "content": "SiteSpeakAI is a tool that allows you to create a custom trained chatbot that uses ChatGPT to answer you visitor questions."}]}
{"messages": [{"role": "system", "content": "SiteSpeak AI Assistant is a customer support virtual assistant that is helpful and friendly."}, {"role": "user", "content": "How much does it cost?"}, {"role": "assistant", "content": "We have 3 pricing plans, $30/mo, $60/mo and $500/mo."}]}
{"messages": [{"role": "system", "content": "SiteSpeak AI Assistant is a customer support virtual assistant that is helpful and friendly."}, {"role": "user", "content": "Does it support Wordpress?"}, {"role": "assistant", "content": "Yes! You can easily install SiteSpeakAI on Wordpress and add a chatbot to your website."}]}
```
**Step 2: Upload your training data**
There's a complete script to upload and train a model in the post, but if you just need to upload a file:
```
import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")
# Upload file
print('Uploading file...')
file = openai.File.create(
file=open("yourfile.jsonl", "rb"),
purpose='fine-tune'
)
print('File uploaded: ', file)
```
**Step 3: Create your fine-tuned model**
After uploading your file and making sure the status has changed from uploaded to processed, you can create your fine-tune training job. Training jobs can take a while to complete (5 - 10 minutes). Once the training job is done, the status of the job will change to succeeded. You will then be able to get the fine_tuned_model from the response and use this model ID for inference.
```
import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")
# Create fine-tuning job
print('Creating fine-tuning job...')
job = openai.FineTuningJob.create(
training_file="file-123abc", model="gpt-3.5-turbo")
print('Job created.')
print(job)
```
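While you wait, you can poll the job's status with the same SDK (a small sketch; replace the job ID with the one printed above):

```
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Poll the fine-tuning job created above (use your own job ID)
job = openai.FineTuningJob.retrieve("ftjob-123abc")
print(job.status)            # e.g. "running", then "succeeded"
print(job.fine_tuned_model)  # set once the job has succeeded
```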
**Step 4: Use your fine-tuned model**
You can now use your newly trained GPT-3.5 Turbo model for inference:
```
import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")
completion = openai.ChatCompletion.create(
model="your-fine-tune-model-id",
messages=[
{"role": "system", "content": "SiteSpeak AI Assistant is a customer support virtual assistant that is helpful and friendly."},
{"role": "user", "content": "Please write a short overview of what SiteSpeakAI is and how it can benefit a business."}
]
)
print(completion.choices[0].message)
```
You now have a custom trained GPT model you can use to craft brand accurate emails and copy 😎
Would love to know if anyone else is doing this as well?
Also, please check out https://sitespeak.ai and let me know what you think. It's a crowded marked I know, but I think SiteSpeakAI has a few features that sets it apart from the rest. | true | true | true | Hi everyone! Like a lot of people I've been using ChatGPT to come up with initial drafts of emails and web copy. This has been working well, as long as... | 2024-10-13 00:00:00 | 2023-08-24 00:00:00 | https://storage.googleapis.com/indie-hackers.appspot.com/shareable-images/posts/fc5c77c588 | article | indiehackers.com | Indie Hackers | null | null |
30,019,657 | https://superduperserious.substack.com/p/fintech-more-like-funtech | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
12,016,562 | https://github.com/tarsana/syntax | GitHub - tarsana/syntax: A tool to parse strings based on data structure definitions | Tarsana | A tool to encode and decode strings based on flexible and composable syntax definitions.
**Warning**: This is just a teaser so if the code seems confusing don't worry, you will understand it after reading the Step by Step Guide.
Let's assume that you have the following text representing a list of developers where each line follow the syntax:
```
first-name last-name [number-of-followers] [repo-name:stars,repo-name:stars,...]
```
```
Tammy Flores 257 library:98,fast-remote:5,anyway:987
Rebecca Welch forever:76,oops:0
Walter Phillips 423
```
**Syntax** helps you to parse this document and convert it to manipulable objects easily.
Let's do it:
```
<?php
require __DIR__ . '/../vendor/autoload.php';
use Tarsana\Syntax\Factory as S;
// a repo is an object having a name (string) and stars (number), separated by ':'
$repo = "{name: string, stars:number}";
// a line consists of a first and last names, optional number of followers, and repos, separated by space. The repos are separated by ","
$line = "{first_name, last_name, followers: (number: 0), repos: ([{$repo}]:[]) | }";
// a document is a list of lines separated by PHP_EOL
$document = "[{$line}|".PHP_EOL."]";
// Now we make the syntax object
$documentSyntax = S::syntax()->parse($document);
// Then we can use the defined syntax to parse the document:
$developers = $documentSyntax->parse(trim(file_get_contents(__DIR__ . '/files/devs.txt')));
```
`$developers`
will contain the following:
```
[
{
"first_name": "Tammy",
"last_name": "Flores",
"followers": 257,
"repos": [
{
"name": "library",
"stars": 98
},
{
"name": "fast-remote",
"stars": 5
},
{
"name": "anyway",
"stars": 987
}
]
},
{
"first_name": "Rebecca",
"last_name": "Welch",
"followers": "",
"repos": [
{
"name": "forever",
"stars": 76
},
{
"name": "oops",
"stars": 0
}
]
},
{
"first_name": "Walter",
"last_name": "Phillips",
"followers": 423,
"repos": ""
}
]
```
You modified `$developers`
and want to save it back to the document following the same syntax ? You can do it:
```
// ... manipulating $developers
file_put_contents('path/to/file', $documentSyntax->dump($developers));
```
Install it using composer
```
composer require tarsana/syntax
```
The class `Tarsana\Syntax\Factory`
provides useful static methods to create syntaxes. In this guide, we will start with the basics then show how to use `SyntaxSyntax`
to do things faster.
```
<?php
use Tarsana\Syntax\Factory as S;
$string = S::string(); // instance of Tarsana\Syntax\StringSyntax
$string->parse('Lorem ipsum dolor sit amet');
//=> 'Lorem ipsum dolor sit amet'
$string->parse('');
// Tarsana\Syntax\Exceptions\ParseException: Error while parsing '' as String at character 0: String should not be empty
$string->dump('Lorem ipsum dolor sit amet');
//=> 'Lorem ipsum dolor sit amet'
$string->dump('');
//=> ''
```
```
<?php
use Tarsana\Syntax\Factory as S;
$number = S::number(); // instance of Tarsana\Syntax\NumberSyntax
$number->parse('58.9'); //=> 58.9
$number->parse('Lorem12');
// Tarsana\Syntax\Exceptions\ParseException: Error while parsing 'Lorem' as Number at character 0: Not a numeric value
```
```
<?php
use Tarsana\Syntax\Factory as S;
$boolean = S::boolean(); // instance of Tarsana\Syntax\BooleanSyntax
$boolean->parse('true'); //=> true
$boolean->parse('yes'); //=> true
$boolean->parse('y'); //=> true
$boolean->parse('TrUe'); //=> true (case insensitive)
$boolean->parse('false'); //=> false
$boolean->parse('no'); //=> false
$boolean->parse('N'); //=> false
$boolean->parse('Lorem');
// Tarsana\Syntax\Exceptions\ParseException: Error while parsing 'Lorem' as Boolean at character 0: Boolean value should be one of "yes", "no", "y", "n", "true", "false"
$boolean->dump(true); //=> 'true'
$boolean->dump(false); //=> 'false'
$boolean->dump('Lorem');
// Tarsana\Syntax\Exceptions\DumpException: Error while dumping some input as Boolean: Not a boolean
```
`Tarsana\Syntax\ArraySyntax`
represents an array of elements having the same syntax and separated by the same string. So an `ArraySyntax`
is constructed using a `Syntax`
(could be `NumberSyntax`
, `StringSyntax`
or any other) and a `separator`
.
-
if the
`Syntax`
argument is missing, an instance of`StringSyntax`
is used by default. -
if the
`separator`
argument is missing,`','`
is used by default.
```
<?php
use Tarsana\Syntax\Factory as S;
$strings = S::array();
$strings->parse('aa:bb,cc,"ss,089",true');
//=> ['aa:bb','cc','ss,089','true']
// Note that we can use "..." to escape the separator
$strings->dump(['aa','bb,cc','76']);
//=> 'aa,"bb,cc",76'
// Yeah, it's smart enough to auto-escape items containing the separator
$vector = S::array(S::number());
$vector->parse('1,2,3,4,5');
//=> [1, 2, 3, 4, 5]
$matrix = S::array($vector, PHP_EOL);
$matrix->parse(
'1,2,3
4,5,6,7
8,9,100');
//=> [ [1, 2, 3], [4, 5, 6, 7], [8, 9, 100] ]
```
`Tarsana\Syntax\Optional`
represents an optional syntax. Given a syntax and a static default value; it will try to parse inputs using the syntax and return the default value when in case of failure.
```
<?php
use Tarsana\Syntax\Factory as S;
$optionalNumber = S::optional(S::number(), 10);
$optionalNumber->parse(15); //=> 15
$optionalNumber->success(); //=> true
$optionalNumber->parse('Yo'); //=> 10 (the default value)
$optionalNumber->success(); //=> false
```
`Tarsana\Syntax\ObjectSyntax`
represents an object in which every field can have its own syntax. It's defined by providing an associative array of fields and a `separator`
(if missing, the separator by default is `':'`
).
```
<?php
use Tarsana\Syntax\Factory as S;
$repository = S::object([
'name' => S::string(),
'is_private' => S::optional(S::boolean(), false),
'forks' => S::optional(S::number(), 0),
'stars' => S::optional(S::number(), 0)
]);
$repository->parse('tarsana/syntax');
// an stdClass as below
// {
// name: 'tarsana/syntax',
// is_private: false,
// forks: 0,
// stars: 0
// }
$repository->parse('tarsana/syntax:5');
// {
// name: 'tarsana/syntax',
// is_private: false,
// forks: 5,
// stars: 0
// }
$repository->parse('tarsana/syntax:yes:7');
// {
// name: 'tarsana/syntax',
// is_private: true,
// forks: 7,
// stars: 0
// }
$data = (object) [
'name' => 'foo/bar',
'is_private' => false,
'forks' => 9,
'stars' => 3
];
$repository->dump($data);
// 'foo/bar:false:9:3'
$developer = S::object([
'name' => S::string(),
'followers' => S::optional(S::number(), 0),
'repositories' => S::optional(S::array($repository), [])
], ' ');
$developer->parse('Amine');
// {
// name: 'Amine',
// followers: 0,
// repositories: []
// }
$developer->parse('Amine tarsana/syntax,webNeat/lumen-generators:16:57');
// {
// name: 'Amine',
// followers: 0,
// repositories: [
// { name: 'tarsana/syntax', is_private: false, forks: 0, stars: 0 },
// { name: 'webNeat/lumen-generators', is_private: false, forks: 16, stars: 57 }
// ]
// }
```
Now you know how to parse and dump basic types : `string`
, `boolean`
, `number`
, `array`
, `optional`
and `object`
. But you may notice that writing code for complex syntaxes (object including arrays including objects ...) requires many complex lines of code. `SyntaxSyntax`
was introduced to solve this issue. As the name shows, it's a `Syntax`
that parses and dumps syntaxes, a meta syntax!
So instead of writing this:
```
$personSyntax = S::object([
'name' => S::string(),
'age' => S::number(),
'vip' => S::boolean(),
'friends' => S::array()
]);
```
You simply write this
`$personSyntax = S::syntax()->parse('{name, age:number, vip:boolean, friends:[]}');`
- `S::string()` is `string`.
- `S::number()` is `number`.
- `S::boolean()` is `boolean`.
- `S::syntax()` is `syntax`.
- `S::optional($type, $default)` is `(type:default)` where `type` is the string corresponding to `$type` and `default` is `json_encode($default)`.
- `S::array($type, $separator)` is `[type|separator]` where `type` is the string corresponding to `$type` and `separator` is the same as `$separator`. If the separator is omitted (ie. `[type]`), the default value is `,`.
- `S::object(['name1' => $type1, 'name2' => $type2], $separator)` is `{name1:type1, name2:type2 |separator}`. If the separator is missing, the default value is `:`.
```
// '{name: string, age: number}'
S::object([
'name' => S::string(),
'age' => S::number()
])
// '{position: {x: number, y: number |"|"}, width:number, height:number}'
S::object([
'position' => S::object([
'x' => S::number(),
'y' => S::number()
], '|'),
'width' => S::number(),
'height' => S::number()
])
// '{name, stars:number, contributers: [{name, email|-}]}'
S::object([
'name' => S::string(),
'stars' => S::number(),
'contributers' => S::array(S::object([
'name' => S::string(),
'email' => S::string()
], '-'))
])
```
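For instance, a small sketch using the notation above (the syntax string and the input are made up):

```
<?php
use Tarsana\Syntax\Factory as S;

// "[number|;]" describes an array of numbers separated by ';'
$numbers = S::syntax()->parse('[number|;]');

$numbers->parse('1;2;3.5');
//=> [1, 2, 3.5]

$numbers->dump([4, 5, 6]);
//=> '4;5;6'
```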
-
**version 2.1.0**`syntax`
added to the string representation of a syntax and corresponds to the`S::syntax()`
instance.
-
**version 2.0.0**- Separators and default values can be specified when creating syntax from string.
- Escaping separators is now possible.
`OptionalSyntax`
added.- Attributes
`default`
and`description`
removed from`Syntax`
class. - Upgraded to PHPUnit 6 and PHP 7.
- No dependencies.
- Detailed Exceptions with position of errors.
- Better
`Factory`
methods.
-
**version 1.2.1**:-
`tarsana/functional`
dependency updated -
couple of bug fixes
-
-
**version 1.2.0**:-
`SyntaxSyntax`
added. -
`separator`
and`itemSyntax`
getters and setters added to`ArraySyntax`
. -
`separator`
and`fields`
getters and setters added to`ObjectSyntax`
.
-
-
**version 1.1.0**:`description`
attribut added to`Syntax`
to hold additional details.
-
**version 1.0.1**:-
Tests coverage is now
**100%** -
Some small bugs of
`ArraySyntax`
and`ObjectSyntax`
fixed.
-
-
**version 1.0.0**: String, Number, Boolean, Array and Object syntaxes.
Please take a look at the code and see how other syntax classes are done and tested before fixing or creating a syntax. All feedbacks and pull requests are welcome :D | true | true | true | A tool to parse strings based on data structure definitions - tarsana/syntax | 2024-10-13 00:00:00 | 2016-06-25 00:00:00 | https://opengraph.githubassets.com/ade608475906a290b85391bd78627da926886e8c8f31ae84542f7fc8ae8d5285/tarsana/syntax | object | github.com | GitHub | null | null |
12,789,230 | https://blog.cryptomilk.org/2016/10/25/hack-ms-catalog-files-and-digital-signatures/ | Microsoft Catalog Files and Digital Signatures decoded | Andreas Schneider | # Microsoft Catalog Files and Digital Signatures decoded
TL;DR: Parse and print .cat files: dumpmscat
## Introduction
Günther Deschner and I are looking into the new Microsoft Printing Protocol [MS-PAR]. Printing always means you have to deal with drivers. Microsoft package-aware v3 print drivers and v4 print drivers contain Microsoft Catalog files.
A Catalog file (.cat) is a digitally-signed file. To be more precise, it is a PKCS7 certificate with embedded data. Before I started to look into the problem of understanding them, I searched the web to see if someone had already decoded them. I found a post by Richard Hughes: Building a better catalog file. Richard described some of the things we already discovered and some new details. It looks like he gave up when it came down to understanding the embedded data and writing an ASN.1 description for it. I started to decode the myth of Catalog files over the last two weeks and created a tool for parsing them and printing what they contain in human readable form.
## Details
The embedded data in the PKCS7 signature of a Microsoft Catalog is a Certificate Trust List (CTL). Nikos Mavrogiannopoulos taught me ASN.1 and helped to create an ASN.1 description for the CTL. With this description I was able to start parsing Catalog files.
CATALOG {} DEFINITIONS IMPLICIT TAGS ::= BEGIN -- CATALOG_NAME_VALUE CatalogNameValue ::= SEQUENCE { name BMPString, -- UCS2-BE flags INTEGER, value OCTET STRING -- UCS2-LE } ... END
The PKCS7 part of the .cat-file is the signature for the CTL. Nikos implemented support to get the embedded raw data from the PKCS7 Signature with GnuTLS. It is also possible to verify the signature using GnuTLS now!
The CTL includes members and attributes. A member holds information about a file name included in the driver package, OS attributes and often a hash of the file's content, either SHA1 or SHA256. I've written abstracted functions so it is possible to create a library and a simple command line tool called *dumpmscat*.
Here is an example of the output:
CATALOG MEMBER COUNT=1 CATALOG MEMBER CHECKSUM: E5221540DC4B974F54DB4E390BFF4132399C8037 FILE: sambap1000.inf, FLAGS=0x10010001 OSATTR: 2:6.0,2:6.1,2:6.4, FLAGS=0x10010001 MAC: SHA1, DIGEST: E5221540DC4B974F54DB4E39BFF4132399C8037
In addition the CTL has normally a list of attributes. In those attributes are normally OS Flags, Version information and Hardware IDs.
CATALOG ATTRIBUTE COUNT=2 NAME=OS, FLAGS=0x10010001, VALUE=VistaX86,7X86,10X86 NAME=HWID1, FLAGS=0x10010001, VALUE=usb\\vid_0ff0&pid_ff00&mi_01
Currently the project only has a command line tool called *dumpmscat*, and it can only print the CTL for now. I plan to add options to verify the signature, dump only parts, etc. When this is done I will create a library so it can easily be consumed by other software. If someone is interested and wants to contribute: something like signtool.exe would be nice to have.
I noticed https://github.com/cryptomilk/parsemscat/ disappeared. Any plans to post it somewhere else?
You can find it here:
https://git.samba.org/?p=asn/samba.git;a=commitdiff;h=15a398821c305f8ae6aaa4248ab1fe63d2e027f4
https://github.com/gd/parsemscat | true | true | true | null | 2024-10-13 00:00:00 | 2016-10-25 00:00:00 | null | null | cryptomilk.org | blog.cryptomilk.org | null | null |
2,591,489 | http://www.searchengineoptimizationjournal.com/2011/05/27/blog-posts/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
33,773,802 | https://techletters.substack.com/p/techletters-106-risks-of-hacking | TechLetters #106 Risks of hacking electric vehicle charging stations at a scale. Tracking abuses of Cobalt Strike at a scale. Regulation of vulnerability research depends on... elections outcome? | Lukasz Olejnik | # TechLetters #106 Risks of hacking electric vehicle charging stations at a scale. Tracking abuses of Cobalt Strike at a scale. Regulation of vulnerability research depends on... elections outcome?
**TechLetters Insight**: why hacking electric vehicle charging stations? My analysis. First Insight release. I’ll consider this an infrequent but periodic format. That said, I am not sure whether to have it unlocked from the start. I really need to consider some supportive format for this thing.
# Security
**Identifying cracked Cobalt Strike in use**. Cobalt Strike is a cyberattack/cyberoperation combine harvester. Over the years, it got cracked and misused. "*Threat actors rely on cracked versions of Cobalt Strike to advance cyberattacks*". Maybe it’s worth to track its (ab)uses?
# Privacy
**Privacy/data protection and competition convergence?** The French data protection authority will investigate the role/position of privacy in anti-competition proceedings/investigation/enforcement, and the role of competition in privacy. In UK this is also an ongoing process. And EU EDPS started the conversation in 2014.
# Technology Policy
**The final version of the preliminary draft report concerning spyware/Pegasus/etc**. Completely cut out of details (previous details about curbs on vulnerability research/trade are purged). It now says that "the discovery, sharing and exploitation of vulnerabilities have to be regulated". Unclear how and would it work at all. However, it is even less clear whether the drafters realise the implications. Including for cybersecurity, but also for non-cyber security. At this moment there’s no proposal for a regulation. The soonest anything happens would be 2024/2025, if at all, because it would largely depend on the results of 2024 elections to the European Parliament. *You read it right*. Regulation of vulnerabilities could just become a political problem. Again? Might be different this time. On the other hand, perhaps nothing will happen out of it. We’ll know more in 2023.
# Other
In case you feel it's worth it to forward this content further:
If you’d like to share: | true | true | true | TechLetters Insight: why hacking electric vehicle charging stations? My analysis. First Insight release. I’ll consider this an infrequent but periodic format. That said, I am not sure whether to have it unlocked from the start. I really need to consider some supportive format for this thing. | 2024-10-13 00:00:00 | 2022-11-28 00:00:00 | article | substack.com | Lukasz Olejnik on Cyber, Privacy and Tech Policy Critique | null | null |
|
40,900,648 | http://jackkelly.name/blog/archives/2024/07/06/im_funding_ladybird_because_i_cant_fund_firefox/ | I'm Funding Ladybird Because I Can't Fund Firefox | Blog | null | I’ve been meaning to write this one for a while, but the announcement of the Ladybird Browser Initiative makes now a particularly good time.
**TL;DR:** Chrome is eating the web. I have wanted to
help fund a serious alternative browser for quite some time, and while
Firefox remains the largest potential alternative, Mozilla has never let
me. Since I can’t fund Firefox, I’m going to show there’s money in
user-funded web browsers by funding Ladybird instead. You
should too.
An open web requires a healthy ecosystem of several competing browsers, where each has enough market share that no one vendor has de facto control over web standards. That’s the world we used to have, after Firefox cracked the dominance of Microsoft’s Internet Explorer (IE) in the 1990s. IE’s poor support for internet standards held back web development all through the late 1990s and early 2000s, and competition from Firefox allowed developers to build “for the web” instead of “for IE6”, forcing browser vendors to catch up.
Unfortunately, we are back in a world without healthy browser competition. statcounter.com claims that Chrome, Google’s browser, has over 65% market share. Add Edge (which uses Blink, Chrome’s browser engine, under the hood) and you’re over 70%. This market dominance allows Google to push through changes like its “Manifest V3” format for browser extensions, which coincidentally cripples ad blockers.
While a discussion of the economics of ad-supported sites is outside the scope of this post, someone will ask “don’t you like free things on the internet?” if I don’t address it first.
Online advertising has become so obnoxious that ad blockers are all-but-necessary for users to get anything done online. More than that, I feel obliged to install ad blockers on family computers that I support: skipping all those megabytes of random advertising JavaScript significantly extends the lifespan of older computers and stops my less-technical family members from being tricked into installing fake malware versions of software. Even the FBI recommends ad blockers.
Sorry, the online advertising industry had its chance, and they blew it.

This is also the case for other user-hostile features like “Encrypted Media Extensions” (aka “DRM for the web”). A healthy browser ecosystem would have been able to vigorously push against features that take control from the users; instead, Mozilla caved in the hopes of maintaining Firefox market share but didn’t even get that.
According to the Mozilla Foundation’s Donation FAQ, “Firefox is maintained by the Mozilla Corporation, a wholly-owned subsidiary of the Mozilla Foundation. While Firefox does produce revenue — chiefly through search partnerships — this earned income is largely reinvested back into the Corporation”. “Search partnerships” means “Google”, who made up 81% of Mozilla Corporation’s revenue in 2022. This means Firefox’s primary revenue source is also their direct competitor, and they seem to have little ability to change that.
Mozilla has backed themselves into a very poor position. In recent years, Mozilla Corporation has made several controversial moves in pursuit of revenue. Off the top of my head, there was the Mr. Robot addon, automatically loaded into people’s browsers to advertise a TV show; sponsored links in the address bar; sponsored “top sites” on the “new tab” page; a reading list startup called “Pocket”, integrated into Firefox without warning; and a Mozilla VPN service, complete with in-browser pop-up ads. Cal Paterson has another good list. Meanwhile, Firefox market share falls and the outgoing Mozilla Corporation CEO gets paid millions (6.9 million USD in 2022 — see page 8).
The problem is that many people specifically use Firefox because they’re sick of advertising and cross-promotion everywhere and want a browser that’s “just a browser”. On top of that, silently installing addons like the Mr. Robot extension undermine user trust in one of the most sensitive software projects people use.
Despite desperately trying to find more revenue sources, Mozilla
Corporation stubbornly refuses to *just let users fund Firefox*.
Mozilla Foundation even has a specific donation form for Thunderbird
(Mozilla’s mail client), but not Firefox. I’m sure they could have found
some way of making it work with their corporate structure, and it
baffles me that they haven’t.
**EDIT 2024-07-07:** To clarify, the Thunderbird donation
page sends money to MZLA Technologies Corporation, which is a
different wholly-owned subsidiary from Mozilla Corporation. It
*is* set up to receive non-charitable donations, which makes the
decision to block users from funding Firefox seem even more bizarre.
Ladybird used to be the web browser for SerenityOS, a from-scratch hobby operating system written by Andreas Kling (and community). On 2024-06-03 (about a month ago), he forked Ladybird into a separate project and stepped away from SerenityOS. Presumably this was prep-work to launch the Ladybird Browser Initiative, a non-profit dedicated to building the rest of the browser.
They are very open that the browser is unfinished — the first alpha
release is planned for 2026. But they have running code, and I can
*actually help fund them*. Wesley Moore runs
some numbers in a similar post to this one, and concludes that 15
USD/month (~22.50 AUD in July 2024) is a good amount for a recurring
donation. I’m in; are you? | true | true | true | null | 2024-10-13 00:00:00 | 2024-07-06 00:00:00 | null | null | null | jackkelly.name | null | null |
15,762,658 | https://www.sciencedaily.com/releases/2017/05/170524140721.htm | Amazingly flexible: Learning to read in your 30s profoundly transforms the brain | null | # Amazingly flexible: Learning to read in your 30s profoundly transforms the brain
- Date:
- May 24, 2017
- Source:
- Max Planck Institute for Psycholinguistics
- Summary:
- Reading is such a modern cultural invention that there is no specific area in the brain dedicated to it. Scientists have found that learning to read as an adult reconfigures evolutionarily ancient brain structures hitherto assigned to different skills. These findings were obtained in a large-scale study in India in which completely illiterate women learned how to read and write for six months.
Reading is such a new ability in human evolutionary history that the existence of a 'reading area' could not be specified in our genes. A kind of recycling process has to take place in the brain while learning to read: Areas evolved for the recognition of complex objects, such as faces, become engaged in translating letters into language. Some regions of our visual system thereby turn into interfaces between the visual and language systems.
"Until now it was assumed that these changes are limited to the outer layer of the brain, the cortex, which is known to adapt quickly to new challenges," says project leader Falk Huettig from the Max Planck Institute for Psycholinguistics. The Max Planck researchers together with Indian scientists from the Centre of Bio-Medical Research (CBMR) Lucknow and the University of Hyderabad have now discovered what changes occur in the adult brain when completely illiterate people learn to read and write. In contrast to previous assumptions, the learning process leads to a reorganisation that extends to deep brain structures in the thalamus and the brainstem. The relatively young phenomenon of human writing therefore changes brain regions that are very old in evolutionary terms and already core parts of mice and other mammalian brains.
"We observed that the so-called colliculi superiores, a part of the brainstem, and the pulvinar, located in the thalamus, adapt the timing of their activity patterns to those of the visual cortex," says Michael Skeide, scientific researcher at the Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) in Leipzig and first author of the study, which has just been published in the magazine *Science Advances*. "These deep structures in the thalamus and brainstem help our visual cortex to filter important information from the flood of visual input even before we consciously perceive it." Interestingly, it seems that the more the signal timings between the two brain regions are aligned, the better the reading capabilities. "We, therefore, believe that these brain systems increasingly fine-tune their communication as learners become more and more proficient in reading," the neuroscientist explains further. "This could explain why experienced readers navigate more efficiently through a text."
**Large-scale study with illiterates in India**
The interdisciplinary research team obtained these findings in India, a country with an illiteracy rate of about 39 percent. Poverty still limits access to education in some parts of India especially for women. Therefore, in this study nearly all participants were women in their thirties. At the beginning of the training, the majority of them could not decipher a single written word of their mother tongue Hindi. Hindi, one of the official languages of India, is based on Devanagari, a scripture with complex characters describing whole syllables or words rather than single letters.
Participants reached a level comparable to a first-grader after only six months of reading training. "This growth of knowledge is remarkable," says project leader Huettig. "While it is quite difficult for us to learn a new language, it appears to be much easier for us to learn to read. The adult brain proves to be astonishingly flexible." In principle, this study could also have taken place in Europe. Yet illiteracy is regarded as such a taboo in the West that it would have been immensely difficult to find volunteers to take part. Nevertheless, even in India where the ability to read and write is strongly connected to social class, the project was a tremendous challenge. The scientists recruited volunteers from the same social class in two villages in Northern India to make sure that social factors could not influence the findings. Brain scans were performed in the city of Lucknow, a three hours taxi ride away from participants' homes.
**A new view on dyslexia**
The impressive learning achievements of the volunteers do not only provide hope for adult illiterates, they also shed new light on the possible cause of reading disorders such as dyslexia. One possible cause for the basic deficits observed in people with dyslexia has previously been attributed to dysfunctions of the thalamus. "Since we found out that only a few months of reading training can modify the thalamus fundamentally, we have to scrutinise this hypothesis," neuroscientist Skeide explains. It could also be that affected people show different brain activity in the thalamus just because their visual system is less well trained than that of experienced readers. This means that these abnormalities can only be considered an innate cause of dyslexia if they show up prior to schooling. "That's why only studies that assess children before they start to learn to read and follow them up for several years can bring clarity about the origins of reading disorders," Huettig adds.
**Story Source:**
Materials provided by **Max Planck Institute for Psycholinguistics**. *Note: Content may be edited for style and length.*
**Journal Reference**:
- Skeide, M., Kumar, M., Mishra, R. K., Tripathi, V. N., Guleria, A., Singh, J. P., Eisner, F., & Huettig, F. (2017). Learning to read alters cortico-subcortical cross-talk in the visual system of illiterates. *Science Advances*. DOI: 10.1126/sciadv.1602612
**Cite This Page**:
*ScienceDaily*. Retrieved October 12, 2024 from www.sciencedaily.com | true | true | true | Reading is such a modern cultural invention that there is no specific area in the brain dedicated to it. Scientists have found that learning to read as an adult reconfigures evolutionarily ancient brain structures hitherto assigned to different skills. These findings were obtained in a large-scale study in India in which completely illiterate women learned how to read and write for six months. | 2024-10-13 00:00:00 | 2024-10-12 00:00:00 | article | sciencedaily.com | ScienceDaily | null | null |
|
41,400,616 | https://www.youtube.com/watch?v=sXhRWCV38iQ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
3,178,043 | http://www.networkworld.com/community/blog/gartner-10-key-it-trends-2012 | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
20,212,384 | https://en.wikipedia.org/wiki/Bancor | Bancor - Wikipedia | null | # Bancor
The **bancor** was a supranational currency that John Maynard Keynes and E. F. Schumacher[1] conceptualised in the years 1940–1942 and which the United Kingdom proposed to introduce after World War II. The name was inspired by the French *banque or* ('bank gold').[2] This newly created supranational currency would then be used in international trade as a unit of account within a multilateral clearing system—the International Clearing Union—which would also need to be founded.
## Overview
John Maynard Keynes proposed an explanation for the ineffectiveness of monetary policy to stem the Great Depression, as well as a non-monetary interpretation of the depression, and finally an alternative to a monetary policy for meeting the depression. Keynes believed that in times of heavy unemployment, interest rates could not be lowered by monetary policies. The ability for capital to move between countries seeking the highest interest rate frustrated Keynesian policies. By closer government control of international trade and the movement of funds, the Keynesian policy would be more effective in stimulating individual economies.
Bancor would not be an international currency. It would rather be a unit of account used to track international flows of assets and liabilities, which would be conducted through the International Clearing Union. Gold could be exchanged for bancors, but bancors could not be exchanged for gold. Individuals could not hold or trade in bancor. All international trade would be valued and cleared in bancor. Surplus countries with excess bancor assets and deficit countries with excess bancor liabilities would both be charged to provide symmetrical incentives on them to take action to restore balanced trade. In the words of Benn Steil,
Each item a member country exported would add bancors to its ICB account, and each item it imported would subtract bancors. Limits would be imposed on the amount of bancor a country could accumulate by selling more abroad than it bought, and on the amount of bancor debt it could rack up by buying more than it sold. This was to stop countries building up excessive surpluses or deficits. Each country's limits would be proportional to its share of world trade ... Once initial limits had been breached, deficit countries would be allowed to depreciate, and surplus countries to appreciate their currencies. This would make deficit country goods cheaper, and surplus country goods more expensive, with the aim of stimulating a rebalancing of trade. Further bancor debit or credit position breaches would trigger mandatory action. For chronic debtors, this would include obligatory currency depreciation, rising interest payments to the ICB Reserve Fund, forced gold sales, and capital export restrictions. For chronic creditors, it would include currency appreciation and payment of a minimum of 5 percent interest on excess credits, rising to 10 percent on larger excess credits, to the ICB's Reserve Fund. Keynes never believed that creditors would actually pay what in effect were fines; rather, he believed they would take the necessary actions ... to avoid them.
[3]
## Bretton Woods conference
Keynes was able to make his proposal the United Kingdom's official proposal at the Bretton Woods Conference but it was not accepted.[4]
## Proposed revival
[edit]Since the financial crisis of 2007–2008 Keynes's proposal has been revived. Its proponents have argued that since the end of the Bretton Woods system when the United States dollar was unpegged from gold, the United States was incentivized to run high government spending and high deficits, which made the global financial system unstable.[5] In a speech delivered in March 2009 entitled *Reform the International Monetary System*, Zhou Xiaochuan, the Governor of the People's Bank of China called Keynes's bancor approach "farsighted" and proposed the adoption of International Monetary Fund (IMF) special drawing rights (SDRs) as a global reserve currency as a response to the financial crisis of 2007–2010. U.S. Secretary of the Treasury Timothy Geithner expressed interest in the idea of greater use of SDRs as a reserve. However, he was criticized severely for this in the United States, and the dollar lost 5 cents against the euro in exchange markets following his statements.[5] He and President Barack Obama shortly afterwards backtracked Geithner's comments.[5][6]
He argued that a national currency was unsuitable as a global reserve currency because of the Triffin dilemma—the difficulty faced by reserve currency issuers in trying to simultaneously achieve their domestic monetary policy goals and meet other countries' demand for reserve currency.[7][8] A similar analysis can be found in the Report of the United Nation's "Experts on reforms of the international monetary and financial system"[9] as well as in the IMF's study published on 13 April 2010.[10]
## See also
## References
1. E. F. Schumacher (May 1943). "Multilateral Clearing". *Economica*. **10** (38): 150–165. doi:10.2307/2549461. JSTOR 2549461. Archived from the original on 23 September 2015. Retrieved 15 November 2015.
2. Benn Steil, *The Battle of Bretton Woods: John Maynard Keynes, Harry Dexter White, and the Making of a New World Order* (Princeton: Princeton University Press, 2013), p. 143.
3. Benn Steil, *The Battle of Bretton Woods: John Maynard Keynes, Harry Dexter White, and the Making of a New World Order* (Princeton: Princeton University Press, 2013), pp. 143–44.
4. "Should the IMF dole out more special drawing rights?". *The Economist*. ISSN 0013-0613. Retrieved 12 April 2020.
5. Tooze, Adam (2018). *Crashed: How a Decade of Financial Crises Changed the World*. New York, New York: Viking Press. p. 266. ISBN 978-0-670-02493-3. OCLC 1039188461.
6. Lisa Twaronite, Polya Lesova & William L. Watts. "Dollar pares losses after Geithner clarification". *MarketWatch*. Retrieved 29 January 2022.
7. Zhou Xiaochuan (2009). "Reform the International Monetary System" (PDF). *BIS Review*. Bank of International Settlements. Retrieved 28 November 2010.
8. "China calls for new reserve currency". *Financial Times*. 24 March 2009.
9. "Recommendations by the Commission of Experts of the President of the General Assembly on reforms of the international monetary and financial system" (20 March 2009).
10. "Reserve Accumulation and International Monetary Stability" (13 April 2010).
## Further reading
- John Maynard Keynes (1980). *The Collected Writings, Volume XXV: Activities, 1940–44: Shaping the Post-war World: The Clearing Union*. Basingstoke.
- Armand van Dormael (1978). *Bretton Woods. Birth of a Monetary System*. London. ISBN 978-0-8419-0326-5.
: CS1 maint: location missing publisher (link) | true | true | true | null | 2024-10-13 00:00:00 | 2005-12-06 00:00:00 | website | wikipedia.org | Wikimedia Foundation, Inc. | null | null |
|
10,730,333 | http://www.redpill-linpro.com/sysadvent//2015/12/14/jmeter.html | Stress testing with Apache JMeter | Silje Bjølseth Amundsen Former Senior Systems Consultant | *This post appeared originally in our sysadvent series and has been
moved here following the discontinuation of the sysadvent microsite*
Apache JMeter is a nice little tool with tons of functionality for testing web sites. It can be used both for stress testing and functional testing. This tutorial is going to show you how to set it up and get started with some basic stress testing.
### Installation and initial setup
First, go to the official Apache JMeter website and download the binary, and unpack it on the machine you will be running the test from (your workstation/laptop should be enough to start with). The only software requirement is that you have java (JVM) 6 or higher installed. As such, JMeter can be run on any operating system that has a compliant java implementation.
We are going to use the GUI interface for JMeter in this tutorial. It is also possible to run the software from command line.
Once you have unpacked the binary package, go to the bin directory and start JMeter with jmeter.bat (on Windows systems) or jmeter.sh (on *NIX).
Let us give our project a new name, “Stress test of redpill-linpro.com”. The first thing we want to do then is create a new Thread Group. Right click on “Stress test of redpill-linpro.com” in the left column, and go to Add -> Threads (Users) -> Thread Group. The Thread Group will include a set of tasks that we want the test to complete. If you have many different functionalities you would like to test it may be smart to divide them into different Thread Groups, but for now we will look at one.
There are a few basic elements which can be nice to include at this point. First off, include the configuration element HTTP Request Defaults, which has a pretty self explanatory name. For now we will only set one default, redpill-linpro.com in the “Server Name or IP” field.
It is also smart to add the HTTP Cookie Manager from the same menu. The Cookie Manager will keep track of the cookies received for each thread of requests. Tick the “Clear cookies each iteration?” box. You can of course skip this if the site you are testing does not have cookies.
Finally add a Thread Group (Add -> Threads(Users) -> Thread Group). This is where the requests will be defined. We will name the Thread Group “My Test”. Also, add a Recording Controller (Add -> Logic Controller -> Recording Controller), which will be used as a receiver for the recording described in the next section.
### Setting up your test
For building the test JMeter has a nice recording tool included. It is set up as a proxy service which you then direct your browser traffic through to record a manual walkthrough of the requests you wish to test.
First add the Non-test Element “HTTP(S) Test Script Recorder” to the Workbench. This is the element where you set up the proxy service/recorder. You can use most of the default values, but choose “Stress test of redpill-linpro.com > My Test > Recording Controller” from the “Target Controller” pull-down menu. We will also exclude image files, CSS, JavaScript and other stuff that are not really interesting. Add the following URL patterns to the exclude list:
```
(?i).*\.(bmp|css|js|gif|ico|jpe?g|png|swf|woff)
(?i).*\.(bmp|css|js|gif|ico|jpe?g|png|swf|woff)[\?;].*
```
Before starting the recorder you will need to set up your browser to use the proxy service. In Firefox you can configure this under Preferences -> Advanced -> Network -> Settings. The HTTP Proxy address is “localhost” and the port is 8080 (unless you changed it in the recorder settings). You can also use the FoxyProxy extension which is available for both Firefox and Chrome. If you are unsure how to configure this, a quick search online should get you started.
When the proxy settings are in place, start the recorder, go to your website and manually step through the requests you want to stress test. When you start the recorder, a message box opens telling you that a CA certificate has been created and where it is located. You will need to import this certificate in your browser if you want to record HTTPS requests (including elements served from other domains) without security warnings, since the recorder has to intercept the proxied, encrypted traffic.
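Before you start clicking around, it can be reassuring to check from another terminal that the recorder proxy is actually listening. A single request pushed through it with curl (using the default port 8080 from the recorder settings, an assumption if you changed it) should show up under the Recording Controller:

```
# Send one request through the recorder proxy and print only the HTTP status code
curl -x http://localhost:8080 -s -o /dev/null -w "%{http_code}\n" http://redpill-linpro.com/
```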
Hot tip! You may want to close all other browser windows/tabs while doing the recording so you do not get any unrelated traffic recorded. It is also possible to configure FoxyProxy to only proxy traffic to a specific site.
Once you are done going through the requests, you can stop the recorder. You should now have a set of requests listed under the “My Test” Thread Group.
Each request can be modified, and it is also possible to modify the request headers. The /css request in the recorded list, for example, is actually a request to fonts.googleapis.com. We will remove it (right click -> Remove), as we do not want to stress test any servers other than our own.
### Running the test
Once we have the requests we want, in the way we want them, we can finally start the testing. JMeter has a lot of functionality for making the test as realistic as possible. For now we will only add a Uniform Random Timer (right click on the main element -> Add -> Timer -> Uniform Random Timer) to give us a more realistic distribution of requests over time. Say you want to simulate a thousand requests: without the timer, all of them would be fired off practically at once, which rarely happens in reality.
We will set “Random Delay Maximum” to 3000 milliseconds and “Constant Delay Offset” to 300 milliseconds. Experiment and adapt these numbers to match the traffic pattern you expect at your site.
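To make the arithmetic concrete: each thread pauses for the constant offset plus a uniformly random amount between zero and the maximum, so with the values above every pause lands somewhere between 300 and 3300 milliseconds. A throwaway one-liner (plain awk, nothing JMeter-specific) illustrates what such pauses look like:

```
# Print five example pauses: 300 ms offset + a uniform random value in [0, 3000) ms
awk 'BEGIN { srand(); for (i = 0; i < 5; i++) print int(300 + rand() * 3000) " ms" }'
```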
Continue on to configure the “My Test” element. Set the number of users (threads) you want to simulate and the number of loops you want the test to run; each simulated user runs through the recorded requests once per loop, so the total number of iterations is users × loops.
Finally we will add a couple of elements that will let us view the results. Right click on “Stress test of redpill-linpro.com” -> Add -> Listener. “View Results Tree” and “View Results in Table” will both show if the request was successful. The first will show the response data, while the second will list useful information like latency and bytes transferred. Another useful listener is “Response Time Graph” which will graph the response time for all the threads the test has completed.
Now all you have to do is press the play button on the tool bar and relax as the test runs its course.
It is also possible to do all this from command line, but that is outside the scope of this sysadvent post.
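That said, the basic non-GUI invocation is short enough to sketch here. Save the test plan from the GUI first (File -> Save); the file names below are placeholders, while -n (non-GUI), -t (test plan) and -l (result log) are standard JMeter options. The awk line assumes the default CSV result format, in which the second column is the elapsed time in milliseconds.

```
# Run the saved test plan headless and log the results
./jmeter.sh -n -t stress-test.jmx -l results.jtl

# Quick-and-dirty average response time from the result log
awk -F, '$2 ~ /^[0-9]+$/ { sum += $2; n++ } END { if (n) print sum / n " ms average" }' results.jtl
```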
### Testing other types of servers
JMeter can test web servers (as we have seen), but it can also be used to stress test your RESTful API servers or your SOAP implementation. In addition, JMeter can test services that use protocols other than HTTP(S), such as:
- FTP servers
- Databases (via JDBC)
- MongoDB servers (NoSQL)
- LDAP servers
- JMS providers
- SMTP(S), POP3(S) and IMAP(S) servers
This list, and the full documentation for JMeter, can be found on the official website.
Good luck testing your site!
### Thoughts on the CrowdStrike Outage
Unless you’ve been living under a rock, you probably know that last Friday a global crash of computer systems caused by ‘CrowdStrike’ led to widespread chaos and mayhem: flights were cancelled, shops closed their doors, even some hospitals and pharmacies were affected. When things like this happen, I first have a smug feeling “this would never happen at our place”, then I start thinking. Could it?
### Broken Software Updates
Our department do take responsibility for keeping quite a lot ... [continue reading] | true | true | true | Redpill Linpro Tech Blog | 2024-10-13 00:00:00 | 2015-12-14 00:00:00 | null | article | null | /Techblog | null | null |
1,921,461 | http://www.pollentree.com | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
6,231,324 | http://www.insidehighered.com//blogs/technology-and-learning/mobile-learning-lessons-audible-app | Mobile Learning Lessons From the Audible App | Joshua Kim |
My goal is to read a book a week. *You?* The majority of my book reading is audio, and nowadays my preferred reading platform is an iPhone 5 and the Audible app.
The Audible app is interesting because it demonstrates both the *advantages* and **limitations** of a mobile and app-centric approach. There may also be some lessons in how Audible has designed the app, and how audiobook listeners interact with the app, for mobile learning.

The most important advantage of the Audible app is that I can utilize the device (the iPhone) that I always have with me. Consuming audiobooks from my phone means I never need to remember to take another device, never need to worry about syncing, charging, or managing yet another piece of technology.
We might be worried that the smartphone has become a new appendage, a technology we have become so addicted to that in its absence we feel naked, exposed, disconnected, and incomplete.
The edtech lesson here is that the phone is quickly becoming our preferred screen, our default technology. This is not a new argument, but it takes on added force for all of us every time we transition yet another task or action to our phone. What used to happen on our laptops, or our TVs, or our e-readers, or iPods, or our wristwatches, or our movie theaters, or our video rental stores - we now do on our phones.
If I ran Blackboard, or Instructure, or D2L, or Pearson, or whatever LMS platform provider, the first thing I would do is create a small team to build a mobile-first LMS.
Take a clean-slate approach to the LMS idea, and ask what sort of platform would be designed if the primary interface were an app on a phone.
Then I'd bring that alternative LMS to market, and let the market decide whether it wanted to license the existing (legacy) browser-first LMS or the new mobile-first LMS. This is something more than having a mobile division. I'm suggesting a separate mobile company within the company, one not bound in any way by the technology, culture, organizational structure, or assumptions of the parent company.
The second most important advantage of the Audible app is the new opportunities for reading that it opens up. I like that the app tracks and reports on my reading habits: my own reading dashboard. I can quickly see how much time I have spent listening today, this week, this month, and across time.
The app gives me incentive to read more by providing badges. I'm a recent convert to the app, and so far I've only collected badges for "*Stenographer*" (more than 40 bookmarks), "*Audible Obsessed*" (for using the app for at least 7 days straight), "*Binge Listener*" (self-explanatory), and "*The Stack*" (more than 200 books in my library). I have 12 more badges that I can earn, and I'm motivated.
The phone seems like the perfect platform to combine a push into mobile learning with analytics and badging. Think of all the rich data generated by individual learning behaviors, and aggregated class actions. A good mobile learning app should expose all this data to the learner. Letting them know where they stand in their personal learning plan, how their fellow learners are interacting with the mobile learning platform, and what they should be doing to reach their goals. A badging system, driven by the analytics, would make a terrific addition to the mobile learning experience.
*Where does the Audible app fall short?*
The biggest deficit is of course a result of Apple's licensing terms. **No in-app purchases.** Since Apple would take too big a cut for each book purchase, Audible does not allow me to search for and purchase books in the app. This is crazy, frustrating, and ultimately self-defeating for Apple. **This is also one of the (many) reasons why Amazon (which owns Audible) will eventually come out with a smart phone.**
I'd also very much like it if the Audible and Kindle apps were combined. Why do I need to switch apps to move between listening and reading, particularly now that Amazon has enabled Whispersync for Voice?
Mobile learning platforms should have an advantage over commercial platforms when it comes to usability. A mobile-enabled course does not have the problem of the "in-app" purchase. A good mobile-first LMS should enable the full downloading and syncing of all course materials.
Offline capability is a must. And this offline capability should extend not only to curriculum (text, videos, and other files), but to collaborative tools. The key is that the mobile learning experience not feel constrained or cramped. That the design leverages the small screen and the capabilities of the app.
*What apps that you use have got you thinking about where mobile learning could go?*
| true | true | true | Rethinking platforms. | 2024-10-13 00:00:00 | 2013-08-15 00:00:00 | article | insidehighered.com | Inside Higher Ed | Higher Education News, Events and Jobs | null | null
|
9,595,618 | http://www.bbc.co.uk/news/science-environment-32847369 | The great 'Mars bake-off' begins | Jonathan Amos | # The great 'Mars bake-off' begins
The UK aerospace laboratory in which Europe's ExoMars rover will be assembled is about to undergo a three-month deep-clean.
Just built at Airbus Defence and Space in Stevenage, the facility must be made spotless before any robot parts are brought in.
ExoMars will search for signs of past and present life on the Red Planet in 2019, and scientists will need to be sure that any detection is real and not the result of some bugs carried from Earth.
The selfie at the top of the page has me wearing normal clothing; the next time I get to go in the cleanroom it will be in the full "bunny" suit, with hood, mask, shoes and gloves.
Before that can happen, the big sterilisation "bake-off" must get under way, starting this coming week.
That's what it's called, but bake-off is really a bit of a misnomer because no heat is actually involved.
Instead, technicians will start running a special air-filtration system, in tandem with regular disinfection, to try to reduce the quantity of spores (bacteria, mould, etc) present in the lab.
"We're classified as a Class 8, Highly Controlled cleanroom," says Paul McMahon, Airbus's ExoMars project office manager.
"The 'Class 8' refers to the number of permitted particulates, but it is the 'Highly Controlled' bit that limits bio-contamination.
"We have to try to achieve 300 spores per square metre on the rover vehicle."
Right now, as you're reading this, there will probably be thousands and thousands of bacteria on just your hands, so this specification is demanding.
But the protocols required to meet it are well established.
Interestingly, the battle against resistance will be fought just as fiercely here as in a health lab or on an acute hospital ward.
The disinfectants must be changed regularly otherwise stubborn organisms can begin to take hold.
To the right in the image, you can see some windows. That is where engineers will put their computer equipment. Cables will run under the floor to be plugged into the rover for systems testing.
The windows at the back on the left are part of the public gallery. The long portrait window is there so that wheelchair users will get a good view, too.
Not visible are domes in the ceiling for public webcams. Wherever you are in the world, you'll be able to follow the assembly of the European Space Agency's (Esa) Mars robot.
"It's going to be amazing," says Ben Boyes, the ExoMars deputy engineering manager at Airbus.
"For so long, it's been just a 'paper rover', and to actually now see the facility where we're going to build it - I just can't wait.
"I spend a lot of time on the road going to see the suppliers, and I can tell you that their elements are all getting very mature.
"Most have now passed the Preliminary Design Review, which represents the basic architecture, and many have now moved on towards Critical Design Reviews, where the designs are completely frozen.
"We're already seeing engineering models and qualification models, which lead eventually to the flight hardware."
One of the very first pieces of flight hardware to enter the room will very likely be the Bogie Electro-Mechanical Assembly - in essence, the basic chassis with its six wheels.
This is coming from MDA in Toronto. Canada is formally part of Esa as a "Cooperating" member state.
Other components will rapidly follow, including the so-called Analytical Laboratory Drawer. This is the box containing the key instruments that will study the surface and sub-surface of Mars.
The ALD will need to be kept ultra, ultra-clean. It will come to Stevenage from another lab, sealed and in a bio-bag, ready for insertion into the completed rover.
The launch of the ExoMars rover to the Red Planet is foreseen for May 2018.
This means the robot must be built and out the door before the end of 2017.
Before it can be put atop a rocket, it has to be attached to the hardware that will carry it across space and get it safely down to the surface. That's all being provided by the Russians. | true | true | true | Jonathan Amos gets a first peek inside the UK aerospace laboratory where Europe's ExoMars rover will be assembled. | 2024-10-13 00:00:00 | 2015-05-24 00:00:00 | article | bbc.com | BBC News | null | null |
|
9,164,803 | https://growthhackers.com/companies/new-relics-growth-playbook-from-startup-to-ipo/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |