A Police Charity Bought an iPhone Hacking Tool and Gave It to Cops
The San Diego Police Foundation, an organization that receives donations from corporations, purchased iPhone unlocking technology for the city's police department, according to emails obtained by Motherboard. The finding comes as activist groups place renewed focus on police foundations, which are privately run charities that raise funds from Wall Street banks and other companies, purchase items, and then give those to their respective police departments. Because of their private nature, they are often less subject to public transparency laws, except when they officially interact with a department.

"The GrayKey was purchased by the Police Foundation and donated to the lab," an official from the San Diego Police Department's Crime Laboratory wrote in a 2018 email to a contracting officer, referring to the iPhone unlocking technology GrayKey.

Do you have any other documents related to the GrayKey? We'd love to hear from you. Using a non-work phone or computer, you can contact Joseph Cox securely on Signal on +44 20 8133 5190, Wickr on josephcox, OTR chat on jfcox@jabber.ccc.de, or email joseph.cox@vice.com.

The GrayKey is a series of devices made by Atlanta, Georgia-based firm GrayShift. Since launching several years ago, the company has entered a cat-and-mouse game with Apple, with GrayShift continually upgrading its product to allow law enforcement agencies to unlock modern iPhones, and hence bypass the encryption locking down data on the devices, and Apple then implementing mitigations to try to make that process harder. GrayShift offers an online version of its tool, which requires an internet connection and can be used a limited number of times; an offline version that can be used as many times as wanted; and a mobile version, the existence of which Motherboard recently revealed.

"The EULA I sent you [is] for a software upgrade that will allow us to get into the latest generation of Apple phones. Our original license was a 1 year license agreement paid for by the Police Foundation," the email adds.

In a 2019 email, two other officials discussed purchasing the GrayKey for the following year. "This is the phone unlocking technique that the Police Foundation purchased for us (for 15k). Apparently the software 'upgrade' costs the same as the initial purchase each year. :/ They are the only ones that offer a tool that can crack iPhones, so they charge A LOT!," the email reads.

The items that foundations donate can include surveillance technology and weapons. Sometimes, companies that donate funds or items are the same companies that police departments end up buying equipment from. In 2007, an LAPD chief asked a foundation to approach Target, which then donated $200,000 in order to help cover the cost of buying software from surveillance firm Palantir, ProPublica reported.

An image of a GrayKey unit published by the FCC. Image: FCC

"Our end goal is to have an intervention on the funneling of private money into police forces and into policing," Scott Roberts, senior director of criminal justice campaigns at Color of Change, told Politico recently. "If the police foundations existed to raise money for the families of fallen police officers, we wouldn't say we need to abolish police foundations. It's the specific type of work that they're doing that we object to."
Because police foundations act as private entities, they also do not directly fall under public records laws, meaning their expenditure or other activity may be more opaque than that of a police department itself.

In 2006, chipmaker Qualcomm gave the San Diego Police Foundation a $1 million donation to improve communications, GPS location, and broadband services for the department. GovX, a clothing supplier for current and former military and law enforcement officials, donated $12,950 in 2018. Residents can also donate their vehicles to the foundation. Kristen Amacone, director of education and technology for the San Diego Police Foundation, previously told a local CBS affiliate that the city moves slowly with its budgets, meaning getting new equipment can take years.

Neither the San Diego Police Department nor Sara Napoli, the Police Foundation president and CEO, responded to a request for comment.
Washington’s Secret to the Perfect Zoom Bookshelf? Buy It Wholesale
Books by the Foot curates shelves full of books for Washington offices, hotels, TV sets—and, now, Zoom backdrops.

Chuck Roberts, owner of Books by the Foot, in his warehouse. | Scott Suchman for Politico Magazine

In a place like Washington—small, interconnected, erudite, gossipy—being well-read can create certain advantages. So, too, can seeming well-read. The “Washington bookshelf” is almost a phenomenon in itself, whether in a hotel library, at a think tank office or on the walls behind the cocktail bar at a Georgetown house. And, as with nearly any other demand of busy people and organizations, it can be conjured up wholesale, for a fee.

Books by the Foot, a service run by the Maryland-based bookseller Wonder Book, has become a go-to curator of Washington bookshelves, offering precisely what its name sounds like it does. As retro as a shelf of books might seem in an era of flat-panel screens, Books by the Foot has thrived through Democratic and Republican administrations, including that of the book-averse Donald Trump. And this year, the company has seen a twist: When the coronavirus pandemic arrived, Books by the Foot had to adapt to a downturn in office- and hotel-decor business—and an uptick in home-office Zoom backdrops for the talking-head class.

The Wonder Book staff doesn’t pry too much into which objective a particular client is after. If an order were to come in for, say, 12 feet of books about politics, specifically with a progressive or liberal tilt—as one did in August—Wonder Book’s manager, Jessica Bowman, would simply send one of her more politics-savvy staffers to the enormous box labeled “Politically Incorrect” (the name of Books by the Foot’s politics package) to select about 120 books by authors like Hillary Clinton, Bill Maher, Al Franken and Bob Woodward. The books would then be “staged,” or arranged with the same care a florist might extend to a bouquet of flowers, on a library cart; double-checked by a second staffer; and then shipped off to the residence or commercial space where they would eventually be shelved and displayed (or shelved and taken down to read). Only sometimes do Bowman and Wonder Book President Chuck Roberts know the real identity of the person whose home or project they’ve outfitted: “When we work with certain designers, I pretty much already know it’s going to be either an A-list movie or an A-list client. They always order under some code name,” Bowman says. “They’re very secretive.”

Roberts opened the first of Wonder Book’s three locations in 1980, but Books by the Foot began with the dawn of the internet in the late 1990s. A lover of books who professes to never want to see them destroyed, he described the service as a way to make lemonade out of lemons; in this case, the lemons are used books, overstock books from publishers or booksellers, and other books that have become either too common or too obscure to be appealing to readers or collectors. “Pretty much every book you see on Books by the Foot [is a book] whose only other option would be oblivion,” Roberts says. Located in Frederick, Wonder Book’s 3-acre warehouse full of 4 million books is a short jaunt from the nation’s capital. While the company ships nationally, it gets a hefty portion of its business from major cities including Washington.
And, over the past two decades, Books by the Foot’s books-as-decor designs have become a fixture in the world of American politics, filling local appetite for books as status symbols, objects with the power to silently confer taste, intellect, sophistication or ideology upon the places they’re displayed or the people who own them. Wonder Book’s designs have been featured in a number of locations in and around D.C. According to Roberts, The Madison hotel once purchased 19th-century American history texts to use in its decor, and the Washington Hilton ordered books by color a while back to place in some of its suites. (General managers at both hotels said they were unfamiliar with the orders, which Roberts said were placed a number of years ago.) Meanwhile, shows at the Kennedy Center have used Books by the Foot’s curations onstage, and sometimes the company’s handiwork even shows up in the homes of politicians. “We see a lot of them,” Bowman says. “You’ll see in the news they’ll have Books by the Foot [books] in their images in the newspapers, [alongside stories] going inside politicians’ homes and things like that.”

In 2010, NBC’s “Meet the Press” placed an order of its own. The show was switching to high definition and debuting a new set with well-stocked bookshelves to commemorate the occasion. Clickspring Design, the firm that designed the set, ordered books on politics, law and an assortment in the style of one of Books by the Foot’s themed packages. Bowman remembers the order was for somewhere around 200 feet, or roughly the length of three semitrucks. Erik Ulfers, founder and president of Clickspring, noted that a good TV set either transports viewers to someplace completely new and unfamiliar (“some are very abstract, really graphic-heavy”) or invites them to someplace welcoming and relatable. He recalls that he wanted the books on the “Meet the Press” set to project familiarity infused with a sort of intellectual gravitas. He requested vintage books, he says—“It suggests a longer history, and somehow it seems more academic”—and replaced the pages in a number of the books with Styrofoam to avoid overloading the shelves. (When asked about this, Roberts wrote in an email, “As long as books are put in the public eye—and private eyes—they are still part of the culture.” “Meet the Press” didn’t respond to a request for comment about the order.)

Books by the Foot’s creations have also popped up in a variety of TV shows and movies, many of them politics-adjacent. “Madam Secretary,” “Veep,” “The Blacklist,” “House of Cards,” as well as the 2017 movie Chappaquiddick, for example, have all outfitted their sets with Books by the Foot curations. Some of the most high-profile projects the team works on, however, aren’t revealed to them until after the fact: Bowman has had the distinctly surreal experience of watching a movie for the first time and spotting the company’s handiwork on screen. (That’s how it works, she said, with “pretty much anything Marvel.”)

Although the landscape of Washington-set TV shows shifted when Trump was elected, D.C. residents’ appetites for well-stocked bookshelves, whether as functional libraries or as vanity props, seem to have survived. Or at least, that’s what the demand for Books by the Foot’s services would indicate: The orders Roberts and his staff handled in the Trump years weren’t all that different from the orders they fielded in prior administrations. To Roberts, though, the unchanging demand is a good thing.
One of the positives for a business like his, he wrote in an email, is that familiar types of people, who work in similar fields and likely share similar aspirations, are constantly moving in and out of the area: “Military, [employees of the] State Department and embassies, political folks” are always either settling in or leaving. The imminent changeover to the Biden administration will likely bring precisely the type of new business Books by the Foot has depended on for years.

In 2020, of course, everything changed for Books by the Foot around the same time everything changed for everyone else. For most of the year, the coronavirus pandemic switched up the proportion of Books by the Foot’s commercial to residential projects: In July, Roberts said residential orders, which had previously accounted for 20 percent of business, now accounted for 40 percent. That was partly due to the closures of offices and hotels, Roberts noted—but a few other things were afoot, too. For one, more people were ordering books with the apparent intent to read them. “We’re seeing an uptick in books by subject, which are usually for personal use,” Roberts said over the summer. Because many people suddenly had extra time at home but hardly anyone was able to shop in brick-and-mortar stores, orders for, say, 10 feet of mysteries, or 3 feet of art books, rose in popularity.

Another force at work, however, was the rise of the well-stocked shelf as a coveted home-office prop. When workplaces went remote and suddenly Zoom allowed co-workers new glimpses into one another’s homes, what New York Times writer Amanda Hess dubbed the “credibility bookcase” became the hot-ticket item. (“For a certain class of people, the home must function not only as a pandemic hunkering nest but also be optimized for presentation to the outside world,” she wrote.) And while Roberts makes an effort not to infer too much about his clients or ask too many questions about their intent, he did notice a very telling micro-trend in orders he was getting from all across the United States. “We can sort of, you know, guess, or read between the lines, and we’ve had an uptick in smaller quantities,” Roberts said over the summer. “If your typical bookcase is 3 feet wide, and you just want to have the background from your shoulders up, then you might order 9 feet of history, or 9 feet of literature. That way, you put them on your home set … [and] nobody can zoom in on these books and say, Oh my God, he’s reading ... you know, something offensive, or tacky. Nothing embarrassing.”

In the fall, Roberts said, numbers evened out somewhat, thanks to some commercial spaces reopening. But Books by the Foot might be taking on a higher percentage of home-library projects well into the future. Jill Mastrostefano is the creative director and lead designer at PFour, an interior design firm that does, by her estimate, about 75 percent of its business in D.C., Maryland and Virginia. Historically, Mastrostefano mostly has ordered books by color from Books by the Foot, usually to create pops of brightness or accent points in model homes (homes built to display the designs available in a particular subdivision). But this year, she has spent a good chunk of her time outfitting model homes with specific rooms where people can envision themselves taking all their work Zoom meetings. “We are probably going to be ordering more books because of what are now called ‘Zoom rooms,’ instead of studies,” she says.
“People need to, you know, look good when [they’re] on video.” In Washington and in political circles all over the United States, the fact that people are still, after more than nine months, showing up to work meetings from inside their own homes likely means there will be sustained demand for impressive-looking bookshelves. And given that much of the politics-adjacent workforce isn’t expected back in the office anytime soon, Books by the Foot can likely count on consistent Zoom-room business in the meantime. Of course, there’s evidence to suggest that people want to be or appear well-read but keep Books by the Foot’s involvement hidden. People probably don’t want to talk about their credibility shelves, Roberts points out. Which has always been, and remains, perfectly fine with him and perfectly fine with Bowman. After assembly in the Wonder Book warehouse, that 12-foot order of left-leaning politics books from August (a somewhat large order for a residential project) was shipped to a private residence in New York for a client whose identity was unknown to Bowman—someone whose use for the books might be revealed one day, or might stay hidden from public view forever.
Who Gets Credit for the Computer?: An Exchange
How the Computers Exploded from the June 7, 2012 issue

In Jim Holt’s review of George Dyson’s Turing’s Cathedral he falls into the trap that Dyson set for him [“How the Computers Exploded,” NYR, June 7]. Dyson has amplified the importance of John von Neumann’s MANIAC project to a point where Holt got the impression that it was the first useful computer and it started a revolution. It wasn’t. It didn’t. John von Neumann’s MANIAC was not the first computer. Nor was it, as Holt dubs it, the first “genuine” computer, or the first high-speed, stored-program, all-purpose computer. John von Neumann did not invent the stored-program architecture that often bears his name. By the time the MANIAC came online, several stored-program machines were operating and actually for sale in England and the US. Any number of computer history texts will bear this out. Dyson does include these facts in his book. Yet they are mostly brushed over, in a mad love affair with all things von Neumann. So it’s not surprising that Holt’s review gets basic facts wrong.

If Holt had only doubted this revisionist history enough to check another source, he could have found that the ENIAC was very busy cranking through a variety of different computational problems from 1945 to 1955 (including one for the H-bomb). By 1948 it had a stored program. In 1949 the Manchester Baby and Mark I, the EDSAC and the BINAC were running. Eckert and Mauchly had contracts in government and industry to deliver UNIVACs. The world was lousy with computers! The idea that von Neumann was some kind of torch carrier who convinced the world that computers were important just does not wash with the facts. It does, apparently, sell books. The insiders were convinced in 1946, when the ENIAC was revealed and the description of Eckert and Mauchly’s EDVAC was disseminated under von Neumann’s name. The population was convinced in 1952, when UNIVAC predicted the election on national TV. The fact that von Neumann continues to get credit for Eckert and Mauchly’s work is maddening.

Jim Holt begins his very interesting review of George Dyson’s book Turing’s Cathedral with the statement that the “digital universe” came into existence in 1950 with the construction of the MANIAC—an all-purpose computer built in Princeton. I believe that this statement is incorrect. The first such computer was the ENIAC, which was built at the University of Pennsylvania in 1946. It was conceived and designed by the engineers John Mauchly and J. Presper Eckert. It had all the features of what was later called the von Neumann architecture. Von Neumann got into this around 1944 as a consultant and reformulated the engineering into a mathematical structure. Among the features of the ENIAC that had been conceived by Mauchly and Eckert was what is known as “stored programming.” Hints of this can be found in previous work by Alan Turing. Programs are numerically stored in the computer’s memory where they can be manipulated like numbers so the computer can modify its program as it goes along. Von Neumann was such a compelling figure that it is tempting to build the story around him, which by the way Dyson does not do. Holt makes much of the fact that the MANIAC was used to compute features of the hydrogen bomb. But this was first done by the ENIAC. Von Neumann had an absolute paranoia about the Russians and favored a first nuclear strike. Einstein referred to him as a Denktier, a think animal.
The history of the electronic computer is very complex and parts of it are still disputed, but Holt’s account is too simple.

Arguing about precedence of near-simultaneous inventions is generally fruitless, but it is strange that Alan Turing’s Pilot ACE (Automatic Computing Engine), which has at least some claim to be the first high-speed, stored-program, von Neumann-architecture digital computer, is often written out of the history: Jim Holt in “How the Computers Exploded,” which emphasizes Turing’s role, identifies MANIAC as the first practical realization of Turing’s “universal computer,” but makes no mention of the Pilot ACE. Pilot ACE, which was based on Turing’s original design for a full ACE version, was built at the UK National Physical Laboratory (NPL) starting in 1946, and ran its first program in May 1950. Turing left NPL before its completion, frustrated by slow progress, but the work was completed by a team including Donald Davies (later director of what came to be called the Informatics division of the NPL), Jim Wilkinson, and Michael Woodger. Pilot ACE used 1,450 vacuum tubes, had a mercury delay line main memory of 384 32-bit words (equivalent to 1,536 bytes in modern terms) and a clock rate of 1 MHz. A 4,096-word “hard drive” drum memory was added later. Between 1950 and 1954, when it was retired to the London Science Museum, Pilot ACE saw significant practical use, most notably in the analysis of the airframe structural stresses that caused the catastrophic breakup of some early Comet airliners (the first commercial jet). Incidentally, Mike Woodger, Turing’s assistant from 1946, whom I worked with at the NPL in the early 1970s, never accepted that Turing’s death by poisoning was suicide. He claimed that Turing’s careless work habits with photographic chemicals made accident much more likely, and that suicide was not consistent with Turing’s personality whatever the stressful circumstances.

In 2001 I heard a talk by Kay Mauchly, John Mauchly’s widow, that detailed how the “von Neumann architecture” of the stored-program binary computer got that name. I don’t think she used the word “stolen,” but it was clear that after a certain meeting with Eckert and Mauchly, John von Neumann went off to Princeton to write up their ideas as his own in his paper on the EDVAC computer. In the 1970s I heard similar opinions in chats with both Presper Eckert and Grace Hopper. As for the claim that von Neumann’s 1952 MANIAC was the first digital computer—if not EDVAC of 1945 then there was the Eckert/Mauchly BINAC of 1949, the British LEO machine of 1951, and the first commercial production machine, Eckert/Mauchly’s UNIVAC 1 in 1952; and even Konrad Zuse’s Z-series machines of the 1930s and 1940s deserve a vote. Early computing history will always have many fathers, but von Neumann’s worthy reputation in mathematics does not extend to what should rightly be termed the “Eckert/Mauchly architecture.” As for Turing, given room in Trafalgar Square for another column, instead of an apology he might well be placed atop it.

David K. Adams
former UNIVAC employee
Castine, Maine

I began my review by claiming that John von Neumann’s MANIAC, which became operational in 1950 at the Institute for Advanced Study in Princeton, was the first “genuine” computer—that is, the first stored-program universal Turing machine. A number of readers wrote to champion the claims of other machines to this title. Both Bill Mauchly and Jeremy Bernstein back the ENIAC, which was codesigned by Mauchly’s father.
But the ENIAC, as I pointed out in the review, was not built as a stored-program machine; it had to be laboriously rewired by hand for each task. Mauchly says of the ENIAC that “by 1948 it had a stored program.” That is true, but only because the ENIAC had been retrofitted, at von Neumann’s suggestion, so that it could be programmed in a primitive sort of way. In any event, as the distinguished logician Martin Davis observed in his 2000 book, The Universal Computer, the gap between the thinking behind the ENIAC and the ideal of a universal computer was “immense.”

Mark Dowson, by contrast, nominates as the first computer the Pilot ACE—built in the late 1940s in Britain and partly based on the elegantly minimalist design of Alan Turing himself. That is more plausible, even though the Pilot ACE proved far less influential than von Neumann’s Princeton machine as a template for later computers. Still other readers maintained that the EDSAC, designed by Sir Maurice Wilkes at Cambridge University, was the first working stored-program computer. As George Dyson notes in Turing’s Cathedral, Wilkes managed to coax his machine into operation in 1949, beating von Neumann’s by a year or so. (Its first programmed job was to print out a list of prime numbers.) Turing, though, was unimpressed by Wilkes’s EDSAC design, commenting that it was “much more in the American tradition of solving one’s difficulties by means of much equipment rather than by thought.”

Whether MANIAC or ENIAC or EDSAC or EDVAC or ACE deserves the laurel as the first genuine computer—not to mention UNIVAC, BINAC, MARK I, LEO, Zuse Z4, and much of the rest of the alphabet—two things are certain. One is that the “Von Neumann architecture” for the modern computer is something of a misnomer, since the ideas behind it were not exclusively or even primarily due to von Neumann. The other is that “Turing machine” is not a misnomer at all, since it was Turing who came up with the original idea of an all-purpose computer. For that, and for his quietly heroic (and shabbily rewarded) role in helping Britain avert defeat in World War II, Turing probably does, as David K. Adams suggests, merit a column in Trafalgar Square.
Huffman Coding
Huffman tree generated from the exact frequencies of the text "this is an example of a huffman tree". The frequencies and codes of each character are below. Encoding the sentence with this code requires 135 (or 147) bits, as opposed to 288 (or 180) bits if 36 characters of 8 (or 5) bits were used. (This assumes that the code tree structure is known to the decoder and thus does not need to be counted as part of the transmitted information.)

In computer science and information theory, a Huffman code is a particular type of optimal prefix code that is commonly used for lossless data compression. The process of finding or using such a code proceeds by means of Huffman coding, an algorithm developed by David A. Huffman while he was a Sc.D. student at MIT, and published in the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes". [1] The output from Huffman's algorithm can be viewed as a variable-length code table for encoding a source symbol (such as a character in a file). The algorithm derives this table from the estimated probability or frequency of occurrence (weight) for each possible value of the source symbol. As in other entropy encoding methods, more common symbols are generally represented using fewer bits than less common symbols. Huffman's method can be efficiently implemented, finding a code in time linear to the number of input weights if these weights are sorted. [2] However, although optimal among methods encoding symbols separately, Huffman coding is not always optimal among all compression methods - it is replaced with arithmetic coding [3] or asymmetric numeral systems [4] if a better compression ratio is required.

In 1951, David A. Huffman and his MIT information theory classmates were given the choice of a term paper or a final exam. The professor, Robert M. Fano, assigned a term paper on the problem of finding the most efficient binary code. Huffman, unable to prove any codes were the most efficient, was about to give up and start studying for the final when he hit upon the idea of using a frequency-sorted binary tree and quickly proved this method the most efficient. [5] In doing so, Huffman outdid Fano, who had worked with Claude Shannon to develop a similar code. Building the tree from the bottom up guaranteed optimality, unlike the top-down approach of Shannon–Fano coding.

Huffman coding uses a specific method for choosing the representation for each symbol, resulting in a prefix code (sometimes called a "prefix-free code"; that is, the bit string representing some particular symbol is never a prefix of the bit string representing any other symbol). Huffman coding is such a widespread method for creating prefix codes that the term "Huffman code" is widely used as a synonym for "prefix code" even when such a code is not produced by Huffman's algorithm.

Constructing a Huffman Tree

Input. Alphabet A = (a_1, a_2, ..., a_n), which is the symbol alphabet of size n, and tuple W = (w_1, w_2, ..., w_n), which is the tuple of the (positive) symbol weights (usually proportional to probabilities), i.e. w_i = weight(a_i).

Output. Code C(W) = (c_1, c_2, ..., c_n), which is the tuple of (binary) codewords, where c_i is the codeword for a_i.

Goal. Let L(C(W)) = Σ_i w_i · length(c_i) be the weighted path length of code C. Condition: L(C(W)) ≤ L(T(W)) for any code T(W).

We give an example of the result of Huffman coding for a code with five characters and given weights. We will not verify that it minimizes L over all codes, but we will compute L and compare it to the Shannon entropy H of the given set of weights; the result is nearly optimal.
For any code that is biunique, meaning that the code is uniquely decodable, the sum of the probability budgets across all symbols is always less than or equal to one. In this example, the sum is strictly equal to one; as a result, the code is termed a complete code. If this is not the case, one can always derive an equivalent code by adding extra symbols (with associated null probabilities), to make the code complete while keeping it biunique.

As defined by Shannon (1948), the information content h (in bits) of each symbol a_i with non-null probability w_i is h(a_i) = log2(1/w_i). The entropy H (in bits) is the weighted sum, across all symbols a_i with non-zero probability w_i, of the information content of each symbol: H(A) = Σ_i w_i · log2(1/w_i) = −Σ_i w_i · log2(w_i). (Note: A symbol with zero probability has zero contribution to the entropy, since lim_{w→0+} w · log2(w) = 0. So for simplicity, symbols with zero probability can be left out of the formula above.)

As a consequence of Shannon's source coding theorem, the entropy is a measure of the smallest codeword length that is theoretically possible for the given alphabet with associated weights. In this example, the weighted average codeword length is 2.25 bits per symbol, only slightly larger than the calculated entropy of 2.205 bits per symbol. So not only is this code optimal in the sense that no other feasible code performs better, but it is very close to the theoretical limit established by Shannon.

In general, a Huffman code need not be unique. Thus the set of Huffman codes for a given probability distribution is a non-empty subset of the codes minimizing L(C) for that probability distribution. (However, for each minimizing codeword length assignment, there exists at least one Huffman code with those lengths.)
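As a quick check of the figures quoted above, the short Python sketch below recomputes the entropy and the weighted average codeword length. The specific weights (0.10, 0.15, 0.30, 0.16, 0.29) and codeword lengths (3, 3, 2, 2, 2) are illustrative assumptions chosen to reproduce the quoted values, not numbers given in this text.

import math

# Assumed weights and Huffman codeword lengths for a five-symbol alphabet,
# chosen so the results match the figures discussed in the text.
weights = [0.10, 0.15, 0.30, 0.16, 0.29]
code_lengths = [3, 3, 2, 2, 2]

# Entropy H = sum of w * log2(1/w) over symbols with non-zero weight.
entropy = sum(w * math.log2(1.0 / w) for w in weights if w > 0)

# Weighted average codeword length L = sum of w * length(c).
avg_length = sum(w * l for w, l in zip(weights, code_lengths))

print(round(entropy, 3))     # 2.205 bits per symbol
print(round(avg_length, 2))  # 2.25 bits per symbol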
Visualisation of the use of Huffman coding to encode the message "A_DEAD_DAD_CEDED_A_BAD_BABE_A_BEADED_ABACA_BED". In steps 2 to 6, the letters are sorted by increasing frequency, and the least frequent two at each step are combined and reinserted into the list, and a partial tree is constructed. The final tree in step 6 is traversed to generate the dictionary in step 7. Step 8 uses it to encode the message.

A source generates 4 different symbols {a1, a2, a3, a4} with probabilities {0.4, 0.35, 0.2, 0.05}. A binary tree is generated from left to right taking the two least probable symbols and putting them together to form another equivalent symbol having a probability that equals the sum of the two symbols. The process is repeated until there is just one symbol. The tree can then be read backwards, from right to left, assigning different bits to different branches. The final Huffman code is:

Symbol  Code
a1      0
a2      10
a3      110
a4      111

The standard way to represent a signal made of 4 symbols is by using 2 bits/symbol, but the entropy of the source is 1.74 bits/symbol. If this Huffman code is used to represent the signal, then the average length is lowered to 1.85 bits/symbol; it is still far from the theoretical limit because the probabilities of the symbols are different from negative powers of two.

The technique works by creating a binary tree of nodes. These can be stored in a regular array, the size of which depends on the number of symbols, n. A node can be either a leaf node or an internal node. Initially, all nodes are leaf nodes, which contain the symbol itself, the weight (frequency of appearance) of the symbol and optionally, a link to a parent node which makes it easy to read the code (in reverse) starting from a leaf node. Internal nodes contain a weight, links to two child nodes and an optional link to a parent node. As a common convention, bit '0' represents following the left child and bit '1' represents following the right child. A finished tree has up to n leaf nodes and n − 1 internal nodes. A Huffman tree that omits unused symbols produces the optimal code lengths.

The process begins with the leaf nodes containing the probabilities of the symbol they represent. Then, the process takes the two nodes with smallest probability, and creates a new internal node having these two nodes as children. The weight of the new node is set to the sum of the weight of the children. We then apply the process again, on the new internal node and on the remaining nodes (i.e., we exclude the two leaf nodes); we repeat this process until only one node remains, which is the root of the Huffman tree.

The simplest construction algorithm uses a priority queue where the node with lowest probability is given highest priority:
1. Create a leaf node for each symbol and add it to the priority queue.
2. While there is more than one node in the queue: remove the two nodes of highest priority (lowest probability); create a new internal node with these two nodes as children and with probability equal to the sum of the two nodes' probabilities; add the new node to the queue.
3. The remaining node is the root node and the tree is complete.
Since efficient priority queue data structures require O(log n) time per insertion, and a tree with n leaves has 2n−1 nodes, this algorithm operates in O(n log n) time, where n is the number of symbols.

If the symbols are sorted by probability, there is a linear-time (O(n)) method to create a Huffman tree using two queues, the first one containing the initial weights (along with pointers to the associated leaves), and combined weights (along with pointers to the trees) being put in the back of the second queue. This assures that the lowest weight is always kept at the front of one of the two queues:
1. Start with as many leaves as there are symbols.
2. Enqueue all leaf nodes into the first queue, sorted by probability in increasing order so that the least likely item is at the head of the queue.
3. While there is more than one node in the queues: dequeue the two nodes with the lowest weight by examining the fronts of both queues; create a new internal node with the two just-removed nodes as children and the sum of their weights as the new weight; enqueue the new node into the rear of the second queue.
4. The remaining node is the root node; the tree has now been generated.

Once the Huffman tree has been generated, it is traversed to generate a dictionary which maps the symbols to binary codes as follows: start with the current node set to the root; if the node is not a leaf, label the edge to its left child with 0 and the edge to its right child with 1, and repeat the process at both children. The final encoding of any symbol is then read by a concatenation of the labels on the edges along the path from the root node to the symbol.

In many cases, time complexity is not very important in the choice of algorithm here, since n here is the number of symbols in the alphabet, which is typically a very small number (compared to the length of the message to be encoded); whereas complexity analysis concerns the behavior when n grows to be very large.

It is generally beneficial to minimize the variance of codeword length. For example, a communication buffer receiving Huffman-encoded data may need to be larger to deal with especially long symbols if the tree is especially unbalanced. To minimize variance, simply break ties between queues by choosing the item in the first queue. This modification will retain the mathematical optimality of the Huffman coding while both minimizing variance and minimizing the length of the longest character code.
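To make the priority-queue construction concrete, here is a minimal Python sketch using only the standard-library heapq and collections modules. The function name huffman_code and its interface are illustrative, not part of any library, and the sketch assumes symbols are single characters (so a tuple can safely mark an internal node).

import heapq
from collections import Counter

def huffman_code(weights):
    # weights: dict mapping symbol -> positive weight (e.g. a frequency count).
    # Returns a dict mapping symbol -> bitstring, built by repeatedly merging
    # the two lowest-weight nodes, as described in the text.
    heap = [(w, i, sym) for i, (sym, w) in enumerate(weights.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate one-symbol alphabet
        return {heap[0][2]: "0"}
    tie = len(heap)                          # unique tie-breaker: trees are never compared
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)    # two nodes of lowest weight
        w2, _, right = heapq.heappop(heap)
        tie += 1
        heapq.heappush(heap, (w1 + w2, tie, (left, right)))
    code = {}
    def walk(node, prefix):
        if isinstance(node, tuple):          # internal node: 0 = left, 1 = right
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                                # leaf node: record its codeword
            code[node] = prefix
    walk(heap[0][2], "")
    return code

# Example: a code built from the character frequencies of the message used
# in the figure caption at the top of this article.
freqs = Counter("this is an example of a huffman tree")
print(huffman_code(freqs))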
Generally speaking, the process of decompression is simply a matter of translating the stream of prefix codes to individual byte values, usually by traversing the Huffman tree node by node as each bit is read from the input stream (reaching a leaf node necessarily terminates the search for that particular byte value). Before this can take place, however, the Huffman tree must be somehow reconstructed. In the simplest case, where character frequencies are fairly predictable, the tree can be preconstructed (and even statistically adjusted on each compression cycle) and thus reused every time, at the expense of at least some measure of compression efficiency. Otherwise, the information to reconstruct the tree must be sent a priori. A naive approach might be to prepend the frequency count of each character to the compression stream. Unfortunately, the overhead in such a case could amount to several kilobytes, so this method has little practical use. If the data is compressed using canonical encoding, the compression model can be precisely reconstructed with just B·2^B bits of information (where B is the number of bits per symbol).

Another method is to simply prepend the Huffman tree, bit by bit, to the output stream. For example, assuming that the value of 0 represents a parent node and 1 a leaf node, whenever the latter is encountered the tree building routine simply reads the next 8 bits to determine the character value of that particular leaf. The process continues recursively until the last leaf node is reached; at that point, the Huffman tree will thus be faithfully reconstructed. The overhead using such a method ranges from roughly 2 to 320 bytes (assuming an 8-bit alphabet). Many other techniques are possible as well.
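To illustrate the prepend-the-tree scheme just described (0 for a parent node, 1 plus eight bits for a leaf), here is a small, hypothetical Python sketch. The names serialize and deserialize and the nested-pair tree representation are my own; the bit layout follows the description above.

def serialize(tree):
    # Pre-order walk: '0' marks a parent (internal) node, '1' marks a leaf,
    # followed by the leaf's character as 8 bits.
    if isinstance(tree, tuple):              # internal node stored as (left, right)
        return "0" + serialize(tree[0]) + serialize(tree[1])
    return "1" + format(ord(tree), "08b")

def deserialize(bits, pos=0):
    # Rebuilds the tree from the bit string; returns (tree, next position).
    if bits[pos] == "1":                     # leaf: the next 8 bits are the character
        return chr(int(bits[pos + 1:pos + 9], 2)), pos + 9
    left, pos = deserialize(bits, pos + 1)   # parent: read left child, then right child
    right, pos = deserialize(bits, pos)
    return (left, right), pos

# Round trip on a tiny three-leaf tree ((a, b), c): 2 parent bits plus
# 3 * (1 + 8) leaf bits = 29 bits in total.
tree = (("a", "b"), "c")
bits = serialize(tree)
rebuilt, _ = deserialize(bits)
assert rebuilt == tree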
In any case, since the compressed data can include unused "trailing bits", the decompressor must be able to determine when to stop producing output. This can be accomplished by either transmitting the length of the decompressed data along with the compression model or by defining a special code symbol to signify the end of input (the latter method can adversely affect code length optimality, however).

The probabilities used can be generic ones for the application domain that are based on average experience, or they can be the actual frequencies found in the text being compressed. This requires that a frequency table must be stored with the compressed text. See the Decompression section above for more information about the various techniques employed for this purpose.

Huffman's original algorithm is optimal for a symbol-by-symbol coding with a known input probability distribution, i.e., separately encoding unrelated symbols in such a data stream. However, it is not optimal when the symbol-by-symbol restriction is dropped, or when the probability mass functions are unknown. Also, if symbols are not independent and identically distributed, a single code may be insufficient for optimality. Other methods such as arithmetic coding often have better compression capability. Although both aforementioned methods can combine an arbitrary number of symbols for more efficient coding and generally adapt to the actual input statistics, arithmetic coding does so without significantly increasing its computational or algorithmic complexities (though the simplest version is slower and more complex than Huffman coding). Such flexibility is especially useful when input probabilities are not precisely known or vary significantly within the stream. However, Huffman coding is usually faster and arithmetic coding was historically a subject of some concern over patent issues. Thus many technologies have historically avoided arithmetic coding in favor of Huffman and other prefix coding techniques. As of mid-2010, the most commonly used techniques for this alternative to Huffman coding have passed into the public domain as the early patents have expired.

For a set of symbols with a uniform probability distribution and a number of members which is a power of two, Huffman coding is equivalent to simple binary block encoding, e.g., ASCII coding. This reflects the fact that compression is not possible with such an input, no matter what the compression method, i.e., doing nothing to the data is the optimal thing to do.

Huffman coding is optimal among all methods in any case where each input symbol is a known independent and identically distributed random variable having a probability that is dyadic. Prefix codes, and thus Huffman coding in particular, tend to have inefficiency on small alphabets, where probabilities often fall between these optimal (dyadic) points. The worst case for Huffman coding can happen when the probability of the most likely symbol far exceeds 2^−1 = 0.5, making the upper limit of inefficiency unbounded.

There are two related approaches for getting around this particular inefficiency while still using Huffman coding. Combining a fixed number of symbols together ("blocking") often increases (and never decreases) compression. As the size of the block approaches infinity, Huffman coding theoretically approaches the entropy limit, i.e., optimal compression. [6] However, blocking arbitrarily large groups of symbols is impractical, as the complexity of a Huffman code is linear in the number of possibilities to be encoded, a number that is exponential in the size of a block. This limits the amount of blocking that is done in practice.

A practical alternative, in widespread use, is run-length encoding. This technique adds one step in advance of entropy coding, specifically counting (runs) of repeated symbols, which are then encoded. For the simple case of Bernoulli processes, Golomb coding is optimal among prefix codes for coding run length, a fact proved via the techniques of Huffman coding. [7] A similar approach is taken by fax machines using modified Huffman coding. However, run-length coding is not as adaptable to as many input types as other compression technologies.

Many variations of Huffman coding exist, [8] some of which use a Huffman-like algorithm, and others of which find optimal prefix codes (while, for example, putting different restrictions on the output). Note that, in the latter case, the method need not be Huffman-like, and, indeed, need not even be polynomial time.

The n-ary Huffman algorithm uses the {0, 1, ..., n − 1} alphabet to encode messages and build an n-ary tree. This approach was considered by Huffman in his original paper. The same algorithm applies as for binary (n = 2) codes, except that the n least probable symbols are taken together, instead of just the 2 least probable. Note that for n greater than 2, not all sets of source words can properly form an n-ary tree for Huffman coding. In these cases, additional 0-probability place holders must be added. This is because the tree must form an n-to-1 contractor; for binary coding, this is a 2-to-1 contractor, and any sized set can form such a contractor. If the number of source words is congruent to 1 modulo n−1, then the set of source words will form a proper Huffman tree.
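The number of zero-probability placeholders needed in the n-ary case follows directly from the "congruent to 1 modulo n − 1" condition above. A tiny Python sketch (nary_padding is an illustrative helper name, not a library function):

def nary_padding(num_symbols, n):
    # Placeholders to add so that the padded symbol count is congruent to
    # 1 modulo (n - 1), letting the n-to-1 merges end at a single root.
    return (1 - num_symbols) % (n - 1)

print(nary_padding(6, 3))   # 1: six ternary source words need one placeholder
print(nary_padding(7, 3))   # 0: seven source words already form a proper ternary tree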
A variation called adaptive Huffman coding involves calculating the probabilities dynamically based on recent actual frequencies in the sequence of source symbols, and changing the coding tree structure to match the updated probability estimates. It is used rarely in practice, since the cost of updating the tree makes it slower than optimized adaptive arithmetic coding, which is more flexible and has better compression.

Huffman template algorithm

Most often, the weights used in implementations of Huffman coding represent numeric probabilities, but the algorithm given above does not require this; it requires only that the weights form a totally ordered commutative monoid, meaning a way to order weights and to add them. The Huffman template algorithm enables one to use any kind of weights (costs, frequencies, pairs of weights, non-numerical weights) and one of many combining methods (not just addition). Such algorithms can solve other minimization problems, such as minimizing max_i [w_i + length(c_i)], a problem first applied to circuit design.

Length-limited Huffman coding/minimum variance Huffman coding

Length-limited Huffman coding is a variant where the goal is still to achieve a minimum weighted path length, but there is an additional restriction that the length of each codeword must be less than a given constant. The package-merge algorithm solves this problem with a simple greedy approach very similar to that used by Huffman's algorithm. Its time complexity is O(nL), where L is the maximum length of a codeword. No algorithm is known to solve this problem in O(n) or O(n log n) time, unlike the presorted and unsorted conventional Huffman problems, respectively.

Huffman coding with unequal letter costs

In the standard Huffman coding problem, it is assumed that each symbol in the set that the code words are constructed from has an equal cost to transmit: a code word whose length is N digits will always have a cost of N, no matter how many of those digits are 0s, how many are 1s, etc. When working under this assumption, minimizing the total cost of the message and minimizing the total number of digits are the same thing. Huffman coding with unequal letter costs is the generalization without this assumption: the letters of the encoding alphabet may have non-uniform lengths, due to characteristics of the transmission medium. An example is the encoding alphabet of Morse code, where a 'dash' takes longer to send than a 'dot', and therefore the cost of a dash in transmission time is higher. The goal is still to minimize the weighted average codeword length, but it is no longer sufficient just to minimize the number of symbols used by the message. No algorithm is known to solve this in the same manner or with the same efficiency as conventional Huffman coding, though it has been solved by Karp, whose solution has been refined for the case of integer costs by Golin.

Optimal alphabetic binary trees (Hu–Tucker coding)

In the standard Huffman coding problem, it is assumed that any codeword can correspond to any input symbol. In the alphabetic version, the alphabetic order of inputs and outputs must be identical. Thus, for example, the symbols {a, b, c} could not be assigned the code {00, 1, 01}, but instead should be assigned either {00, 01, 1} or {0, 10, 11}. This is also known as the Hu–Tucker problem, after T. C. Hu and Alan Tucker, the authors of the paper presenting the first O(n log n)-time solution to this optimal binary alphabetic problem, [9] which has some similarities to Huffman algorithm, but is not a variation of this algorithm. A later method, the Garsia–Wachs algorithm of Adriano Garsia and Michelle L. Wachs (1977), uses simpler logic to perform the same comparisons in the same total time bound. These optimal alphabetic binary trees are often used as binary search trees. [10]

The canonical Huffman code

If weights corresponding to the alphabetically ordered inputs are in numerical order, the Huffman code has the same lengths as the optimal alphabetic code, which can be found from calculating these lengths, rendering Hu–Tucker coding unnecessary. The code resulting from numerically (re-)ordered input is sometimes called the canonical Huffman code and is often the code used in practice, due to ease of encoding/decoding. The technique for finding this code is sometimes called Huffman–Shannon–Fano coding, since it is optimal like Huffman coding, but alphabetic in weight probability, like Shannon–Fano coding.
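A canonical assignment can be computed from the codeword lengths alone. The Python sketch below is illustrative (canonical_code is not a library function); the lengths passed in are the assumed lengths 3, 3, 2, 2, 2 from the five-symbol example, and codewords are assigned in order of increasing length, alphabetically within each length.

def canonical_code(lengths):
    # lengths: dict mapping symbol -> codeword length.
    # Each codeword is the previous one plus 1, left-shifted whenever the
    # codeword length increases.
    code, next_code, prev_len = {}, 0, 0
    for sym, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        next_code <<= length - prev_len
        code[sym] = format(next_code, "0{}b".format(length))
        next_code += 1
        prev_len = length
    return code

# Codeword lengths assumed from the five-symbol example discussed earlier.
print(canonical_code({"a": 3, "b": 3, "c": 2, "d": 2, "e": 2}))
# {'c': '00', 'd': '01', 'e': '10', 'a': '110', 'b': '111'}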
The Huffman–Shannon–Fano code corresponding to the example is {000, 001, 01, 10, 11}, which, having the same codeword lengths as the original solution, is also optimal. But in canonical Huffman code, the result is {110, 111, 00, 01, 10}.

Arithmetic coding and Huffman coding produce equivalent results — achieving entropy — when every symbol has a probability of the form 1/2^k. In other circumstances, arithmetic coding can offer better compression than Huffman coding because — intuitively — its "code words" can have effectively non-integer bit lengths, whereas code words in prefix codes such as Huffman codes can only have an integer number of bits. Therefore, a code word of length k only optimally matches a symbol of probability 1/2^k and other probabilities are not represented optimally; whereas the code word length in arithmetic coding can be made to exactly match the true probability of the symbol. This difference is especially striking for small alphabet sizes.

Prefix codes nevertheless remain in wide use because of their simplicity, high speed, and lack of patent coverage. They are often used as a "back-end" to other compression methods. Deflate (PKZIP's algorithm) and multimedia codecs such as JPEG and MP3 have a front-end model and quantization followed by the use of prefix codes; these are often called "Huffman codes" even though most applications use pre-defined variable-length codes rather than codes designed using Huffman's algorithm.

[4] J. Duda, K. Tahboub, N. J. Gadil, E. J. Delp, "The use of asymmetric numeral systems as an accurate replacement for Huffman coding," Picture Coding Symposium, 2015.
Streamers are stalling on sharing data
Back in 2019, WarnerMedia struck a deal for its soon-to-launch platform HBO Max that secured exclusive domestic streaming rights to The Big Bang Theory for five years. The pact, which sources pegged in the billions, also included an extension of an existing syndication deal with TBS in which the comedy will continue airing on the WarnerMedia-owned network through 2028. While the syndication side of the deal allows creators, profit participants, reps and even industry observers to gauge how big the TBS audience is, just how many people watch the show on HBO Max remains a mystery. That’s because syndication viewing numbers, as they’ve always been, are readily available through third-party measurement services, while streaming numbers remain under lock and key. Platforms like HBO Max, Netflix, Apple TV+, Amazon’s Prime Video, Hulu and Paramount+ continue to keep a vise-like grip on their data, and, nearly a decade into the streaming era, the lack of transparency is making it nearly impossible for dealmakers, and even viewers, to define what is a hit and what is a bomb. Unlike traditional box office and ratings numbers, streaming data lives behind an opaque wall, with little chance of reliable metrics emerging anytime soon. “It’s the curtain,” Big Bang Theory star Simon Helberg told THR in August while on the carpet for the premiere of his film Annette. “They keep all those numbers and all that data behind it. I ended up on the tail end of this bygone era, which is network television. We had 20 million viewers, sales, commercial ad dollars, you see it, you see the viewers, you see the Nielsens,” Helberg continued. “It all changed overnight. That’s probably deliberately a little bit abstract at the moment, and that is concerning. I want people to be compensated fairly. That’s the key — you want to be fair, and you don’t want to stiff any of the creative people.” According to the current modus operandi, streaming platforms gather reams of data about their users — whether certain titles drive subscriptions, how long people watch a show or movie, how many people finish once they start and cost per user (the ratio of a show’s cost to the size of its audience, among other things). All of those figures are kept in-house, and in most cases streamers don’t even share basic viewing data publicly. There’s no industrywide currency for viewership data on SVOD platforms in the way that Nielsen ratings have served the traditional TV business for decades or box office figures have measured the performance of features. Streamers instead offer oblique statements like, “Over the Ted Lasso season two premiere weekend, Apple TV+ expanded its new viewers by a record-breaking 50 percent week-over-week. … The second season of Ted Lasso increased its viewership by 6x over season one,” to quote an Apple TV+ press release about the Emmy-winning comedy’s season two debut. That release doesn’t state any baseline figures with which to compare the 50 percent week-to-week jump or the sixfold increase for Ted Lasso — like algebra with no way to solve for X — and is typical of public-facing comments from most streamers. In December 2018, Netflix tweeted, “45,037,125 Netflix accounts have already watched Bird Box,” adding that the Sandra Bullock starrer marked the best first-seven-day showing for a Netflix film to date. At the time, “watched” meant 70 percent of a movie (or 70 percent of one episode of a series). With no way to verify the data, the claim is little more than spin. 
WarnerMedia CEO Jason Kilar defends the lack of transparency, at least in these early days of the streaming era. He estimates a rival like Netflix boasts about 20 million more subscribers than HBO Max in the U.S. because of its status as an early entrant in the field. Thus, a hit for HBO Max may have the same number of views as a middling Netflix performer. “The advantages of being direct-to-consumer is we get an immense amount of data,” Kilar tells THR. “You don’t just see the viewing numbers, you see how they view it. What order do they view things in? How much do they watch? How much do they finish? How do they respond to various prompts to help us get better at helping them find something they love? I wouldn’t expect us or other players to put numbers out just because it’s really hard for people to understand apples-to-apples comparisons. So we labor over it. We know exactly how well these shows are doing.”

Netflix’s recently abandoned two-minute “view” metric, while offering little insight into whether subscribers actually stuck with a show or film, at least allowed for comparison across titles on the biggest streaming outlet. The company said in its third-quarter earnings report that in the future, it will report total hours viewed for its series and films, which will give a somewhat better picture of their reach — and more closely align with the streaming rankings Nielsen has been releasing for the past 14 months that show total minutes viewed for a series or movie each week. The Nielsen rankings are imperfect as well. Although it collects data on all streamers, only Disney+, Hulu, Netflix, Amazon and, as of mid-October, Apple TV+ have given their OK to be included in the weekly rankings.

A Bloomberg report in October gave the first public accounting of some of Netflix’s key internal data, focusing on the outsized performance of the Korean hit Squid Game. The company uses metrics like completion rates as well as “efficiency” (size of its audience versus how much it costs) and “adjusted view share,” which measures how valuable to the company any given title’s viewers are. Those who use the service less or are new subscribers are considered more valuable than routine bingers. Offering a better glimpse, internal documents leaked to Bloomberg also showed that Netflix paid $24.1 million for Dave Chappelle’s controversial stand-up special The Closer. His previous special, Sticks and Stones, cost $23.6 million but returned an “impact value” of $19.4 million.

THR sent questions to four other major streaming outlets — Apple TV+, Hulu, Netflix and Amazon — asking what and how much data they share with producers, actors and other above-the-line talent on their projects and whether a shared currency for SVOD is important. Apple TV+ and Hulu declined to comment, and the other two didn’t reply by press time.

As streaming’s footprint continues to grow — it already accounts for more than a quarter of all TV usage in the U.S. and regularly surpasses broadcast TV, according to Nielsen estimates — the lack of clarity around just how many people are watching streaming series and films could place a strain on streamers’ relations with producers and talent and their representatives. “Over a period of time, the streamers are going to have to release more data to those who are creating shows on their platforms,” says UTA co-president Jay Sures. “Whether it’s next week, next month, next year or sometime soon after, it is inevitable.
Eventually, there will be a new technology that can give the interested parties accurate data.” Meanwhile, talent lawyer Joel McKuin, who reps the likes of Kristen Stewart and showrunner Liz Meriwether, says “we know hot programming when we see it,” even without the data. “I don’t know how vital it is to know what the exact numbers are. When you do research and bring them evidence, they invariably say, ‘Oh, we have something else that did better.’ They don’t seem to be straight with you even when you have the goods,” he says. “It’s a bit of a dance. But it always was, even pre-streaming. They would find a way to undermine something’s value. So we’ve become used to not knowing. I’ve become more sanguine about it.” WarnerMedia Studios and Networks Group chair and CEO Ann Sarnoff argues that the new streaming-dominant landscape has made it difficult to release numbers akin to the ratings or box office data of the past given the complexity of the information and the subjectivity of its value to the streamer. For example, what metric is most important? Subscription acquisitions? Engagement? Or something else entirely? “[With] TV and the way it used to report, your monetization of a show was how many people watched because the advertisers were buying that,” says Sarnoff. As viewing options expanded beyond real-time linear television, she explains, advertisers decided viewership over time was valuable and the measures changed in response. “That [was] a much more contained ecosystem than streaming,” Sarnoff continues. “It’s not just your initial view. It’s the behavior of that new sub that comes in the door and what else they watch, and do you keep them?” And, with regard to motion pictures, opening weekend box office used to offer a reliable indicator of ultimate performance. “That’s not true anymore because of COVID, and we don’t know how it’s going to be coming back,” she says. “So I appreciate the desire for data, but I think the data has changed, and it’s not yet been fully vetted in terms of what is the right data for the digital world because it’s not that instantaneous advertiser fulfillment. It’s a much bigger ecosystem and a longer life of the consumer behavior.” Ultimately, Kilar predicts that two conditions will lead to greater transparency. First, third-party measurers like Nielsen will become more accurate with their streaming numbers. And second, he expects that burgeoning streaming giants will catch up to Netflix. With similar subscriber bases, comparisons with competitors would be more relevant. “There’s going to be a short list of folks that get to scale, and then I think you’ll probably see a bit more transparency because we all know what we’re dealing with, and you can build businesses and frameworks and other things on top of it,” Kilar says. “But, right now, things are changing so much. And whether you’re talking about Peacock or Paramount+ or Disney+ or Hulu, it’s not the same foundation. So that’s part of why you’re seeing kind of a ‘less than’ [when it comes to sharing data]. If I were in your shoes, I’d want it to distill down to one simple thing — and you get the email in the morning on Saturday and, boom, things are done.” Chris Gardner contributed to this report. This story first appeared in the Nov. 10 issue of The Hollywood Reporter magazine. Click here to subscribe.
1
We need imagination now more than ever
Pandemics, wars, and other social crises often create new attitudes, needs, and behaviors, which we need to anticipate. Imagination — the capacity to create, evolve and exploit mental models of things or situations that don’t yet exist — is the crucial factor in seizing and creating new opportunities, and finding new paths to growth. While imagination may seem like a frivolous luxury in a crisis, it is actually a necessity for building future success. The authors offer seven ways companies can develop their organization’s capacity for imagination: 1) Carve out time for reflection; 2) Ask active, open questions; 3) Allow yourself to be playful; 4) Set up a system for sharing ideas; 5) Seek out the anomalous and unexpected; 6) Encourage experimentation; and 7) Stay hopeful. The idea of “crisis management” requires no explanation right now. Something unexpected and significant happens, and our first instincts are to defend against — and later to understand and manage — the disturbance to the status quo. The crisis is an unpredictable enemy to be tamed for the purpose of restoring normality. But we may not be able to return to our familiar pre-crisis reality. Pandemics, wars, and other social crises often create new attitudes, needs, and behaviors, which need to be managed. We believe imagination — the capacity to create, evolve, and exploit mental models of things or situations that don’t yet exist — is the crucial factor in seizing and creating new opportunities, and finding new paths to growth. Imagination is also one of the hardest things to keep alive under pressure. Companies that are able to do so can reap significant value. In recessions and downturns, 14% of companies outperform both historically and competitively, because they invest in new growth areas. For example, Apple released its first iPod in 2001 — the same year the U.S. economy experienced a recession that contributed to a 33% drop in the company’s total revenue. Still, Apple saw the iPod’s ability to transform its product portfolio: It increased R&D spending by double digits. The launch of the iTunes Store (2003) and new iPod models (2004) sparked an era of high growth. With imagination, we can do better than merely adapting to a new environment — we can thrive by shaping it. To do this, we need to strategize across multiple timescales, each requiring a different style of thinking. In the current Covid-19 crisis, for example, renewal and adaptive strategies give way to classical planning-based strategies and then to visionary and shaping strategies, which require imagination. We recently surveyed more than 250 multinational companies to understand the measures they were taking to manage the Covid-19 epidemic. While most companies are enacting a rich portfolio of reactive measures, only a minority are yet at the stage where they’re identifying and shaping strategic opportunities. We have written elsewhere about what the post-Covid reality is likely to look like and how to discriminate between temporary and enduring shifts in demand.
But how can companies avoid having imagination become the first casualty of the crisis? Based on our research for a new book on the imaginative corporation we share seven imperatives: 1. Carve out time for reflection. Crises place heavy demands on leaders and managers, and it is easy to lose the already slim time we might have for reflection. But we won’t see the big picture, let alone a shapeable picture of the future, unless we stand back and reflect. Most of the time in business we operate with our instinctual “fight-or-flight” nervous system that evolved to help us in high-pressure situations, like running from a predator. This system narrows our focus. But less emphasized is the parasympathetic, or “rest-and-digest” system, which evolved to manage mental and bodily operations when we are relaxed. We can imagine in hunter-gatherer days, the mental intensity of the hunt, followed by time back at home, reflecting on the day’s stories, perhaps imagining how to hunt better. We need to create the equivalent rhythm of action and reflection in business as we navigate this crisis. Ways to switch off the fight-or-flight mode and support reflection include: 2. Ask active, open questions. In a crisis, we likely won’t have immediate answers, and we therefore need to employ good questions. The most natural questions in a crisis tend to be passive, for example, “What will happen to us?” However, the possibility of shaping events to our advantage only arises if we ask active questions, such as “How can we create new options?” Creativity involves reaching beyond precedents and known alternatives to ask questions that prompt the exploration of fresh ideas and approaches.  Some good questions to ask in the Covid-19 crisis might include, for example: 3. Allow yourself to be playful. Crises require a goal-driven and serious response. However, in times of stress, we tend to overlook the important human capacity of play to temporarily forget about goals and improvise. Biologically, play can be characterized as de-risked, accelerated learning. For example, juvenile animals’ mock fighting is highly effective preparation for real combat. In unprecedented, rapidly changing situations, play is a critical capability. As well as providing some much-needed stress relief — how many of us are currently working from dawn to dusk? — play can end up being, counterintuitively, very productive. We can make interesting, new connections between ideas when we allow ourselves to loosen up from our regular, goal-driven, laser-focused, instrumental approach. “Creativity is the rearrangement of existing knowledge into new, useful combinations,” Jorgen Vig Knudstorp, chairman of the LEGO Brand Group, told us. “Just like playing with LEGO Bricks, this can lead you to valuable innovations — like the Google search engine or the Airbnb business model.” Sometimes nothing immediately useful will come of play, but playing at least allows us to practice imagining, improvising, and being open to inspiration — all important skills when navigating the unknown. 4.  Set up a system for sharing ideas. Someone, somewhere in your organization is likely being forced by circumstances to experiment with new ways of doing things. The imaginative corporation picks up, codifies, and scales these innovations. Imagination doesn’t just happen on an individual level. Ideas evolve and spread by being able to skip between minds. Companies need to facilitate collective imagination. 
The key to this is allowing new ideas to be shared while they are still in development: creating forums for people to communicate in a casual way, without hierarchy, reports, permissions, or financial justifications. Conversely, the way to kill imagination and the spread of ideas is to construct non-communicating functional silos and to induce fear of not meeting the bar for “sensible” suggestions. In the name of “practicality” or “common sense” many ideas are rejected without being explored. But it is hard to distinguish ideas with no eventual merit from those which are merely unfamiliar, undeveloped, counterintuitive, or countercultural. In a situation where there are no easy solutions, we need to open up rather than constrict the funnel for new ideas. Every corporation had entrepreneurial beginnings. But successful corporations that have honed a stable, profitable, business recipe forget the messy, imaginative origins of the ideas upon which they were founded. Now is not the time for only executing a practiced recipe. We are facing a historic discontinuity, requiring entrepreneurialism and creativity. 5. Seek out the anomalous and unexpected. Imagination is triggered by surprising inputs. Our pattern-seeking minds adapt our mental models when we see something that does not fit. And when we adapt our mental models, we entertain different strategies and courses of action. To solve tough new problems, look externally. Examine accidents, anomalies, and particulars, and ask: “What doesn’t fit here?” Digging into what we find will prompt reframing, rethinking, and the discovery of new possibilities. In the current situation, we might ask, for example, why have some countries like Japan, China, and South Korea been able to break away from an exponential infection pattern? Or why are some cities suffering more than others? Or why apparently similar strategies gave different results in different places? Or what stopped us from being prepared for this crisis in spite of MERS, SARS, Ebola, and other ominous precedents? 6. Encourage experimentation. Although a crisis stretches our resources, it is important to encourage experiments — even if only on a shoestring budget. Natural systems are most resilient when they are diverse, and that diversity comes from trying new ways of doing new things. Our ideas only become useful if they are tested in the real world, often generating unexpected outcomes and stimulating further thinking and new ideas. For example, Ole Kirk Christiansen, the founder of the LEGO Brand, originally made homes and household products, such as wooden ladders and ironing boards, until the Great Depression of the 1930s forced him to experiment, and he tried building toys. This turned out to be a successful move at a time when consumers were holding back from building homes. After examining the international toy market, which was dominated by products made from wood, Christiansen was driven to experiment again by introducing toys made of a new disruptive material — plastics. Despite the scarcity of the immediate post-WWII years, he re-invested a full year’s profits into new machinery and tools, at first making traditional toys, then creating building blocks. By 1958, these evolved into today’s well known “binding” LEGO Bricks. Soon after, the company abandoned all wooden and other toys to double down on the LEGO Brick Toy Building System (“LEGO System in Play”). 7. Stay hopeful. Imagination feeds off the aspirations and aggravations that propel us to seek a better reality. 
When we lose hope and adopt a passive mindset, we cease to believe that we can meet our ideals or fix our problems. In statistics, Bayesian learning involves taking a belief about a statistical distribution (a “prior”) and updating it in the light of each new piece of information obtained. The outcome of the entire process can be determined by the initial belief. Pessimism can become a self-fulfilling prophecy. As a leader, ask yourself whether you are giving people grounds for hope, imagination, and innovation, or whether you are using pessimistic or fatalistic language, which could create a downward spiral in organizational creativity. Dealing with real risks involves taking imaginative risks, which requires hope. “Never in our lifetimes has the power of imagination been more important in defining our immediate future,” Jim Loree, CEO of Stanley Black & Decker, told us. “Leaders need to seize the opportunity to inspire and harness the imagination of their organizations during this challenging time.” All crises contain the seeds of opportunity. Many businesses, struggling now, will likely find a second life during and after the crisis, if they can keep alive and harness their imaginations. Imagination may seem like a frivolous luxury in a crisis, but it is actually a necessity for building future success.
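The Bayesian point raised above can be made concrete with a small illustrative sketch of our own (the numbers and the update helper below are hypothetical, not from the authors): under Bayes' rule, a belief set to exactly zero, total pessimism, is never moved by new information, while even a modest positive prior climbs quickly once favorable evidence accumulates.

# A minimal sketch, assuming a binary hypothesis such as "a new path to
# growth exists" and evidence that is twice as likely if the hypothesis
# is true (0.8 vs. 0.4), with Bayes' rule applied once per observation.

def update(prior, p_evidence_if_true=0.8, p_evidence_if_false=0.4):
    """One step of Bayes' rule for a binary hypothesis."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator if denominator else 0.0

for label, belief in [("hopeful prior (0.10)", 0.10), ("pessimistic prior (0.00)", 0.00)]:
    for _ in range(10):  # ten pieces of favorable evidence
        belief = update(belief)
    print(f"{label}: posterior after 10 observations = {belief:.3f}")

# Prints roughly 0.991 for the hopeful prior and exactly 0.000 for the
# pessimistic one: the outcome really is determined by the initial belief.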
1
Deploy Portainer with Digital Ocean – Easy Container Management GUI for K8s
Deploying Portainer with the Digital Ocean Marketplace is extremely easy. We are going to describe here how to have Portainer up and running in no time with a small set of clicks. Once you log in to Digital Ocean, on the left menu click on DISCOVER and then click on Marketplace: On the Search Bar type portainer and the Portainer Community Edition will come up; select it to start the deployment: You will land on the Portainer Community Edition installation page. Next, click on the Install App button. A pop-up dialog box will come up where you can select the Kubernetes Cluster where you want to deploy Portainer and click on Install: Down towards the end of the Kubernetes Cluster page you can follow the progress of the installation of Portainer: Once the installation of Portainer CE is finished, the next step is to install the Kubernetes Metrics Server, again via the Marketplace: Just like the installation of Portainer, you need to select a cluster. Please make sure you select the cluster where Portainer was installed: Now select Networking on the left side menu: Next, select Load Balancers and you will have an IP address that will be used to access the Portainer UI. You can copy the IP address by clicking on it. On a new browser tab or window, type or paste the IP address followed by a colon and port 9000 (for example, http://<your-load-balancer-ip>:9000) to access Portainer: The Portainer interface will load with the first step of the Portainer settings, where the initial administrator user has to be created: The second step is to connect Portainer to the Kubernetes Cluster: The final step of the Portainer settings is to select the options below and click on Save configuration: Allow users to use external load balancer; Enable features using metrics server; do-block-storage. Done! Portainer is now installed on your Kubernetes Cluster. You should now have access to the Endpoint of your cluster: New Cluster There is another even easier way to deploy Portainer via the Digital Ocean Marketplace. The steps are very similar to the ones above; the only difference is that you don't need to have a Kubernetes Cluster installed. You can start by repeating steps 1, 2 and 3 described in the First method above. The next step is to select a New cluster: This will take you to the regular Digital Ocean Create cluster page. Please configure your cluster according to your requirements: You will notice that the new cluster is being deployed with Portainer CE pre-installed: Once the installation of the cluster is finished, just follow the steps from #6 onwards described in the First method above. Hope that worked well for you, and you're up and running with Portainer. If you have any questions or comments, please drop them into the comments section below, or join us on our Slack channel.
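If you also want to confirm the deployment from your own machine, here is a minimal sketch (ours, not part of the Digital Ocean or Portainer documentation) that simply polls the load balancer address from the steps above until the Portainer UI answers on port 9000. The LB_IP value is a placeholder; replace it with the IP you copied from the Load Balancers page.

import time
import urllib.error
import urllib.request

LB_IP = "203.0.113.10"          # placeholder: use your own load balancer IP
URL = f"http://{LB_IP}:9000"    # port 9000, as used in the steps above

def wait_for_portainer(url, attempts=30, delay=10):
    """Poll the Portainer UI until it responds or we give up."""
    for i in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                print(f"Attempt {i}: Portainer responded with HTTP {resp.status}")
                return True
        except (urllib.error.URLError, OSError) as exc:
            print(f"Attempt {i}: not reachable yet ({exc}); retrying in {delay}s")
            time.sleep(delay)
    return False

if __name__ == "__main__":
    wait_for_portainer(URL)

Once the script reports a response, open the same address in your browser and complete the initial administrator setup described above.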
5
South Korea study shows coronavirus’ spread indoors
Infected after 5 minutes, from 20 feet away: South Korea study shows coronavirus’ spread indoors An outdoor dining set-up with hand sanitizers and stuffed bears to enforce social distancing in Seoul. (Lee Jin-man / Associated Press)
41
Amazon offers 'wellness chamber' for stressed staff
Amazon offers 'wellness chamber' for stressed staff Amazon plans to put "wellness chambers" in its warehouses so that stressed workers can sit inside and watch videos about relaxation. In a video shared on its Twitter account, Amazon said the "AmaZen" chamber would help staff focus on their mental health. But it deleted the post after a wave of ridicule from other social media users. The US retail giant has been repeatedly criticised over working conditions in its facilities. Amazon has not replied to the BBC's request for comment. On 17 May, the company announced a scheme called WorkingWell focusing on giving staff "physical and mental activities, wellness exercises, and healthy eating support". Describing the AmaZen booths, it said: "During shifts employees can visit AmaZen stations and watch short videos featuring easy-to-follow well-being activities, including guided meditations, positive affirmations, calming scenes with sounds." In the now-deleted Twitter video, the pod can be seen to have just enough room for a chair, small computer table against one wall, and a few small potted plants on shelves. The top panel was painted as a blue sky with clouds. But news site Motherboard described the chamber as a "coffin-sized booth in the middle of an Amazon warehouse". Some viewers were quick to re-upload the video to other accounts and criticise the tech giant for what has been labelled a "crying booth" or a "dystopian" work practice.
1
Can Transcendence Be Taught?
I HAVE, alas! Philosophy, Medicine, Jurisprudence too, And to my cost Theology, With ardent labour, studied through. And here I stand, with all my lore, Poor fool, no wiser than before. For two professors, the opening words of Goethe’s Faust have always been slightly disturbing, but only recently, as we’ve grown older, have they come to haunt us. Faust sits in his dusty library, surrounded by tomes, and laments the utter inadequacy of human knowledge. He was no average scholar but a true savant — a master in the liberal arts of philosophy and theology and the practical arts of jurisprudence and medicine. In the medieval university, those subjects were the culminating moments of a lifetime of study in rhetoric, logic, grammar, arithmetic, geometry, music, and astronomy. In other words, Faust knows everything worth knowing. And still, after all his careful bookwork, he arrives at the unsettling realization that none of it has really mattered. His scholarship has done pitifully little to unlock the mystery of human life. Are we and our students in that same situation? Are we teaching them everything without teaching them anything regarding the big questions that matter most? Is there a curriculum that addresses why we are here? And why we live only to suffer and die? Those questions are at the root of every great myth and wisdom tradition: the Katha Upanishad, the opening lines of the Bhagavad Gita, Sophocles’ Ajax, and the Book of Job among them. Job cries to the heavens, entreating God to clarify the tortuous perplexity of being human. But God does not oblige, and Job is left in a whirlwind, in the dark, just like Faust at the beginning of Goethe’s modern remake of the ancient biblical story. John’s grandfather Paul died this spring. He was 99. He was a pharmacist in a time when pharmacists were treated like doctors. Being a druggist in the early 20th century meant that you could still make drugs, which Paul did. Expertly. The medicine cabinets at his home, in central Pennsylvania, were always stocked — belladonna, morphine, phentermine — substances that are not readily available today. He taught his family to believe in the powers of modern science, to believe that chemistry and biology could solve the mysteries, or at least the fatal problems, of human life. And he believed this almost to the very end. Paul would have never told us straight out what he thought of philosophy or of our choice to study and then teach it. But in his last years, and quite to his grandson’s surprise, he suggested that it might not be a complete waste of time. He had lots of questions: Why is there evil? Is there a God? Is there an afterlife? What is the meaning of life? What did Socrates mean when he said that the unexamined life is not worth living? Wrapped in illness before pitching forward into dementia, the elderly man had serious questions. Clancy’s mom is still alive and thriving, in her 70s. But she recently wrote to him, as though he might actually know the answer, “Is there something I should be doing to prepare for death?” She wasn’t talking about the practical issues of estate management and end-of-life care and all the rest of the scary but sensible decisions we have to help our parents make as they get older. She wasn’t talking about the psychological issue of how one might confront death itself, with techniques like mindfulness training or terror management.
She was talking about the most important question there is, the one that made the ancient Greeks so notoriously anxious about the inevitability of the end of life: What comes next, and how can I be ready? The immanence of human finitude — the fact that we’re dying right now and not in some distant future — should create the impetus for philosophical reflection. Most philosophers know this in some abstract sense. The Platonic dialogues are set against the backdrop of the trial and death of Socrates for a reason: The difficulty of facing death is that it comes with the sudden challenge of giving a good account of your life, what Plato called an apologia. ADVERTISEMENT When dying finally delivers us to our inevitable end, we would like to think that we’ve endured this arduous trial for a reason. But that reason cannot, unfortunately, be articulated by many of the academic disciplines that have gained ascendance in our modern colleges. Why not? Why shouldn’t an undergraduate education prepare students not only for a rich life but for a meaningful death? B iology offers certain answers about how we live and die. It can describe apoptosis, autophagy, necrosis, and general senescence, the programmed death, dismantling of, injury to, and deterioration of cells. But those descriptions, like the terms they trade in, seem abstract, alien, detached from the experience of living and dying. When a 98-year-old asks, “Why am I in pain?” the biologist has answers: vasoconstriction, dehydration, toxicity. The evolutionary biologist might say that pain is an adaptive response to the world’s dangers. But those aren’t the type of answers that will satisfy a dying man, or Faust for that matter. Faust’s “Why?” is voiced in a different register, one that aches for a cosmic or existential answer. Might cosmic answers be found, then, in the heavens and the study of them? Faust, escaping his library, emerging into the night’s open air, screams his questions at the stars. In our modern way, we do the same. We ask astronomers and astrophysicists to explain the evolution of the universe, the way that all things come into being and are snuffed out. But in regard to the meaning of this cosmic dance, physics itself remains silent or, at least, inexplicable. Faust’s foray into the night air terminates abruptly when the Earth Spirit answers in its terrifyingly opaque way. In the face of that, the little man simply cowers. Despite our star-directed sciences, it’s no different today. The problem with the physical sciences — or with the catchall that Faust called “medicine” — is that when it comes to the difficulties of mortality, scientists are committed to a particular methodology, which necessarily avoids satisfying existential answers. End-of-life issues are subjectively felt; there is a singular quality of experience to each passing life. This is what Heidegger means when he claims that death is a person’s “ownmost possibility.” When an old man asks, “What is the meaning of life?” he simultaneously queries the infinitely more particular question: “What is the meaning of my life?” Which is also the question: “What might be the meaning of my death?” Any satisfying answers would have to address what this meaning might be from the inside, in terms that could be subjectively felt. The physical sciences, on the whole, are wed to empirical, objective investigation, to examining things from the outside. 
They are numb to the felt sense — the frustration, regret, terror, guilt, uncertainty, relief, joy, peace — that prickles a life that is listing toward the grave. ADVERTISEMENT This is not to say that Western philosophy and theology do a much better job. According to Faust, they don’t. Theology is the study of religion, not religion itself. Theology, true theology, has the pesky consequence of disrupting belief, not solidifying it. If you are looking for answers about the meaning of life, the type that allows you to sleep at night, one should not turn to a theologian. Reading Aquinas’s Summa Theologica is not, even for the most devout, a touching or reassuring experience. It is a logical justification for belief that one already has, but has any dying atheist read it and become a believer? There is a reason that proofs for the existence of God are assiduously avoided by many teachers of the philosophy of religion: They are dead boring, the type of tedium that can actually convince one that there isn’t any grand purpose to life. Go ahead, read the Summa. Persuade us that it is gripping — or even convincing. Moreover, as Kierkegaard argued, rationally knowing that God exists as a consequence of some proof is different than believing that God exists in the relevant way. It’s a bit like the Oracle tells Neo in The Matrix: “No one can tell you you’re in love, you just know it. Through and through. Balls to bones.” If there is any consolation in faith, it won’t come from what someone else has told you. Traditional Western theology lacks what Faust eventually craves: a handle on the human experience. As a discipline, theology does not spend most of its time exploring the inner, felt sense of transcendence, what William James called the “varieties of religious experience.” Theologians often skirt the felt need, the experiential craving, for transcendence. Who needs transcendence? We suspect that human beings do. Of course, it is notoriously difficult to say what transcendence is. But we take Josiah Royce seriously when he suggests that the need for transcendence is real and experientially felt by most people at one point or another. It is experienced, according to Royce, as the obverse of feeling completely, utterly, and totally lost. The prospect of losing one’s life or mind brings this transcendental need into sharp focus. How else to make sense of, overcome the terror of, having your toenails grow, die, and fall off; the experience of losing one’s mind; the experience of scratching one’s arm till it bleeds; of not recognizing your loved ones; of slowly sloughing off flesh until nothing is left? Theology doesn’t go there. But we do, headlong, unstoppably. And we would like to know that it hasn’t all been for naught. Olivia Bee Western philosophy has often followed theology in erring in similar ways. For much of its modern history, it has lusted after the observational powers of the sciences. As modern science took over Europe, it put serious constraints on the love of wisdom. Bacon, Descartes, Hobbes, Hume, Kant — the titans of modern philosophy were, like the bench scientist, bent on describing existence rather than plumbing its deepest meanings. ADVERTISEMENT At best, their rational systems masked the anxiety that Faust experienced, one that stemmed from the sense that despite the pretenses of reason and logic, human life was at its core largely irrational. We live only to suffer? That makes absolutely no sense. 
At one point, philosophy, according to Socrates, was a preparation for death, a way of getting one’s existential house in order before it was blown away, or because it needed to be in order for whatever might happen next. But this original intent faded in philosophy’s growing desire to become a branch of math or science. C ompleting the first part of Faust, in 1806, Goethe wrote at a time when the rationalism of Descartes had flourished since the mid-1600s but was about to come under attack. The rationalist could ascertain truths about math and logic, like X=X, but could say pitifully little about the natural world. What rationalism gained in certainty, it gave up in descriptive power. Empiricism — the works of Bacon and Hume, for instance — had also had its day, but its models of the natural world were addressed chiefly to practical concerns. While science provided certainty on smaller, provable points, it lost certainty and even the power of imaginative conjecture on some of the important, larger ones. Goethe wrote in the aftermath of these theoretical failures and, indeed, on the heels of another German, Kant, who had done his best to unify, and therefore preserve what is best about, rationalism and empiricism. Of course, according to Goethe, Kant had also come up short: In trying to wed the two principal theories of modern thought, he generated yet another abstract system that had little to do with the bone-and-marrow realities of men and women. Post-Kantian philosophy, the type that Goethe helped to generate in the early years of the 19th century, was defined by its dissatisfaction with, among other things, the conceptual remove of Kant’s critical project, the sense that it had lost touch with the lived experience of life and action. Kant’s philosophy was supposed to be about freedom and human autonomy, but his books were regarded, even in his day, as dry and lifeless. They were “correct” as far as they went, but for thinkers working in his wake, they didn’t go nearly far enough. Kant was missing the felt sense of human meaning. On the evening that John’s grandfather Paul let his grandson hear him talk about love and see him cry, he also shared a story that had been pointedly redacted from his family history. He’d grown up in Altoona, Pa., a coal-mining town that, even in the 1920s, was beginning to run aground. He’d fallen in love with a young woman named Hope, John’s grandmother, from an even more dilapidated community called Alison 1, a “patch town” owned by the Rainey coal-and-coke company of Uniontown. Hope and Paul came from families that were close-knit — so close that they never fully rejoiced at the prospect of marrying off their children. So under cover of night, the two of them eloped to Maryland, and then made for New York City. At one point in the distant past, Paul had known the thrill of experience, a sense of love and freedom that made life oh so worth living, but over the course of middle age it had been tempered, or tamped down, by life’s practicalities. And only in his final days was Paul willing or able to return to those forbidden sentiments. ADVERTISEMENT Goethe and his contemporaries, like Schiller, would have regarded this as tragic and instructive in equal parts. They called their readers to an “education of the sentiments,” which quickly became a touchstone for educators of the 19th century. 
It was probably drawn from Adam Smith and his theory of moral sentiments, and reshaped by the Romantic poets, who held that a particular orientation among experience, emotion, and nature was key to being fully human. The sentiments, or subjective feelings, were necessary for the educated person to motivate and sustain ethical relations and to develop one’s own fully human capacities. One could read, write, and speak about freedom, but to actually be free one had to thrill with the sheer possibility and then allow this sense to determine one’s actions. The education of the sentiments had little to do with book learning and everything to do with the lessons of human experience, the ways in which it can be lastingly satisfying. This is what Faust craves most: to experience everything. Or better yet, to learn how human experience, transitory and fragile, could come to mean, if not everything, at least not nothing. It is tempting to think that Faust desires an infinite range of experience — to traverse its full horizon — but we suspect that what he yearns for is depth and height, a strange experiential quality that can occasionally pervade a fully human life. If philosophy of the 17th century was defined by the “epistemological turn” — the desire, bordering on obsession, to define the nature of objective truth — writers in the 19th century witnessed what might be called the “experiential turn,” a continuing attempt to explore the subjective inside intellectual life. That culminated, of course, in the movement we call Existentialism. G oethe’s demand to concentrate on, and enrich, experience was echoed by American transcendentalists of the 1830s, and was well fitted to a nation that lacked longstanding tradition but brimmed with opportunity and possibility. For Emerson, Goethe was “the Writer,” who, “coming into an over-civilized time and country, when original talent was oppressed under the load of books and mechanical auxiliaries and the distracting variety of claims, taught men how to dispose of this mountainous miscellany and make it subservient.” But subservient to what? For Goethe, the answer was complicated. ADVERTISEMENT His prioritization of experience over the traditional life of the mind was premised on a deeper commitment to reshaping culture (Bildung), and to the belief that ideas, on their own, without the corresponding sentiments, could do pitifully little to transform a society. Goethe may have helped to initiate the experiential turn, but to the extent that sentimental education remained instrumental, hinged tightly to societal reform, the revolution had yet to be fulfilled, Emerson thought. Goethe’s “is not even the devotion to pure truth,” the American wrote, “but to truth for the sake of culture.” And this orientation, one that elevated culture writ large over the cultivation of individuals, kept Goethe from, in Emerson’s words, “worshiping the highest unity; he is incapable of a self-surrender to the moral sentiment.” The need to have authentically lived and also to know what to do about dying are knotted together in a way that none of our usual intellectual approaches can adequately untangle. Emerson would not make a similar mistake. He published his essay, “Experience,” in 1844. It opens by revisiting the despair, frustration, and confusion that Faust expressed 40 years earlier. But this existential crisis, unlike Faust’s, was not the stuff of fiction, and it wasn’t expressed only to be overcome in the grand movement of Bildung. 
Emerson’s son Waldo had died two years earlier. The boy had contracted scarlet fever at the age of 5 and succumbed in a matter of days. “I take this evanescence and lubricity of all objects, which lets them slip through our fingers then when we clutch hardest, to be the most unhandsome part of our condition,” Emerson wrote. Unhandsome, indeed. For all of its uncertainty and transience, experience assured Emerson of one thing: It would be over all too soon. This is perhaps the hardest, but also the most profound, lesson of experience, and one that many people learn in the twilight of life. The trick, if we understand it, is to learn before it’s too late. “Experience,” what became a seminal essay in the American philosophical canon, was articulated not in order to be employed by the grand movement of culture, but to refocus on the subjective sense of the most pressing of human problems. Emerson wrote: Did our birth fall in some fit of indigence and frugality in nature, that she was so sparing of her fire and so liberal of her earth, that it appears to us that we lack the affirmative principle, and though we have health and reason, yet we have no superfluity of spirit for new creation? Historically scholars have skirted, if not explicitly fled, that question, retreating to the traditions, institutions, systems, and norms that seem to give some sort of ballast to an otherwise precarious existence. But that has been a flight from experience, a type of transcendence that amounts to a monumental feat of escapism. After the death of Waldo, however, flight was not an option for Emerson. Experience: It’s a noun, it’s a verb, but ultimately, for a host of scholars in the 19th century, it was an inescapable command. Experience — all of it. “It is not length of life,” Emerson instructs, “but depth of life.” ADVERTISEMENT Olivia Bee When one tries to sound the depths, Emerson concludes that it is possible to listen for a quiet inner voice that never, even in our darkest or most ecstatic moments, forsakes us, a voice that says, “Up again, old heart.” This perseverance in the midst of experience, rather than any transcendental dreams for cultural revival, was at the heart of classical American philosophy’s education of the sentiments. It was, at all points, geared toward what Emerson’s young friend Henry David Thoreau would call improving “the nick of time.” Each nick, each critical moment, singular and always present, can, for the time being, be occupied and improved. Thoreau went to Walden not as a demonstration of some environmentalist agenda but to “live deep and suck out all the marrow of life,” to cut, to mark, with pressure and precision, the time he’d been allotted. America of the early 19th century was routinely pigeonholed by European thinkers as having a climate wholly uncongenial to philosophers. But that wasn’t exactly true. It was uncongenial to a certain type of abstract thinker, and some Europeans began to acknowledge American philosophers’ exploration of the relationship between action and thought in a way that might allow one to face longstanding existential dilemmas. Emerson, Nietzsche wrote, is “a good friend and someone who has cheered me up even in dark times: He possesses … so many possibilities, that with him even virtue becomes spiritual.” The Romantic impulse ran deep with both thinkers: Experience was life-affirming not in the abstract but in the emotional and intellectual tenor of an individual. 
Philosophy at its best was to be learned by rote — not in the sense of mindless memorization but in the sense of learning something by heart. And this most personal of knowledge was meant to give individuals the courage to determine their own lives and to ask a question that Nietzsche voices in Thus Spoke Zarathustra: “What is the greatest experience you can have?” How deeply or gently or subtly will you make your nick of time? T hose questions seem to have no place in academe. Is that because the experiential turn has run its course? Or has it been only temporarily interrupted? The question of “the greatest experience” should be one that we resuscitate in our colleges. Lessons, both narrow and grand, on drawing the marrow from life are, when you think about it, the most crucial and timeless of all, to the self-seeking late teen and the purpose-seeking nonagenarian alike. ADVERTISEMENT At 81, John’s grandfather, Paul, wanted to see the Grand Tetons one last time and asked John to chaperone the outing. The whole family thought it was ludicrous: an old man with a mechanical hip hiking through the woods. They were right. The elderly fellow went “ass over tincups,” in his words, and had to be taken to the emergency room (a fact that didn’t at the time get back to his hand-wringing daughters). At 85 he wanted to ride a bike again, despite not being able to get his leg over the crossbar, and again enlisted the family philosopher as an accomplice. Another secret trip to the emergency room. A year later he wanted to talk about love, despite having assiduously avoided the word for most of his life. This time, something more notable than the emergency room: tears. “We should do this again,” he said, after he dried his eyes. There was something about the quality of the experience, despite its difficulty, that continued to beckon. So what exactly is the allure of experience? Thoreau gives us a hint: “You must live in the present, launch yourself on every wave, find your eternity in each moment. Fools stand on their island of opportunities and look toward another land. There is no other land; there is no other life but this.” That might sound as if he were endorsing a shallow form of hedonism, but we don’t think so. Experience is undergone and absorbed subjectively, in the present — that is to say, in the same register as Faust’s most personal of existential questions. Death might be one’s ownmost possibility, but so is experience. Plumbing the depths of experience allows one to own up to life — to say this life was, for better and for worse, “my own.” In his final months, Paul forgot everything — his keys, his grandson, his name — everything. But one morning, a few weeks before his death, he remembered falling off his bike. “I,” he paused to catch his breath, emphasize the word, and press on, “did that,” he said grinning. ADVERTISEMENT He articulated part of the draw of experience: It is, at every moment, personally felt, a marker of a life lived, if not with grand purpose, at least with authenticity. The ancient philosophical imperative to “know thyself” would be impossible to satisfy without keying into experience. At the brink of the 20th century, William James, who inherited Emerson’s transcendentalism and refashioned it in his American pragmatism, claimed that it was “the zest” of experience that helped make life significant. 
There is a type of Promethean self-reliance implied in this discussion of experience, a willingness to live in the moment and claim “no other life but this.” But there is another aspect of experience that takes us beyond the confines of modern subjectivity and guards against the charge of solipsism that has often been leveled against the experiential turn. Thoreau’s direction is “to find your eternity in each moment.” The “your” is important, but so to, and equally, is the “eternity.” The “your” and the “eternity.” There’s the intersection where you’ll find a grandfather’s quest for deep experience and a mother’s appeal for guidance toward some kind of transcendent perspective in the face of mortality. As loving children, and as philosophers, we feel the urgent call for meaningful answers. The need to have authentically lived and also to know what to do about dying are knotted together in a way that none of our usual intellectual approaches can adequately untangle. It is related to the strange way that experience is both wholly one’s own and never fully in one’s possession. Experience is, by its very nature, transcendent — it points beyond itself, and it is had and undergone with others. So how could John’s grandfather have reconciled himself with death, and how can Clancy’s mom prepare for it? How can we grapple and help our students grapple with it? Surely it couldn’t come down to a simple reading list; a well-planned course; a humble, fundamental step back to view the why and wherefore of our knowledge and its conveyance. ADVERTISEMENT Then again, none of that could hurt. It must be part of our jobs, as college teachers, to launch our students on the search for something larger than their immediate concerns, to confront them with the challenges that are presented by such intractable questions as the meaning of suffering, life, and death. “One never goes so far as when one doesn’t know where one is going,” Goethe wrote elsewhere, and that’s a big hint. The elusiveness of knowing about life and death might be the point. Like falling in love, or even like remembering riding a bike, thinking about death might be the willingness to embrace what is unknown, what is unknowable. The cheerfulness displayed by that old skeptic Socrates in the face of death is apt for one wise enough to admit that he’s never known anything about the most important matters. Faust’s despair is not a consequence of the limitations of his knowledge but the frustration of a mistaken attitude. Yes, in the face of life and death, all that knowledge amounts to nothing. Of course it does. The meaning of life and death is not something we will ever know. They are rather places we are willing or unwilling to go. To feel them, moment by moment, to the end, authentically, thoughtfully, passionately — that is an answer in itself. And for us as educators, to show our students the importance of trying to go to those places — that may be one of the best things we can teach them. John Kaag is a professor of philosophy at the University of Massachusetts at Lowell. His book American Philosophy: A Love Story is out this month from Farrar, Straus and Giroux. Clancy Martin is a professor of philosophy at the University of Missouri at Kansas City. His books include Love and Lies: An Essay on Truthfulness, Deceit, and the Growth and Care of Erotic Love (FSG, 2015).
2
Covid recovery will stem from digital business
Covid recovery will stem from digital business
8
When Trump Was My Guest
Daniel Ferguson, Aug 13, 2020. Recollections of a long-time television producer “No, no, no, put him through… he’s a good guy.” Did Donald Trump really think I was that stupid? A self-proclaimed billionaire businessman who doesn’t even have a computer on his desk certainly understood speakerphones. He controlled the fact that I would hear him say “no, no, no, put him through, he’s a good guy.” (We had never met, nor spoken at that point.) No doubt it was a “trick” he employed in the hopes of making me, the caller, think he likes me, so I will in turn like him. The reason for the phone call was for me to pre-interview Donald in advance of his first guest appearance on “Late Night with Conan O’Brien” the following day, December 10, 1997. From seeing Donald several times on David Letterman’s show, I knew his absurd persona made good television. He was a press-obsessed rich guy who would withstand searing ridicule and keep the appearance of geniality by forcing a smile, while rarely seeming to get the jokes at his expense. In our call, I hoped to find some potentially funny areas of conversation for his appearance on the show. “Hello… I can hear you,” I said, pointedly not addressing him as “Mister Trump” as instructed. I knew he abhorred when people called him “Donald,” but thought if I had said, “Hello… Donald, I can hear you” the call might end immediately. So I just avoided it completely. If Walter Cronkite wouldn’t allow me to call him “Mister Cronkite,” I certainly wasn’t about to call Donald “Mister.” “Listen,” he yelled into the speakerphone, “I’m in the middle of a very important meeting, so make it good and make it fast.” I wanted to hang up. Nothing good would come of the call. It would not be a back-and-forth conversation, but a browbeating one-sided monologue with no other sounds during the call that indicated anyone else was in a “very important meeting” with him. Donald dominated our brief 12- to 15-minute pre-interview, primarily boasting that he was now worth several billion dollars thanks to New York’s hot real estate market, while briefly plugging his other business ventures. In thinking, “how can I make a billionaire bragging about his wealth entertaining for a late night comedy show,” it occurred to me, “How much money does someone who claims to be a multi-billionaire carry around in his pocket?” Whether $10,000 or $10, I was sure my boss, the host Conan O’Brien, could make it funny. I didn’t tell Donald that the question would arise in the interview. In front of 210 people in Conan’s live studio audience, Conan asked, “How much money do you have on you right now? How much money? What’s in your pocket?” Donald responded, “Lemme see, there is something in the pocket” and then reached down and pulled out a condom, held it up, then returned it to his pocket. Conan said, “Lemme see that! What is it?” and reached across Donald’s body. Donald pulled it out, held it over his head and said, “Safe sex everybody!” The audience reaction was a collective gasp, along with some hooting and laughter, but the impulsive reveal must not have been received as he had assumed it would. I became aware of that fact moments later when the interview was over and Donald walked off the set towards the exit. As soon as he and I stepped out into the hall beside the studio, Donald turned towards me, pointed and yelled, “Who the fuck does he think he is, reaching into another man’s pocket? I’ll never do this fucking show again” and stormed out.
True to his word, Donald only appeared on the show seven more times. Billionaires aren’t a natural fit for late night comedy shows, but politicians can be. John McCain, Bob Dole and Al Franken were great guests thanks to their senses of humor, comedic timing and sharp instincts. As a guest, Donald was nothing like the generous Bob Dole, who would call weeks ahead of his appearances pitching jokes he’d come up with to potentially share on the show. I remember once answering my phone in the middle of the workday and hearing, “Bob Dole here. Just wanted to run some jokes by you.” And for the next few minutes, he read off one and two line jokes that were hit and miss, with a few home runs. His comedic brain was so fast that at one point I didn’t realize he’d finished telling his joke and there was silence. I said, “Is that it?” He said, “Yeah… you couldn’t even give me a courtesy chuckle on that one?” I said, “I’m sorry Senator I didn’t realize you’d finished telling the joke.” He said, “I guess I won’t be doing that one on the show.” I laughed harder at Bob Dole’s reaction to the reception to his bad joke than anything Donald has ever said in an attempt to be funny. Donald never came with jokes or funny ideas, but once the cameras were on he seemed willing to accept being made fun of to his face in exchange for national television exposure. He was unique in that regardless of the questions asked or the subjects raised on the pre-interview, Donald would answer a question he wanted to answer. If asked about Trump Water, he might clumsily pivot and declare “The Apprentice” was the №1 show on television. (It wasn’t.). “It’s number one…” he would say repeatedly on our pre-interviews, “and you better make sure I hear that tonight on the show. Okaaaaay? You understand? We are on the same network, riiiiight?” The truth was I’d always found Donald’s lack of self-awareness fascinating to observe. I never missed an episode of his reality television show “The Apprentice.” Yes, I worked at NBC, the network that aired the show, but I really loved watching “The Apprentice.” My favorite moments were when contestants would be “rewarded” for winning a challenge with a visit to see Donald’s actual apartment in Trump Tower. I would giggle watching the contestants’ faces as Donald led them into the gaudiest golden foyer outside Versailles. “What do you think?” He would ask the bewildered contestants. And they would somehow gain their composure to say something like “oh my god,” as one might when witnessing a fiery car crash. When Donald returned to “Conan” during “The Apprentice” days, I pulled aside Jim Dowd, the NBC publicist assigned to Donald, and asked, “So how’d you convince him to come back?” He said, “What happened last time?” As I told him about the condom display, he said, “Jesus… why? Why would he do that?” I said, “Because he’s Donald Trump, Jim! He’s a moron and he proved it when he pulled out a condom on network television.” Jim defended his charge and said, “Look, at least I got him to come back.” That he did, and Donald returned again and again to the show. At the time Jim was new to Trump world, but Donald seemed to really like him and within a few years, Jim left NBC and formed his own PR shop that had Donald as his biggest client. I produced Donald eight times for his appearances on “Late Night with Conan O’Brien” between 1997 and 2008, during which he hyped Trump University, Trump shirts and neckties, even Trump ringtones. 
Once in 2005, I’d planned for discussion of then newly launched Trump Vodka, figuring there could be some laughs to be had if Donald were to talk about how it’s the world’s finest super-premium vodka, yet he claims to have never tasted alcohol. However, when I mentioned that Conan might bring up the newly-launched Trump Vodka on the show, Donald said, “I don’t wanna talk about that. I do it for charity. My brother Fred was a very bad alcoholic, died young. It’s for charity. Come on, let’s talk about something else.” It was oddly touching to hear that this hustler was showing restraint and even further that he was doing something for charity in the name of his dead brother. Of course, despite saying so on national television, there’s no evidence Donald ever gave even $1 to any charity from the proceeds of his Trump Vodka before the product’s trademark was reportedly abandoned less than 2 years after its launch. So much for the product’s slogan, “Success Distilled.” Donald may never imagined that at some point in the future, scores of investigative journalists would look into his many dubious claims, whether about the charities who were said to have benefited from him, the source of his wealth or, as we learned later, the woman he paid off. Nor could many who have spent time with him over the years have ever imagined him as President of the United States. I certainly didn’t. In fact, over the years when people learned about what my job entailed (attempting to make celebrities funny on television) I was often asked, “who is the best celebrity you’ve ever met?” or “who is the worst?” I would usually say, “I don’t know who the best or the worst is, but I can tell you who the dumbest is… Donald Trump.” I would tell the story of that first “no, no, no, put him through he’s a good guy” phone call, the condom incident, and Donald’s prolonged insistence that “The Apprentice” was the №1 show on television despite years of steadily declining ratings that told a very different story. The truth was that by the time Donald presided over his final season hosting “The Apprentice,” the show had about a quarter of its premiere season ratings. I had so little regard for Donald that when, after flirting with the prospect for years, he officially announced his run for President, I was giddy. I told anyone who would listen, “This ‘run’ for President is going to ruin his life! The whole world is going to see what a vacuous liar he is and his world will crumble! He’ll be an even bigger laughingstock.” Less than a month later, I saw the headline “Trump Attacks McCain: ‘I like people who weren’t captured” and felt deflated. Only a month after his entrance into the Presidential race, I assumed the five-time Vietnam War draft dodger would have to drop out and my prediction for the world learning the truth about him wouldn’t be coming after all. Wrong again. I truly believe that Donald only ran for President to boost his brand, that he never actually expected that he would even be competitive in the race. I saw no evidence he had even the slightest hint of the intellect, curiosity, attention span or character required to be president, and figured that he just dreamed of the trappings of the office and to be seen as a winner. I assumed he would hire professionals to run the government if he beat the odds and won over Hillary Clinton. 
As Inauguration Day 2017 approached and media reports indicated that Donald hadn’t yet bothered to fill the vast majority of the jobs open in the executive branch, I suddenly had a flash of memories from the mid-90s, shortly before I joined NBC. I was working as half of a two-man “bi-coastal” PR firm when I got a call from an executive recruiter out of Chicago. We spoke on the phone for a while, and then a few weeks later she called to say that she wanted to talk to me about a job opportunity I might be interested in. She was staying at the Plaza Hotel on Central Park South. When I called up to her room at the appointed time, she told me that we would be dining in the Edwardian Room and that my first test was getting us “the best table in the house” despite not having a reservation. My pleading for the corner table was met with more than a little resistance from the hostess, and so I failed that first test, as noted by the executive recruiter upon her arrival at dinner. A few minutes later, Donald walked into the restaurant and took a seat at the corner table. Seeing the owner of the hotel at the best table in the house put me at ease about failing that first test. As we ate dinner, I learned the job was a media relations position in the real estate industry. The executive recruiter spoke a bit about the generous pay, long hours and demands I’d be facing, but still hadn’t told me the name of the company. Finally, after our dinner plates were cleared, the executive recruiter asked, “Are you ready to meet the client?” I said, “Absolutely. Will it be tomorrow?” She said, “No, now. Tonight. Here.” I looked over at the corner table and said, “Him? THAT’s the client?” She seemed a bit taken aback and said, “Yes, it’s the Trump Organization.” I said, “Oh no. I wish you had told me it was HIM.” She got defensive and said, “I don’t understand your tone. The Trump Organization is one of the most successful real estate companies in New York.” I said, “But you live in Chicago. If you lived here and saw him in the gossip columns every day about what models he’s dating and who he’s fighting with, you’d know he was a buffoon. He’s a joke.” The executive recruiter grew more frustrated with each syllable I uttered. I said, “I don’t think it’s a good idea for you to introduce me because there’s absolutely no chance I’d ever go to work for HIM.” I never heard from that executive recruiter again, and in fact I left the world of PR for a job in television a few months after that dinner. Then, in my first six months producing guests for Conan, I finally met Donald when he appeared on the show and I was assigned to produce him. After taking that trip down memory lane about the would-be job with the Trump Organization I refused to even meet Donald for, I remembered Jim Dowd, Donald’s PR guy through his days on “The Apprentice.” I thought, “Whatever happened to Jim Dowd?” So I did a Google search on him. The first article that appeared was a transcript from PBS’ “Frontline” from September 2016. In the transcript, Jim offered observations and anecdotes about his time with Donald, especially his obsession with ratings and his insistence the show was №1 despite publicly available rankings that put the show at number 72. 
At one point, he related a conversation where Donald asked, “Jimmy, would you like to be my (White House) press secretary?” I thought, “Why wasn’t Jim on the campaign, never mind in the conversation about the job of Press Secretary?” When I returned to the Google search, the second entry made my heart sink. It was a news article about Jim Dowd, 42, who died just days before the “Frontline” special aired. I have no idea what caused Jim’s tragic death at only 42 years old, but I am crushed by the thought of his three young children losing their dad. I also realized that Jim’s death means there is one less person who knows the true pathology of pre-Presidential Donald up close and personal. Donald was inaugurated as the 45th President of the United States on January 20, 2017. As the inauguration proceeded, I somehow felt a small sense of gratitude to have formed such a poor impression of him decades ago, thanks to all the items Donald placed about himself in the gossip columns of the New York tabloids, his laughable appearances on Letterman and more, and my own perception of his deceitful character. I also felt a sense of complicity for whatever role I may have played in helping normalize Donald in his improbable ascendancy to chief perpetrator of the most tragic chapter in American political history. Daniel Ferguson was a producer for Conan O’Brien for over 21 years, from 1997 to 2018, and was recognized with eight Emmy Award nominations.
1
A Decade-Long Affair: The Making of Maquette
A birds-eye view of Maquette's hub area. (Screenshot: Graceful Decay) I need to tell you about Maquette. No, really. It’s a need. I’ve played a lot of wonderful games while working at Kotaku, but none have resonated with me as much as Maquette, a first-person “recursive puzzle game” released this month for PlayStation 4, PlayStation 5, and PC. Were I to review the game, the whole text would amount to, simply, “Play it.” But I wanted to know more, and to learn more, and to tell you more, so I hit up Hanford Lemoore, the game’s creator, director, and lead engineer, to get the full story. I also wanted to know if I could squish myself. Maquette spent about a decade in the oven. In 2011 Lemoore showed it off at the Game Developers Conference in San Francisco, where Kotaku received a firsthand look. Erstwhile Kotaku EIC Stephen Totilo described a “game in which the only thing you can do is pick things up and put them down” that left him “in awe.” You could freely explore a walled city, wherein one passageway was blocked by a giant red cube. Inside a central building, you’d see a small diorama — a maquette — of that exact walled city. Pick up and move the cube in the diorama, and the cube will move identically in the normal-sized world, allowing you passage. There was also a puzzle involving a chasm and a key. Traditionally, in video games, keys unlock doors, right? But in that initial prototype, a key served as a bridge. In the normal-sized world, you’d come across a chasm, one you wouldn’t have a triple jump or a jetpack or any other video game tool of that nature to help you cross. So you’d pick up a key — one of the few objects at your disposal — and drop it over the chasm in the diorama. Look at that: a way across. “How this game came about is, first, you build the recursion engine, the recursion simulation, and that’s not even a game at that point. It’s like the difference between having a character that can jump and having an entire level with obstacles and goals,” Lemoore told me over a Zoom call recently. “I just had this world-within-a-world simulation running. […] It’s only a fun simulation if you can pick things up.” Gif: Graceful Decay / Kotaku That’s still the primary way you interact in Maquette: Pick things up and put them down. (OK, technically, you can also jump, but it’s barely a bunny hop.) The game’s world consists of multiple, coexistent, proportionally rendered planes of different sizes. There’s the “normal” world, a four-sided town square with a dome overhead. In the central area here sits that maquette, a miniaturised carbon copy of the normal world, complete with a dome itself. But if you look beyond the walls of the space you’re in, you can make out an even bigger world, with even bigger versions of all the stuff you see in your normal-sized world. And according to Lemoore, it just keeps going and going. “Mathematically, there’s even a world even larger than that,” Lemoore told me. “It’s like the size of Skyrim or something.” Anything you do on one plane will affect all of the other planes. Let’s say you need to get into a house, but it’s absent a stoop or stairs or any way up, really. Drop a tiny ramp next to the tiny version of that house in the maquette and, in the normal world, you’ll now have a regular-sized ramp to help you into the house. You can also use this mechanic to tweak the size of objects as needed. 
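That “world-within-a-world simulation” is easier to picture with a toy model. The sketch below is purely illustrative — it is not Maquette’s engine code, and the scale ratios are invented — but it shows the trick Lemoore describes: each object has one canonical state, and every plane simply renders (and edits) that state at its own scale, so moving the miniature cube moves the full-size cube identically.

```python
# Illustrative sketch only -- not Maquette's actual engine code. One logical object,
# mirrored across planes of different scale: edit it on any plane and every other
# plane sees the same change at its own scale.

from dataclasses import dataclass

# Assumed ratios; powers of two keep the arithmetic exact for the demo.
SCALES = {"maquette": 0.0625, "normal": 1.0, "giant": 16.0}

@dataclass
class WorldObject:
    name: str
    position: tuple  # canonical position, expressed in "normal"-world units

    def position_in(self, plane: str) -> tuple:
        """Where this object appears on a given plane."""
        s = SCALES[plane]
        return tuple(c * s for c in self.position)

    def move_in(self, plane: str, new_pos: tuple) -> None:
        """Moving the object on any plane updates the single canonical state."""
        s = SCALES[plane]
        self.position = tuple(c / s for c in new_pos)

cube = WorldObject("red_cube", (10.0, 0.0, 4.0))
cube.move_in("maquette", (0.25, 0.0, 0.5))  # nudge the diorama copy...
print(cube.position_in("normal"))           # ...and the full-size cube follows: (4.0, 0.0, 8.0)
print(cube.position_in("giant"))            # the giant world sees (64.0, 0.0, 128.0)
```

The ticket puzzle described next falls out of the same single-source-of-truth idea: set an object down, walk to a differently scaled plane, and you can pick up “the same” object at that plane’s size.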
For instance, in one early level, you need to find a golden ticket — one of those serrated, olde-timey raffle-style ones — to drop into a ticket box, which will then unlock a door to a new area. Tracking down the ticket is a breeze; the game more or less guides you right to it. The problem is that the ticket is way too big to fit into the ticket box. To scale it down, you have to place it somewhere in the normal-sized world, make your way to the maquette, pinpoint the ticket’s location there, pick up the smaller version, and carry that to the ticket box. Doors unlocked and opened, by any means at your fingertips. So yeah, Maquette has obviously changed a lot since 2011, but the core concept has held fast. In fact, both of the quandaries shown off in the GDC prototype — the cube and the key — show up one way or another in today’s iteration of the game. If you want, you can even drop one of them right on your head. “[If] you drop an object over the dome where it might crush you, instead, it bounces on the dome and rolls off. And that’s a live physics simulation. So if you actually back out of the world while that’s happening, you can see two objects of different scale bouncing and falling in the exact same way. And if you want, you can crush yourself there. You can drop the cube on the very, very top of the dome and then, as it’s rolling down, you can walk out of the dome and stand there and the cube will come down and hit you on the head,” Lemoore told me. “But it doesn’t really affect you. We don’t have lives in this game. We don’t have deaths.” Maquette stands out because it’s more than just a game in which you pick things up and put them down to solve increasingly complex puzzles and sometimes drop those things on yourself. It’s also a love story. At the centre are two young, art-obsessed San Francisco hipsters named Michael (voiced by Fringe’s Seth Gabel) and Kenzie (voiced by Gabel’s real-world wife of two decades, Bryce Dallas Howard, whom you may recognise from, I don’t know, pick one of these enormously popular movies). Maquette casts you as Michael, who pulls double duty as the game’s narrator, recounting his relationship with Kenzie via handwriting superimposed at various points over the dreamlike landscapes you stroll through. Screenshot: Graceful Decay / Kotaku “We get this entire relationship, but we get it from a single moment in time,” Lemoore said. “You can think of this game taking place across an entire relationship. But really, the game takes place in one evening of [one] person writing a letter and just remembering the entire relationship.” Michael and Kenzie initially meet at a coffee shop. In the background, you can hear the soft murmur of a caffeinated yet hyper-focused crowd. A sharp ear will pick up on the creak of a door that hasn’t had its hinges greased in ages. You don’t need to see the place to know exactly what it looks and smells and feels like. (Maquette doesn’t feature any cutscenes or motion capture. The story is told entirely through voice acting and some expertly deployed sound design.) Kenzie asks Michael if she can sit down. Ceramic clinks. Whoops. She’s spilled her latte. “Oh my gosh, I’m so sorry,” she says. “It-it only spilled a little,” he says. “Gosh, is your sketchbook ok?” “Oh, can I see it?” Thus begins a grounded, relatable romance — a common enough occurrence in novels, rom-coms, and sappy television shows, sure, but markedly less so in video games. Mild spoilers for Maquette in this next section. 
First, the duo gets in the habit of drawing together in the parks of San Francisco. They continue hanging out more frequently, cautiously testing the waters of mutual affection. Kenzie invites Michael to a party at her house. He shows up early. She’s late. In the meantime, a friend of Kenzie’s insists Michael is one of Kenzie’s colleagues. So they have “the talk” — you know, the one about whether or not you’re friends or, well, more than friends. Turns out, they’re the latter. Eventually, they say that four-letter word to each other. They move in together. A mid-game chapter chronicles the slow but inexorable march toward mundanity plenty of modern relationships end up on. What story about 20-somethings is complete without a trip to a cabin in the woods? (Screenshot: Graceful Decay) Kenzie returns home after a long day of school, a six-pack of beer in hand. Her and Michael talk about their days, figure out something to watch. She suggests a movie; he counters with “that trashy show we were watching last week.” (They end up watching the trash TV.) Another day, another step. Kenzie returns home. They don’t go over their days, and Kenzie instead notes how quickly Michael closed his laptop, like he was “hiding something from her.” But hey, he picked up wine and whipped up dinner, so that wisp of irritation fades away in seconds. Another day, another step. Kenzie returns home, beer in hand. Michael points out how late she is, how she didn’t text, and mentions that dinner’s in the fridge. Kenzie says she already ate while out with a friend. Michael asks why he wasn’t invited. She apologises. He says it’s cool, asks if she wants to watch something. She does — but by herself, on her laptop, in bed. Another day, an explosive fight. It’s not mean or cruel or insulting. It’s just the end result of two people who’ve gone too long without being heard by the one person they want to be heard by. It’s only natural. You get sick of it. You raise your voice. One of you storms out. OK, spoilers over. Carry on! I’ve been there. You’ve been there. We’ve all been there, to some degree. “It’s a universal love story, right? As far as, like, it’s not drama-filled, there’s no cheating or craziness,” Lemoore said. “Everyone on our team has had that love story happen to them. I’ve had it happen to me before — several times. It was striking that balance between, yes, we’re grounding this in a real city, but we’re making all these choices to make sure as many players can connect with it as possible.” The biggest impediment to making Maquette relatable, though, was San Francisco itself. I spent some time in San Francisco growing up, and still return often, which no doubt contributed to Maquette resonating with me the way it did. Play this game for a second, and you see it: This is San Francisco. It’s unmistakable — and for me, it’s a blast from the past, immediately evocative of a time when life was simpler, or at least less sober. If you’ve spent even one day in San Francisco, you’ll recognise the city’s architectural quirks in the pastel-coloured nooks and Victorian-inspired crannies of Maquette. Spires in one level bear a remarkable resemblance to Ferry Building, that iconic, postcard-perfect structure off the Embarcadero. The dome itself — the one that’s supposed to save you from undue if ultimately meaningless squishing — is directly inspired by the Palace of Fine Arts. I can’t tell you how many times I pulled a double-take at some art deco obelisk, thinking it was actually just a surreal version of Coit Tower. 
Maquette is as much a love letter to the Golden Gate City as it is to Michael and Kenzie. But while the San Francisco of Maquette is very much San Francisco, it’s not a photorealistic recreation, certainly not like what you’d see in, say, Watch Dogs 2 . It’s a dreamy reimagining, a warped construct that’s evocative of the city without putting up to-the-pixel depictions. “They’re in this world that’s made up of both things imaginary and real,” Lemoore said. “Even in Chapter One, when you get to Kenzie’s House, it’s a normal San Francisco Victorian house, but it’s floating — it’s floating on a floating island. And we tried to blend that mix between real architecture, detail-perfect, and evoke something for players who have been [to San Francisco] but not alienate players who have [not].” That San Francisco connection is rooted in more than references to buildings and neighbourhoods. It’s also present in the music. Nearly every song on Maquette’s soundtrack — which is not currently available as a standalone purchase — is from a Bay Area musician. The sole exception, Gábor Szabó’s “San Francisco Nights,” which plays right at the beginning, is an homage to the city and itself serves as a way to ground the game’s setting. As Lemoore put it, most tracks are the type of songs Michael or Kenzie might have heard play in a tiny club. Screenshot: Graceful Decay / Kotaku One early song in particular, “Tidal Waves” by Meredith Edgar, affected me far more than I was prepared for. I am cursed with an incurable case of impatience, and generally tend to beeline through linear games. (Open-world games are another story.) But when Edgar’s buttery vocals kicked in — over soft guitar in a tranquil setting as these two characters start to put the “love” in “lovebirds” — I had to set the controller down. It was too real. (On “Tidal Waves,” Lemoore told me the song “feels like it’s the theme of Maquette.” You can hear it play in the background of Maquette’s gameplay trailer from last summer.) There are few moments in any media, let alone video games, that truly capture the feeling of stomach butterflies. Seeing as I am a sentient can of cheese whiz, I texted my partner — whom I don’t usually discuss games with — that this scene “made me really miss that time when we were falling in love with each other.” Maquette’s Love Story Has Already Broken My Heart Ever listen to a song so powerful you stop, drop, and roll to frantically figure out what it is? That’s what happened to me last night while playing Maquette, a puzzle game out this week for PC, PS4, and PS5. Yes, the game’s fantastic, which we’ll get to in a... Read more It’s impossible to imagine Maquette as it is without the current soundtrack, but even that wasn’t set in stone, at least not from the start. “So many people said, ‘Do you have a composer yet? Do you have a composer yet?’ As if that’s a foregone conclusion: We’re going to have a composer,” Lemoore said. “And I stopped and I thought, ‘OK, well, no. I know that that’s my first thought, too: I’ll hire somebody to compose an original score.’ But I wanted to take a step back and think, ‘Wait a minute, how do we tell this story in the most serious, realistic way?’” Answer: a soundtrack made up of “actual songs that the characters might have listened to while they were together.” The result works. 
Not only does the soundtrack feel authentic to the story every step of the way, it also happens to be full of what the kids call “bangers.” I found myself motivated to power through Maquette simply to hear more great music. Sure, I wanted to beat each level to advance the story, and also prove to myself (and the world!) that I’m sharp enough to crack the code on a serious stumper. But I really just wanted to hear whatever song was up next. Here’s another thing you should know about Maquette: It can be quite frustrating. Maquette is by no means perfect, but there are moments of irritation baked into the game on a foundational level. For one thing: the reset function. Puzzle games are complex beasts, often make it possible to paint yourself into a corner, and developers can’t reasonably account for literally every imaginable way in which you can do so. Hence, they often let you reset. But in Maquette’s case, hitting reset will send you all the way back to the beginning of the level. If you find yourself stuck on level’s final puzzle, that could mean undoing half an hour of work. (To be fair, once you know the specific solutions, redoing your work shouldn’t take as long. Theoretically.) Houses aren’t supposed to do that. (Screenshot: Graceful Decay / Kotaku) There’s also currently no way to sprint. In the early levels, that’s fine, but when you breach the walls of the normal-sized world for the first time — when you first venture into the larger recursions — you feel the difference in your bones. Being forced to walk (walk!) massive distances conveys a staggering, humbling sense of scale. It’s like living your whole life on the eastern seaboard and seeing the Rockies for the first time. You thought you knew what a mountain was after that trip to Vermont? Ha! On the other hand, it’s very hard not to itch for some faster movement. Fuck up the solution for a puzzle, and you’re in for a long haul back to the maquette, which, sure, isn’t so bad the first time. When you fuck up four, five, or 15 times, that trek can become arduous. Frankly, a sprint would’ve been nice. “We had an auto-sprint, actually, that would just let you sprint through what we call the hub area, between the dome and the puzzle you want to get to,” Lemoore told me. “A big question was, ‘How do we give it to the player? How do we communicate when they can use it and when they can’t?’ The problem with turbo modes is, in any game, in any first-person game, whether it’s a puzzle game or a shooter, you want to always be turbo-ing. Why would you ever not go at max speed?” There’s a philosophy for puzzle games — which Lemoore brought up but said he doesn’t “fully buy into” — that guides difficulty from a foundational level. Make the puzzles too hard, and players will just throw in the towel. Make them too easy, and players will simply try every possible permutation until they find the solution that clicks. You want a balance. By removing a sprint, Lemoore said, players are more likely to think through solutions before acting them out. He thinks it’s more fun that way. Maquette is a lot like a maquette itself, really, a multi-layered denouement that nestles your feelings (or non-feelings) about Michael and Kenzie (or about actually finishing the game’s litany of puzzles) inside a recursion so big some players might miss it: Lemoore’s decade-long process on the game. Living in a project for a decade is a lot — even more so when it’s a project with deeply personal ties, like Maquette. 
(Lemoore likened the time he spent working on other projects over the past decade to “cheating.”) One would not be off base to draw a direct line between the fictional relationship at the core of Maquette and the very real development process around it. But Lemoore doesn’t see things so simply. “It hasn’t been lost on me how it connects to me directly. I don’t feel like [shipping the game is] a breakup at all. I’ve had times where I hated Maquette, but I don’t hate it right now,” Lemoore said. “It’s almost like being able to put something out in the world and say, ‘Hey, this has been my thing for 10 years, and now everyone else can enjoy it.’”
1
You Sold Your Company, What Did You Buy First? Here's What I Did
1
Free IPV Channels Fast Servers (Adult-Netflix-World-Sports) 2-12-2020
Why xtreamcodeseveryday.blogspot.com? We offer FOR FREE the best IPTV servers, updated daily, carefully collected from the best resources online and tested daily with VLC player to weed out blocked servers. You will get the best working fast servers to enjoy all the premium adult, Netflix, sports, shemale, VOD, mixed and worldwide channels. Now you can watch the exclusive movies and series from Netflix, Amazon, Fox and all other premium services daily without restrictions. No need for paid IPTV subscriptions; all files are integrated into one file for easy download in no time. WHAT YOU WILL GET FREE: http://toolusts.com/7SYo http://toolusts.com/7SaZ HOW TO WATCH IPTV CHANNELS ON YOUR PC? - EXTRACT THE RAR FILE TO M3U OR M3U8 FILES - OPEN THE M3U OR M3U8 FILES WITH VLC PLAYER - ENJOY WATCHING What is IPTV? IPTV = Internet Protocol Television. Internet Protocol Television (IPTV) is digital television delivered to your television through a high-speed internet (broadband) connection. In this service, channels are encoded in IP format and delivered to the TV through a set-top box. IPTV service also includes video on demand, which is similar to watching video CDs/DVDs using a VCD/DVD player. IPTV converts a television signal into small packets of computer data, like any other form of online traffic such as email or a web page. There are three main components of IPTV. First, the TV and content head end, where the TV channels are received and encoded and where other content, such as videos, is stored. The second component is the delivery network: the broadband and landline network provided by a telecom operator such as MTNL. The third component is the set-top box, which is required at the customer location. The packets are reassembled into programming by software in the set-top box. This box is connected between the operator’s broadband modem and the customer’s TV. What are the advantages of IPTV? The quality of digital video and audio is much better compared with traditional analogue TV. With additional features, it can become interactive. For example, viewers may be able to look up a player’s history while watching a game. They may also be able to schedule a recording of their favourite programme when they are not home. With video on demand, they can browse an online movie catalogue and watch movies instantly. Because IPTV uses standard networking protocols, it promises lower costs for operators and lower prices for users. Using set-top boxes with broadband internet connections, video can be streamed to households more efficiently than cable. What are the limitations of IPTV? Because IPTV is based on internet protocol, it is sensitive to packet loss and delays if the IPTV connection is not fast enough.
2
Via Negativa – Less, but Better
“Michelangelo was asked by the pope about the secret of his genius, how he carved the statue of David, largely considered the masterpiece of all masterpieces. His answer: “It’s simple. I just remove everything that is not David.” We live in a world where the most obvious approach to any problem is done via addition. Maybe we are conditioned in such a way from childhood. More emphasis on collecting degrees/certificates than actual learning let alone questioning do I need to do this? Headache → p Feeling low → Buy something Feeling lonely → Install a dating app . Marriage not working → p Should anything not be working → Add something and try out so on and so forth. Let us explore how Via negativa can be a blessing in tech, love, health, happiness, and life in general. I’m just exploring here, not trying to arrive at a particular solution as I have been reflecting on it for a while. I see this all the time, even the writing app I’m using right now has fallen into this trap. My product manager, clients love this too! They believe by adding more features to their product, makes their product better. Add more colors. Oh, we need icons here. Can we use some heavy animations here? Oh, common on the first fold needs to have more of our brand values. Let’s add this, this, and this too (to appease some dumb stakeholders). But is it true? As a designer, I talk to a lot of users. The ground reality is people don’t always want to be overwhelmed by new features. They just want to get the work done fast. Better to remove unnecessary features in the app that isn’t adding any value to the user experience. Unfortunately, Adding is favoured over subtracting in problem-solving, even when removing features is more efficient [1]. Said no user ever 🙄😭 “It is very difficult to argue with salaried people that the simple can be important and the important can be simple. The tragedy is that much of what you think is random is in your control and, what’s worse, the opposite.” (Excerpt from Bed of Procrustes, Nassim Taleb)” In the late 20th century, a young German designer changed the way we perceive product design. He’s the man who invented consumer product design as we use it today. To quantify his impact, you only need to look at all millions of Apple products in your pocket. Dieter Rams were becoming increasingly interested in the world of things that surrounded him — “an impenetrable confusion of forms, colors, and noises.” He would always strive for reducing clutter, unnecessary elements from the product. One of his design principles was —  “Good design is as little design as possible”. “Good design is as little design as possible” [2] Both Steve Jobs and Johny Ive (ex-chief designer at Apple), were inspired by Ram’s work. No wonder why Apple products have a zen-like simplicity. iPods, iPhone sales speak for themselves. Relationships have already been complex and adding to this we now have — dating apps. Welcome to the vicious circle where everyone belongs to everyone else [3]. I have been on/off on it as well, I don’t know if it’s funny or rather depressing to find similar faces every time I create my account. The point is these apps create a false sense of abundance . Giving you so many matches that you might have a short-lived validation, dopamine hits. But it’s virtually impossible to focus on someone when you have so many choices, it’s a well-known phenomenon called the Paradox of choices [4]. Trust and insecurity issues start to arise, emotional connections can’t be forged let alone something long-term. 
Please don’t perceive it as black and white. It’s okay to use such platforms if we are clear with our intentions — Not what you are seeking but what you are not seeking . We, humans, are a little complicated that you readers won’t disagree with, it’s much harder to know what we want than what we don’t want. Again, same guiding principle, getting to selection via elimination, l ess but better. The more choices we have, the more confusion it creates. If you are lucky to find a spark with someone then getting immediately off from the platform seems the only reasonable way to me. If Aldous Huxley was alive, he would be experiencing his book ( brave new world [5]) here and now. “But every one belongs to every one else.” ... “Actual happiness always looks pretty squalid in comparison with the overcompensations for misery. And, of course, stability isn’t nearly as spectacular as instability. And being contented has none of the glamour of a good fight against misfortune, none of the picturesqueness of a struggle with temptation, or a fatal overthrow by passion or doubt. Happiness is never grand [6].” Quitting smoking, sugar and so-called junk food has been proved to be more beneficial than taking any supplements or so-called vitamin pills. The Pharma doesn’t want you to know that for obvious reasons! All these packaged food, juices do more harm than good. By refraining from a certain product or even complete absence of food ( my mom calls it fasting ) brings more good than by the addition of anything at all. Hunter gathers understood via negativa so well, fasting was a common practice back then. No wonder, hunter-gathers looked like modern-day athletes! I’m not a nutritionist but I use time as a natural filter and these practices hold up to the Lindy Effect. (The Lindy Effect is the idea that the older something is, the longer it’s likely to be around in the future.If a book has been in print for forty years, I can expect it to be in print for another forty years. But, and that is the main difference, if it survives another decade, then it will be expected to be in print for another fifty years. This, simply, as a rule, tells you why things that have been around for a long time are not “aging” like persons, but “aging” in reverse. Every year that passes without extinction doubles the additional life expectancy. This is an indicator of some robustness. The robustness of an item is proportional to its life [7]. ) Recall that the interventionist focuses on positive action—doing. I have used all my life a wonderfully simple heuristic: charlatans are recognisable in that they will give you positive advice, and only positive advice, exploiting our gullibility and sucker-proneness for recipes that hit you in a flash as just obvious, then evaporate later as you forget them. Just look at the “how to” books with, in their title, “Ten Steps for—” (fill in: enrichment, weight loss, making friends, innovation, getting elected, building muscles, finding a husband, running an orphanage, etc.) [8] I am not trying to let down allopathy here or alleviate alternative ways, but these short-term cures have invisible long-term effects. We often miss the Second-order effects p And many modernity diseases can be resolved by subtraction/elimination, which I believe should be our default approach. A few years back, I was staying in the rainforest of Srilanka beside a Buddhist monastery. There was no wifi, no networks, no stimulation as we generally find in city life. Yet I never felt more peaceful and happy in life. 
I was a bit taken aback, at that time I was in my early 20s and had all the decent pleasures one could imagine. As I enquired further, I learned these states of bliss which were so pleasant only came out when I was completely still. As I sat down under the foot of the tree, I did nothing (no addition), just being a mere observer of whatever thoughts were coming and going. If I was able to stay still for a good amount of time, things start to fade away, 5 senses got less and less active. Less but for better. One more interesting thing I noticed about how meditation works — If you sit down with the intention of getting something out of it, it won’t work! You’ll just end up getting more agitated, but if you have the intention to just make peace with no matter what’s happening, being kind and letting go of whatever arises, then the magic starts to unravel. In short, it was only by eliminating these sensual stimulations, freeing my brain of constant activity, I was able to experience such joyful states of mind. Some people say to watch your breath, concentrate on something, etc, I don’t think it works this way. Breath is a subtle object, the mind only comes to watch it once grosser things like sound, taste, smell, bodily sensation fade away. It’s a natural process of letting go, the more you let go, the more bliss you experience. Eventually, the sensation of breath also disappears, but it’s a topic on its own. I just wanted to show elimination/ letting go can be so profound and lead to pure ecstatic states of happiness. Somewhere far from civilisation, Srilanka 2015 Interestingly, Buddhism focuses more on what not to do only then comes what to do. Their 5 precepts [10] are all about refraining from certain things. Again, Via negativa in action here. I was reading somewhere, an old wise monk was once asked, Are you enlightened? No sir!, replied the venerable Sri Lankan Thera, I am not enlightened. But I am highly eliminated! Again, I’m not trying here to make a certain approach the only way to do things. I’m exploring and to be honest, I am surprised while writing to find this guiding principle getting fit in so many domains — Tech, love, happiness, health, and so on and so forth. Now, readers I humbly ask you to unbiasedly explore for yourself and let me know in the comments your thoughts :) I have been applying via negativa in my ways too. For instance, instead of making a To-Do list, I now make a Not-To-Do list. If I’m clear on what to not to do then naturally, I’ll put my energy into what matters!? Also, To-Do lists seem so robotic to me, we usually pick the easy ones first and it eventually becomes a vicious checklist circle rather than getting things done. There are often things that I have to consciously push myself to do, I was so wrong when I thought I procrastinate so much! Instead of fighting procrastination as though it is an illness, maybe we should learn to understand its utility. Now I have made procrastination my friend, it seems like a naturalistic filter or as Taleb says —  “Few understand that procrastination is our natural defence, letting things take care of themselves and exercise their antifragility; it results from some ecological or naturalistic wisdom, and is not always bad — at an existential level, it is my body rebelling against its entrapment. It is my soul fighting the Procrustean bed of modernity.” Questioning your ways with an open mind and using some common sense can do wonders. How does something make you feel — does it lead to more agitation or peace? 
Calmness or confusion? That’s how I adopt things. Trial and error is Indeed freedom. I’ll end this on one of my favorite excerpts from the Bed of Procrustes: “It isn’t what you know that makes you knowledgeable and rigorous; it is in the type of mistakes that you don’t make.” (Bed of Procrustes, 3rd ed.) Thanks to Paras, Mahima, Aman, Shring, Tripty, Alekhaya, Richa, Shikhar for reading drafts of this. https://www.nature.com/articles/d41586-021-00592-0 https://aethadesign.com/less-but-better-rams-screening-at-arts-university-bournemouth-20th-november-2019/ http://dh.canterbury.ac.nz/engl206/author/mha418?print=print-page https://en.wikipedia.org/wiki/The_Paradox_of_Choice https://www.theguardian.com/books/2007/nov/17/classics.margaretatwood https://www.pastemagazine.com/books/brave-new-world/the-15-best-quotes-from-brave-new-world/ https://www.luca-dellanna.com/lindy/ https://fs.blog/2014/01/a-wonderfully-simple-heuristic-to-recgonize-charlatans/ https://fs.blog/2013/10/iatrogenics/ https://tricycle.org/magazine/the-five-precepts/
1
Walmart Drone Delivery Being Tested Amid Amazon Retail Battle
4
Why do the most valuable NFTs look old-school and how will they look next?
João Abrantes · Nov 19, 2021. After reading this post you’ll understand why the most valuable NFT art looks old-school and how its looks will evolve in the future. But before we can learn that, we need to find out what made old-school art look the way it looks. The computer that took men to the moon had 69.12 kB of memory. To get to this capacity, half a mile of wires was manually threaded through small magnetisable rings, with each ring giving 1 bit of storage. This was crazy expensive, time-consuming and error-prone. Back then, if you were a byte and you wanted to go to the Moon, you had better have had a very good reason to be in that memory! Each byte counted, each one was precious. Memory capacity was the limiting factor of computers for many years, and the best software and games were the ones that used all the smart tricks to make each bit count. Fast-forward: the price of storage drops exponentially, the art of displaying interesting images with little data is lost, and today we take hundreds of 2 MB photos of our cats being cats. Well, that was before the blockchain arrived. After years of disrespect and frivolous use, today bytes are precious again! At the time of writing (10/11/2021), every byte stored on the Ethereum blockchain costs around $0.30. So, if you want to immortalise a 2 MB photo of your cat, that will be $600,000, please. Wait, what?! What about all the high-resolution images/animations/videos sold as NFTs? Aren’t they all stored on the blockchain? Yes and no. NFTs use the safest public database to tell everyone who is the owner of what. They do a very good job of storing the who, but when it comes to storing what exactly people own, they simply store… a freaking link? That link was 31 bytes, which is a lot cheaper to store than the actual image (it costs $9.30)! But it is also like writing down a certificate that says “Mr. Abranti is the only and undisputed owner of the art that is in Building 31 in the Street of Manchester,” locking that certificate in the most secure safe in the universe, and leaving the art at Building 31, where whoever has access to the building can change, damage or destroy it. Some projects are storing the art in safer buildings (on IPFS, for example). This is an improvement, but the certificate is still being stored in a safer and more permanent place than the actual art. Some other projects store a small seed on the blockchain and keep the algorithm that renders the art in a building. This makes little sense: the seed is useless without the algorithm that knows what the seed means, and now, to preserve the art, we need to keep two things safe… A small group of artists store the actual art on the blockchain along with the certificate. This is possible because these artists know how to treat bytes as the precious things that they are. Two examples of this are the blitmap and the nouns projects. They look old-school because they use tricks similar to the ones used in the old days when bytes were precious. Each blitmap is a 32-by-32-pixel image with a 4-colour palette and takes up 268 bytes ($80.40). Nouns also use a colour palette, and additionally they use a lossless compression algorithm (RLE). Not all nouns are the same size; noun 63, for example, uses 455 bytes ($136.50). These images live directly on the blockchain, and besides the safety benefits, the art is now also visible to all the code that is also stored on-chain (the smart contracts). 
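As a quick sanity check on those figures, here is a minimal sketch using the article’s ~$0.30-per-byte estimate (which is tied to November 2021 gas prices), plus one plausible way a 32×32, 4-colour image could land on exactly 268 bytes. The byte layout shown is an assumption for illustration, not necessarily the published blitmap format.

```python
# Back-of-the-envelope check of the figures above. USD_PER_BYTE is the article's
# estimate for 10/11/2021; real on-chain storage costs move with gas and ETH prices.

USD_PER_BYTE = 0.30

def onchain_cost(n_bytes: int) -> float:
    """Rough USD cost of storing n_bytes on Ethereum at the assumed rate."""
    return n_bytes * USD_PER_BYTE

print(f"2 MB cat photo : ${onchain_cost(2_000_000):,.2f}")  # ~$600,000
print(f"31-byte link   : ${onchain_cost(31):,.2f}")         # ~$9.30
print(f"one blitmap    : ${onchain_cost(268):,.2f}")        # ~$80.40
print(f"noun 63        : ${onchain_cost(455):,.2f}")        # ~$136.50

# One plausible way a 32x32 image with a 4-colour palette fits in exactly 268 bytes
# (an assumed layout, for illustration only): 4 RGB colours plus 2 bits per pixel.
palette_bytes = 4 * 3                # 4 colours x 3 bytes (R, G, B)   = 12
pixel_bytes   = (32 * 32 * 2) // 8   # 1,024 pixels x 2 bits per pixel = 256
print(palette_bytes + pixel_bytes)   # 268
```

Run-length encoding, the trick the nouns use, squeezes this further whenever an image has long runs of the same palette index, which pixel art usually does. Keeping the bytes this small is what makes it practical to put the pixels themselves on-chain, where smart contracts can read them.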
Because of this, it was possible to combine two blitmaps to create another one and to pay the creators of the original art in the process. That was only possible because the art was accessible to smart contracts. In recent years, when bytes were cheap, we’ve learned a whole lot about images, information, perception and compression. We can now use modern tools to search for the art with the highest beauty-to-bytes ratio. I think that’s where NFT art is heading, and so this is what I am currently pursuing at polys.art. The project’s goal is to use modern tools to find the most beautiful images that use little data. Regulars, our first collection, describes images not in terms of pixel colours but in terms of regular polygons positioned on a canvas:
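To make the “polygons instead of pixels” idea concrete, here is a hypothetical encoding. The field layout and sizes are invented for illustration and are not the actual Regulars format; the point is only that a handful of regular polygons can describe an image in a few dozen bytes.

```python
# Hypothetical encoding, not the actual polys.art "Regulars" format: each regular
# polygon needs only a handful of small integers, so a whole composition can fit
# in tens of bytes rather than a pixel grid's hundreds.

import struct

def encode_polygon(sides: int, cx: int, cy: int, radius: int,
                   rotation: int, palette_index: int) -> bytes:
    # Six small integers packed into 6 bytes (assumed field widths, for illustration).
    return struct.pack("6B", sides, cx, cy, radius, rotation, palette_index)

scene = b"".join([
    encode_polygon(3, 40, 60, 25, 0, 1),   # a triangle
    encode_polygon(6, 90, 30, 15, 30, 2),  # a hexagon
    encode_polygon(8, 64, 64, 50, 0, 3),   # an octagon
])
print(len(scene))  # 18 bytes for three shapes, vs. 256+ bytes for even a 32x32 pixel grid
```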
23
How Graphviz thinks the USA is laid out
Graphviz is useful graph- and network-layout software. You give it a description of a graph, as on the left: and it produces a drawing of a graph as on the right. The first line says that node A is connected to B, which is connected to C. The second line says that B is also connected to D and F, but the edges should be dotted. The third line says that D is also connected to F, which is connected to E, which is connected to C. Graphviz has several layout engines, which try to optimize layouts for different kinds of properties. The example above uses the neato layout engine, which tries for a generally compact layout. The example below gives the same input to the dot engine, which is intended more for directed graphs and which tries to emphasize hierarchies or flows among the nodes. I built a Graphviz configuration file for the graph of U.S. state boundaries and fed it in to see what would come out. The graph is naturally planar, and it will be interesting to see if the layout algorithms can detect that. Here's what the U.S. actually looks like. (Source: Wikimedia Commons) This is the output of Graphviz's default layout engine, neato, which has done a great job of inferring the actual shape of the United States! (Nodes here are labeled with their standard postal abbreviations. When you hover your pointer over a state name or other geographical designations in this article, the corresponding abbreviation will pop up. For example, New York should pop up a box with “NY”.) The Mid-Atlantic, South Atlantic and Pacific coasts are clearly visible, and Florida is dangling off the bottom, as it does. It's pretty amazing. The fact that the map is oriented correctly is a bonus. There are a few oddities: New England is where it should be, but Vermont should be switched with Connecticut and Rhode Island. South Carolina is sticking out into the Atlantic Ocean. There is only one planarity failure, resulting from the inadvertent flipping of New England. The apparent crossing edge between Minnesota and Michigan is illusory; the edge could have been curved around Wisconsin with no trouble. (The real border between Minnesota and Michigan is a bit of an oddity, occurring in Lake Superior, and it really does leap over the head of Wisconsin that way.) Here's fdp, which uses a similar approach to neato: a “spring” model where it imagines the nodes are connected by springs and tries to minimize the energy. It produces a very similar result: It has New England flipped over like neato does. I wonder why that happened? And Washington is embedded in Northern California for some reason. The sfdp engine is a variation on fdp which “uses a multi-scale approach to produce layouts of large graphs in a reasonably short time”. The map it produces has south at the top and east on the left, but otherwise looks pretty much like the fdp one. I'm pretty sure that by Graphviz standards this is not a “large” graph: Now we move on to the odd ones. The dot engine I mentioned before tries to maintain the hierarchy, to the extent that there is one. The geography is still correct here, more or less, with South at the top and East on the left. That puts Florida in the upper left and Washington in the lower right. But in between it's unexpectedly tangled. The central-column states (ND, SD, NE, KS, OK, TX) are all over the place. So many questions here. What the heck is going on with Ohio and Michigan? And Illinois and [ Note: this is where I lost interest and stopped writing. ]
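The small graph description the opening paragraph refers to didn't survive into this copy, so here is a reconstruction from the prose. It uses the Python graphviz bindings rather than a raw .dot file, which is an assumption about tooling; the substance is the edge statements and the layout engines, not the packaging.

```python
# Reconstruction of the three-line example described above (not the author's
# original file), using the Python "graphviz" bindings. Rendering requires the
# Graphviz binaries to be installed; `pip install graphviz` provides the module.

import graphviz

g = graphviz.Graph(engine="neato")   # undirected graph, laid out by neato

# "Node A is connected to B, which is connected to C."
g.edge("A", "B")
g.edge("B", "C")

# "B is also connected to D and F, but the edges should be dotted."
g.edge("B", "D", style="dotted")
g.edge("B", "F", style="dotted")

# "D is also connected to F, which is connected to E, which is connected to C."
g.edge("D", "F")
g.edge("F", "E")
g.edge("E", "C")

print(g.source)                    # the generated DOT text
g.render("example", format="png")  # writes example.png using neato
# Swapping engine="neato" for "dot", "fdp" or "sfdp" reproduces the other layouts.
```

The state-border graph is the same idea at a larger scale: one edge statement per land border, a hundred or so edges in all, with the choice of engine doing the rest.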
5
Why Is Transgender Identity on the Rise Among Teens?
Transgender identity* is characterized by experiencing distress with, or an inability to identify with one’s biological sex, usually prompting a desire to live one’s life as the opposite sex. In the DSM-5, the standard classification of mental disorders used by mental health professionals, this condition is known as “gender dysphoria." Note that classifying gender dysphoria as a disorder does not—indeed, should not—imply a moral judgment of transgender individuals. Depending on the degree of social stigma associated with it, transgender identity can be accompanied by very significant distress. The point of the mental-health outlook is to help reduce stigma and assist transgender individuals in leading good lives. The role of social norms in this picture, however, remains unclear and hotly debated. The historical and cross-cultural record indicates that conditions akin to what we now call "transgender identity" have been known to occur in all societies, with varying degrees of acceptance, suppression, or even encouragement. The widespread acceptance of individuals who were born males and dress and live as females, such as the hijra in India, katoey in Thailand, bakla in the Philippines, and travesti in Brazil, for example, long predates the current transgender movement in the West. Despite a longstanding recognition of their existence, transgender individuals in those countries continue to face some discrimination. Among the Kuna (also known as Guna) of the San Blas Islands in Panama, transgender identity appears to have been fully accepted since precolonial times. As a rare example of a matriarchal and matrilineal society, names and properties are typically passed on from female to female among the Kuna, leading to a cultural preference for having girl children. In this context, male children were sometimes raised as girls, thereby conferring families with a distinct social advantage. This gave rise to a rare example of absence of cultural stigma around transgender identities. These examples are telling because they point to the importance of different social norms in mediating gendered preferences and behavior. They also introduce another piece in our puzzle: all the culturally recognized incidences of pre-modern transgender individuals mentioned above involve natal males who transition to female. In the DSM-5, prevalence rates of gender dysphoria are estimated at 0.005 percent to 0.014 percent of the population for natal males, and 0.002 percent to 0.003 percent for natal females. The higher prevalence of males exhibiting the condition is likely related to a higher percentage of male homosexuals worldwide (3 to 4 percent) as compared to lesbians (1 to 2 percent). While these rates are the subject of debate, the higher ratio of male homosexuals as compared to women is a consistent finding across surveys. As attested by current controversies, rates of transgender identity appear to be on the rise, particularly among young people. Increased social acceptance of a previously stigmatized condition likely plays a role in this process, but other findings are clearly puzzling: Transgender identity is now reported among young natal females at rates that clearly exceed all known statistics to date. In a recent survey of 250 families whose children developed symptoms of gender dysphoria during or right after puberty, Lisa Littman, a physician and professor of behavioral science at Brown University, found that over 80 percent of the youth in her sample were female at birth. 
Littman’s study reported many other surprising findings. To meet the diagnostic criteria for gender dysphoria, a child typically needs to have shown observable characteristics of the condition prior to puberty, such as “a strong rejection of typically feminine or masculine toys," or “a strong resistance to wearing typically feminine or masculine clothes." Again, 80 percent of the parents in the study reported observing none of these early signs in their children. The plot thickens again: First, many of the youth in the survey had been directly exposed to one or more peers who had recently "come out" as trans. Next, 63.5 percent of the parents reported that in the time just before announcing they were trans, their child had exhibited a marked increase in Internet and social media consumption. Following popular YouTubers who discussed their transition thus emerged as a common factor in many of the cases. After the youth came out, an increase in distress, conflict with parents, and voiced antagonism toward heterosexual people and non-transgender people (known as “cis” or “cisgender”) was also frequently reported. This animosity was also described as extending to “males, white people, gay and lesbian (non-transgender) people.” The view adopted by trans youth, as summed up by one parent, seemed to be: “In general, cis-gendered people are considered evil and unsupportive, regardless of their actual views on the topic. To be heterosexual, comfortable with the gender you were assigned at birth, and non-minority places you in the ‘most evil’ of categories with this group of friends. Statement of opinions by the evil cis-gendered population are consider phobic and discriminatory and are generally discounted as unenlightened.” Parents further reported being derogatorily called “breeders” by their children, or being routinely harassed by children who played “pronoun-police." The observation that they no longer recognized their child’s voice came up time and again in parental reports. In turn, the eerie similarity between the youth's discourse and trans-positive online content was repeatedly emphasized. Youth were described as “sounding scripted," “reading from a script,” “wooden,” “like a form letter,” “verbatim,” “word for word," or “practically copy and paste." Littman raises cautions about encouraging young people’s desire to transition in all instances. From the cases reviewed in her study, she concluded that what she terms “rapid-onset gender dysphoria” (ROGD) appears to be a novel condition that emerges from cohort and contagion effects and novel social pressures. From this perspective, ROGD likely exhibits an aetiology and epidemiology that is distinct from the "classical" cases of gender dysphoria documented in the DSM. Littman hypothesizes that ROGD can be cast as a maladaptive coping mechanism for other underlying mental health issues such as trauma or social maladjustment, but also for other exceptional traits like high IQ and giftedness. The peer support, prestige, and identity leveraged by the youth who proudly come out as trans certainly appears to be protective in their circles. 
As Littman’s study shows, this social signaling strategy also comes with strong disadvantages, particularly as it increases conflict between trans youth and the "cis" majority of the population, which, tellingly, includes a majority of the LGBT community. The notion reported by parents that the ROGD appears to be "scripted" is also telling. Medical anthropologists describe the process of outsourcing negative feelings to cultural narratives and systems of beliefs as “idioms of distress." These beliefs can be partially grounded in science and biology (as is the case with current brain-based mental health culture), or not at all (as is the case in cultures that explain mental illness through the idiom of spirit possession). When extreme forms of distress and coping arise through novel social pressures and spread through implicit imitation, strange epidemics of "mass psychogenic illnesses” have been documented. These have extended to dancing plagues, possession epidemics on factory floors, fugue states, or epidemics of face-twitching. These conditions are described as “psychogenic” (originating in the mind) when no underlying physical cause can be determined. But the term “sociogenic," which highlights the social context in which these conditions occur, is a better description. Risk factors for proneness to mass sociogenic illness remain hotly debated. Tellingly, for our investigation, it is broadly recognized that females, perhaps due to their higher sensitivity to social cues on average, are overwhelmingly more prone to such phenomena. Once more, this should not be read as a moral story. Medical sociologist Robert Bartholomew, one of the world’s leading experts on mass sociogenic epidemics, has long argued that phenomena that are still unjustly termed “mass hysteria” should be renamed “collective stress responses." It is clear from Littman’s study that the rise of rapid-onset gender dysphoria, which seems to predominantly involve natal females, points to a complex web of social pressures, changing cultural norms, and new modes of distress and coping that warrant further investigation. For parents, educators, and clinicians alike, caution is warranted in dealing with this growing phenomenon. * Note: An earlier version of this post used the term "transgenderism" which, while often used to describe transgender individuals, is now considered out of date and stigmatizing by many in the LGBT community. "Transgender identity" is the community's preferred term. The author thanks the Human Rights Campaign for pointing this out. *** Note 2: I have received numerous private comments from readers about this article. Some readers pointed out that I did not mention the controversy and significant public backlash that ensued after the study was first published in August 2018. You can read my discussion of this backlash in this next post. *** Note 3: You may also read my third post, in which I call for dialogue (not debate) and compassion between the different sides of the ROGD debate.
6
Car Is Spying on You, and a CBP Contract Shows the Risks
U.S. Customs and Border Protection purchased technology that vacuums up reams of personal information stored inside cars, according to a federal contract reviewed by The Intercept, illustrating the serious risks in connecting your vehicle and your smartphone. The contract, shared with The Intercept by Latinx advocacy organization Mijente, shows that CBP paid Swedish data extraction firm MSAB $456,073 for a bundle of hardware including five iVe “vehicle forensics kits” manufactured by Berla, an American company. A related document indicates that CBP believed the kit would be “critical in CBP investigations as it can provide evidence [not only] regarding the vehicle’s use, but also information obtained through mobile devices paired with the infotainment system.” The document went on to say that iVe was the only tool available for purchase that could tap into such systems. According to statements by Berla’s own founder, part of the draw of vacuuming data out of cars is that so many drivers are oblivious to the fact that their cars are generating so much data in the first place, often including extremely sensitive information inadvertently synced from smartphones. Indeed, MSAB marketing materials promise cops access to a vast array of sensitive personal information quietly stored in the infotainment consoles and various other computers used by modern vehicles — a tapestry of personal details akin to what CBP might get when cracking into one’s personal phone. MSAB claims that this data can include “Recent destinations, favorite locations, call logs, contact lists, SMS messages, emails, pictures, videos, social media feeds, and the navigation history of everywhere the vehicle has been.” MSAB even touts the ability to retrieve deleted data, divine “future plan[s],” and “Identify known associates and establish communication patterns between them.” The kit, MSAB says, also has the ability to discover specific events that most car owners are probably unaware are even recorded, like “when and where a vehicle’s lights are turned on, and which doors are opened and closed at specific locations” as well as “gear shifts, odometer reads, ignition cycles, speed logs, and more.” This car-based surveillance, in other words, goes many miles beyond the car itself. iVe is compatible with over two dozen makes of vehicle and is rapidly expanding its acquisition and decoding capabilities, according to MSAB. Civil liberties watchdogs said the CBP contract raises concerns that these sorts of extraction tools will be used more broadly to circumvent constitutional protections against unreasonable searches. “The scale at which CBP can leverage a contract like this one is staggering,” said Mohammad Tajsar, an attorney with the American Civil Liberties Union of Southern California. MSAB spokesperson Carolen Ytander declined to comment on the privacy and civil liberties risks posed by iVe. When asked if the company maintains any guidelines on use of its technology, they said the company “does not set customer policy or governance on usage.” MSAB’s contract with CBP ran from June of last year until February 28, 2021, and was with the agency’s “forensic and scientific arm,” Laboratories and Scientific Services. It included training on how to use the MSAB gear. Interest from the agency, the largest law enforcement force in the United States, likely stems from police setbacks in the ongoing war to crack open smartphones. 
Attacking such devices was a key line of business for MSAB before it branched out into extracting information from cars. The ubiquity of the smartphone provided police around the world with an unparalleled gift: a large portion of an individual’s private life stored conveniently in one object we carry nearly all of the time. But as our phones have become more sophisticated and more targeted, they’ve grown better secured as well, with phone makers like Apple and phone device-cracking outfits like MSAB and Cellebrite engaged in a constant back-and-forth to gain a technical edge over the other. So data-hungry government agencies have increasingly moved to exploit the rise of the smart car, whose dashboard-mounted computers, Bluetooth capabilities, and USB ports have, with the ascendancy of the smartphone, become as standard as cup holders. Smart car systems are typically intended to be paired with your phone, allowing you to take calls, dictate texts, plug in map directions, or “read” emails from behind the wheel. Anyone who’s taken a spin in a new-ish vehicle and connected their phone — whether to place a hands-free call, listen to Spotify, or get directions — has probably been prompted to share their entire contact list, presented as a necessary step to place calls but without any warning that a perfect record of everyone they’ve ever known will now reside inside their car’s memory, sans password. The people behind CBP’s new tool are well aware that they are preying on consumer ignorance. In a podcast appearance first reported by NBC News last summer, Berla founder Ben LeMere remarked, “People rent cars and go do things with them and don’t even think about the places they are going and what the car records.” In a 2015 appearance on the podcast “The Forensic Lunch,” LeMere told the show’s hosts how the company uses exactly this accidental-transfer scenario in its trainings: “Your phone died, you’re gonna get in the car, plug it in, and there’s going to be this nice convenient USB port for you. When you plug it into this USB port, it’s going to charge your phone, absolutely. And as soon as it powers up, it’s going to start sucking all your data down into the car.” In the same podcast, LeMere also recounted the company pulling data from a car rented at BWI Marshall Airport outside Washington, D.C.: “We had a Ford Explorer … we pulled the system out, and we recovered 70 phones that had been connected to it. All of their call logs, their contacts and their SMS history, as well as their music preferences, songs that were on their device, and some of their Facebook and Twitter things as well. … And it’s quite comical when you sit back and read some of the text messages.” The ACLU’s Tajsar explained, “What they’re really saying is ‘We can exploit people because they’re dumb. … We can leverage consumers’ lack of understanding in order to exploit them in ways that they might object to if it was done in the analog world.’” The push to make our cars extensions of our phones (often without any meaningful data protection) makes them tremendously enticing targets for generously funded police agencies with insatiable appetites for surveillance data. 
Part of the appeal is that automotive data systems remain on what Tajsar calls the “frontier of the Fourth Amendment.” While courts increasingly recognize your phone’s privacy as a direct extension of your own, the issue of cracking infotainment systems and downloading their contents remains unsettled, and CBP could be “exploiting the lack of legal coverage to get at information that otherwise would be protected by a warrant,” Tajsar said. MSAB’s technology is doubly troubling in the hands of CBP, an agency with a powerful exception from the Fourth Amendment and a historical tendency toward aggressive surveillance and repressive tactics. The agency recently used drones to monitor protests against the police murder of George Floyd and routinely conducts warrantless searches of electronic devices at or near the border. “It would appear that this technology can be applied like warrantless phone searches on anybody that CBP pleases,” said Mijente’s Jacinta Gonzalez, “which has been a problem for journalists, activists, and lawyers, as well as anyone else CBP decides to surveil, without providing any reasonable justification. With this capability, it seems very likely CBP would conduct searches based on intelligence about family/social connections, etc., and there wouldn’t seem to be anything preventing racial profiling.” “Whenever we have surveillance technology that’s deeply invasive, we are disturbed,” he said. “When it’s in the hands of an agency that’s consistently refused any kind of attempt at basic accountability, reform, or oversight, then it’s Defcon 1.” Part of the problem is that CBP’s parent agency, the Department of Homeland Security, is designed to proliferate intelligence and surveillance technologies “among major law enforcement agencies across the country,” said Tajsar. “What CBP have will trickle down to what your local cops on the street end up getting. That is not a theoretical concern.”
1
Catholic Social Doctrine Isn't a Surrogate for Capitalism
Catholic social doctrine is not a surrogate for capitalism. In fact, although decisively condemning “socialism,” the church, since Leo XIII’s Rerum Novarum, has always distanced itself from capitalistic ideology, holding it responsible for grave social injustices (§2). In Quadragesimo Anno, Pius XI, for his part, used clear and strong words to stigmatize the international imperialism of money (§109). —John Paul II, speaking in Latvia in 1993 Saint John Paul II died sixteen years ago. There are still many people alive in Poland who knew him personally and were his close friends. That still does not change the fact that, due to fundamental misunderstandings, some key elements of his thought are still little-known to the wider public. In fact, his thinking remains so lively that a few years ago in Poland it sparked a heated controversy about his stance on capitalism. Paweł Rojek admirably summarized this disagreement between reporter Jonathan Luxmoore and popular Catholic writer George Weigel in a synoptic essay here. Briefly, Wojtyła’s recently published early writings on Catholic social ethics were at the center of the controversy. These 1953-1954 lectures were previously known but only became available to the general public a couple of years ago in a critical edition simply entitled Catholic Social Ethics. The controversy in Poland centered on Luxmoore accusing the neo-conservatives associated with First Things (George Weigel, Michael Novak, and Richard John Neuhaus) of manipulating the pope’s message. Weigel replied by claiming that during their meetings the pope never suggested that Weigel’s pro-capitalist interpretation of Centesimus Annus was wrong. As far as those early lectures on social ethics go, Weigel maintained that their text was merely derivative of the book Catholic Social Ethics by Jan Piwowarczyk. And so, ultimately, when Luxmoore said that the pope sympathized with Marxism and criticized capitalism, Weigel responded that Wojtyła was invariably critical toward Marxism. It must be noted that the accusation of an excessively pro-capitalist reading of the pope’s teaching was not just the provenance of a narrow group of leftist authors with agendas who are unfamiliar with Catholicism. Luxmoore is a Catholic himself and has written about the Church in many respected publications. Furthermore, the excessive boosting of capitalism by Weigel et al. was also criticized by mainline conservative Polish scholars such as Michał Łuczewski (former Deputy Director of the Centre for the Thought of JPII) and Rafał Łętocha (author of a commentary in Wojtyła’s Catholic Social Ethics). Additionally, after the publication of Laborem Exercens several left-Catholic authors wrote in the Journal of Religious Socialism that, by criticizing capitalism, the pope implicitly proposed some form of democratic socialism (David Hollenbach, SJ) or a soft, non-Marxist socialism (Nicholas von Hoffman). On the other hand, with all due respect for Mr. Weigel, no careful reader of John Paul II can possibly regard his social teaching as straightforwardly pro-capitalist. Even though Weigel is correct in asserting that Wojtyła is invariably critical toward Marxism (whose doctrine is based on false premises about humanity), it is also the case that Wojtyła’s critique of capitalism, when one compares his early writings from the 1950s with later encyclicals and homilies, did not change very much. 
To behave as if this other continuity did not exist in the papal teaching is as much a corruption of his teaching as saying that the pope was pro-Marxist. We simply have too many examples in his teaching—in books, encyclicals, speeches, homilies, interviews—explicitly criticizing the capitalist system. Of course, one cannot say that Wojtyła’s opinions on capitalism were constant and did not evolve over time. But, we must ask: was there any radical change with regard to this topic between the young Wojtyła living in an emerging socialist state and a pope who observed and commented upon the fall of Soviet communism? This depends on whether we can reconstruct his early views on Marxism, communism, and capitalism. Looking back at Piwowarczyk’s text, as well as the future pope’s restatement of this text in Catholic Social Ethics , we can see many additions made by the young lecturer. If Wojtyła was more apologetic about Marxism than Piwowarczyk then we would surely find traces of it in the text. However, here we need to agree with Weigel, that the future pope’s criticism of Marxism is indisputable. Marxism, being in its basic assumptions a purely materialistic doctrine, is in conflict with Christianity because it is based on a false anthropology and a false view of creation. But what about non-Marxist communism? A careless reader may be confused by statements such as this chapter subheading in the early text by the future pope: “Communism, as a higher ethical principle of possessing material goods, makes higher ethical demands on people. The principle of private property respects the actual state of human nature.” The case gets even more complicated because “communism” is not reduced here only to twentieth-century communism, which was an attempt to implement Marxist theories, but is for him a general idea, a very technical one. Most of what Wojtyła and Piwowarczyk wrote about communism concerns this general notion of communism—community life and the communal possession of goods. In this general sense the first Christian communities might be called “communist.” There is the example of the Jerusalem commune, where disciples of Jesus “had everything in common” (Acts 4:32). Using this criterion, some might say figures as diverse as St. Ambrose, John Chrysostom, and St. Albert can also be regarded as advocates of communism. After all, communal monastic life is communist in its substance. But this meaning should not be confused with modern atheistic communism. Private property is regarded as a natural and necessary institution, but the communal possession of goods and communal life are the supernatural flowering of the natural order and are the ideal of the Christian life. This ideal is within a horizon toward which one should aspire, but it is impossible to attain globally, and all attempts to impose it at the global level border upon fantasy. Wojtyła writes in his lectures on social ethics that the communist ideal: Necessarily demands brotherhood from people, without which the communal possession of goods seems impossible. This brotherhood is nothing else but the Christian idea of social love. Thus, in this way, the ideal of social love as the zenith of Christian ethical teaching converges with the communist ideal at its peak, when it comes to the realization of this ideal in the area of possessing material goods by people. 
Because of that when choosing between ideals, the Church must choose the communist ideal over the principle of private property (300).[1] But let us note, the young Wojtyła is still speaking about the “ideal,” not about any earthly, existing communism proposed by materialist ideologues. The Christian teaching about poverty and the detachment from material goods favors this very ideal. But it is based on the subjective will of those individuals who make an effort to pursue this calling within Christian communities—it is not forced upon them and everyone else by a regime. Being guided by love is a necessary precondition for Christians to implement this ideal, but love cannot be imposed coercively. Furthermore, this ideal does not in any way nullify private property, but it, if you will, surpasses it. Therefore, we should understand that the renouncement of private ownership in favor of communal ownership must come from the initiative of free individuals (or better: persons) and communities. Wojtyła’s original addition to the subject of communism is his observation that “the principle of private property given the current state of human nature” (304) allows for the abuse because of the Fall. Moreover, he says that the Catholic ethic by accepting the principle of private property must be prepared for the accusation that it legitimizes with its authority the institution which must result in “immense moral evil in human life” (304). This danger is, however, disarmed by the Catholic principle of using property for the common good, which is prior to that of private property and to which the latter is subordinated. And what about capitalism? How much is it justified to say that Wojtyła, later John Paul II, was critical towards it? Moreover, what do we mean when we speak of “capitalism,” because the term itself is already ambiguous? John Cort, the author of Christian Socialism: An Informal History , sees the papal definition of capitalism in Laborem Exercens in the following passage from the encyclical: There is a confusion or even a reversal of the order laid down from the beginning by the words of the Book of Genesis: man is treated as an instrument of production, whereas he—he alone, independently of the work he does—ought to be treated as the effective subject of work and its true maker and creator (§7). Are we justified in accepting Cort's singling out this fragment as the normative definition of capitalism for the pope's thought? Let us look at his writings from the 1950s to see. Wojtyła repeatedly stresses that capitalist enterprise is guided by the rule of “profit” and by “making profit.” And work “falls to the role of a tool, a means, which capital uses for its gain of profit” (364), thus “capitalism definitely subordinates humans to things . . . for profit is the main driving force of economic action, it is its primary and superior end” (365). Capitalism is a child of liberalism which is focused on the good of the individual, and “in capitalism this good of the individual appears as the economic profit of the enterprise” (366). Young Wojtyła dedicates three pages to the critique of the “spirit of capitalism” to show how it is inconsistent with Christianity. This is his original addition, which does not have an equivalent in Piwowarczyk’s text. One could speculate that he was influenced by Stefan Cardinal Wyszyński, because there are some resemblances with his Kościół a duch kapitalizmu (The Church and the Spirit of Capitalism). 
In those paragraphs Wojtyła writes about the injustice and the abuse of women and about child labor of which “The reading of The Capital provides us with abundance” (366). He is much more radical than Piwowarczyk, strongly emphasizing that the tradition and the statements of the Church contradict capitalism, either as a socio-economic system or a system of values. For the Church, capitalism is a purely materialistic system, focused on the accumulation of wealth, even at the expense of the dignity of persons. Growing out of the secular liberal doctrine it is based on a false concept of human freedom. The conception of freedom that places the atomized individual at the center of it does not have anything in common with Christian freedom, which is a liberation from the constraints of sin toward freedom in obedience to God. The goal in the latter is salvation, not accumulating earthly wealth. Consequently, the capitalist system thus understood is, in its core, not that much different from real-existing communism, which is defined by Wojtyła in his early writings and which in his later writings likens to a nationalist capitalism. In his last interview with Vittorio Possenti, just before becoming pope, Wojtyła said that “liberalism and Marxism actually grow from the same root. Namely, materialism in the guise of economism.” He contrasts those two doctrines against Catholic Social Teaching, which is rooted in the Gospel and that makes it completely unique. However, many free-market authors believe that the pope drastically changed his views in Centesimus Annus. Let us now take a quick look at this document to see if their views hold any water. Adrian Pabst says that there are two main readings of this document—“a U.S. Neocon reading” and “a reading which reads the affirmation of the free market passage within the context of the Church’s social teaching as a whole, with all the caveats about the need to regulate the markets with reference to the common good.” The first school, to which the First Things community mentioned earlier belongs, puts the utmost emphasis upon the “free-market” parts of this encyclical, distorting its message. The myth of the pro-capitalist encyclical is refuted by the very representatives of liberal economic thought, for whom Christian revelation is not in any way the central point of reference, as it obviously is in the case of the pope saint. For example, the document was commented upon by Milton Friedman, one of the main representatives of Chicago School of laissez-faire economics, whose thought, according to some biographers, supposedly inspired John Paul II. Although in his 1991 article Friedman notes that the Polish Pope rejects communism and real-existing socialism, he is utterly disappointed with the affirmation of unions, and workers being called “people of good will.” For Friedman this means that the Pope is not “immune to the influence of Marx.” He is also very concerned with the pope’s attacks on consumerism and consumer society. He is even disturbed by statements of traditional Catholic social ethics, for example, that material possessions should not be considered as something private but common to all, that “there are many human needs which find no place on the market,” that there are some collective goods, a just wage, etc. which should be defended by the state. 
The main reason behind this incompatibility of the liberal concept of free-market capitalism with the concept of the free market defined by Centesimus Annus can be summarized by the last sentences from Friedman’s essay: But I must confess that one high-minded sentiment, passed off as if it were a self-evident proposition, sent shivers down my back: “obedience to the truth about God and man is the first condition of freedom.” Whose “truth”? Decided by whom? Echoes of the Spanish Inquisition? The previously mentioned problem emerges yet again in Friedman’s statement: the concept of freedom in the Christian tradition is radically different from the one in the liberal capitalist tradition. We cannot speak of a free market when we forget about this fundamental matter, which is the cornerstone of the whole Christian theoretical construction according to which human life should be ordered. As we can see, the definition of capitalism does not change significantly in the thought of John Paul II/Wojtyła—rather it becomes more nuanced. There is still a harsh critique of profit-oriented action (which is characteristic of all Catholic Social Teaching), as well as an emphasis on the primacy of work and the need for guaranteed access to fundamental human needs. However, he says that “capitalism” can be understood as an “economic system which recognizes the fundamental and positive role of business, the market, private property and the resulting responsibility for the means of production, as well as free human creativity in the economic sector,” for which the pope finds different names like a “business economy,” a “market economy,” or a “free economy” (Centesimus Annus, §42). Still, John Paul II states that this kind of freedom in an economic order definitely must be “circumscribed within a strong juridical framework which places it at the service of human freedom in its totality”—not the thin liberal-capitalist freedom as we know it in our late-capitalist societies. Capitalism based upon a liberal ideology is blind to exploitation and human misery. The pope warns in his encyclical not to be deluded, after the collapse of communism, by a “radical capitalistic ideology,” which is dominated by blind faith that all the problems could be solved by the “free development of market forces.” In the face of a new challenge he points out that “it is unacceptable to say that the defeat of so-called ‘Real Socialism’ leaves capitalism as the only model of economic organization” (Centesimus Annus, §35). In the age of the Polish post-1989 economic reforms John Paul II did not give up his critique of unfettered capitalism (as many of the former leaders of the Solidarity movement actually did). He warned about the dangers of free-market capitalism in homilies during his pilgrimages to Poland, for example in 1991 and in 1997 in Legnica. In Lubaczow during a 1991 visit he warned against taking shortcuts in economic reforms by surrendering moral guidelines, exploiting workers, and pursuing only wealth. In Legnica he taught about the negative effects of this reform, about mass unemployment and the masses of people left behind because they became mere obstacles on the way to prosperity. Exploitation and misery are and will be fruits of free market forces unrestricted by ethical responsibility. In the light of John Paul II’s legacy it would be false to maintain, as his many neoconservative readers still do, that his teaching on capitalism changed significantly as the result of the collapse of communism. 
Capitalism, a fruit of liberalism, is still a threat to human dignity and life. Free economic activity, based upon the Christian idea of freedom, is something radically different. The error inherent in real-existing capitalism is similar to that of communism since both are based upon the secular ideal of brotherhood. Yet just as Christian freedom is something fundamentally different from liberal freedom of the individual, so secular brotherhood is something radically different from Christian caritas. The consequences of original sin do not allow for the realization of utopian projects—whether they be those of capitalism with its voluntaristic freedom as a determinant for every relation, or communism, with an absolute community of goods and lack of any private property. And although “the communist ideal,” as Wojtyła wrote along with Piwowarczyk and other Catholic authors, is closer to the ideal of Christian life it cannot be in any way joined with the two atheistic utopias of the twentieth century: communism and capitalism. From the perspective of Catholic Social Teaching, the conflict between capitalism and communism is not central, because these totalitarian systems grow from one materialistic root. The real battle is between “godless capitalism/communism” and the social teaching of the Church whose roots lie in revelation. The twentieth century witnessed the clash of two powerful godless systems, communist and capitalist. The latter may have won, but Christians should not be deluded into thinking that capitalism is a surrogate for the calling of Christian life, as Pope St. John Paul II warned throughout his life. EDITORIAL NOTE: A version of this essay originally appeared in Polish in Wszystko Co Najważniejsze, used by permission. [1] Here Wojtyła paraphrases R.P. Ducatillon OP, who is originally quoted by Piwowarczyk.
1
How to reduce your AWS bill up to 60%
Let’s face it. Once you have consumed your free credit, AWS costs an arm and a leg. This is the price to pay for high-quality services. But how can you reduce your costs without sacrificing quality? This post will show you how to reduce your bill by up to 60% by combining four built-in features in Qovery. Romaric Philogène, CEO and co-founder of Qovery · February 20, 2021 · 3 min read. Romaric has 10+ years of experience in R&D. From the Ad-Tech to the financial industry, he has deep expertise in highly-reliable and performant systems. Since this article was written, we have partnered with Usage AI - a solution that helps our users to reduce their AWS bill by up to 57% in no time. Check out our partnership announcement. There are three categories of costs on AWS: “data transfer,” “compute,” and “storage.” Qovery heavily optimizes “compute” and “storage” costs; data transfer is application-dependent. Here are the four strategies to reduce your AWS bill. Ephemeral environments Cost reduction: up to 90% on your development environments Ephemeral environments are also sometimes called “dynamic environments,” “temporary environments,” “on-demand environments,” or “short-lived environments.” The idea is that instead of environments “hanging around” waiting for someone to use them, Qovery is responsible for spawning and destroying the environments they will run against. Qovery ephemeral environments are convenient for feature development, PR validation, and bug fixing. By nature, they can drastically reduce the cost of your AWS bill. For example: with Qovery ephemeral environments, you can automatically destroy a development environment if it has not been used for 30 minutes. Switching on ephemeral environments in Qovery is as simple as one click. Advantages: Save up to 90% on your development environment costs. Only used environments are running. Downsides: Not applicable to your production environment. It can take some time to start an environment (cold start). Start and stop schedules Cost reduction: up to 77% on your development environments (running roughly 8 hours per day, Monday to Friday) Similar to ephemeral environments, the idea is to shut down your unused environments. For instance, employees usually work between 9 am and 5 pm, Monday to Friday. Qovery provides everything you need to automatically shut down your development environments outside of working hours and start them back up when the workday begins. With Qovery, your development environment runs only 40 hours instead of 168 hours in one week, which helps you to save 77% of your costs. Advantages: Development environments are shut down outside of your working hours. Finally, you can take advantage of the Cloud with dynamic resource provisioning :) Downsides: Not applicable to your production environment. It can take some time to start an environment (cold start). Application auto-scaling Cost reduction: up to 5% on your production and development environments Auto-scaling enables you to upsize/downsize the resources of your application automatically. Auto-scaling also allows for lower cost and reliable performance by seamlessly adding and removing instances as demand spikes and drops. As such, auto-scaling provides consistency despite the dynamic and, at times, unpredictable demand for applications. Qovery manages horizontal scaling for applications and vertical scaling for databases. 
Auto-scaling means that between one and n instances are running, depending on the workload. Qovery manages auto-scaling out of the box. You can expect up to a 5% cost reduction. Advantages: Lowers the cost for applications with unpredictable workloads. Works on production and development environments. Downsides: Small cost reduction. Infrastructure auto-scaling Cost reduction: up to 100% on your development environments Infrastructure auto-scaling is similar to application auto-scaling but at the infrastructure level. Qovery on AWS relies on EKS and can destroy a development cluster if it is not used. Advantages: Development clusters are destroyed when not used. Higher cost reduction than “Start and stop schedules.” Downsides: Initializing the development cluster can take up to 30 minutes.
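To make the savings math above concrete, here is a quick back-of-the-envelope check in Python. This is an illustrative sketch, not Qovery tooling; it simply assumes that compute cost scales with the number of hours an environment is running per week, and the weekly usage figures are assumptions.

# Rough check of the savings figures quoted above.
# Assumption: compute cost is proportional to hours running per week.
HOURS_PER_WEEK = 7 * 24  # 168

def savings(hours_running):
    """Fraction of weekly compute cost saved versus running 24/7."""
    return 1 - hours_running / HOURS_PER_WEEK

print(f"Start/stop schedule (8h x 5 days): {savings(8 * 5):.0%}")  # ~76%, matching the 'up to 77%' figure
print(f"Ephemeral env used ~17h per week:  {savings(17):.0%}")     # ~90%, matching the 'up to 90%' figure

The exact numbers depend on how long your environments actually stay up, which is why these percentages are quoted as upper bounds.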
3
Wyoming 'DAO law' to go into effect in July after receiving final approval
by Michael McSweeney Legislation introduced earlier this year in Wyoming to create a legal link between decentralized autonomous organizations (DAOs) and the state government is now the law of the land. The law, first introduced in February, won final approval in the legislature earlier this month before being signed by Governor Mark Gordon on April 21, according to public records. As previously reported, the law allows DAOs to become registered as limited liability companies, or LLCs, in the state. It goes into effect in July. More broadly, the law serves as a novel bridge between the world of traditional business structures and DAOs, which are governed by way of blockchain-based smart contracts. Though the history of DAOs is a bumpy one, proponents of the law say it provides a new degree of legal clarity for such organizations. "Digital asset stakeholders made it clear to us they were concerned about facing general partnership liability in the absence of a well-defined corporate structure. Our DAO LLC legislation should dispel that concern," Wyoming Sen. Chris Rothfuss told CoinDesk earlier this week. The measure has also drawn criticism from the legal world.
1
Solar-powered crypto mining with a Raspberry Pi
So you're ready to cash in on this cryptocurrency thing, but you're also concerned about the electricity consumed in order to mine your own crypto? Well, I have good news and bad news for you. The good news is cryptocurrency mining on solar power is entirely possible. In fact, you could argue it's critical for the sustainability of cryptocurrency (and other Blockchain-related) activities. According to the Sierra Club the annual power consumption of Bitcoin-related activities alone is comparable to a country the size of Argentina . Not to mention the associated production of 37+ megatons of CO2 each year. The bad news ? Considering the raw power requirements for cryptocurrency mining AND the fact that we're going to use a Raspberry Pi SBC for this project, we probably won't be rolling like Scrooge McDuck any time soon. Is this prospect of crypto mining with a Raspberry Pi as ridiculous as it sounds? Probably! But let's not let reason stop us from building something fun. You've likely heard the phrase, "sunk cost fallacy". This is the concept of throwing good money at a bad idea, only because you've already invested money in said bad idea. In an ideal scenario, we aren't investing new hardware in crypto mining. Existing hardware is a "sunk cost", because we already own it. If we can collectively pretend this is the case, let's take a look at how we're going to build out a crypto miner on a Raspberry Pi 4 Model B and collect some virtual coin. We want this solution to function in perpetuity without manual intervention, so running our RPi exclusively off of solar is a non-starter. Powering anything that requires consistent voltage off of solar directly is a bad idea, what, with nights and cloudy days to consider. This is where a cool little product called the PiJuice comes into play. PiJuice is a Pi HAT with an onboard 1820mAh battery and a micro-USB connector for managing a solar array. With the PiJuice software, we can define battery charge levels at which to gracefully shutdown and boot up our RPi. Using the PiJuice calculator , it looks like we can expect at best 1.34 hours of runtime on a Raspberry Pi 4 off the provided battery alone. Maxing out the CPU on crypto mining will likely result in far less time. With the PiJuice in place for power management, we'll then want a sizable solar array to charge the battery on sunny days. I already had a 42w solar array from when I built a remotely-running ML bird identification system . Since the array has a micro USB connector, I can connect it directly to the PiJuice HAT. It's really as simple as that! Cellular? What does cellular networking have to do with this project? Since this project involves running a headless RPi, I'll have no immediate visual indication of how mining is progressing. So, I'd like to create a cloud-based dashboard of the hash rates generated by my RPi and chart them over time. I also like the fact that cellular networking makes any cloud-connected project portable! To accomplish this, I'm going to use the Notecard from Blues Wireless. It's a cellular and GPS-enabled device-to-cloud data-pump that comes with 500 MB of data and 10 years of cellular for $49. The Notecard itself is a tiny 30mm x 34mm SoM with an m.2 connector. To make integration in an existing project easier, Blues Wireless provides host boards called Notecarriers . I'll be using the Notecarrier Pi HAT for this project. Also, the Notecard ships preconfigured to communicate with Notehub.io , which enables secure data flow from device-to-cloud. 
Notecards are assigned to a project in Notehub. Notehub can then route data from these projects to your cloud of choice (e.g. AWS, Azure, Google Cloud, among others). Enable Your Virtual Pick Axe With both the Notecarrier-Pi HAT and the PiJuice HAT installed properly on the Raspberry Pi, we are ready to begin software setup. As with any Raspberry Pi project, it's a good idea to make sure all of your installed packages are up-to-date: sudo apt update sudo apt full-upgrade sudo apt-get clean Next, install the PiJuice software: sudo apt-get install pijuice-gui Reboot your Raspberry Pi and then head to Preferences > PiJuice Settings . Click the Configure HAT button and make sure the correct battery on your PiJuice is selected in the Battery tab: Next, we'll want to create two new System Tasks . One to shut down the RPi when the charge is too low and one to boot it up when the battery has enough charge again. Change Wakeup on charge to a high value (I'm using 80%) and Minimum charge to a low value (I'm using 10%). Mining cryptocurrency is nothing new, so we can use one of a variety of Linux-compatible CLI crypto miners. I also decided to mine Monero , which is a cryptocurrency that is still (in theory) profitable to mine with a CPU only. To start, we need to install raspbian-nspawn-64 , which requires you to be on the 64-bit kernel of Raspbian. sudo apt-get install -y raspbian-nspawn-64 NOTE: If you're not using the official 64-bit kernel, you will be prompted to enable it. Next, issue the following command to start using the 64-bit container: ds64-shell Now we need to install our miner, XMRig , from within this 64-bit shell. Install all of the build dependencies: sudo apt-get install git build-essential cmake libuv1-dev libssl-dev libhwloc-dev Clone the XMRig repo locally: git clone https://github.com/xmrig/xmrig.git Issue the following commands to complete the build (note that the build/make steps may take some time): cd xmrig mkdir build cd build cmake .. make Now we can create a config.json file to specify some configuration options for the miner. Use the XMRig configuration wizard to create a starter config file for you. Mine ended up looking like this: { "autosave": true, "cpu": true, "opencl": false, "cuda": false, "pools": [ { "url": "pool.supportxmr.com:443", "user": "MY_MONERO_WALLET_ID", "pass": "RPi", "keepalive": true, "tls": true } ] } NOTE: You can create your own wallet ID by installing a Monero wallet In order to enable logging, add the following line to the JSON object: "log-file": "/home/pi/Documents/apps/xmrig/build/log.txt", Save this as config.json in your build directory and create an empty log.txt file in the same directory. If you'd like, you can test out your miner now with this command: ./xmrig -c "config.json" Assuming everything is working properly, you should start seeing some activity in your terminal window: Now that we've proven our mining software functions, we'll want an easy way to monitor the production of our "mining rig". By now the log.txt file should have a healthy set of data. Let's write a short Python script that will read the log on a periodic basis and pump relevant data to the cloud. Back on the Raspberry Pi ( NOT in the 64-bit container), install python-periphery (for I2C), python-dateutil (for working with dates), and note-python (for interfacing with the Notecard): pip3 install python-periphery pip3 install python-dateutil pip3 install note-python NOTE: The full Python source is available here on GitHub ! 
In a new Python file, we'll want to do three things: We initialize the Notecard by specifying a productUID (which is the name of a Notehub project that we'll create in a minute) and setting the cellular connection mode to continuous . Normally in battery-conscious environments you would use periodic mode to reduce the frequency of cellular connections. However, in this project, the draw of our cellular modem is the least of our concerns since the mining software is going to use the vast majority of our battery. productUID = keys.PRODUCT_UID port = I2C("/dev/i2c-1") card = notecard.OpenI2C(port, 0, 0) req = {"req": "hub.set"} req["product"] = productUID req["mode"] = "continuous" rsp = card.Transaction(req) Here is an example of a log file line that contains the 10s hash rate (along with a timestamp): [2021-04-20 14:55:02.085] miner speed 10s/60s/15m 77.37 76.91 n/a H/s max 77.87 H/s Our main function will iterate through the log file to identify data that is relevant to the dashboard we are trying to create. For this project I only care about the ongoing hash rate of my miner. def main(start_timestamp): """ loops through log file to get crypto hash rate """ with open("/home/pi/Documents/apps/xmrig/build/log.txt") as fp: lines = fp.readlines() for line in lines: # check if this line starts with a valid date and contains "miner" line_timestamp = line[1:19] if is_date(line_timestamp) and "miner" in line: dt = datetime.strptime(line_timestamp, '%Y-%m-%d %H:%M:%S') if dt >= start_timestamp: send_note(line, dt) time.sleep(60) # check again in 1 minute main(datetime.now() - timedelta(minutes=1)) Finally, we want to send relevant data to the cloud: def send_note(line, dt): """ extract 10s H/s value from log and send to notehub.io """ hash_rate = line[54:line.find(" ", 54) - 1] ms_time = dt.timestamp() * 1000 req = {"req": "note.add"} req["file"] = "crypto.qo" req["body"] = {"rate": hash_rate, "time": ms_time} req["sync"] = True rsp = card.Transaction(req) A discerning eye will notice that everything into and out of the Notecard is JSON-based, making it incredibly developer-friendly. For example, the generated JSON from the above note.add request might look something like this: { "req":"note.add", "file":"crypto.qo", "body":{ "rate":77.37, "time":1619122680000 } } Save the Python script and a keys.py file into the same directory as the log.txt file. Recall that Notehub enables synchronization of data between your device and the cloud. To get started with a free Notehub account: Navigate to notehub.io and login, or create a new account. New Project card, give your project a name and ProductUID . ProductUID keys.py : PRODUCT_UID = "com.your.company:projectname" That's it! When the Python script runs, Notecard will associate itself with this Notehub project. Any notes (events) you send will show up in the Events panel when received: Next, we want to route our data to a cloud dashboard. I've used Ubidots in the past with success, so I created a new route from Notehub to Ubidots . You can view full instructions for creating a Notehub to Ubidots route here . Since our Raspberry Pi will be cycling between on and off states (depending on the battery charge), we will want to make sure our miner and Python script also start on boot. You can find the instructions on how to use systemd to automatically run xmrig and the Python script on the Raspberry Pi forum . 
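One quick aside before the systemd setup: the main() loop above calls an is_date() helper that is not shown in these excerpts (the full source is on GitHub). A minimal sketch of what such a helper could look like, using the python-dateutil package installed earlier:

from dateutil.parser import parse

def is_date(text):
    """Return True if `text` can be parsed as a date/time, False otherwise."""
    try:
        parse(text)
        return True
    except (ValueError, OverflowError):
        return False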
For reference, here are the two service files I had to create: [Unit] Description=PiMiner After=multi-user.target [Service] Type=idle ExecStart=/usr/bin/ds64-run /home/pi/Documents/apps/xmrig/build/xmrig -c "config.json" [Install] WantedBy=multi-user.target [Unit] Description=PiMinerPython After=multi-user.target [Service] Type=idle ExecStart=/usr/bin/python3 /home/pi/Documents/apps/xmrig/build/crypto-monitor.py [Install] WantedBy=multi-user.target And what happened when I deployed this off-grid? Well, it's another good news, bad news situation! The good news ? Technically speaking, the project worked. With my data actively routing to Ubidots, I was able to create a dashboard to view my results over time: The bad news ? It was clear that I wasn't going to get too much mining done on a 1820 mAh battery. In fact, I averaged less than an hour of mining before the PiJuice stepped in to shut things down (at least until the solar array was able to charge the battery back up a bit). What about the elephant in the room? How much Monero was I able to earn? After running the miner for a few hours total, I pulled in a whopping 0.00000178 Monero. So maybe we won't be getting rich on crypto with the Raspberry Pi. In fact, I would highly recommend avoiding crypto mining at all, unless you have access to clean energy. #EarthDay and all. I do hope, however, you're able to use the PiJuice and Notecard in a future solar-powered project!
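If you are curious why the payout is so small, a back-of-the-envelope estimate makes it clear. Only the ~77 H/s hash rate comes from the logs above; the network hash rate, block reward, and block time below are rough assumed figures for illustration, not measurements:

# Back-of-the-envelope Monero earnings estimate (all network-level figures assumed).
my_hashrate = 77.0          # H/s, from the XMRig log above
network_hashrate = 2.5e9    # H/s, assumed network-wide hash rate
block_reward = 1.0          # XMR per block, assumed
blocks_per_hour = 30        # assumed ~2-minute block time

share = my_hashrate / network_hashrate
xmr_per_hour = share * block_reward * blocks_per_hour
print(f"Roughly {xmr_per_hour:.8f} XMR per hour of mining")

At that rate, a few hours of mining lands in the same ballpark as the 0.00000178 XMR reported above, before even accounting for pool fees.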
5
No-JavaScript Fingerprinting
Fingerprinting is a way of identifying browsers without the use of cookies or data storage. Created using properties like language and installed fonts, your fingerprint stays the same even if your browser is in incognito mode. This demo further illustrates that fingerprinting is possible — even without JavaScript and cookies. To verify this, disable JavaScript and cookies, then refresh your browser. Your fingerprint will remain unchanged.
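To make the idea concrete, here is a minimal server-side sketch of header-based fingerprinting in Python. It is not the demo's actual implementation; Flask and the particular headers chosen are assumptions for illustration. The point is that everything used here is sent passively by the browser, with no JavaScript or cookies involved:

# Minimal illustration of no-JS, no-cookie fingerprinting from request headers.
# This is a sketch, not the demo's implementation; Flask and the chosen
# headers are assumptions for illustration only.
import hashlib
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def fingerprint():
    # Headers the browser sends with every request, no script required.
    parts = [
        request.headers.get("User-Agent", ""),
        request.headers.get("Accept", ""),
        request.headers.get("Accept-Language", ""),
        request.headers.get("Accept-Encoding", ""),
    ]
    fp = hashlib.sha256("|".join(parts).encode()).hexdigest()[:16]
    return f"Your fingerprint: {fp}"

Real no-JS fingerprinting demos typically add further passive signals, such as which CSS-referenced fonts or media-query resources the browser fetches, which is how installed fonts can be detected without running any script.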
3
Using LiveView and GenServers to track BTC price
My assumption was that the price of Bitcoin varies from one cryptocurrency exchange to another and that I could earn money just by buying and selling BTC on different platforms. To check that, I could have pulled prices from the last few days from a few exchanges and done the comparison in Excel, or I could use this idea to improve my knowledge of LiveView and GenServers. As you have already guessed, I picked the latter option. Our solution consists of two applications: ebisu and ebisu_web. The first is responsible for periodically checking the BTC price on exchanges, saving this data in the database, and notifying ebisu_web about new prices. Ebisu_web is a simple web application that shows the BTC price from a few exchanges in real time. To avoid unnecessary complexity, I'm going to describe the solution only for the BitBay exchange. To fetch data from exchanges we use HTTPoison, which is a simple yet powerful HTTP client. Ebisu.Utils.Http is a wrapper around this library. It is done that way so we can reuse it in multiple places and mock it in tests. This component should be moved to a separate application where we could test it against the real API, while the tests for the ebisu app use mocks. To simplify the code we have yet another layer of abstraction for the HTTP client: each exchange has its own specific client. In the BitBay example we are going to fetch the BTC price in PLN, which we then convert to USD. At the heart of our application is the exchange module, whose main responsibility is to get the last price of BTC, convert it to USD, and save it to the database. Data is stored in PostgreSQL using Ecto, so we also have to define a schema and a migration. The last part of our ebisu application is a GenServer called Ticker, which we use to periodically call the exchange module (which gets and saves information about the BTC price) and broadcast this information to all subscribers using Phoenix.PubSub. Right now we have only one subscriber, which is the LiveView that lives in the ebisu_web app. To test our GenServer we have to make it predictable. When we start our worker it calls an external API, and that call can take a different amount of time on each call. We use the Mox library to always return the same data and avoid waiting for a response. Then we start the worker, wait for 150 ms, and check whether we have any tickers in the database. Our web application consists of a LiveView page which gets tickers at the initial load and then passes tickers received from the GenServer to the web page. On the client side, a JS hook is invoked on new data and the graph is updated. To display exchange information we use the chart.js library. Our view consists of a canvas definition where we store the initial array of tickers. We define our hook in the app.js file. In the mounted function we specify how the chart should look. We also define handleEvent, which is used to update the chart with new tickers. This function is called every time we receive new data from an exchange. You can view the full code at: https://github.com/elpikel/ebisu
24
Big money bought the forests. Small timber communities are paying the price
Wall Street investment funds took control of Oregon’s private forests. Now, wealthy timber corporations reap the benefits of tax cuts that have cost rural counties billions. This article was produced in partnership with OPB and The Oregonian/OregonLive. OPB is a member of the ProPublica Local Reporting Network. FALLS CITY, Ore. — A few hundred feet past this Oregon timber town, a curtain of Douglas fir trees opens to an expanse of skinny stumps. The hillside has been clear-cut, with thousands of trees leveled at once. Around the bend is another clear-cut nearly twice its size, then another, patches of desert brown carved into the forest for miles. Logging is booming around Falls City, a town of about 1,000 residents in the Oregon Coast Range. More trees are cut in the county today than decades ago when a sawmill hummed on Main Street and timber workers and their families filled the now-closed cafes, grocery stores and shops selling home appliances, sporting goods and feed for livestock. But the jobs and services have dried up, and the town is going broke. The library closed two years ago. And as many as half of the families in Falls City live on weekly food deliveries from the Mountain Gospel Fellowship. “You’re left still with these companies that have reaped these benefits, but those small cities that have supported them over the years are left in the dust,” Mac Corthell, the city manager, said. For decades, politicians, suit-and-tie timber executives and caulk-booted tree fallers alike have blamed the federal government and urban environmental advocates for kneecapping the state’s most important industry. Timber sales plummeted in the 1990s after the federal government dramatically reduced logging in national forests in response to protests and lawsuits to protect the northern spotted owl under the Endangered Species Act and other conservation laws. The drop left thousands of Oregonians without jobs, and counties lost hundreds of millions of dollars in annual revenue. But the singularly focused narrative, the only one most Oregonians know, masked another devastating shift for towns like Falls City. Wall Street real estate trusts and investment funds began gaining control over the state’s private forestlands. They profited at the expense of rural communities by logging more aggressively with fewer environmental protections than in neighboring states, while reaping the benefits of timber tax cuts that have cost counties at least $3 billion in the past three decades, an investigation by OPB, The Oregonian/OregonLive and ProPublica found. Half of the 18 counties in Oregon’s timber-dominant region lost more money from tax cuts on private forests than from the reduction of logging on federal lands, the investigation shows. Private timber owners used to pay what was known as a severance tax, which was based on the value of the trees they logged. But the tax, which helped fund schools and local governments, was eliminated for all but the smallest timber owners, who can choose to pay it as a means to further reduce annual property taxes. 
The total value of timber logged on private lands since 1991 is approximately $67 billion when adjusted for inflation, according to an analysis of data from Oregon’s Department of Forestry. If the state’s severance tax had not been phased out, companies would have paid an estimated $3 billion during the same period. Instead, cities and counties collected less than a third of that amount, or roughly $871 million. Polk County, home to Falls City, has lost approximately $29 million in revenue from timber sales on federal land. By comparison, the elimination of the severance tax and lower property taxes for private timber companies have cost the county at least $100 million. “You have that tension between this industry that still employs people, but we’re losing some of the benefits of that relationship,” Falls City Mayor Jeremy Gordon said. “As those jobs diminish, there’s less and less support to subsidize that industry in the community.” Oregon’s connection to the timber industry is so tightly knit that casinos, high school mascots and coffee roasters take their names from mills, loggers and stumps. The state Capitol is domed by a golden pioneer carrying an ax, and its House chamber carpeting is adorned with trees. The mascot of the Portland Timbers, a Major League Soccer team, is a logger who revs a chainsaw and cuts a round off a Douglas fir tree after every home goal. While the industry today still rakes in billions of dollars annually, it’s starkly different from the one that helped build and enrich the state. Oregon lowered taxes and maintained weaker environmental protections on private forestlands than neighboring states in exchange for jobs and economic investment from the timber industry. Despite such concessions, the country’s top lumber-producing state has fewer forest-sector jobs per acre and collects a smaller share of logging profits than Washington or California. If Oregon taxed timber owners the same as its neighbors, which are also top lumber producers with many of the same companies, it would generate tens of millions of dollars more for local governments. Timber once employed 1 in every 10 working Oregonians and pumped over $120 million per year into schools and county governments through severance and property taxes. Now, it employs 1 in every 50 working residents and pays about $24 million in severance and property taxes that go directly back to communities. The profits are concentrated with a small number of companies controlled by real estate trusts, investment funds and wealthy timber families. Small timber owners, who grow forests that are older and more biologically diverse than what corporate owners manage, have sold off hundreds of thousands of acres. In western Oregon, at least 40% of private forestlands are now owned by investment companies that maximize profits by purchasing large swaths of forestland, cutting trees on a more rapid cycle than decades ago, exporting additional timber overseas instead of using local workers to mill them and then selling the properties after they’ve been logged. Such intensive timber farming contributes to global warming because younger trees don’t store carbon dioxide as well as older ones. It also relies heavily on the use of herbicides and fertilizers, magnifies drought conditions and degrades habitat for wildlife such as threatened salmon and native songbirds. 
Jerry Anderson, region manager for Hancock Forest Management, one of the largest timber investment companies in Oregon, said local leadership makes decisions about the best practices for the land despite responsibilities to investors. “There’s nobody from outside this area that has come in and told us what to do on these individual plantations. Those are local decisions,” said Anderson, who has been managing land in Polk County under various companies for the past 40 years. The last eight years have been with Hancock. “I think our decision-making is very measured.” In investor materials, Hancock, which belongs to the publicly traded, $25 billion Canadian Manulife Financial, says that it is well-equipped for the shift from managing natural forests to plantations of trees designed to grow as fast and as straight as possible, like arrows jutting out from the ground. From a distance, tree plantations can be confused for natural forests. Oregon vistas still boast hundreds of thousands of acres of green treetops. But, on the ground, plantations of trees crammed together are often eerily barren, devoid of lush vegetation and wildlife. Former Oregon Gov. John Kitzhaber said that he and his advisers were alarmed by the shift toward investor-driven forestry during his last of three terms in office. By then, forest ecologists, the U.S. Forest Service and even a former chief investment officer for Hancock had published papers warning that investor-driven forestry was ecologically damaging and less capable of sustaining rural communities. “They have a completely different business model,” Kitzhaber, a Democrat, said. Kitzhaber, who received nearly $200,000 in contributions from timber-connected donors while in office, supported multiple industry-backed measures during his tenure. He led a plan to save Oregon’s salmon that relied on voluntary measures from timber companies instead of regulations, and he signed into law a massive tax cut for the industry that’s still felt in many counties. “The current state isn’t working,” Kitzhaber said in an interview. It may benefit investors, he said, “but it’s not working for small mill owners. It’s not working for rural communities. They don't have any control of their future.” From his favorite spot on a hill near Falls City, Ed Friedow can see what he refers to as the big picture: the Oregon coast, rolling hills, a national forest and industrial lands now managed mostly by timber investment companies. Friedow, a logger who grew up on a farm outside of town, watched as smaller timber companies from his childhood closed in the aftermath of the spotted owl protections, leaving control of the industry with larger companies that were more equipped to scale production. “All of a sudden, it was just like a takeover situation,” Friedow said. At the same time the changes were happening in Oregon, the timber industry was emerging from a nationwide recession that caused widespread bankruptcies in the 1980s. Many debt-laden companies began selling off forestlands. Meanwhile, changes in the federal tax code made timber an attractive investment that wouldn’t crash with the stock market. Under federal tax law, pension funds and other investors can acquire forestlands without paying the corporate taxes incurred by traditional timber companies that mill their own products. Those corporate taxes have reached 35%. Investors in the company instead pay a capital gains tax closer to 15%. 
In the 1990s, as federal logging plummeted, timber prices skyrocketed, making those investments look even smarter, said Brooks Mendell, president of the forest investment consultancy Forisk. “Overnight, private landowners had something that became more valuable,” Mendell said. Investors jumped at the opportunity to own timber, and existing companies like Weyerhaeuser restructured to take advantage of the tax breaks. The longtime Seattle-based timber company converted into a real estate investment trust in 2010. Timber investment companies, a rarity in the 1990s, now control a share of the forestland in western Oregon roughly the size of Delaware and Rhode Island combined. Weyerhaeuser, the largest of such companies, has more than doubled its size in western Oregon over the past 15 years, the investigation by the three news organizations found. The company owns more than 1.5 million of western Oregon’s 6.5 million acres of private forestland. Between 2006 and 2019, the current largest Wall Street-backed logging companies more than doubled the amount of forest land they owned in western Oregon. Despite its growth, Weyerhaeuser employs fewer people than it did two decades ago and has shed most of its mill operations. It has three wood products facilities in Oregon and directly employs about 950 people, fewer than a quarter of the 4,000 employees the company listed in a 2006 news release. The decrease stems from factors that include consolidation and automation of jobs in mills. Just outside of Falls City, Weyerhaeuser owns roughly 21,000 acres. The company controls the road into the forest that leads to public lands and the land surrounding the creeks that supply the town’s drinking water. In 2006, the city temporarily shut down its water treatment plant because it was clogged with muddy runoff from logging operations. Weyerhaeuser spokesman Karl Wirsing said the company remains a good partner to local communities. In the past five years, the company has donated nearly $1.6 million across the state, including $10,000 to the Falls City Fire Department and $16,000 to the Polk County sheriff to help fund a new position that also patrols private forestlands. “We don’t simply do business in Oregon; our people have been living and working across the state since 1902, and we are proud of our role supporting local communities and economies,” Wirsing said in an emailed statement. But not all communities describe the relationship as a beneficial partnership. Corthell, the city manager in Falls City, said it took him nearly two years of phone calls and emails before Weyerhaeuser responded to his requests for help. The stretch of road between the forest and the town is cracked like a jigsaw puzzle. Corthell had hoped that the timber companies that use the road every day could pitch in to help pay for the $200,000 in needed repairs. But he said he didn’t get a meeting with them until after he suggested the road might close if it weren’t repaired. At that meeting in March, representatives for Weyerhaeuser and a few other timber companies told Corthell that they were willing to provide matching funds if the town could secure a state grant. In response to questions about Weyerhaeuser’s delay in returning Corthell’s emails and calls, Wirsing said the company had previously been willing to contribute to the road project but the town never asked for a specific dollar amount. Corthell is now preparing the town’s grant application. 
If the funding doesn’t come through, he doesn’t know where he’ll find the money. Penelope Kaczmarek, 65, spent her childhood smelling freshly cut wood at the family mill and the sulfury wafts of the distant pulp mill through her kitchen window in the coastal fishing town of Newport more than an hour southwest of Falls City. She watched floating logs await their turn at her father’s saw blade, mesmerized as men in hickory shirts, sawn-off jeans and hard hats rolled them across the water. Kaczmarek’s father, W. Stan Ouderkirk, was a logger, small mill owner and Republican member of the Oregon House of Representatives in the 1960s and 1970s. He represented Lincoln County, home to the Siuslaw National Forest and a vibrant commercial fishing port. When a large, out-of-state corporation bought his mill in the mid-1970s, Ouderkirk told his daughter that a rise of corporate ownership and loss of local control would lead to worse outcomes for Oregon’s forests and the people who depended on them. “I fear my father was right,” Kaczmarek said. Lincoln County lost an estimated $108 million in timber payments after the federal government restricted logging on public lands. But the sharp drop in federal forestland revenue is only partly to blame for budget cuts that have led some counties to force-release inmates from jail or reduce sheriff’s patrols to the point that 911 calls for break-ins and assaults went unanswered. Tax cuts for large timber companies that log on private lands cost the county an estimated $122 million over the same period. Before lawmakers began chipping away at the tax through multiple measures, Lincoln County collected an average of $7.5 million a year in severance taxes. Last year, the county received just under $25,000. Now a psychiatric social worker, Kaczmarek sees people with mental illnesses filling local jails because the county doesn’t have the money to provide adequate health services. In therapy sessions, teachers tell her about overcrowded classrooms and school programs cut to the bare minimum. County leaders blame the majority of the financial struggles on the decline in revenue from logging. To avoid crushing cuts in services, communities that already struggle with high poverty and unemployment rates have had to raise taxes on residents and small businesses, said Jaime McGovern, an economist with the state’s Legislative Revenue Office. “If they don’t get approved, then there’s no money there,” McGovern said. “And so, you’ve seen libraries closing, police stations closing.” In the Marcola School District, about 15 miles northeast of Eugene, the elementary school was so dilapidated that voters in 2015 passed a bond to build a new one. The additional funding helped, but it wasn’t enough. The new elementary school is already bursting with students. “That hits home because I volunteer at the school district and I care about my taxes,” Helen Kennedy, a retired attorney, said. “I care about the kids.” Kennedy, who lives on 3.5 acres in the district, saw her property taxes increase by more than 20% after she voted for the bond. Last year, Kennedy paid $1,443 in property taxes, or about $412 per acre. That’s a fraction of what she’d pay in a city like Portland, but nearly 100 times the rate of the district’s biggest landowner. Weyerhaeuser, which owns more than 49,000 acres in the district, paid about $226,000 in property taxes last year, according to county records. That amounts to about $4.60 per acre.
At the rate Kennedy’s land is taxed, the company would have had to pay an additional $20 million. “Holy cannoli,” Kennedy, 64, said about the losses from timber tax cuts. “The old adage that ‘what is good for the timber industry is good for Oregon’ is no longer true.” Hans Radtke knew the loss for counties was coming. Radtke, a member of a gubernatorial task force on timber taxes, sat in a hotel conference room near the state capitol in 1999 listening to lobbyists and timber executives argue that their industry was being unfairly taxed. In the early 1990s, as Oregon voters passed reforms to limit their property taxes, large timber companies successfully lobbied to gradually cut the severance tax in half, lowering their own bills by $30 million a year. But now they wanted to completely eliminate the severance tax. Timber companies argued that since they’d already cut nearly all of the existing forests on their land, and state law required them to plant new trees, they were essentially farmers. And since Oregon didn’t tax crops, it shouldn’t tax trees. As the owner of 100 acres of forestland, Radtke could have personally benefited from the tax cut. But as an economist advising Kitzhaber, the governor at the time, he knew it would devastate rural communities. After several failed attempts to offer changes that would lower industry taxes but avoid eliminating the severance tax altogether, Radtke knew the cut would pass. He turned to the industry lobbyist sitting next to him and said, “You’re fucking us.” “And he just smiled,” Radtke said. The task force dissolved without advancing any recommendations. Months later, Lane Shetterly, a former Republican state representative whose district included Falls City, introduced a bill at the request of the timber industry to phase out the severance tax. The bill contained an increase in forestland property taxes that many believed would lessen the impact of the cut. The Association of Oregon Counties supported it. The school lobby didn’t fight it. The governor signed it. Shetterly, now president of the Oregon Environmental Council, one of the state’s top environmental groups, remembers almost nothing about the bill. “Yeah, man that’s a long time ago,” Shetterly said in a phone interview. Kitzhaber, who vetoed an earlier version before ultimately approving the measure, also doesn’t recall his support of the tax cut. “I don’t question that I did,” Kitzhaber said, “but I can’t remember the context.” Two decades later, Oregonians are still picking up the tab. If Oregon hadn’t phased out its severance tax, timber production in 2018 would have generated an estimated $130 million. The state would have received an estimated $59 million under California’s tax system and $91 million under Washington’s system, the investigation by OPB, The Oregonian/OregonLive and ProPublica found. Unlike Oregon, those states still tax large timber companies for the value of the trees they log. Timber companies continue to pay state taxes that apply to all Oregon businesses, including income taxes and lowered property taxes, kept far below market value as an incentive for residents to own forestland. The companies also pay a flat fee on the volume of logs they harvest. That fee, set in part by a board of timber company representatives, generates about $14 million annually. It funds state forestry agencies and university research instead of local governments. 
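As a quick check on the property tax comparison above, here is a minimal arithmetic sketch using only the figures reported in this story; the variable names and rounding are ours, not from county records.

```python
# Per-acre property tax comparison, using the approximate figures reported above.
kennedy_tax = 1_443            # dollars Kennedy paid last year
kennedy_acres = 3.5
weyerhaeuser_tax = 226_000     # approximate dollars Weyerhaeuser paid last year
weyerhaeuser_acres = 49_000    # "more than 49,000 acres" in the district

kennedy_rate = kennedy_tax / kennedy_acres                  # about $412 per acre
weyerhaeuser_rate = weyerhaeuser_tax / weyerhaeuser_acres   # about $4.60 per acre

# What the company would owe at Kennedy's per-acre rate, beyond what it actually paid.
additional = kennedy_rate * weyerhaeuser_acres - weyerhaeuser_tax

print(f"Kennedy: ${kennedy_rate:,.2f} per acre")
print(f"Weyerhaeuser: ${weyerhaeuser_rate:,.2f} per acre")
print(f"Additional owed at Kennedy's rate: ${additional:,.0f}")   # roughly $20 million
```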
Linc Cannon, former director of taxation for the Oregon Forest & Industries Council, defends the elimination of the severance tax. In many cases, Cannon said, counties didn’t lose as much money because they simply shifted the tax burden to residents and small businesses. Cannon said timber is a crop and should be treated like one. States that tax timber differently are simply wrong, he said. “If you don’t believe timber is a crop, then you can tax it in other ways like Washington does,” Cannon said. A wisp of smoke from a burning pile of logging debris swirled into the fog drift above the jagged hills behind Falls City, home to some of the nation’s most productive timberlands. At each bend in the rocky logging road, Jerry Franklin’s voice rose. Oregon has become a case study for what can happen when state leaders fail to regulate the logging style practiced by investment companies, said Franklin, who is one of the Pacific Northwest’s best-known forest scientists. “This is not stewardship,” Franklin said, pointing to clear-cuts down to skinny stumps, sprayed over with herbicides, desiccated brown plants and streams without a single tree along the banks. “This is exploitation.” Franklin doesn’t object to logging. He and Norm Johnson, another forest scientist with whom he works closely, have drawn the ire of environmental groups for supporting more logging on federal lands, including certain types of clear-cutting. But this, Franklin said, is different. Douglas fir trees, which can live for centuries, are cut after only about 40 years, resulting in lower-quality wood that is worth less. The shorter timetable forces cutting across more acres to produce the same volume, while requiring fewer workers to log and process the wood. At 83, Franklin is older than most of the Douglas firs now growing in Oregon. “They’re wasting it,” Franklin said, his tone matching that of a Sunday preacher, as he looked at clear-cut Weyerhaeuser land. “The incredible capacity of these forests to produce incredible volumes of high-quality wood is wasted. It’s criminal.” In reports to investors, Weyerhaeuser says the average age of a tree cut in the Pacific Northwest is 50, but the company expects a decrease. Some older trees have yet to be logged because of regulations that limit the percentage that can be cut annually, the company states in reports. Weyerhaeuser representatives said the company’s conversion to a real estate investment trust didn’t change its management of forestlands. “We have been practicing and continually improving on this system of sustainable forest management for generations, and we will continue to do so in Oregon — and on all our timberlands — for generations to come,” Wirsing said. Oregon is suffering from the side effects of short-term logging practiced by companies that don’t plan to stay around long, said Steven Kadas, who until two years ago was chief forester for the smaller, locally owned company Thompson Timber. When trees are cut down before reaching the peak of their ability to absorb carbon, it stunts one of the state’s biggest assets in combating climate change. The use of herbicide on clear-cuts and the lack of mature trees have deteriorated habitat for native songbirds on industrial private lands. Streams for salmon, for other fish and for drinking are drying up because young forests use more water and lose more of it to evaporation. “You’re not going to see the results of what you do,” Kadas said.
“You’re not going to have to live with those.” Falls City’s mayor stands in the empty lot that once housed the town’s mill, imagining a two-story brewpub, its rooftop seating filled with locals and tourists on a summer evening. Just up the hill, brush and bramble have overtaken a rusted chain link fence. Dirty yellow paint peels off a “dead end” sign dangling upside down. But Gordon envisions a waterfront park fit for Instagram, complete with a footbridge across the namesake falls on the Little Luckiamute River. “Falls City — end of the road. Start of your adventure,” Gordon said. It’s a slogan the town adopted this year as a way to jump-start its economy. The town is the gateway to the Valley of the Giants, a 51-acre federal forest preserve with an iconic grove of trees as big as redwoods, draped in soggy neon moss. On the way is the ghost town of Valsetz. Then, the scenic Oregon coast. But the roads to those destinations are often behind locked gates during peak summer tourism months because of the timber companies that own them. The companies restricting access say they are worried about vandalism and wildfires, but $250 a year can buy you a permit to camp or collect firewood on Weyerhaeuser lands. Hancock, the other major investment company that owns property near the town, opened part of its lands for recreational access during non-wildfire months after receiving $350,000 in grants from the state. Falls City leaders are seeking more grant funding to open up the road to the Valley of the Giants. “I just don’t think that’s something that would sit well in the stomachs of most Oregonians,” Corthell, the city manager, said. “To know that there’s a town right here that’s suffering for lack of ability to support itself in many ways and that we have this giant asset right up the road that we can’t get to because the big corporations have control over it.” A few times a year, Friedow, the local logger, acts as a guide for tours to the Valley of the Giants. He stops at the concrete slabs that remain of Valsetz, telling stories of the now-defunct mill town. Then he begins the more than hourlong drive to the grove with trees older than the founding of the United States. Friedow doesn’t get far out of town before hearing from shocked tourists. They can’t believe the clear-cuts. Read more about our methodology: How we analyzed data from Oregon’s timber industry.
1
Front end development with components and Golang
kyoto-framework/kyoto
2
The software company where colleagues decide your salary
By Dougal Shaw Business reporter, BBC News A software firm is taking a radical approach to how it treats employees. 10Pines tries to be transparent and democratic, even allowing staff to set each other's salaries. Ariel Umansky decided to turn down his proposed 7% pay rise in December 2020. He felt he could not justify it in front of his colleagues. In fact it was the second time in five years that he'd declined a raise at 10Pines. "I felt kind of insecure and exposed about me being close to or even on top of people that I considered had a better performance than me," explains Umansky. "It's easy to feel like a fraud." Salaries are decided three times a year at the Argentinian company's "rates meeting", which includes everyone except new hires still on probation. Employees (or mentors on their behalf) can put themselves forward for a raise, which is then openly debated. 10Pines is a technology business founded in 2010 with 85 employees, based in Buenos Aires. It writes software for clients including Starbucks and Burger King, making things like online loyalty cards for customers, apps and e-commerce platforms. Every year 50% of its profits are shared among staff. 10Pines Issues such as individuals' salaries are discussed in open meetings like this one, held before Covid "A key aspect [of open salaries] isn't knowing how much everyone is earning," says Umansky, "but knowing who earns more than who - it's the hierarchy, right?" 10Pines aspires to have a flat hierarchy, and be transparent with employees, as much as possible. After a three-month trial period, new staff join the rest of the team in monthly, open meetings in which key company decisions are made, such as potential new clients, expenses, company finances - and of course salaries. There's no overall CEO and no real managers within teams, though there are senior figures who are partners, known as "associates" and "masters". "Since there are no bosses to decide raises, we delegate power to the people," says Jorge Silva, 10Pines co-founder and a "master". "We don't want a salary gap like in the United States." New joiners can negotiate their own salary to a certain extent, says Silva, which can be an issue at the beginning. Their proposed salary is discussed with those of a similar experience at the company, to gain their consent. In the final interview of the hiring process the candidate meets the entire team of 80-odd people, an introduction to the way the group dynamic works. There are no technical questions at this stage, it's more about learning about people's interests and a chance for them to see how 10Pines works. "I've been on the other side of it and it's uncomfortable, but informal," says Silva. "But we have stopped hiring processes at this stage," he adds. "Even if they are geniuses, we can feel if they will create tension by not fitting into the team." 10Pines A company diagram gives an overview of how the system works, with fluid, open groups working within the broader "roots" team 10Pines calls its approach "sociocracy". It was inspired by the Brazilian businessman Ricardo Semler and his experience transforming his family's manufacturing firm Semco.
He turned it into a so-called "agile, collaborative company" with workers taking oversight of issues traditionally left to managers, finding it led to a low turnover of staff and revitalised the firm's fortunes. He wrote about it in a book called Maverick! "We took that as our bible," says Silva. There are increasingly "pockets of progressive, transparent companies" like this around the world, according to Ben Whitter, author of Human Experience at Work, and head of employee coaching and consultancy firm HEX Organization in the UK. The idea of transparent salaries can be a good way to level the playing field, between men and women for example, he thinks. "In many companies salaries can be set in the shadows, and there is a fear that they are decided by 'who you know'. This way makes it clear and accountable." However, he can see some drawbacks to the arrangements at 10Pines too. While this set-up may work when you have 80 employees, once that doubles, the benefits can tail off, he reckons. And hiring decisions based on the individual meeting the whole workforce can disadvantage those of an introverted disposition, while also creating a "natural bias of groupthink, where people make decisions they wouldn't normally make as an individual, raising issues about diversity and inclusion". 10Pines Since Covid, meetings have moved online or outside However, 10Pines says it runs diversity programmes, like women-only apprenticeship schemes, and it believes its overall approach can survive at scale. "We have evolved the process over 12 years," explains Angeles Tella Arena, an experienced software developer at the firm. "For example, we started salary discussions when we had 30 employees and were afraid it wouldn't work with 50, but we just kept adapting. You need to update processes so trust is maintained." It may be necessary to create a second office if the company continues to grow, which would replicate and run the same system autonomously, she says. "The key thing is to understand there is a difference between equal and fair," says co-founder Jorge Silva. "We are not all equals, but we try to be fair. We don't want to be like the classic company that tries to control employees and treats them like children." You can follow business reporter Dougal Shaw on Twitter: @dougalshawbbc
1
High Tech Genesis Inc. is hiring-Canada/USA/Remote
Who we are and what we do! Well, our name says it all! Our focus is the “High Tech Sector” because we believe in joining forces with companies that CREATE (Genesis) technology as their main source of revenue. Founded in 2008, High Tech Genesis Inc. (HTG) is a group of specialists and consultants working in various environments. For instance, we are focused on creating software that is used to effectively “build” the cloud. We don’t just deploy applications there, we create software that Cloud Service Providers use to build out their infrastructure. Our network engineers provide infrastructure consulting to teach developers how their product is going to be used in the field. Our team has a history of developing security products and is creating security features in existing products. We’re rapidly growing and looking for more people who share our passion for creating something new every day. Join Our Team! Current Opportunities: IT Lab Technician, Cyber Security Instructor, Senior C Developer, Firmware Developer, Radio Access Network (RAN) Engineer, Firmware Developer, Senior C#.Net Software Developer, Hardware Engineer, Embedded Developer, Embedded Telecom Software Developer, Telecom Test Automation Engineer, Python Software Developer, C# Software Developer. A little more fun than your average office... ...because we don't take ourselves too seriously! Our Office: High Tech Genesis, 2781 Lancaster Rd., Ottawa, ON K1B 1A7
2
We at $Famous_company Switched to $Hyped_technology (2020)
When $FAMOUS_COMPANY launched in 2010, it ran on a single server in $TECHBRO_FOUNDER’s garage. Since then, we’ve experienced explosive VC-funded growth and today we have hundreds of millions of daily active users (DAUs) from all around the globe accessing our products from our mobile apps and on $famouscompany.com. We’ve since made a couple of panic-induced changes to our backend to manage our technical debt (usually right after a high-profile outage) to keep our servers from keeling over. Our existing technology stack has served us well for all these years, but as we seek to grow further it’s clear that a complete rewrite of our application is something which will somehow prevent us from losing two billion dollars a year on customer acquisition. As we’ve mentioned in previous blog posts, the $FAMOUS_COMPANY backend has historically been developed in $UNREMARKABLE_LANGUAGE and architected on top of $PRACTICAL_OPEN_SOURCE_FRAMEWORK. To suit our unique needs, we designed and open-sourced $AN_ENGINEER_TOOK_A_MYTHOLOGY_CLASS, a highly-available, just-in-time compiler for $UNREMARKABLE_LANGUAGE. Even with our custom runtime, however, we eventually began seeing sporadic spikes in our 99th percentile latency statistics, which grew ever more pronounced as we scaled up to handle our increasing DAU count. Luckily, all of our software is designed from the ground up for introspectability, and using some BPF scripts we copied from Brendan Gregg’s website and our in-house profiling tools, $FAMOUS_COMPANY engineers determined that the performance bottlenecks were a result of time spent in the garbage collector. Initially, we tried messing with some garbage collector parameters we didn’t really understand, but to our surprise that didn’t magically solve our problems, so instead we disabled garbage collection altogether. This increased our memory usage, but our automatic on-demand scaler handled this for us, as the graph below shows: Ultimately, however, our decision to switch was driven by our difficulty in hiring new talent for $UNREMARKABLE_LANGUAGE, despite it being taught in dozens of universities across the United States. Our blog posts on $PRACTICAL_OPEN_SOURCE_FRAMEWORK seemed to get fewer upvotes when posted on Reddit as well, cementing our conviction that our technology stack was now legacy code. We knew we needed to find something that could keep up with us at $FAMOUS_COMPANY scale. We evaluated a number of promising alternatives that we selected and ranked based on how many bullet points they had on their websites, how often they’d appear on the front page of Hacker News, and a spreadsheet of important language characteristics (performance, efficiency, community, ease-of-use) that we had people in the office fill out. After careful consideration, we settled on rearchitecting our platform to use $FLASHY_LANGUAGE and $HYPED_TECHNOLOGY. Not only is $FLASHY_LANGUAGE popular according to the Stack Overflow developer survey, it’s also cross platform; we’re using it to reimplement our mobile apps as well. Rewriting our core infrastructure was fairly straightforward: as we have more engineers than we could possibly ever need or even know what to do with, we simply put a freeze on handling bug reports and shifted our effort to $HYPED_TECHNOLOGY instead. We originally had some trouble with adapting to some of $FLASHY_LANGUAGE’s quirks, and ran into a couple of bugs with $HYPED_TECHNOLOGY, but overall their powerful new features let us remove some of the complexity that our previous solution had to handle.
Deploying the changes without downtime required some careful planning, but this was also not too difficult: we just hardcoded the status page to not update whenever we pushed new changes, keeping users guessing if our service was up or not. Managing incremental rollout was key: we aggressively A/B tested the new code. Our internal studies showed that gaslighting users by showing them a completely new interface once in a while and then switching back to the old one the next time they loaded a page increases user engagement, so we made sure to implement such a system based on a Medium article we found that had something to do with multi-armed bandits. With our rewrite now complete and rolled out to all of our customers, we think the effort has been a massive success for us and our team. We have measured our performance and you can see a summary of the results below: Every metric that matters to us has increased substantially from the rewrite, and we even identified some that were no longer relevant to us, such as number of bugs, user frustration, and maintenance cost. Today we are making some of the code that we can afford to open source available on our GitHub page. It is useless by itself and is heavily tied to our infrastructure, but you can star it to make us seem more relevant. It’s often said that completely rewriting software is fraught with peril, but we at $FAMOUS_COMPANY like to take big bets, and it’s clear that this one has paid off handsomely. While we focused on our backend changes in this blog post, as we mentioned before we are using $FLASHY_LANGUAGE in our mobile apps as well, since we don’t have the resources to write native applications for each platform. Unfortunately to increase lock-in these rewrites also mean we will be deprecating third-party API access to our services. We know some of our users relied on these interfaces for accessibility reasons, but we at $FAMOUS_COMPANY are dedicated to improving our services for those with disabilities as long as you aren’t using any sort of assistive technologies, which no longer work at all with our apps. We hope that you internalize our company’s anecdote as some sort of ground truth and show it to your company’s CTO so they too can consider redesigning their architecture like we have done. We know you’ll ignore the fact that you’re not us and we have enough engineers and resources to do whatever we like, but the decision will ruin your startup so it’s not like we’ll see your blog posts about your experience with $HYPED_TECHNOLOGY anytime soon. If you’re not in a position to influence what your company uses, you can still bring it up for point-scoring the next time a language war comes up. If you’re reading this and are interested in $HYPED_TECHNOLOGY like we are, we are hiring! Be sure to check out our jobs page, where there will be zero positions related to $FLASHY_LANGUAGE.
4
The Dynamics of Political Incivility on Twitter [pdf]
25
Dogecoin Is Up Because It’s Funny
2
American activism is best hope to save U.S. democracy from Trump
2
The mysterious origin of the northern lights has been proven
The aurora borealis, or northern lights, could easily be described as Earth’s greatest light show. A phenomenon that’s exclusive to the higher latitudes has had scientists in awe and wonder for centuries. The mystery surrounding what causes the northern lights has been speculated but never proven, until now. A group of physicists from the University of Iowa have finally proven that the “most brilliant auroras are produced by powerful electromagnetic waves during geomagnetic storms,” according to a newly released study. James Schroeder, from Wheaton College, was the lead author of the study. The study shows that these phenomena, also known as Alfven waves, accelerate electrons toward Earth, causing the particles to produce the light show we know as the northern lights. The aurora borealis lights up the night sky in Iceland. MARIANA SUAREZ/AFP/AFP via Getty Images “Measurements revealed this small population of electrons undergoes ‘resonant acceleration’ by the Alfven wave’s electric field, similar to a surfer catching a wave and being continually accelerated as the surfer moves along with the wave,” said Greg Howes, associate professor in the Department of Physics and Astronomy at the University of Iowa and co-author of the study. This idea of electrons “surfing” on the electric field is a theory first introduced in 1946 by a Russian physicist, Lev Landau, that was named Landau damping. His theory has now been proven. Scientists have understood for decades how the aurora most likely is created, but they have now been able to simulate it, for the first time, in a lab at the Large Plasma Device (LPD) in UCLA’s Basic Plasma Science Facility. Scientists used a 20-meter-long chamber to recreate Earth’s magnetic field using the powerful magnetic field coils on UCLA’s LPD. Inside the chamber, scientists generated a plasma similar to what exists in space near the Earth. “Using a specially designed antenna, we launched Alfven waves down the machine, much like shaking a garden hose up and down quickly, and watching the wave travel along the hose,” said Howes. As they began to experience the electrons “surfing” along the wave, they used another specialized instrument to measure how those electrons were gaining energy from the wave. The northern lights appear over a waterfall in Iceland. MARIANA SUAREZ/AFP/Getty Images Although the experiment didn’t recreate the colorful shimmer we see in the sky, “our measurements in the laboratory clearly agreed with predictions from computer simulations and mathematical calculations, proving that electrons surfing on Alfven waves can accelerate the electrons (up to speeds of 45 million mph) that cause the aurora,” said Howes. “These experiments let us make the key measurements that show that the space measurements and theory do, indeed, explain a major way in which the aurora are created,” said Craig Kletzing, the study co-author. Auroral beads are seen from the International Space Station. NASA Space scientists around the country were ecstatic to hear the news. “I was tremendously excited! It is a very rare thing to see a laboratory experiment that validates a theory or model concerning the space environment,” said Patrick Koehn, a scientist in the Heliophysics Division of NASA. “Space is simply too big to easily simulate in the lab.” Koehn said he believes being able to understand the acceleration mechanism for the aurora-causing electrons will be helpful in many studies in the future. “It does help us understand space weather better! 
The electron acceleration mechanism verified by this project is at work elsewhere in the solar system, so it will find many applications in space physics. It will be of use in space weather forecasting as well, something that NASA is very interested in,” Koehn said in an email to CNN. Now that the theory of how the illuminating aurora is created has been proven, there’s still a long way to go in forecasting how strong each storm will be. The northern lights dance across the night sky, high in the Arctic Circle. JONATHAN NACKSTRAND/AFP/AFP/Getty Images “Predicting how strong a particular geomagnetic storm will be, based on observations of the Sun and measurements from spacecraft between the Earth and the Sun, remains an unsolved challenge,” said Howes in an email. “We have established the link of electrons surfing on Alfven waves about 10,000 miles above the Earth’s surface, and now we must learn how to predict the strength of those Alfven waves using spacecraft observations,” he added. Correction: A previous version of this story misidentified the affiliation of the physicists who wrote the study. They are from the University of Iowa.
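For a sense of scale on the electron speed quoted above, the sketch below converts the reported 45 million mph into SI units and expresses it as a fraction of the speed of light; the conversion factor and the value of c are standard constants, not numbers from the study.

```python
# Convert the reported electron speed (up to 45 million mph) to m/s and a fraction of c.
MPH_TO_MS = 0.44704            # 1 mph = 0.44704 m/s by definition
SPEED_OF_LIGHT = 299_792_458   # m/s

speed_ms = 45e6 * MPH_TO_MS
print(f"{speed_ms:.2e} m/s")                        # about 2.0e7 m/s
print(f"{speed_ms / SPEED_OF_LIGHT:.1%} of c")      # about 6.7% of the speed of light
```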
2
Singapore approves lab-grown 'chicken' meat
Singapore has given regulatory approval for the world’s first “clean meat” that does not come from slaughtered animals. The decision paves the way for San Francisco-based startup Eat Just to sell lab-grown chicken meat. The meat will initially be used in nuggets, but the company hasn’t said when they will become available. Demand for alternatives to regular meat has surged due to consumer concerns about health, animal welfare and the environment. According to Barclays, the market for meat alternatives could be worth $140bn (£104bn) within the next decade, or about 10% of the $1.4tn global meat industry. Plant-based meat options such as Beyond Meat and Impossible Foods are increasingly found on supermarket shelves and restaurant menus. But Eat Just’s product is different because it is not plant based, but instead grown from animal muscle cells in a lab. The company called it a "breakthrough for the global food industry" and hopes other countries will now follow suit. Over the last decade, dozens of start-ups have attempted to bring cultured meat to market, hoping to win over conventional meat eaters with the promise of a more ethical product. Two of the largest are Israel-based Future Meat Technologies and the Bill Gates-backed Memphis Meats, which are both trying to enter the market with affordable and tasty lab grown meats. Singapore’s Shiok Meats is working on lab grown crustacean meats. While many have touted the environmental benefits, some scientists have suggested it might be worse for climate change under some circumstances. By Mariko Oi, BBC News Singapore The boss of Eat Just called it "one of the most significant milestones in the food industries" but challenges remain. Firstly, it is much more expensive to produce lab-grown meat than plant-based products. Case in point: Eat Just previously said it would sell lab-grown chicken nuggets at $50 each. The cost has since come down but it will still be as expensive as premium chicken. Another challenge for the company is the reaction of consumers. But Singapore's approval of Eat Just's product will likely attract competitors to set up operations in the city state, and it could also prompt other countries to approve it, too. The Singapore Food Agency (SFA) said an expert working group reviewed data on Eat Just’s manufacturing control and safety testing of the cultured chicken. “It was found to be safe for consumption at the intended levels of use, and was allowed to be sold in Singapore as an ingredient in Eat Just’s nuggets product,” the SFA said. The agency said it has put in place a regulatory framework for “novel food” to ensure that cultured meat and other alternative protein products meet safety standards before they are sold in Singapore. “I'm sure that our regulatory approval for cultured meat will be the first of many in Singapore and in countries around the globe,” said Josh Tetrick, the Eat Just co-founder in a media release. No antibiotics were used in the process, and the chicken had lower microbiological content than conventional chicken, the company said. “The first-in-the-world regulatory allowance of real, high-quality meat created directly from animal cells for safe human consumption paves the way for a forthcoming small-scale commercial launch in Singapore,” Eat Just said.
1
Human-level intelligence or animal-like abilities? (Adnan Darwiche, 2018)
Human-level intelligence or animal-like abilities? Research article, Communications of the ACM (October 2018 issue), published 26 September 2018. https://doi.org/10.1145/3271625 Abstract: What just happened in artificial intelligence and how it is being misunderstood.
1
Good TikTok Creative – Cerave L'Oréal
Welcome to Good TikTok Creative! We are  Simon Andrews  and  Anthony McGuire , two people who have been working in marketing, advertising and media for decades. We are very excited about TikTok as a brand new platform for creativity and think this topic is severely under-explored. TikTok Case Study #26 = Cerave We have featured L'Oreal before, as they are true pioneers of digital marketing. In GTTC #19 we scrutinised their Halloween campaign for their NTX range which revived Melanie Martinez's Dollhouse. That was big; #DOLLHOUSECHALLENGE got 2.4 billion views. This time we're looking at Cerave - a dermatological skincare brand owned by L'Oreal. We can see some of the history of this campaign; if you look at  CeraveSkincare  they first tried running videos repurposed from other channels. Which just didn't work - most videos had just a few thousand views. Then look at  Cerave  - now they have gone bespoke and created content that fits the grammar of TikTok. First they recruited the best skincare influencers; @skincarebyhyram  - big on all channels, but with 6.8m followers on TikTok, @dermdoctor - 3.9m followers @dermbeautydoc - up and coming with 122k followers Then they added some star quality with Charli D’Amelio , who has 107m followers. This mix of reach and respectability is a powerful combination - the one video so far has 132m views. And a sweepstake runs alongside - building a CRM programme and harvesting 1st party data. We imagine these spokespeople will appear in retail channels at some point - but nothing on their Amazon page yet. This is a great example of how to use TikTok. Bespoke content designed for the platform and using the grammar of the medium. If you look back a few months, Cerave had previously found some success on TikTok. This Buzzfeed article and CNN article , both from July 2020, describe how Cerave started gaining traction with Gen Z through recommendations from TikTok influencers. For the brand managers at Cerave, this was in line with a key insight that young people are taking more advice on skin care from influencers rather than dermatologists. At the center of this sales boom for Cerave was the 24-year-old influencer Hyram Yarbro, known on TikTok by the handle @skincarebyhyram. It appears that there was grassroots demand for Cerave generated by influencers like Hyram last year. If we fast forward to January 2021, Cerave (owned by L’Oreal) has now launched a major campaign with influencers including Hyram Yarbro, Charli D’Amelio, @dermdoctor, and @dermbeautydoc. If you watch the Cerave video, it’s very much in line with authentic TikTok practices. The aesthetic is very low production. It literally looks like the videos could have been recorded in a bathroom. The background music sounds like a template you would find on GarageBand and the blue text overlay they added is something even I could have done. By the way — I mean these all as compliments! I hadn’t heard of Cerave before, but of course I knew L’Oreal. L’Oreal owns and operates hundreds of brands , so they have room to experiment with some more than others. I would imagine that based on the success of Cerave, L’Oreal brand managers are now making calculations on what next big brand they could push on TikTok. Let’s see what they come up with next. TikTok takes on Facebook with US e-commerce push This article from the FT yesterday outlines the push TikTok is making to compete more directly with Facebook in terms of commerce products. 
Some of these new developments include e-commerce livestreaming products, a partnership with Shopify, and a global partnership with WPP . If you have been paying attention at all to TikTok over the last year, none of this should come as a surprise. On the topic of livestreaming, I have also written about this before and we touched on the topic last year in a webinar we hosted with David Hoctor from TikTok. What’s left to be seen is A) How advertisers respond to this new commerce push from TikTok, and B) How competitors like Facebook/Instagram/Snap/Amazon respond. Being a former employee of Facebook/Instagram (this is Anthony writing), I’m certain that Mark Zuckerberg will put an immense amount of resources to compete against TikTok. But I would also say—at this point I’m more bullish on TikTok. The Good TikTok Creative Survey We are interested in getting your feedback on Good TikTok Creative in order to improve the newsletter! Please fill out the 2-min survey  here ! If you want to dive deeper into TikTok, download Anthony’s  free e-book  on TikTok case studies or check out Anthony’s  TikTok crash course . Have you seen good creative on TikTok recently? Let us know and we can feature it in a future newsletter.
1
SWORD Health closes on $85M Series C for virtual MSK care
SWORD Health, a virtual musculoskeletal care platform founded in 2015, announced today that it has raised an $85 million Series C funding round led by General Catalyst. Other participating investors included BOND, Highmark Ventures, BPEA, Khosla Ventures, Founders Fund, Transformation Capital and Green Innovations. The funding comes months after the company raised a $25 million Series B round – which, put differently, means that the New York-based company has now raised $110 million across six months. CEO and founder of SWORD Health, Virgílio Bento, said that the company was not actively having conversations with external VCs when it raised the round. The Series C closed within three weeks of the first anchor investor’s check. “Given the interest of the market, given the valuations, and given the ability to bring other stellar investors [who] can help us grow even faster and more efficiently – that’s why we decided to raise again,” he said. SWORD Health’s massive tranche of capital comes as the world of MSK digital health startups continues to boom, thanks to the broad rise of virtual care. Venture-backed startups such as Kaia Health, which saw its business grow by 600% in 2020, and Hinge Health, which was last valued at $3 billion, are hitting growth stage. SWORD Health, while founded in 2015, has only been in the market for 18 months. Bento declined to share the company’s exact valuation, but he confirmed that it was north of $500 million. MSK conditions, which can range from a sprained ankle to a disc compression, are diverse and, unfortunately, universally felt. The sheer expansiveness of the condition has triggered a crop of entrepreneurs to create solutions that help people avoid surgery or addictive opioids, two of the mainstream ways to deal with MSK conditions. SWORD Health’s solution looks like this: The platform connects consumers to a virtual physical therapist who is accessible via traditional telemedicine. Beyond that, the company gives each consumer a tablet and motion sensors. The consumers are prompted to go through the motions, and get feedback and tips through a SWORD Health Digital Therapist. Nikhil Krishnan, the founder of Out-of-Pocket, explained how it all works through a first-person account: As you go through them, the sensors + digital therapist can tell if the movements are correct and how far you’re moving in each direction. The digital therapist has 5000 different types of feedback messages like “don’t bend your knee,” “lean forward more,” and “your squat form is more embarrassing than your Facebook etiquette circa 2009.” You get a score of 1-5 stars depending on how far you move in a direction for a given exercise. My regimen was usually between 17-25 exercises and in total would take me 20-25 minutes. SWORD Health sells to insurers, health systems and employers in the United States, Europe and Australia. SWORD Health’s biggest competitor is Hinge Health, last valued at $3 billion. However, for now, Bento isn’t too worried about the behemoth. “It’s really two different studies on how to build a healthcare company,” Bento said. He pointed to how SWORD Health spent its first four years as a company developing its sensor, while he claims that Hinge went out to the market with “a half-baked solution” in sensor technology.
That said, in March 2021, Hinge acquired medical device maker Enso to grow its non-invasive, musculoskeletal therapy tech, and continues to have the biggest marketshare among private startups in the sector. The company touted that it has increased its number of treated patients 1,000% year over year, which has led to 600% year-over-year revenue growth. Given the fact that it’s only been in the market for 18 months, these metrics don’t provide an entirely holistic picture into the business, but instead offer a snapshot into the recent growth of an early-stage tool. With millions more, the SWORD Health founder is set to invest more in the company, and continue to not focus too much on profitability. “This is a big problem that we want to solve, so we really want to reinvest all of the gross profit that we are generating into building a platform that is able to deliver more value to patients,” he said.
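The first-person description above (motion sensors measuring how far you move, a digital therapist giving feedback, and a 1-5 star score per exercise) implies a simple scoring step at its core. The sketch below is a hypothetical illustration of that idea only: the function name, the degree-based inputs and the 20%-per-star mapping are our assumptions, not SWORD Health's actual algorithm or API.

```python
# Hypothetical star-scoring step of the kind the article describes: compare a measured
# range of motion against a therapist-set target and map it to a 1-5 star rating.
# Not SWORD Health's actual code; the thresholds here are invented for illustration.

def stars_for_exercise(measured_deg, target_deg):
    """Return a 1-5 star score from the fraction of the target range of motion reached."""
    if target_deg <= 0:
        raise ValueError("target_deg must be positive")
    completion = max(0.0, min(measured_deg / target_deg, 1.0))
    return min(5, int(completion * 5) + 1)   # 0-20% -> 1 star, ..., 80-100% -> 5 stars

# Example: a knee-flexion exercise with a 120-degree target.
print(stars_for_exercise(95, 120))    # 4 stars
print(stars_for_exercise(120, 120))   # 5 stars
```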
1
Hacking into Google's Network for $133,337
15
Women-Are-Wonderful Effect
The women-are-wonderful effect is the phenomenon found in psychological and sociological research which suggests that people associate more positive attributes with women when compared to men. This bias reflects an emotional bias toward women as a general case. The phrase was coined by Alice Eagly and Antonio Mladinic in 1994 after finding that both male and female participants tend to assign positive traits to women, with female participants showing a far more pronounced bias. Positive traits were assigned to men by participants of both genders, but to a far lesser degree. The authors supposed that the positive general evaluation of women might derive from the association between women and nurturing characteristics. This bias has been cited as an example of benevolent sexism. [1] The term was coined by researchers Alice Eagly and Antonio Mladinic in a 1994 paper, where they had questioned the widely-held view that there was prejudice against women. They observed that much of the research had been inconclusive in showing a bias. They had found a positive bias towards women in their 1989 and 1991 studies, which involved questionnaires given to students in the United States. [2] In 1989, 203 psychology students of Purdue University were given questionnaires in groups of 20 and asked to assess subjects of both genders, which showed a more favourable attitude to women and female stereotypes. [3] In 1991, 324 psychology students of Purdue University were given questionnaires in groups of 20 and asked to assess subjects of both genders. They evaluated the social categories of men and women, relating the traits and expectations of each gender through interviews, emotion-associations and free-response measures. Women were rated higher in attitudes and beliefs but not emotions. [4] Rudman and Goodwin conducted research on gender bias that measured gender preferences without directly asking the participants. Subjects at Purdue and Rutgers participated in computerized tasks that measured automatic attitudes based on how quickly a person categorizes pleasant and unpleasant attributes with each gender. Such a task was done to discover whether people associate pleasant words (good, happy, and sunshine) with women, and unpleasant words (bad, trouble, and pain) with men. [5] This research found that while both women and men have more favorable views of women, women's in-group biases were 4.5 times stronger [5] than those of men. And only women (not men) showed cognitive balance among in-group bias, identity, and self-esteem, revealing that men lack a mechanism that bolsters automatic preference for their own gender. [5] Other experiments in this study found people showed automatic preference for their mothers over their fathers, or associated the male gender with violence or aggression. Rudman and Goodwin suggested that maternal bonding and male intimidation influence gender attitudes. Another experiment in the study measured adults' attitudes based on their reactions to categories associated with sexual relations. It revealed that among men who engaged more in sexual activity, the more positive their attitude towards sex, the larger their bias towards women. A greater interest in and liking of sex may promote automatic preference for the out-group of women among men, although both women and men with sexual experience expressed greater liking for the opposite gender. [5] One study found that the effect is mediated by increased gender equality.
The mediation comes not from differences in attitudes towards women, but in attitudes towards men. In more egalitarian societies, people have more positive attitudes towards men than in less egalitarian societies. [6] (https://doi.org/10.1002/ijop.12420) Effect of gender equality A study with participants from 44 countries involving prediction of an individual's personality based on photographs verified the effect in multiple countries and found that the effect decreased the higher a country's measure of gender equality. This effect seemed to be due to men being viewed less negatively the more egalitarian a country was, rather than women being viewed more positively. [7] Some authors have claimed the "Women are wonderful" effect is applicable when women follow traditional gender roles such as child nurturing and being a stay-at-home housewife. [8] However, other authors have cited studies indicating that the women-are-wonderful effect is still applicable even when women are in nontraditional gender roles, and the original Eagly, Mladinic & Otto (1991) study discovering the women-are-wonderful effect found no such ambivalence. [9]
2
Alternative Syntax for Python's Lambda
By Jake Edge, March 3, 2021 The Python lambda keyword, which can be used to create small, anonymous functions, comes from the world of functional programming, but is perhaps not the most beloved of Python features. In part, that may be because it is somewhat clunky to use, especially in comparison to the shorthand notation offered by other languages, such as JavaScript. That has led to some discussions on possible changes to lambda in Python mailing lists since mid-February. Background This is far from the first time lambda has featured in discussions in the Python community; the search for a more compact and, perhaps, more capable version has generally been the focus of the previous discussions. Even the name "lambda" is somewhat obscure; it comes from the Greek letter "λ", by way of the lambda calculus formal system of mathematical logic. In Python, lambda expressions can be used anywhere a function object is needed; for example: >>> (lambda x: x * 7)(6) 42 In that example, we define an anonymous function that "multiplies" its argument by seven, then call it with an argument of six. But, of course, Python has overloaded the multiplication operation, so: >>> (lambda x: x * 7)('ba') 'bababababababa' Meanwhile, lambda can be used in place of def, though it may be of dubious value to do so: >>> incfunc = lambda x: x + 1 >>> incfunc(37) 38 # not much different from: >>> def incfunc(x): ... return x + 1 ... >>> incfunc(37) 38 Lambdas are restricted to a single Python expression; they cannot contain statements, nor can they have type annotations. Some of the value of the feature can be seen in combination with some of the other functional-flavored parts of Python. For example: >>> list(filter(lambda x: x % 2 == 1, range(17))) [1, 3, 5, 7, 9, 11, 13, 15] There we use the filter() function to create an iterator, then use list() to produce a list of the first eight odd numbers, with a lambda providing the test function. Over time, though, other Python features supplanted these uses for lambda expressions; the same result as above can be accomplished using a list comprehension: >>> [ x for x in range(17) if x % 2 == 1 ] [1, 3, 5, 7, 9, 11, 13, 15] The most obvious use of lambda may be as key parameters to list.sort(), sorted(), max()/min(), and the like. That parameter can be used to extract a particular attribute or piece of each object in order to sort based on that: >>> sorted([ ('a', 37), ('b', 23), ('c', 73) ], key=lambda x: x[1]) [('b', 23), ('a', 37), ('c', 73)] Arrow operators? A thread about "mutable defaults" on the python-list mailing list made its way over to python-ideas when James Pic posted a suggestion for an alternate lambda syntax. His ultra-terse syntax did not strike a chord with the others participating in the thread, but it led "Random832" to suggest looking at the "->" or "=>" arrow operators used by other languages, such as C#, Java, and JavaScript. It's worth noting that all three of these are later additions to their respective languages, and they all have earlier, more difficult, ways of writing nested functions within expressions. Their designers saw the benefit of an easy lambda syntax, why don't we? Former Python benevolent dictator Guido van Rossum agreed: "I'd prefer the JavaScript solution, since -> already has a different meaning in Python (return *type*). 
We could use -> to simplify typing.Callable, and => to simplify lambda." He also answered the question by suggesting that the "endless search for for multi-line lambdas" may have derailed any kind of lambda-simplification effort. Van Rossum declared multi-line lambda expressions "un-Pythonic" in a blog post back in 2006, but in the thread he said that it is not too late to add some kind of simplified lambda. Steven D'Aprano was concerned about having two separate "arrow" operators. "That will lead to constant confusion for people who can't remember which arrow operator to use." He said that the "->" symbol that is already in use for the annotation of return types could also be used to define anonymous functions. It is also a well-known idiom: There are plenty of popular and influential languages that use the single line arrow -> such as Maple, Haskell, Julia, CoffeeScript, Erlang and Groovy, to say nothing of numerous lesser known and obscure languages. He also posted a lengthy analysis of how both uses of the single-line arrow could coexist, though it turned out that there is a parsing ambiguity if type annotations are allowed in "arrow functions" (i.e. those defined using the single-line arrow). One can perhaps be forgiven for thinking that the example he gives is not entirely Pythonic, however: (values:List[float], arg:int=0 -> Type) -> expression [...] In case anyone is still having trouble parsing that: # lambda version lambda values, arg=0: expression # becomes arrow function (values, arg=0) -> expression # add parameter annotations (values:List[float], arg:int=0) -> expression # and the return type of the function body (expression) (values:List[float], arg:int=0 -> T) -> expression The obscurity of the name "lambda" also came up. Brendan Barnwell lamented the choice of the name as confusing, while Ned Batchelder called it "opaque": People who know the background of lambda can easily understand using a different word.  People who don't know the background are presented with a "magic word" with no meaning.  That's not good UI. But, as D'Aprano pointed out, it is far from the only jargon that Python (and other language) programmers will need to pick up: [It's] no more "magic" than tuple, deque, iterator, coroutine, ordinal, modulus, etc, not to mention those ordinary English words with specialised jargon meanings like float, tab, zip, thread, key, promise, trampoline, tree, hash etc. While Paul Sokolovsky is a big fan of the lambda keyword, he does think that differentiating between the two uses of the arrow notation (the existing use for return types and a possible future use for defining short functions) is important. He thinks it would be better to use two different arrows; for defining functions, he is in favor of the double-line arrow (=>) instead. Proof of concept To demonstrate the idea, he came up with some proof-of-concept code that implements the double-line arrow as a lambda replacement. Here are some examples that he gave: >>> f = (a, b) => a + b # not actual Python syntax >>> print(f(1, 2)) 3 >>> print(list(map((x) => x * 2, [1, 2, 3, 4]))) # nor this [2, 4, 6, 8] Cade Brown did not like the double-line arrow, on general principles, but Sokolovsky reiterated his belief that the two uses of arrows should be distinct; he also thinks that using the same symbol as JavaScript has some advantages. 
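For comparison, here is what those two proof-of-concept examples look like with the existing lambda keyword; this is valid Python today (PEP 8 normally discourages binding a lambda to a name, but it mirrors the first example above):
>>> f = lambda a, b: a + b
>>> print(f(1, 2))
3
>>> print(list(map(lambda x: x * 2, [1, 2, 3, 4])))
[2, 4, 6, 8]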
D'Aprano, however, is not convinced that following JavaScript's notation is necessarily the right thing for Python, nor does he think there is a need to separate the two uses of arrows. As might be guessed, others disagree; it was, to a certain extent, a bikeshedding opportunity, after all. For his part, Van Rossum was not really opposed to using the single-line arrow for function definitions if there were no technical barriers to overlapping the two uses. No one seemed to think that allowing type annotations for lambda functions, which are generally quite self-contained, was truly needed. On the other hand, both David Mertz and Ricky Teachey were opposed to adding new syntax to handle lambdas, though Teachey thought it would make more sense if it could be used for both unnamed and named functions: But if we could expand the proposal to allow both anonymous and named functions, that would seem like a fantastic idea to me. Anonymous function syntax: (x,y)->x+y Named function syntax: f(x,y)->x+y That was something of a step too far for Van Rossum, though. There is already a perfectly good way to create named functions, he said: Proposals like this always baffle me. Python already has both anonymous and named functions. They are spelled with 'lambda' and 'def', respectively. What good would it do us to create an alternate spelling for 'def'? [...] I can sympathize with trying to get a replacement for lambda, because many other languages have jumped on the arrow bandwagon, and few Python first-time programmers have enough of a CS background to recognize the significance of the word lambda. But named functions? Why?? Greg Ewing hypothesized that it simply comes from the desire for brevity: "In the situations where it's appropriate to use a lambda, you want something very compact, and 'lambda' is a rather long and unwieldy thing to have stuck in there." D'Aprano added that it may come from mathematical notation for defining functions, which is not necessarily a good match with Python's def: So there is definitely some aesthetic advantage to the arrow if you're used to maths notation, and if Python had it, I'd use it. But it doesn't scale up to multi-statement functions, and doesn't bring any new functionality into the language, so I'm not convinced that its worth adding as a mere synonym for def or lambda or both. While there was some support for the idea, and Sokolovsky is particularly enthusiastic about it, so far there have been no plans mentioned for a Python Enhancement Proposal (PEP). Adopting an arrow syntax for lambda expressions may just be one of those topics that pops up occasionally in Python circles; maybe, like the recurring request for a Python "switch", it will evolve into something that gets added to the language (as with pattern matching). On the other hand, lambda may be one of those corners of the language that is not used frequently enough to be worth changing. Only time will tell. Index entries for this article Python Enhancements Python Lambda (Log in to post comments) Alternative syntax for Python's lambda Posted Mar 3, 2021 23:13 UTC (Wed) by NYKevin (subscriber, #129325) [Link] As far as named lambdas go, you can already do this with the walrus operator: >>> list(map(f := lambda x: x * 7, [1, 2, 3])) [7, 14, 21] >>> f(4) 28 Maybe arrows are better than the lambda keyword, but we've already got a perfectly good way to give names to arbitrary inner expressions. Why invent an alternative syntax just for lambdas? 
Given how many people vehemently hated the walrus before, during, and after the PEP process, I would naively expect this to go down in flames. h3 Posted Mar 4, 2021 1:20 UTC (Thu) by b (subscriber, #4164) [Link] I hope this goes down in flames. This doesn't change any of the restrictions of lambda: it still doesn't allow anything other than one single expression (which I don't think is a problem: if you need anything else, it's easy enough to create a named function). This proposal only changes the syntax a bit. Yet another infraction of the Zen of Python ("There should be one-- and preferably only one --obvious way to do it" in this case), for no real gain. And that change in syntax is contrary to Python's history of favoring clear words over a soup of symbols. h3 Posted Mar 4, 2021 13:21 UTC (Thu) by b (guest, #112111) [Link] Maybe it is the "heavy handed process with the approval by committee approach" for Python enhancements but the recent feature additions all look like bikesheds to me. From := to switch to this, I really think that Python is slowly losing the signature "anyone can read the code to understand what it does" property. And the "but the beginners" argument looks precisely backward to me. When I am a beginner not knowing lambda and see def fn(f : Callable) -> int: ... fn(a -> 2 * a) what will I ask? "What's with the ascii-art arrow? No, the second one?" Ha! With def fn(f : Callable) -> int: ... fn(lambda a: 2 * a) I can ask "what's with the lambda"? Using the same thing for two purposes is a bad idea and using line noise instead of words to structure programs does not appear to be making things more beginner-friendly to me. Having these things as words gives us words to talk about them and I don't have to know if it's an arrow or a pointer or a cursor. ("_" is one of these things that is ultimately hard to pronounce, too, and ":="). I think not having ";" much makes Python easier to learn, not least because people don't have to remember what a "semicolon" is. h3 Posted Mar 5, 2021 12:22 UTC (Fri) by b (subscriber, #29694) [Link] > I can ask "what's with the lambda"? and you can google "python lambda" and find a bunch of pages describing them. On the other hand my experience is that search engines mostly ignore symbols; if I google "python {" I just get the python homepage, not a description of python sets and dictionaries. Alternative syntax for Python's lambda Posted Mar 6, 2021 7:04 UTC (Sat) by marcH (subscriber, #57642) [Link] > Having these things as words gives us words to talk about them and I don't have to know if it's an arrow or a pointer or a cursor. ... and for that reason everyone will keep calling the new arrow notation... "lambda"! So much for "beginner-friendliness". The beginner-friendly name is "anonymous function" but good luck getting non-beginners used to something that long. > > > [It's] no more "magic" than tuple, deque, iterator, coroutine, ordinal, modulus, etc, not to mention those ordinary English words with specialised jargon meanings like float, tab, zip, thread, key, promise, trampoline, tree, hash etc. Nice one. Alternative syntax for Python's lambda Posted Mar 26, 2021 14:29 UTC (Fri) by quantzur (guest, #151322) [Link] The lambda function still doesn't know its own name: >>> def g(x): return x * 7 ... 
>>> g(4) 28 >>> g.__name__ 'g' >>> list(map(f := lambda x: x * 7, [1, 2, 3])) [7, 14, 21] >>> f(4) 28 >>> f.__name__ '<lambda>' Alternative syntax for Python's lambda Posted Mar 4, 2021 5:41 UTC (Thu) by Otus (subscriber, #67685) [Link] I'm almost starting to wish for a Python 4. The 2->3 transition was bad, but having a stable 2.7 for a decade was great. h3 Posted Mar 4, 2021 23:01 UTC (Thu) by b (subscriber, #1195) [Link] I think the Python3 decade has actually ruined Python, not for its outcomes in the language (that topic is exhausted already), but for the culture it has bred in the core developers. Python 2.7 reached its enormous popularity for being a great balance of complexity and ease of use, which led to PyPI, and it's current status of tool-of-choice for a massive number of programmers. But there were a couple of warts that just annoyed people enough that it was decided to break things, just this once, to fix them. In hindsight, that was probably a mistake, but the language had enough momentum to overcome it, and here we are at 3.10. Unfortunately, that decade has bred a culture in the core team of believing it's ok to make major changes to the language. The half-baked async/await that dribbled in over the mid 3.x series, pattern matching, and now other frivolous thought bubbles are being taken seriously as potential additions to the language. There is a fundamental tension to being a language designer/implementer: knowing when to stop. There's a point where you need to start working on a new language, rather than trying to shoehorn your latest novelty into the thing you've already built. It's tough to leave your massive success to one side, and start again at the bottom, but if you don't you end up creating C++, and not C. I would love to have a Python 4.0 which is a feature freeze of Python 3.x, and the announcement of Snake-0.3b1: like Python but with X and Y and also Z! (ps. yeah, yeah, get of my lawn, etc) h3 Posted Mar 5, 2021 7:13 UTC (Fri) by b (subscriber, #10998) [Link] There is a fundamental tension to being a language designer/implementer: knowing when to stop. There's a point where you need to start working on a new language, rather than trying to shoehorn your latest novelty into the thing you've already built. It's tough to leave your massive success to one side, and start again at the bottom, but if you don't you end up creating C++, and not C. And C++ is one of the most successful languages of all time. Fortran 2018 is a horrible mess, but modern Fortrans are still used by people and groups that have been using Fortran for decades. Hyperextended Pascal variants are still used today, whereas Modula-2 and Oberon aren't. Python, Java, C++ and JavaScript are the biggest languages out there and they're extended forms of languages that are at least 25 years old. There's aesthetic reasons to start fresh, but if you want your programming language feature to be used, it's probably going to see much more use if you make it an extension for Python, JavaScript, Java, C++ or C#. h3 Posted Mar 10, 2021 9:36 UTC (Wed) by b (subscriber, #2796) [Link] And C++ is one of the most successful languages of all time. C++: The COBOL of the 1980s. Alternative syntax for Python's lambda Posted Mar 6, 2021 15:23 UTC (Sat) by Polynka (guest, #129183) [Link] > There's a point where you need to start working on a new language > I would love to have a Python 4.0 which is a feature freeze of Python 3.x, and the announcement of Snake-0.3b1: like Python but with X and Y and also Z! 
Erm… *looks at ~Perl 6~ Raku* I don’t think that it’s actually a great idea. (and I mean its adoption, not the language itself) h3 Posted Mar 15, 2021 7:37 UTC (Mon) by b (subscriber, #85566) [Link] A delayed language will eventually be good; a language that gets rushed to meet nonsense deadlines will become the legacy code everyone curses for decades to come. The most baffling thing to me about the whole epic of Raku isn't how it concluded, but that as recently as last week I still see people who think they're clever for regurgitating hateful Perl 6 FUD from the era when it was still called Perl 6, Slashdot was a relevant tech site, and comparisons to Duke Nukem Forever were still valid. The language is a good tool to write in, but it's accidentally just as valuable for its ability to quickly expose the sort of people with ossified brains and odious worldviews most of us wouldn't want anything to do with. Alternative syntax for Python's lambda Posted Mar 10, 2021 1:38 UTC (Wed) by milesrout (guest, #126894) [Link] >Python 2.7 reached its enormous popularity for being a great balance of complexity and ease of use, which led to PyPI, and it's current status of tool-of-choice for a massive number of programmers. But there were a couple of warts that just annoyed people enough that it was decided to break things, just this once, to fix them. This is revisionist nonsense. Python 3 development was started long before Python 2.7 existed. PyPI dates back to at least 2003, before Python 2.4 was released, let alone 2.7 (which wasn't released until 2010, seven years later). Meanwhile, Python 3 was first officially released in 2008, but had been floating around as an idea since at least 2006 if not earlier. PEP3000 dates to April 2006, and---as far as I know---was not the first suggestion of a backwards incompatible Python release. Alternative syntax for Python's lambda Posted Mar 5, 2021 22:13 UTC (Fri) by flussence (subscriber, #85566) [Link] If Python starts a clean break and major refactoring to clean up 3.x's myriad gobbledygook syntax extensions right now, using all the resources and prior art available to them, then it'll be caught up to where Raku is today in significantly less time than the 20 years of effort that took. Maybe optimistically it'd be 5 years. *If*. If it doesn't, well… people already complain that it's the new Fortran but at this rate it's going to be the new PHP too. Looking forward to that next “fractal of awfulness” blogpost… h3 Posted Apr 15, 2021 20:20 UTC (Thu) by b (subscriber, #74601) [Link] Does anyone want to be where Raku is today? Or for that matter where Perl plus Raku is today? Perl has lost steam since 2005-ish, and Python only just stepped up circa 2018. Python didn't eat Perl's lunch; it looks like something that's internal to Perl itself. h3 Posted Apr 16, 2021 11:02 UTC (Fri) by b (guest, #307) [Link] Raku is actually an useful language to learn and use today. Yes, it does not have the (deserved) place in the pantheon that Python has. It has to maturate its tools (yay more compiler optimizations) and its frameworks. I use it regularly, and with nice results on my field (data integration and analysis). Alternative syntax for Python's lambda Posted Apr 18, 2021 10:28 UTC (Sun) by flussence (subscriber, #85566) [Link] >Does anyone want to be where Raku is today? It's the easiest way today (and for the past 6 years or so) to run Python 2 and 3 code in an environment with modern quality-of-life features like arrow functions, switch statements and threads. 
I'm sure there's demand for that somewhere. Alternative syntax for Python's lambda Posted Mar 4, 2021 13:51 UTC (Thu) by andrewsh (subscriber, #71043) [Link] How about making it possible to use def without a name? a = def (x: int, y: bool): return x if z else 0 h3 Posted Mar 5, 2021 1:28 UTC (Fri) by b (subscriber, #129325) [Link] Then you have one of three problems: 1. The anonymous def is an expression which contains one or more statements. Whitespace is normally not significant inside of an expression, but statements have an indentation level, so one of those two rules has to give. At a technical level, this is probably solvable (although the fact that you would be able to nest an anonymous def inside of another anonymous def makes it harder), but at the human level, it makes indentation significantly harder to reason about. 2. The anonymous def is a statement which contains one or more statements. Then you can't nest it inside of arbitrary expressions, and so it is a lot less flexible. You might as well just use a regular def. 3. The anonymous def is an expression which contains exactly one expression. Then it is no more than an alternative spelling of lambda. Why bother? This has been discussed many times before, and I think it is unlikely that we'll see any sort of "happy middle ground" emerge any time soon. h3 Posted Mar 10, 2021 2:43 UTC (Wed) by b (guest, #126894) [Link] The first solution is entirely doable. It's not hard to formally specify how indentation and whitespace inside indentation-insensitive expressions would work in a way that is quite intuitive, and where writing something that doesn't parse gives you an *error* instead of failing silently. The way it works currently in Python (or at least you can conceptualise it this way to produce exactly the same result) is that the scanner/lexer parses each token, including whitespace tokens. Then it joins 'continuation lines' joined by backslashes, determines initial whitespace at the beginning of each line and then basically deletes the remaining whitespace, converting it into newline, indent and dedent tokens. Then any indent/dedent/newline tokens inside parens are deleted, so that this continues to work: x = foo(1, 2) That's seen by the parser as x = foo ( 1 , 2 ) NEWLINE. not as x = foo ( 1 , NEWLINE INDENT 2 ) NEWLINE DEDENT. It would be quite simple: mark every newline and indent within a particular depth of bracketing with that depth, and then ignore any indents of greater depth than the current level of bracketing while parsing, and insert virtual dedent tokens when going down levels of nested bracketing. so that would be x = foo ( 1 , NEWLINE1 INDENT1 2 DEDENT1 ) NEWLINE so if you had in fact written x = foo(if True: 2 else: 3) you'd get x = foo ( if True : NL1 IN1 2 NL1 DE1 else : NL1 ID1 3 NL1 DE1 ) NL1 From the perspective of whatever is parsing at the top level, that's: x = foo ( [arbitrary tokens] ) NL and from the perspective of whatever is parsing inside the parens, that's: if True : NL IN 2 NL DE else : NL ID 3 NL DE which is exactly what you get if you parse that statement at a top level in Python currently (at least conceptually): if True: # if True : NL 2 # IN 2 NLelse: # DE else : NL 3 # IN 3 NL# DE It's also really easy to implement. I have done so myself as a proof of concept, which I later turned into the parser for a scripting language that is basically Python-with-multi-line-lambdas. It's a few extra lines in the scanner and a few extra lines in the parser. 
It doesn't slow anything down from a computational complexity perspective. And as a bonus, it gives you a hard error rather than silently-do-the-wrong-thing if you get nested indentation wrong. I know it's a really popular meme in the Python community that it's OMG2HARD4ME to do multi-line lambdas in a way that gives intuitive, hard-to-get-wrong do-what-I-mean results and is easy and efficient to implement, but as far as I can tell it's just not true. It would be massively useful. It wouldn't just give multi line lambdas, it would give the ability to nest any expression inside any other expression. All the Python special cases, like generator expressions (just use (for x in y: f(x))), a if b else c (just use a proper if statement, although that would be forced to be on at least 2 lines), etc. could be deprecated and phased out eventually. Of course it will never happen in Python because it's remarkable conservative about syntax that actually matters while wildly liberal with making relatively useless syntax changes like :=. [You could just write foo((x = 1)) with my suggestion, by the way, which is a far nicer way of doing it than :=...] h3 Posted Mar 13, 2021 13:53 UTC (Sat) by b (guest, #56129) [Link] Or one can just use one of the myriads of languages that got this right a decade or two ago (or 7 in the case of Lisp). Alternative syntax for Python's lambda Posted Mar 4, 2021 15:07 UTC (Thu) by LtWorf (guest, #124958) [Link] I think for the sake of brevity it should be λ a,b: a+b Only 1 character instead of 2. h3 Posted Mar 4, 2021 17:03 UTC (Thu) by b (subscriber, #144768) [Link] I must say Racket's use of the actual unicode lamba symbol is my favorite syntax for lambas h3 Posted Mar 4, 2021 21:05 UTC (Thu) by b (guest, #145244) [Link] Yes, I think it's charming how programmers restrict ourselves to using the character set available on typewriters, which themselves use characters developed for convenience when writing using quill pens on animal-skin parchment. h3 Posted Mar 10, 2021 2:08 UTC (Wed) by b (guest, #126894) [Link] The font faces we use today are not very much like the characters developed for convenience when writing "using quill pens on animal-skin parchment". Letter forms change drastically with every new medium of writing. When symbols were chiseled into stones you get sharp and angular runes. When symbols are pressed into clay with a stylus you get wedge-shaped cuneiform. When symbols are painted onto paper with brushes you get the calligraphic Chinese script. When symbols are painstakingly carved into elaborate burial chambers you get the exquisite Egyptian hieroglyphs. When the same symbols are hastily jotted onto papyrus with a reed pen you get hieratic. When symbols are carved into a wax tablet you get what became our UPPERCASE latin alphabet, generally big sharp uncomplicated lines. When the same symbols are drawn onto a parchment or papyrus with a pen, you get the old Roman cursive, which basically became our lowercase. The letterforms in modern fonts aren't even that similar to the script/cursive forms people use in handwriting today (see 'a' or 'g'). I honestly believe that programmers would use more unicode symbols if the input methods were better. Is that actually a good thing? There are a lot of Unicode confusables after all... Alternative syntax for Python's lambda Posted Mar 5, 2021 19:39 UTC (Fri) by cpitrat (subscriber, #116459) [Link] Stupid question: how do you type it? Do you configure your editor to have a special shortcut? 
Do you know the char code by heart? h3 Posted Mar 5, 2021 21:35 UTC (Fri) by b (subscriber, #55106) [Link] I type λ using the XCompose input method. That's with this entry in my custom .XCompose file[0]: <Multi_key> <g> <r> <l> : "λ" U03BB # GREEK SMALL LETTER LAMBDA Here are the equivalents for lowercase a-z: αβ_δεφγηιθκλμνοπχρστυ_ωξψζ ('c' and 'v' have no corresponding entries and so were replaced with '_'). And the equivalents for uppercase A-Z: ΑΒ_ΔΕΦΓΗΙΘΚΛΜΝΟΠΧΡΣΤΥ_ΩΞΨΖ. Also many useful symbols: → ⇒ ↑ ↓ × ÷ « » ⋄ ⊛ Ⓧ ¾ ⅞ ☺ € ₡ ¢ ∀ ∃ ± Unfortunately Windows doesn't have any built-in equivalent to XCompose. I have a few char codes memorized for that, but λ isn't one of them. To make this work under Linux with X11 you need to create the file ~/.XCompose, set GTK_IM_MODULE=xim and QT_IM_MODULE=xim, and assign some key to the Multi_key function in your keyboard settings. I use the Menu key for Multi_key but you might prefer something else. [0] https://jessemcdonald.info/gogs/nybble/compose-config/raw... h3 Posted Mar 5, 2021 21:43 UTC (Fri) by b (subscriber, #60784) [Link] I use LeftWin for Compose myself. It's a lot easier to reach fluidly than Menu on a standard-pitch 104/105 key keyboard. Alternative syntax for Python's lambda Posted Mar 5, 2021 23:58 UTC (Fri) by mbunkus (subscriber, #87248) [Link] While not built-in, I've been using the nice little Open Source program WinCompose[1] for quite a while now on Windows. It's easily configurable, both regarding the sequences as well as the key to use as the compose key, and it comes pre-configured with a wide range of sequences, a lot of which match the traditional XCompose sequences. [1] http://wincompose.info Alternative syntax for Python's lambda Posted Mar 6, 2021 9:55 UTC (Sat) by LtWorf (guest, #124958) [Link] On KDE, systemsettings allows you to pick a key to use as XComposeKey. I don't think the env vars you are exporting are needed. I do not have them. It used to be that GTK did not support longer sequences than 2 so I had to write them in kwrite and copy paste, but it seems to be working now for a while. h3 Posted Mar 8, 2021 23:17 UTC (Mon) by b (subscriber, #55106) [Link] > On KDE, systemsettings allows you to pick a key to use as XComposeKey. Yes, that's how I have it configured. In other desktop environments you can use "setxkbmap -option compose:menu" (plus your normal model/layout options) for the same effect. > I don't think the env vars you are exporting are needed. If you don't set them then both Gtk and Qt will pick a default input method. If that happens to be XIM then everything will work just fine. If not, the .XCompose file might be ignored; it depends on which input method was chosen. IBUS has some support for reading .XCompose in recent versions. I'm not sure about the others. Alternative syntax for Python's lambda Posted Mar 6, 2021 1:49 UTC (Sat) by aeline (subscriber, #144768) [Link] In emacs and DrRacket that hotkey Cmd-\ h3 Posted Mar 6, 2021 1:54 UTC (Sat) by b (subscriber, #144768) [Link] DrRacket also allows most unicode, by typing \latexcode and pressing Cmd-Enter.This is really nice for following mathematics/language papers. A snippet from a recent project: ;; Values (v ::= n b ∅ ;; Unit (λ (x : t) e) ;; Value Abstraction (Λ x e)) ;; Type Abstraction Alternative syntax for Python's lambda Posted Mar 10, 2021 3:50 UTC (Wed) by pj (subscriber, #4506) [Link] I use kitty as a terminal and have a hotkey for unicode input where I can then type in the long name and choose it from a list. This brings up... 
which lambda? U+39b is greek capital lambda U+3bb is greek small letter lambda U+1d27 is greek letter small capital lambda ...or one of the other 10ish mathematical lambda characters? (list at https://unicode-table.com/en/search/?q=lambda ) Is there a convention for this? is one more canonical than the others? h3 Posted Mar 10, 2021 9:12 UTC (Wed) by b (subscriber, #60784) [Link] .... personally, I think the answer should be "the ugaritic cuneiform one" :D Alternative syntax for Python's lambda Posted Mar 10, 2021 11:28 UTC (Wed) by fredrik (subscriber, #232) [Link] In Gnome, and terminals like Alacritty, you use the generic unicode character shortcut, which is Ctrl + Shift + u followed by the unicode code point for lambda which is 03BB, which can be abreviated to 3bb, so basically: Ctrl + Shift + u 3 b b Space Easy as pie! 🙂 Alternative syntax for Python's lambda Posted Mar 22, 2021 16:11 UTC (Mon) by hummassa (guest, #307) [Link] *l:help digraphs Alternative syntax for Python's lambda Posted Mar 4, 2021 20:05 UTC (Thu) by eru (subscriber, #2753) [Link] The lambda keyword has had this kind of meaning since the original LISP in the fifties. I find it kind of user-friendly compared to these terse notations. A thicket of punctuation signs is not readable. h3 Posted Mar 5, 2021 11:53 UTC (Fri) by b (subscriber, #23250) [Link] Indeed. int& (*fpi)(int*) = [](auto* a)->auto& { return *a; }; Totally readable, only 50% punctuation. h3 Posted Mar 5, 2021 18:08 UTC (Fri) by b (subscriber, #2753) [Link] Not to mention that thanks to the lack of automatic memory management in C++, using lambda there is a great way to introduce subtle bugs. h3 Posted Mar 22, 2021 16:52 UTC (Mon) by b (guest, #307) [Link] Whoah? There is nothing but automatic memory management in C++, unless you are using new and delete, in which case I have a stack of books for you to read before you start over. h3 Posted Mar 22, 2021 21:31 UTC (Mon) by b (guest, #2285) [Link] He probably means how you can define a C++ automatic reference capture lambda, then when it has a lifetime longer than those references the world blows up. Java's garbage collection has a similar problem here though. Instead of exploding, your "innocent and harmless" capture holds a 300 MB data structure live in RAM. I've seen C++ lambda captures hold references after a function exits, after a thread exits and even after the program itself exits. Threads continue running even after static object destruction. h3 Posted Mar 23, 2021 0:27 UTC (Tue) by b (subscriber, #23250) [Link] Yes, this exactly. It's trivially easy to accidentally capture 'this' in a lambda without extending the lifetime of that object. Alternative syntax for Python's lambda Posted Mar 24, 2021 0:00 UTC (Wed) by nix (subscriber, #2304) [Link] Actually, a reference or two might be helpful. I've not really looked at the changes in C++ and what they imply for typical style since about 2000, and the language has moved on. I ought to catch up, but the relevant content appears to be scattered across hundreds of webpages each of which assumes you have read all the others first. If there really is a book somewhere discussing how C++ has changed and how typical style has changed, I'd be very interested, because modern C++ has at some point since C++11 dissolved for me from "I can kind of read this if I squint" to "this is a pile of angle brackets and I really have no idea why they are here rather than somewhere else". So I think I have some relearning to do -- but from where? 
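Returning to the question earlier in the thread of which lambda code point is the usual one: Python's standard unicodedata module will tell you exactly which character you have typed (a small illustrative snippet; note that Unicode officially spells the character name "LAMDA"):
>>> import unicodedata
>>> hex(ord("λ"))
'0x3bb'
>>> unicodedata.name("λ")
'GREEK SMALL LETTER LAMDA'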
Alternative syntax for Python's lambda Posted Mar 5, 2021 19:18 UTC (Fri) by amarao (subscriber, #87073) [Link] Can't we use rust notation for closures? |x, y| x + y - 2? Rust has a second form for closures with {}, but we can ignore those. h3 Posted Mar 6, 2021 7:03 UTC (Sat) by b (subscriber, #144768) [Link] I agree, I really like this syntax (although I think Ruby had it first). It feels very visually distinct from tuples, which I think is a plus. Alternative syntax for Python's lambda Posted Mar 9, 2021 17:07 UTC (Tue) by NYKevin (subscriber, #129325) [Link] The problem with that is, if you accidentally have any value to its immediate left, then it already parses as a very different expression: >>> import ast >>> print(ast.dump(ast.parse('z |x, y| x + y - 2', mode='eval'), indent=4)) Expression( body=Tuple( elts=[ BinOp( left=Name(id='z', ctx=Load()), op=BitOr(), right=Name(id='x', ctx=Load())), BinOp( left=Name(id='y', ctx=Load()), op=BitOr(), right=BinOp( left=BinOp( left=Name(id='x', ctx=Load()), op=Add(), right=Name(id='y', ctx=Load())), op=Sub(), right=Constant(value=2)))], ctx=Load())) (The indent= argument to ast.dump() requires Python 3.9+ for pretty-printing.) That's not necessarily a complete non-starter, seeing as the lambda syntax probably would not allow you to put a value there, but it would be awkward if a small typo (e.g. omitting a comma) would result in a parse like that. Unfortunately, you can't just say "Well, x and y are non-existent variables, so don't parse it like that." An unrecognized variable is currently interpreted as an uninitialized global; it's assumed that said global will exist by the time the function is actually called (or else it's a runtime error). Python needs to continue to support that use case, or else recursive function calls won't work (functions are "just" variables that point at function objects). Alternative syntax for Python's lambda Posted Mar 7, 2021 0:15 UTC (Sun) by marcH (subscriber, #57642) [Link] Maturity: when you've run out of important things to do and finally have time to discuss pointless changes. Alternative syntax for Python's lambda Posted Mar 8, 2021 12:16 UTC (Mon) by msuchanek (guest, #120325) [Link] Somewhat off-topic, but to me, the syntax of map and filter is a bigger issue than lambda. I'd prefer them to be methods rather than functions. This is pretty hard for me to read: >>> map(lambda x: x + 1, range(5)) The lambda blends into the arguments and the comma is weak as a visual separator. This would be much easier: >>> range(5).map(lambda x: x + 1) Could that be partly to blame for the complaints about the lambda syntax? h3 Posted Mar 9, 2021 17:32 UTC (Tue) by b (subscriber, #129325) [Link] How would you propose we rewrite the following? map(lambda x, y: x + y, range(5), reversed(range(5))) I'm not sure that range(5).map(lambda x, y: x + y, reversed(range(5))) is an improvement. And I'm definitely not writing zip(range(5), reversed(range(5))).starmap(lambda x, y: x + y), because that's just ridiculous. h3 Posted Mar 11, 2021 8:35 UTC (Thu) by b (subscriber, #1183) [Link] Well, in Elixir (which I think lifted the syntax from Ruby) it would look like: range(5) |> zip(reversed(range(5))) |> map(lambda x, y: x + y) Underwater this is converted to: map(zip(range(5), reversed(range(5))), lambda x, y: x + y) There is something pleasing about having the operator between the operands, but I don't think Python should go this route. It requires your entire standard library to be designed to make this work well. 
It's most appropriate for functional languages. Alternative syntax for Python's lambda Posted Mar 10, 2021 1:44 UTC (Wed) by milesrout (guest, #126894) [Link] >I'd prefer them to be methods rather than functions. This is not and will never be a good idea. So-called 'UFCS' proposals have failed repeatedly in C++ for good reason, and in Python. Standalone functions are not methods and the method syntax is ugly and misleading. Prioritising one argument over others nearly always makes no sense, and even when it does it's rare that it would be the first argument anyway. Why would it be `foo.map(lambda: ...)` or `foo.map(lambda: ..., bar)` and not `(lambda: ...).map(foo)` or `(lambda: ...).map(foo, bar)`? I think that it is obvious that we should stick to calling functions as `f(x, y, z)` as we have done in many programming languages for decades and in mathematics for hundreds of years. `x.f(y, z)` makes no sense. `a.plus(b)` is just hideously ugly compared to `plus(a, b)`, and it's totally backwards anyway. If you were going to change to anything it should be `a b plus`, as then at least things would be evaluated entirely left-to-right. Alternative syntax for Python's lambda Posted Mar 14, 2021 12:49 UTC (Sun) by cbensf (guest, #120326) [Link] Both would be valid choices in a fresh language, but in Python a lot of other choices have fallen into place to work better with them as functions. * map() and filter() should work on any iterable. In Ruby calling them as methods works by all iterables inheriting the Mixin `Enumerable` which keeps growing with time. But in Python the iterable/iterator protocols are frozen and extremely narrow. All you need is `__iter__` / `__next__`. This is a feature. Consider the `itertools` module — it extends the set of things you can do with iterators simply by exporting more functions, without adding methods to built-in types. Similarly, countless people have written functions processing iterators, sharing some on PyPI. [To be fair, *in practice* the Ruby culture of monkey-patching causes surprisingly few problems! So it's more cultural choice than technical argument.] * There is also a question of symmetry when consuming several iterables, e.g. `zip(foos, bars)` or even the little-known `map(operator.add, foos, bars)`. These would look less pretty as methods on the first sequence... * There are also various builtins "reducing" an iterable like `sum()`, `all()`, `any()`, `max()` etc. that combine pleasantly with generator expressions. E.g. to test if `n` is composite: `any(n % d == 0 for d in range(2, n))`. In Ruby this would be `(2...n).any? { |d| n % d == 0 }` which is a method on Enumerable that takes a block; so in Python this pattern of functions taking an iterator compensates for the lack of blocks in some cases... * A deeper argument, IMHO, is that Python has a long tradition of separating the protocol a class has to implement from the public interface one calls. It does this by operators and global functions, which adds flexibility and helps evolve the language: - `a < b` operator started out in python 2 calling `a.__cmp__(b)` with fallback to `b.__rcmp__(a)`; later PEP 0207 added `a.__lt__(b)` and `b.__gt__(a)`. - `bool()` checks `.__nonzero__()` but falls back to `.__len__()`. - `iter()` supports objects without `.__iter__` if they implement the "sequence" protocol of `.__getitem__` with increasing integers. This allowed the for loop, but also everything else that wants to iterate, to keep working with older classes predating the iterator protocol. 
- Even the trivial `.next()` method was wrapped in a builtin function `next()`, which helped abstract the later renaming to `.__next__` (PEP 3114). Alternative syntax for Python's lambda Posted Mar 25, 2021 20:06 UTC (Thu) by strombrg (subscriber, #2178) [Link] I'm opposed to terse-ifying lambda in Python. Lambda is rarely useful in Python - you're almost always better off using a generator expression, a list comprehension, or something from the operator module. And lambdas tend to give rise to the software development equivalent of a run-on sentence. Naming a function in those rare cases that you genuinely need something custom really isn't the end of the world, and is more readable anyway. Alternative syntax for Python's lambda Posted Mar 31, 2021 20:53 UTC (Wed) by mina86 (guest, #68442) [Link] > but it led "Random832" to suggest looking at the "->" or "=>" arrow > operators used by other languages, such as C#, Java, and JavaScript. >> It's worth noting that all three of these are later additions to >> their respective languages, and they all have earlier, more >> difficult, ways of writing nested functions within >> expressions. Their designers saw the benefit of an easy lambda >> syntax, why don't we? I don’t know about C# but before lambda functions were introduced in Java, one had to write dozens of lines to get a simple anonymous ‘x+y’ function. This is nothing like Python, which has a short lambda expression. And as for JavaScript, arrow functions have different semantics than anonymous functions. The arrow function wasn’t introduced because ‘function’ is such a long word but because programmers couldn’t comprehend how the ‘this’ variable worked. Again, this does not reflect the situation in Python. Neither comparison is apt.
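As a concrete illustration of the operator-module alternative mentioned in one of the comments above, the sorted() example from the article can drop the lambda entirely by using operator.itemgetter from the standard library:
>>> from operator import itemgetter
>>> sorted([('a', 37), ('b', 23), ('c', 73)], key=itemgetter(1))
[('b', 23), ('a', 37), ('c', 73)]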
1
There is no place for feats, beauty and “heroic deeds” in the modern world
NastyFans - The UNOFFICIAL Nasty Mining Fan Club
2
Put Dinosaur in Your Terminal
MatteoGuadrini/dinosay
13
Months later, the great Twitter hack still boggles my mind
One of the wildest stories of the year was the day some of the most-followed Twitter accounts on the planet posted cryptocurrency scams because of a massive unprecedented hack. Elon Musk was the first hacked account most people noticed. “I’m feeling generous because of COVID-19,” a now-deleted 4:17PM ET tweet said. “I’ll double any BTC payment sent to my BTC address for the next hour. Good luck, and stay safe out there!” The tweet also included an address where people could send bitcoin. But because Twitter scammers regularly use Musk’s name and image to post cryptocurrency scams, it was hard to tell if the tweet was just Musk mocking them. It quickly became clear that, yes, Musk was hacked, and it wasn’t just him. The company accounts of Coinbase, Gemini, and Binance had posted suspicious tweets shortly before Musk did. Then a deluge of tweets appeared: Apple, Barack Obama, Bill Gates, Floyd Mayweather, Jeff Bezos, Joe Biden, Kanye West, Michael Bloomberg, Uber, Warren Buffett, Wiz Khalifa, and others all posted tweets like Musk’s in short order. Some accounts posted multiple tweets while under the hackers’ control. Previous Next Presumably, many of these accounts are protected by things like two-factor authentication and strong passwords that would make them very hard to break into. The fact that they were all posting the scam suggested that the attackers had access to some kind of internal Twitter tool to bypass that security — and Twitter confirmed that was the case later that evening. Notably, President Donald Trump’s account wasn’t co-opted to post the scheme. Since we live in a world where Trump can move markets and make international headlines with one 280-character missive on Twitter, it’s likely a good thing that his account wasn’t taken over. While we don’t know if the hackers even attempted to tweet as Trump, his account reportedly has extra protections that may have prevented an intrusion. The chaos was funny, in its way. For a little while, it appeared that Twitter had stopped verified accounts from posting new tweets. That meant The Verge and the majority of our staff weren’t able to tweet, so we briefly relied on former Verge staffer Casey Newton’s unverified wrestling-focused Twitter account (which currently has 207 followers) to share updates about the attack. Other unverified accounts filled our timelines with jokes about a world free of blue checkmarks: About two weeks after the hack, it became clear that this was the work of a teenager. Three people were charged for the attack on July 31st, including a 17-year-old from Florida who authorities claimed was the “mastermind” of the operation. A 16-year-old from Massachusetts was served a search warrant by federal agents in September to investigate their potential involvement; this person “appears to have played an equal, if not more significant, role” than the 17-year-old, according to The New York Times . Those involved were able to steal over $118,000 worth of bitcoin by duping people into sending the cryptocurrency to the addresses included in the scam tweets, according to a report by New York’s Department of Financial Services. Because of the way bitcoin is designed, the transactions aren’t reversible, so there’s no way to return that money to the people it was stolen from. Their attack didn’t use “any of the high-tech or sophisticated techniques often used in cyberattacks–no malware, no exploits, and no backdoors,” the report said. 
Instead, the hackers accessed internal Twitter tools by tricking Twitter employees into giving them login credentials. Twitter says it has strengthened its internal security and invested in new tools and training for employees and contractors. But months later, it’s still hard to fathom how a group of motivated hackers brought one of the most influential social networks on the planet to its knees. Thank goodness it was apparently just a bunch of bitcoin-obsessed hackers behind the attack and that they didn’t use their unprecedented access to, say, start a war. In some ways, given Musk’s Twitter run-ins with the Securities and Exchange Commission and an unsuccessful defamation suit, it’s fitting that they chose his account to tell us what they were doing.
1
La La La La La (Around the World) – Social Media Image Compilation
Topics: social media, art, archive. "Treating images liked on social media platforms as an archive developed unconsciously. Set in time with the rapid beat of the nightcore remix of the ATC song ‘Around The World’ it feels a bit like an indoctrination video that would be used by MKultra or in the Ludovico Treatment, portraying a vision of the world defined by conflict, technology and absurdity" Source: https://www.instagram.com/tv/CMDSxjjhWff/ Downloaded with Instaloader with this command: instaloader -- -CMDSxjjhWff Instagram account at time of download was nick_vyssotsky I've extracted the individual images here: https://archive.org/details/hypernormalization-extracted-images Addeddate 2021-03-08 01:53:10 Identifier hypernormalization Year 2021 Uploaded by makeworld on March 8, 2021
1
Tulle Wedding Dress with Sweep Train
(Item no. GN13OEK32) Price: 136.48 € (80% off the 682.41 € list price; you save 545.93 €). Added to favorites 4722 times. Product description: Silhouette: Princess, A-line. Neckline: V-neck. Train: Sweep train. Sleeve length: Long sleeves. Embellishment: Sequins. Lined: Yes. Built-in bra: Yes. Color: White. Fabric: Tulle, sequins. Season: Winter, summer, spring, autumn. Net weight: 2 kg. Shipping weight: 2.48 kg. Packaging time: 7-15 business days. Shipping time: 2-9 business days. Estimated arrival: 16.06.2023 - 01.07.2023.
9
Learn setting up Google Tag Manager with this tutorial
GTM Tutorial As a Google Tag Manager consultant, I've set up GTM on 100+ client websites. This Google Tag Manager tutorial is where I teach you the process I've refined over the years, step by step, with examples and videos for you to learn. Further down, you can download a GTM setup configuration file with all of the following best practices to import into your container. If you can't wait, jump right into the installation tutorial or learn how to set up Google Tag Manager. But if you are a beginner it is important to first understand how to use a tag management system together with other tools. So keep on reading below first. I assume you already know what Google Tag Manager is. So let's talk about how GTM works and how to use it. Ideally, you only want to have one 3rd-party tag in the source code of your website or web app. The only 3rd-party tag on your website should be the Google Tag Manager code snippet. All other tags are then implemented through the tag manager itself. Other marketing and tracking tags include, for example, Google Analytics (GA), Facebook, Twitter, LinkedIn, AdWords, DoubleClick and god knows what. The primary reason lies in the advantages of Google Tag Manager: Due to these advantages, 30% of all websites on the internet already use a tag manager. And among them Google Tag Manager has a market share of 94%. So, unless you have a solid reason not to add a tag to GTM, as a general rule of thumb, add all tags to the GTM container. Use GTM like a connecting layer between your website and 3rd-party tags. Use GTM like a middle layer between your website and 3rd-party services. With GTM in between, your site and the 3rd-party tags are no longer in direct connection. Those services are mostly JavaScript libraries for marketing or tracking tools that are implemented with a tag. But any other tags apply as well. The only exception to the rule applies when you do conversion optimization with split-testing tools. During conversion rate optimization, A/B tests load different variants of a page, so the visitor may see the content re-render for a split second. To avoid CSS flicker and ensure that variant tests load fast, a split-testing tag may also go directly into the source code. Now that we have this out of the way, let's look at the implementation. Let's start the Google Tag Manager tutorial by showing you where to get the Google Tag Manager code snippet and then where to install it on the website. You can log in just by using your usual Google account. This is the common method to implement GTM. Do you use a popular content management system? If yes, you can also use a plugin that takes care of the Google Tag Manager installation. If your CMS also offers you a plugin to install other tags: don't use yet another plugin to install Google Analytics, Facebook or Google Ads. Instead, use GTM to install those tags. It will result in a faster page load speed, and it gives you more options to configure the tag. The GTM user interface also receives updates with new features regularly, so you are almost always better off implementing other marketing tags directly with it than with another integration. Plus, the gains in load time are good for your bounce rate and help SEO. Below is a list of the most common content management systems and their plugins to install Google Tag Manager. There are two WordPress plugins to implement GTM that I would use. First, there is the classic option called Google Tag Manager for WordPress. The second option is Site Kit by Google. 
It primarily allows you to add a dashboard to your WordPress backend showing information from your Google Analytics account and Google Search Console data - which is pretty sweet. And it also allows you to install GTM. For Shopify, there is a free plugin for GTM installation creatively called Google Tag Manager Installer. For Squarespace, there is no GTM extension or plugin. But you can add the GTM tag manually, by visiting sidebar > settings > advanced > code injection. Next, you paste the GTM tag into the form fields like this: For Wix, visit the main menu for your Wix website on the left sidebar. From there visit Marketing & SEO and then click on Marketing Integrations further down in the sidebar. Then you will find multiple Wix integrations for Google Analytics, the Facebook pixel and also one Wix extension to implement Google Tag Manager. Click on connect and get Google Tag Manager installed.
When you first log in to GTM, go to the submit button and publish an empty container. Otherwise, once you test if GTM works, the script will return a 400 response error and you will spend hours debugging why. 😭 It's a classic 😉
After you implemented the GTM script and published a container version (important), you can test if Google Tag Manager is working by doing any of these checks: Log into your GTM account and click on preview. Then, open a new tab in the browser and visit your website. The GTM debugger window should pop open on the bottom of the window if GTM works correctly. Activate the GTM debugger mode to check if GTM is working correctly. Open Chrome Developer Tools with a right-click on any page of your site and select inspect (alternatively F12 on Windows, or Cmd+Option+I on Mac). Then you go to the network tab and simultaneously reload the web page (F5 on Windows and Cmd+Shift+R on Mac). The network tab will fill with all network requests necessary to load the page. In the filter field in the top-left, type gtm.js to find the request for your JavaScript code and verify it has a status code of 200. Let me show you: If you don't have a 200 status code, maybe you forgot to submit and publish a container first in GTM? Google Tag Assistant: Install the Google Tag Assistant Chrome extension and start it on your site. It should call out a GTM container ID. You can also use the Chrome Extension Google Tag Assistant to ensure Google Tag Manager is working correctly.
When setting up Google Tag Manager you can make many advanced configurations. So how you set up GTM depends on what other tools you plan to use in your tag management system. That's why I brought together all relevant tutorials that cover whatever you could possibly want to set up in your GTM account - with examples. Simply follow this Google Tag Manager guide and thereby create a solid tag management foundation for your site. Only the tutorial on implementing a data layer requires coding skills or potentially web developers.
Note: In this Google Tag Manager tutorial, we will use GTM by manually setting up new tags and triggers for each event. The approach is not super scalable, but it is fast enough and easy, while meeting most tracking ambitions and still being applicable to other advanced setups. Larger websites and e-commerce stores require a scalable tag management solution. Therefore a data layer is implemented as the central piece to track events. With such a setup, you can use event handlers instead of setting up tags, triggers and variables for each event (see the short sketch below for what such a data layer event push looks like).
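To give a concrete picture of what a data layer event push looks like, here is a small illustrative sketch. The event name and field are invented for the example and are not part of this tutorial's setup; in GTM you would listen for the event with a custom event trigger.

<script>
  // Illustrative only: push a custom event plus some context into the data layer.
  window.dataLayer = window.dataLayer || [];
  window.dataLayer.push({
    event: 'newsletter_signup',   // hypothetical event name a GTM custom event trigger can match
    formLocation: 'footer'        // hypothetical extra value you could read via a data layer variable
  });
</script>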
This is the first step for everybody. Learn in this guide how to implement solid Google Analytics tracking, with Goals, Funnels, and your own visits excluded from the traffic. Plus more best practices. Set up Google Analytics
Once the fundamental tracking implementation is running as it should, you will also want to learn how to track user engagement. How often, for example, does a visitor send form submissions and click on a submit button or another HTML element? My event tracking guide explains exactly that for a button click and you can use the same method for any other click tracking. Set up event tracking
The most common use-case for GTM after installing GA is adding remarketing tags to a website. After all, they make up the majority of 3rd-party marketing tags and tracking codes that clutter the code base of our sites. Hence we implement them through our GTM account to keep the code base clean from marketing and analytics tags while taking advantage of the benefits of Google Tag Manager. Let's learn how to add the most common remarketing tags in the digital marketing space, the Facebook pixel, and the Google Ads remarketing tag.
Add Facebook pixel: First, you will need your Facebook pixel ID. Visit Facebook's Events Manager and click the green plus symbol to create the Facebook pixel. Afterwards, your pixel ID will be listed on the screen. Find your Facebook pixel ID in the Event Manager. Then via Google Tag Manager, create a new tag, call it for example Facebook - Page view and visit the gallery for tag templates. Search for Facebook and select the Facebook Pixel. Implement the Facebook pixel from GTM's tag templates. Add your Facebook Pixel ID and click save. Set the tag to fire on all pages. Afterwards, click submit in the top right corner to push the tag live.
Add the Google Ads remarketing tag: First, get your Google Ads conversion ID for your audience from Shared Library > Audiences. The user interface changed recently, but look for tag details or setup tag to find the below information. Take your conversion ID and conversion label from the tag details in your audience. Then in GTM, go to the tags section and click on new to add our new marketing tag. Give it a name like Google Ads - Page view. Choose Google Ads Remarketing as the tag type. Set your conversion ID and optionally the conversion label. Then let the tag fire on all pages and click save. Let me show you in this video: Click submit in the top right corner to push the marketing tag live.
Implement a data layer: You will want to implement a data layer if you set up tags on a regular basis and it takes too long and is simply too labor-intensive. Another benefit is that you can use the information from your database for firing triggers or send it as event data. Other external data sources can also be integrated. Websites that need ecommerce tracking typically fall into this category. My article about the data layer explains the implementation, data layer variables and how to configure custom tracking in a scalable way, which is a typical use-case for large ecommerce stores that need enhanced ecommerce tracking. Implement data layer
Each time I set up Google Tag Manager, the setup comes with a few configurations that I add every time. These best practices are applicable and helpful for most businesses and shouldn't be missing in any GTM tutorial. See the list below and pick the ones useful to you. Further down, you can download a GTM setup configuration with all these best practices to import into your own container.
Tracking outbound link clicks means tracking any clicks on external links that lead from your website to other websites. Tracking external links is a best practice that lets you know to which websites you send visits and helps you verify the interest of your visitors. To implement external link tracking, there are three steps:
Clicks on emails are a helpful metric that tends to correlate with phone calls or visits to a physical shop. To set up Google Analytics tracking for email clicks, follow the steps in the tutorial below:
Tracking taps on phone numbers is primarily helpful on mobile devices. Tapping on a phone number link directly initiates a phone call. On desktops, mouse clicks usually don't initiate anything. But as for tracking clicks on emails, it is a good metric to confirm contact rates overall, because it is correlated with other methods of contacting a business. Learn to configure GTM for tracking phone number clicks by following the steps below.
Tracking how often visitors download your materials is a good indicator of engagement. Such downloads can be e.g. eBooks, Excel sheets, images, videos or music. Tracking downloads works well to distinguish between visitor groups that were not interested in the page content and visitors that indeed were interested and downloaded what you offered. Follow this tutorial to learn how to set up download tracking:
A Google Tag Manager tutorial wouldn't be complete without a part about debugging. To test any of the previous event configurations and be sure they are working, do any of the following:
Since the above configurations are universally useful to most Google Tag Manager implementations, I created the above GTM setup as a file to import into other Google Tag Manager containers. It's a .json file with the settings and naming convention we went through. You can just import the container code without having to set up anything manually. Either use it with a brand new container to save time setting the tags up yourself, or import it into your existing container and update the Google Analytics settings variable, including the tracking ID, to your own. You can download and install these configurations (each with tags, triggers and variables) to set up Google Tag Manager: Simply import the container settings and deploy them. For demonstration purposes, I added a Google Analytics settings variable with a Google Analytics tracking ID of UA-12345678-9. Please update the GA tracking code to your own or, alternatively, change the GA settings variable in the tag configuration to your own one. Download the GTM setup configuration and see below how to import it. To get the most out of this GTM tutorial, follow the below steps and import the settings to your GTM container:
We have to be aware of the information we track. Data is not just data, because countries have regulations about data privacy which affect the type of information we may collect during tracking. Likewise, there are also terms on Google's side that forbid tracking personal information and sending the data to their servers. Generally, emails or phone numbers are personally identifiable information (PII) and we are not allowed to send that to our Google Analytics account, because it's against their terms of service. However, the phone numbers on our website or our own company email addresses are hardly private, because it is not the users' data but our own and publicly available on the website.
Nevertheless, if Google Analytics ever checked their database and found that data, they couldn't know that it's actually not PII data. Therefore, I recommend not taking any risk and obfuscating all email addresses and phone numbers sent to the Google Analytics account. Simo Ahava did some great work and wrote a custom task script to remove PII from Google Analytics hits, and I recommend you implement this along with the above configurations. It's not a must, but by implementing it you avoid any potential confusion as to whether you hold PII data or not.
Yes, because your website most likely wants to run Google Analytics and other third-party script tags. Setting all that up is a lot easier and faster with Google Tag Manager. Plus your site loads a bit faster too. Even if you only want to run Google Analytics, you should use Google Tag Manager. Setting up event tracking, cross-domain tracking or form tracking are common next steps, but a lot easier and faster with GTM than without. There are built-in triggers for scroll tracking and video tracking too, plus many tutorials online explaining how to set up Google Analytics with it.
Log in to analytics.google.com with your Google account and get your Google Analytics tracking code including the tracking ID. Now, don't add the Google Analytics tag into the source code of your site. The only hard-coded tag should be the Google Tag Manager tag. So head to tagmanager.google.com to get the GTM code snippet and instead implement that one on every page of your website. Finally, implement the GA code through a built-in tag, add your tracking ID and continue to set up Google Analytics through Google Tag Manager. For adding marketing tags or configuring custom tags, you always use GTM going forward.
Google Analytics is the library to collect data about visits and engagement on your website. Google Tag Manager on the other hand is a library running on your site to implement other libraries or tracking tools like Google Analytics. Because many marketing and analytics tools have extra JavaScript snippets, you use the GTM user interface to set them all up.
The first part of the code goes as high as possible into the <head> section. I recommend implementing it within the <head> but after any <style> or <script> tags that are vital to render the page - because we don't want to delay them. The second part is there to enable basic functionality in browsers with JavaScript turned off. It goes as high as possible into the <body> element. The logic behind the positioning of both tags is to ensure the early loading of GTM. It enables you to fire custom tags as early as possible during page load.
No, but Google Tag Manager enables you to implement Google Analytics in seconds with just a few clicks. The only thing you need is your Google Analytics tracking ID. Generally though, you don't need to use Google Analytics with Google Tag Manager. They are independent of each other.
Visit tagmanager.google.com and log in with your Google account to access Google Tag Manager. To start using GTM, create a new account and choose web-property as the target platform. Then take the snippet and install it on each page of your website.
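As a rough placement sketch of the two snippet parts described above (copy the actual snippets from tagmanager.google.com; GTM-XXXXXXX is a placeholder container ID, not a real one):

<head>
  <style>/* styles that are vital to render the page stay above GTM */</style>
  <!-- Part 1: the Google Tag Manager <script> snippet goes here,
       after the render-critical <style>/<script> tags -->
</head>
<body>
  <!-- Part 2: the <noscript> fallback goes here, as high as possible in <body> -->
  <noscript><iframe src="https://www.googletagmanager.com/ns.html?id=GTM-XXXXXXX"
    height="0" width="0" style="display:none;visibility:hidden"></iframe></noscript>
  <!-- rest of the page -->
</body>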
3
Spotify removes Neil Young's music after he objects to Joe Rogan's podcast
Spotify removes Neil Young's music after he objects to Joe Rogan's podcast Kevin Winter/Getty Images Spotify has removed famed singer-songwriter Neil Young's recordings from its streaming platform. On Monday, Young had briefly posted an open letter on his own website, asking his management and record label to remove his music from the streaming giant, as a protest against the platform's distribution of podcaster Joe Rogan. Rogan has been widely criticized for spreading misinformation about coronavirus vaccines on his podcast, which is now distributed exclusively on Spotify. Late Wednesday, the musician posted two lengthy statements on his website, one addressing the catalyst of his request and the other thanking his industry partners. In the first, he wrote in part: "I first learned of this problem by reading that 200-plus doctors had joined forces, taking on the dangerous life-threatening COVID falsehoods found in Spotify programming. Most of the listeners hearing the unfactual, misleading and false COVID information of Spotify are 24 years old, impressionable and easy to swing to the wrong side of the truth. These young people believe Spotify would never present grossly unfactual information. They unfortunately are wrong. I knew I had to try to point that out." As of last week, more than 1,000 doctors, scientists and health professionals had signed that open letter to Spotify. According to Rolling Stone, Young's original request on Monday, which was addressed to his manager and an executive at Warner Music Group, read in part: "I am doing this because Spotify is spreading fake information about vaccines – potentially causing death to those who believe the disinformation being spread by them ... They can have Rogan or Young. Not both." The letter was quickly removed from Young's website. Spotify's scrubbing of Young from its service was first reported on Wednesday afternoon by The Wall Street Journal. His removal from the streaming platform makes him one of the most popular musical artists not to appear on Spotify, where his songs have garnered hundreds of millions of streams. In a statement sent to NPR Wednesday afternoon, a Spotify spokesperson wrote: "We want all the world's music and audio content to be available to Spotify users. With that comes great responsibility in balancing both safety for listeners and freedom for creators. We have detailed content policies in place and we've removed over 20,000 podcast episodes related to COVID since the start of the pandemic. We regret Neil's decision to remove his music from Spotify, but hope to welcome him back soon." Earlier this month, Young sold 50% of his songwriting copyrights to the U.K. investment company Hipgnosis Songs, which was founded by music industry veteran Merck Mercuriadis. Most of the recordings in Young's discography are distributed by Warner Music Group, though a handful are distributed by Universal Music Group. In his second open letter posted late Wednesday, Young thanked those partners and acknowledged the financial hit they are taking, and said that 60% of the streaming income on his material came via Spotify. "Losing 60% of worldwide streaming income by leaving Spotify is a very big deal," Young wrote, "a costly move, but worth it for our integrity and our beliefs. Misinformation about COVID is over the line." 
He continued: "I sincerely hope that other artists can make a move, but I can't really expect that to happen. I did this because I had no choice in my heart. It is who I am. I am not censoring anyone. I am speaking my own truth." Covers of Neil Young songs by other artists remain available on Spotify. As of Wednesday evening, no other prominent musicians had followed in Young's footsteps. Many musical artists are unhappy with Spotify for a variety of reasons — not least of which is that Spotify pays what many musicians believe is an infamously stingy royalty rate. Still, it is the most popular audio streaming service in the world. According to the company, it has 381 million users in more than 184 countries and markets. Musicians want to meet their fans where they are, and not every artist or creator is willing to go to the lengths that Young has, in terms of putting their money where their mouths are. Moreover, Joe Rogan's podcast is extremely valuable to Spotify: it has been the most popular one globally offered on the service for the last two years, and the exclusive distribution deal he signed with Spotify in 2020 is worth a reported $100 million. Spotify's CEO, Daniel Ek, has said that his company isn't dictating what creators can say on its platform. In an interview with Axios last year, he said that Spotify doesn't bear editorial responsibility for Joe Rogan. In fact, Ek compared Rogan to "really well-paid rappers" on Spotify, adding: "We don't dictate what they're putting in their songs, either."
332
Tmux lets you select and copy text with your keyboard
Look, you already know that tmux lets you split your terminals or whatever. You know it lets you maintain remote sessions like a supercharged nohup. Neither of these features are very interesting, if you’re using a modern terminal emulator (or a tiling window manager) and doing your development locally. So you’ve never tried tmux. People raved about it, but you didn’t listen: you’re not in the target audience. tmux is not for you. I am projecting, obviously: that was my impression of tmux, a few years ago. Before I tried it. Before I learned how wrong I was. Because it turns out tmux’s killer feature is actually– Oh. You already saw. Darn. My dramatic reveal is ruined. Should have gone with a more clickbaity title. Anyway, yes, tmux lets you select and copy text with your keyboard. Mind-blowing? I mean, hopefully not. But also, kinda, right? How many hours of your life have you wasted reaching for the mouse and then moving your hand back to the keyboard just to copy that one commit hash? Trying to coax your terminal emulator to scroll at a reasonable speed so you could select a really long bit of command output? Not very many? Yeah, fair enough. It’s not really that big of a deal. But it’s still nice to have the option. I know what you’re thinking: nothing in this moment could bring me more joy than to be able to comfortably select and copy text on my terms. But tmux? It has so many features. I don’t want to learn a whole new thing just to avoid touching my mouse. I don’t want to use those weird dumb splits, or those weird dumb tabs. I don’t want to have to keep yet another definition of “window” straight. I like my iTerm, or my xmonad, and I feel like my life is hectic enough already. Plus I looked at a tmux keybinding cheatsheet once and now I’m upset and afraid. Okay. It’s okay. I’m here for you. We’re going to get through this together. You can use as much or as little of tmux as you want. You can set up tmux so that all it can do is select and copy text. You don’t have to have splits; you can keep using your native tabs. You can have the good stuff without any of the punctuation soup. Here’s an example of a starter ~/.tmux.conf that lets you select text, and pretty much nothing else: Run tmux, hit M-Space (or alt-space, or option-space), and get navigating. q will exit “copy mode.” To start a selection, use Space instead of v: v toggles block and character mode, confusingly. And to copy, press Enter instead of y. So… not the best vi keys, out of the box. You can also set -g mode-keys emacs, if that’s your thing. Run tmux list-keys | grep -- '-T copy-mode-vi' or tmux list-keys | grep -- '-T copy-mode' (if you’re using emacs) to see all the keybindings. You can, of course, rebind anything you want – make y work like vim, make Escape exit copy mode. tmux is very easy to customize; it’s easy to pick and choose exactly what you want. I wrote a whole blog post about setting up tmux à la carte. It will walk you through the basics of setting up a tmux config, demystify tmux’s if statement, and explain how to make tmux’s copy commands integrate with the system clipboard (which might just work out of the box, depending on your terminal emulator, but it might require another line or two in your config). I even wrote a whole other blog post about quickly selecting the output of the last command, because tmux can’t do that out of the box, and it’s… very useful, but sort of difficult to get working. But first: try that config up there for a while. Install tmux, and give it a shot. 
You won’t even notice that it’s running, unless you hit M-Space. Spend some time with it. If you don’t like it, there’s a 90-day return window, no questions asked. yeah yeah yeah get outta here with that
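For reference, a starter ~/.tmux.conf along the lines described above could look like the following. It is a reconstruction from the keys mentioned in the post (vi-style copy mode, M-Space to enter it), not necessarily the author's exact file:

# Use vi-style keys in copy mode (Space starts a selection, Enter copies).
set -g mode-keys vi
# Enter copy mode with alt/option-space, no prefix key needed.
bind-key -n M-Space copy-mode
# Optional: hide the status bar so tmux stays out of sight.
set -g status off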
1
Amazon-dwellers lived sustainably for 5000 years
Amazon-dwellers lived sustainably for 5,000 years Alvaro del Campo The researchers studied an area of forest in a remote corner of north-eastern Peru By Victoria Gill Science correspondent, BBC News A study that dug into the history of the Amazon Rainforest has found that indigenous people lived there for millennia while "causing no detectable species losses or disturbances". Scientists working in Peru searched layers of soil for microscopic fossil evidence of human impact. They found that forests were not "cleared, farmed, or otherwise significantly altered in prehistory". The research is published in the journal PNAS. Smithsonian Dolores Piperno is based at the Smithsonian's Natural History Museum in Washington DC and the Tropical Research Institute in Panama Dr Dolores Piperno, from the Smithsonian Tropical Research Institute in Balboa, Panama, who led the study, said the evidence could help shape modern conservation - revealing how people can live in the Amazon while preserving its incredibly rich biodiversity. Dr Piperno's discoveries also inform an ongoing debate about how much the Amazon's vast, diverse landscape was shaped by indigenous people. Some research has suggested the landscape was actively, intensively shaped by indigenous peoples before the arrival of Europeans in South America. Recent studies have even shown that the tree species that now dominates the forest was planted by prehistoric human inhabitants. Dr Piperno told BBC News the new findings provide evidence that the indigenous population's use of the rainforest "was sustainable, causing no detectable species losses or disturbances, over millennia". To find that evidence, she and her colleagues carried out a kind of botanical archaeology - excavating and dating layers of soil to build a picture of the rainforest's history. They examined the soil at three sites in a remote part of north-eastern Peru. Dolores Piperno/Smithsonian Phytoliths are microscopic plant fossils All three were located at least one kilometre away from river courses and floodplains, known as "interfluvial zones". These forests make up more than 90% of the Amazon's land area, so studying them is key to understanding the indigenous influence on the landscape as a whole. They searched each sediment layer for microscopic plant fossils called phytoliths - tiny records of what grew in the forest over thousands of years. "We found very little sign of human modification over 5,000 years," said Dr Piperno. "So I think we have a good deal of evidence now, that those off-river forests were less occupied and less modified." Dr Suzette Flantua from the University of Bergen is a researcher in the Humans on Planet Earth (Hope) project. She said this was an important study in working out the history of human influence on biodiversity in the Amazon. "But it's like assembling a puzzle of ridiculous extent where studies like this are slowly building evidence that either supports or contradicts the theory that the Amazonia of today is a large secondary forest after thousands of years of human management," she said. "It will be fascinating to see which side ends up with most conclusive evidence." 
Corine Vriesendorp The researchers sampled soil from the rainforest The scientists say their findings also point to the value of indigenous knowledge in helping us to preserve the biodiversity in the Amazon, for example, by guiding the selection of the best species for replanting and restoration. "Indigenous peoples have tremendous knowledge of their forest and their environment," said Dr Piperno, "and that needs to be included in our conservation plans". Dr Flantua agreed, telling BBC News: "The more we wait, the more likely that such knowledge is lost. Now is the time to integrate knowledge and evidence, and establish a sustainable management plan for Amazonia and the prehistoric human presence should be included."
2
Sports photos with a twist: the remarkable photos of Pelle Cass
Combining thousands of images, Pelle Cass’s photographs of tennis, basketball and more perfectly evoke the chaos and physicality of sport Main image: It’s ball over ... Dartmouth Men’s Basketball, 2019. Photograph: Pelle Cass Thu 11 Feb 2021 02.00 EST Last modified on Wed 19 Oct 2022 10.09 EDT
2
Scientists investigate diamond planets 'unlike anything in our solar system'
For decades Z-X Shen has ridden a wave of curiosity about the strange behavior of electrons that can levitate magnets. Zhi-Xun Shen vividly remembers his middle school physics teacher demonstrating the power of X-rays by removing a chunk of radioactive material from a jar stored in a cabinet, dropping it into a bucket and having students put their hands between the bucket and a phosphor screen to reveal the bones hidden beneath the skin and flesh. “That left an impression,” Shen recalled with a grin. Sometimes he wonders if that moment set the stage for everything that followed. Shen did not, he admits, have a strong interest in physics. There wasn’t much incentive to study in mid-1970s China. The country was in the grip of the Cultural Revolution of 1966, which had shut down all the universities and left most of the nation, including the town south of Shanghai where his parents worked in medicine, in poverty. But as Shen and his mother watched his brother board a bus to the countryside for “reeducation” at a forced labor camp one cold morning, she turned to him and said, “You are our hope for a college education.” Still, given the family’s circumstances, college seemed like an impossible dream. Then an unlikely series of events changed everything. In 1977, the Cultural Revolution ended and universities re-opened. When the same inspiring middle school teacher organized a physics competition, then-16-year-old Shen entered and came in first at every level – school, district, city, and province. It was fascinating and built his self-confidence, cementing his feeling that physics was the field for him, but where could it possibly lead? Shen won a college spot before graduating high school but held back a year on the advice of his father, then entered the physics program at Fudan University in Shanghai. And in his third year as a physics major, he took an entrance exam for a program just launched by Chinese-American Nobel laureate Tsung-Dao Lee that brought a limited number of Chinese students to the U.S. for advanced studies in physics. That’s how, in March 1987, Shen found himself in a jam-packed, all-night conference session that came to be known as the Woodstock of Physics, where nearly 2,000 scientists shared the latest developments related to the discovery of a new class of quantum materials known as high-temperature superconductors. These exotic materials conduct electricity with zero loss at much higher temperatures than anyone had thought possible, and expel magnetic fields so forcefully that they can levitate a magnet. Their discovery had revolutionary implications for society, promising better magnetic imaging machines for medicine, perfectly efficient electrical transmission for power lines, maglev trains and things we haven’t dreamed up yet. “I was able to get there early and get a seat in the room where the talks were going on,” Shen recalled. “To me, it was the most exciting thing – a completely new frontier of science suddenly opened up.” In another extraordinary stroke of luck, he happened to be in a perfect position to jump into this new frontier, not just to probe the quantum states of matter that underlie superconductivity but to develop ever-sharper tools for doing so. As a PhD student at Stanford University, he’d been using extremely bright X-ray beams to investigate related materials at what is now SLAC National Accelerator Laboratory, just up the hill from the main campus. 
As soon as the meeting ended, he set about applying the technique he’d been using, called angle-resolved photoemission spectroscopy, or ARPES, to the new superconductors. More than three decades later, with many important discoveries to his credit but the full puzzle of how these materials work still unsolved, Shen is the Paul Pigott Professor of Physical Sciences at Stanford’s School of Humanities and Sciences and a professor of photon science at SLAC. He and his colleagues are putting the finishing touches on what may be the world’s most advanced system for probing unconventional superconductors and other exotic forms of matter to see what makes them tick. Key parts of the system are just a few steps away from the X-ray beamline at SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL) where Shen carried out those first experiments. One of them is a recently upgraded setup where scientists can precision-build samples of superconducting material one atomic layer at a time, shuttle them through a tube and a vacuum chamber into the SSRL beamline without exposing them to air and make measurements with many times higher resolution than was ever possible before. The materials they build are also transported to the world’s first X-ray free-electron laser, SLAC’s Linac Coherent Light Source, for precision measurements not possible by other means. These experimental setups were designed with a singular purpose in mind: to unravel the weirdly collaborative behavior of electrons, which Shen and others believe is the key to unlocking the secrets of superconductivity and other phenomena in a broad range of quantum materials. Shen’s quest for answers to this riddle is driven by his curiosity about “how this remarkable phenomenon that shouldn’t have happened, happened,” he said. “You could argue that it’s a macroscopic quantum phenomenon – nature desperately trying to reveal itself. It only happens because those electrons work together in a certain way.” The first superconductors, discovered in 1911, were metals that became perfectly conducting when chilled below 30 kelvins, or minus 406 degrees Fahrenheit. It took about 50 years for theorists to explain how this worked: Electrons interacted with vibrations in the material’s atomic lattice in a way that overcame the natural repulsion between their negative charges and allowed them to pair up and travel effortlessly, with zero resistance. What’s more, these electron pairs overlapped and formed a condensate, an altogether different state of matter, whose collective behavior could only be explained by the nonintuitive rules of quantum mechanics. Scientists thought, for various reasons, that this could not occur at higher temperatures. So the discovery in 1986 of materials that superconduct at temperatures up to minus 225 degrees Fahrenheit was a shock. Weirder still, the starting materials for this form of superconductivity were insulators, whose very nature would be expected to thwart electron travel. In a perfect metal, Shen explained, each of the individual electrons is perfect in the sense that it can flow freely, creating an electrical current. But these perfect metals with perfect individual electrons aren’t superconducting. In contrast, the electrons in materials that give rise to superconductivity are imperfect, in the sense that they’re not free to flow at all. But once they decide to cooperate and condense into a superconducting state, not only do they lose that resistance, but they can also expel magnetic fields and levitate magnets. 
“So in that sense, superconductivity is far superior,” Shen said. “The behavior of the system transcends that of the individuals, and that fascinates me. You and I are made of hydrogen, carbon and oxygen, but the fact that we can have this conversation is not a property of those individual elements.” Although many theories have been floated, scientists still don’t know what prompts electrons to pair up at such high temperatures in these materials. The pursuit has been a long road – it’s been 33 years since that crazy Woodstock night – but Shen doesn’t mind. He tells his students that a grand scientific challenge is like a puzzle you solve one piece at a time. Better tools are gradually bringing the full picture into focus, he says, and we have already come a long way.
2
Show HN: Dezip – a website for browsing source code archives
dezip is a website for browsing source code archives. to use it, type dezip.org/ in your address bar, then the address of the archive file, like this: dezip.org/https://www.lua.org/ftp/lua-5.4.2.tar.gz discomfort with the centralization of software development into sites like github and gitlab. convenient source code browsing shouldn't be coupled so tightly to repository hosting services. currently, the following protocols are supported: ftp://, gemini://, gopher://, http://, and https://; and the following archive formats, identified by file extension: .zip, .tgz, .tar.gz, .tb2, .tbz, .tbz2, .tar.bz2, .txz, and .tar.xz. yeah! click the magnifying glass button or press f to bring up the search field. selected text will appear in the field automatically (so you don't have to copy and paste it). press enter to search. j and k move forward and backward through search results. the current version is available here: dezip-1.1.zip [browse] see BUILD.md for build instructions. dezip was made by ian. feel free to email me at ian@ianhenderson.org or contact me on twitter at @ianh_.
1
Baby's first Rust with extra steps (XPC, launchd, and FFI)
Baby's first Rust with extra steps (XPC, launchd, and FFI)! During an ongoing argument in a chatroom between some folks about how “zomg systemd is ruining everything”, I decided to look at some init system history. I learned a cool tidbit of information from a HN comment : apparently systemd’s design was inspired by Apple’s launchd. Embarassingly, I knew little to nothing about launchd, even as a lifelong Mac user. I began to play with launchctl on my local machine. Turns out that launchd does some pretty cool things: the *.plist describing a job can do more than just specify its arguments. For example, the QueueDirectories key lets you spawn jobs when files are added to a directory (how useful!). I was oblivious to this having interacted with launchd the past years mostly via brew services. With the help of soma-zone’s LaunchControl and launchd.info companion site, I was able to fiddle and figure out what various plist keys did. I wondered if there was something similar to LaunchControl that could run in a terminal. I’ve used chkservice on Linux, but there seems to be no macOS equivalent. I’ve been dying for an excuse to learn a little bit of Rust (loved n+1 years in a row by the SO developer survey ) – so this was my chance. Several months later I ended up with launchk. The rest of this post will go over: getting started, interfacing with launchd, a lot of macOS IPC bits, getting stuck, and a bunch of probably questionable Rust FFI stuff. To start, we somehow need to get a list of services. While reading from popen("/bin/launchctl", ..) is a viable strategy, it wouldn’t teach us much about the innards of how launchctl talks to launchd. We could look at the symbols used in the launchctl binary, but why not start from the launchd source code? launchctl.c -> list_cmd seemingly has all we need and all of this stuff is available to us by including launch.h! Trying to reproduce the call for listing services does not work. Error is void * and to be used with vproc_strerror(vproc_err_t) which I can’t find in headers or symbols. vproc_err_t error = vproc_swap_complex ( NULL , VPROC_GSK_ALLJOBS , NULL , & resp ); if ( error == NULL ) { fprintf ( stdout , "PID \t Status \t Label \n " ); launch_data_dict_iterate ( resp , print_jobs , NULL ); } (lldb) p error (vproc_err_t) $0 = 0x00007fff6df967d1 A different API call is used if one provides a label after launchctl list. // Run full file: https://gist.github.com/mach-kernel/f25e11caf8b0601465c1215b01498292 launch_data_t msg , resp = NULL ; msg = launch_data_alloc ( LAUNCH_DATA_DICTIONARY ); launch_data_dict_insert ( msg , launch_data_new_string ( "com.apple.Spotlight" ), LAUNCH_KEY_GETJOB ); resp = launch_msg ( msg ); launch_data_dict_iterate ( resp , print_job , NULL ); $ ./launch_key_getjob_test com.apple.Spotlight LimitLoadToSessionType: Aqua MachServices: (cba) 0x7fbfbb504700 Label: com.apple.Spotlight OnDemand: (cba) 0x7fff9464d490 LastExitStatus: 0 PID: 562 Program: /System/Library/CoreServices/Spotlight.app/Contents/MacOS/Spotlight ProgramArguments: (cba) 0x7fbfbb504b70 PerJobMachServices: (cba) 0x7fbfbb5049b0 This looks to be what we want but there is a problem: the API is deprecated (and apparently has been so since macOS 10.9). From the header and the launchd Wikipedia page: There are currently no replacements for other uses of the {@link launch_msg} API, including submitting, removing, starting, stopping and listing jobs. 
The last Wayback Machine capture of the Mac OS Forge area for launchd was in June 2012,[9] and the most recent open source version from Apple was 842.92.1 in code for OS X 10.9.5. After some more reading I learned that the new releases of launchd depend on Apple’s closed-source libxpc. Jonathan Levin’s post outlines a method for reading XPC calls: we have to attach and break on xpc_pipe_routine from where we can subsequently inspect the messages being sent. New hardened runtime requirements look at codesign entitlements – if there is no get-task-allow, SIP must be enabled or the debugger won’t be able to attach: error: MachTask::TaskPortForProcessID task_for_pid failed: ::task_for_pid ( target_tport = 0x0103, pid = 66905, &task ) => err = 0x00000005 ((os/kern) failure) macOSTaskPolicy: (com.apple.debugserver) may not get the taskport of (launchctl) (pid: 66905): (launchctl) is hardened, (launchctl) doesn't have get-task-allow, (com.apple.debugserver) is a declared debugger Afterwards, I was able to attach, but never hit xpc_pipe_routine. Looking at symbols launchctl uses, it seems that there is a (new?) function: $ nm -u /bin/launchctl | grep xpc_pipe _xpc_pipe_create_from_port _xpc_pipe_routine_with_flags Breaking on xpc_pipe_routine_with_flags succeeds! x86-64 calling convention in a nutshell: first 6 args go into %rdi %rsi %rdx %rcx %r8 %r9, and then on the stack in reverse order (see link for various edge cases like varadic functions). From the launjctl post above, we can use xpc_copy_description to get human-readable strings re what is inside an XPC object. Some searching also found us the function signature! int xpc_pipe_routine_with_flags ( xpc_pipe_t pipe , xpc_object_t request , xpc_object_t * reply , uint32_t flags ); (lldb) b xpc_pipe_routine_with_flags (lldb) run list com.apple.Spotlight * thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.2 frame #0: 0x00007fff2005e841 libxpc.dylib`xpc_pipe_routine_with_flags libxpc.dylib`xpc_pipe_routine_with_flags: -> 0x7fff2005e841 <+0>: pushq %rbp (lldb) p (void *) $rdi (OS_xpc_pipe *) $3 = 0x00000001002054b0 (lldb) p (void *) $rsi (OS_xpc_dictionary *) $4 = 0x0000000100205df0 (lldb) p (void *) *((void **) $rdx) (void *) $6 = 0x0000000000000000 (lldb) p $rcx (unsigned long) $8 = 0 (lldb) p printf("%s",(char*) xpc_copy_description($rsi)) }<dictionary: 0x100205dd0> { count = 7, transaction: 0, voucher = 0x0, contents = "subsystem" => <uint64: 0x473e37446dfc3ead>: 3 "handle" => <uint64: 0x473e37446dfc0ead>: 0 "routine" => <uint64: 0x473e37446dcefead>: 815 "name" => <string: 0x100205c80> { length = 19, contents = "com.apple.Spotlight" } "type" => <uint64: 0x473e37446dfc7ead>: 7 "legacy" => <bool: 0x7fff800120b0>: true "domain-port" => <mach send right: 0x100205e30> { name = 1799, right = send, urefs = 5 } Not shown above are 4 continues: the prior messages likely do some other setup, but we want this object as it has our argument of com.apple.Spotlight. We are also interested in xpc_object_t* reply, which is in $rdx above. We can define a LLDB variable to keep track of the reply pointer while we step through: (lldb) expr void** $my_reply = (void **) $rdx (lldb) p $my_reply (void *) $my_reply = 0x0000000000000000 Keep stepping if it’s still null. 
Eventually it will point to an XPC object that we can inspect: (lldb) p printf("%s",(char*) xpc_copy_description(*((void **) $my_reply))) <dictionary: 0x1007043f0> { count = 1, transaction: 0, voucher = 0x0, contents = "service" => <dictionary: 0x1007044e0> { count = 9, transaction: 0, voucher = 0x0, contents = "LimitLoadToSessionType" => <string: 0x1007046c0> { length = 4, contents = "Aqua" } "MachServices" => <dictionary: 0x100704540> { count = 2, transaction: 0, voucher = 0x0, contents = "com.apple.private.spotlight.mdwrite" => <mach send right: 0x1007045a0> { name = 0, right = send, urefs = 1 } "com.apple.Spotlight" => <mach send right: 0x100704610> { name = 0, right = send, urefs = 1 } } "Label" => <string: 0x100704750> { length = 19, contents = "com.apple.Spotlight" } "OnDemand" => <bool: 0x7fff800120b0>: true "LastExitStatus" => <int64: 0x1a74b81b7404f909>: 0 "PID" => <int64: 0x1a74b81b741be909>: 497 "Program" => <string: 0x100704b00> { length = 67, contents = "/System/Library/CoreServices/Spotlight.app/Contents/MacOS/Spotlight" } "ProgramArguments" => <array: 0x1007049b0> { count = 1, capacity = 1, contents = 0: <string: 0x100704a40> { length = 67, contents = "/System/Library/CoreServices/Spotlight.app/Contents/MacOS/Spotlight" } } "PerJobMachServices" => <dictionary: 0x1007047f0> { count = 3, transaction: 0, voucher = 0x0, contents = "com.apple.tsm.portname" => <mach send right: 0x100704850> { name = 0, right = send, urefs = 1 } "com.apple.coredrag" => <mach send right: 0x100704910> { name = 0, right = send, urefs = 1 } "com.apple.axserver" => <mach send right: 0x1007048b0> { name = 0, right = send, urefs = 1 } } } Looks like some of the same keys we saw from the launch_msg example! Better yet, we can manipulate xpc_object_t by importing xpc.h as described by Apple’s XPC Objects documentation . For example, we can try to read the “ProgramArguments” key: (lldb) p (void *) xpc_dictionary_get_dictionary(*((void**) $my_reply), "service"); (OS_xpc_dictionary *) $30 = 0x00000001007044e0 (lldb) expr void * $service = (void *) 0x00000001007044e0; (lldb) p printf("%s",(char*) xpc_copy_description((void *) xpc_dictionary_get_array((void *) $service, "ProgramArguments"))) }<array: 0x1007049b0> { count = 1, capacity = 1, contents = 0: <string: 0x100704a40> { length = 67, contents = "/System/Library/CoreServices/Spotlight.app/Contents/MacOS/Spotlight" } We can create a dictionary with xpc_dictionary_create and populate it xpc_dictionary_set_*. We can read & interact with the reply. Two pieces remain: PS: This procedure was used to dump several launchctl commands I wanted to use in launchk: you can find them here. Answering those questions was the first significant hurdle. This is broad and detailed topic, so I will do my best to summarize as needed. To begin with terminology: XPC and Mach ports are both used for IPC. XPC is implemented atop Mach ports and provides a nice high level connections API ( NSXPCConnection ). launchd can also start XPC services on-demand (lazily, when messages are sent to that service) and spin them down if the system experiences load and the service is idle. It’s recommended (for convenience, security) to use XPC if possible. And, as we saw above in the XPC Objects API docs, xpc_object_t can be a reference to an array, dictonary, string, etc. On macOS, creating a process spawns a new Mach task with a thread. 
A task is an execution context for one or more threads, most importantly providing paged & protected virtual memory, and access to other system resources via Mach ports. Ports are handles to kernel-managed secure IPC data structures (usually a message queue or synchronization primitive). The kernel enforces port access through rights: a send right allows you to queue a message, a receive right allows you to dequeue a message. A port has a single receiver (only one task may hold a receive right), but many tasks may hold send rights for the same port. A port is analogous to a UNIX pipe or a unidirectional channel. Some of the special ports a new task has send rights to: A task can only get a port by creating it or transfering send/recv rights to another task. To make things easier, in addition to its init duties, launchd also maintains a registry of names to Mach ports. A server can use the bootstrap port to register: bootstrap_register(bootstrap_port, "com.foo.something", port_send), then subsequently another task can retrieve that send right via bootstrap_look_up(bootstrap_port, "com.foo.something", &retrieved_send). To make things more confusing, bootstrap_register was deprecated requiring developers to implement the “port swap dance” workaround: the parent task creates a new port and overrides the child task’s bootstrap port, then the child task creates a new port and passes the send right to the parent, finally the parent sends the actual bootstrap port to the child (i.e. over the newly established communication channel) so it may set the bootstrap port back to its correct value. So, back to the question: what’s an XPC pipe? From the launchctl symbols we listed above, there is a xpc_pipe_create_from_port. Some online digging revealed headers with the function definition and example usages in Chromium sandbox code. So, an XPC pipe can be made from a Mach port. I am unsure how to describe them: it looks to be a way to say “this Mach port can serialize XPC objects”? At any rate, let’s break on it: xpc_pipe_t xpc_pipe_create_from_port ( mach_port_t port , int flags ); * thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 2.6 frame #0: 0x00007fff2005a542 libxpc.dylib`xpc_pipe_create_from_port libxpc.dylib`xpc_pipe_create_from_port: -> 0x7fff2005a542 <+0>: movq %rsi, %rdx 0x7fff2005a545 <+3>: movl %edi, %esi 0x7fff2005a547 <+5>: xorl %edi, %edi 0x7fff2005a549 <+7>: jmp 0x7fff2006f896 ; _xpc_pipe_create (lldb) p $rdi (unsigned long) $52 = 1799 (lldb) p $rsi (unsigned long) $58 = 4 (lldb) p/t $rsi (unsigned long) $57 = 0b0000000000000000000000000000000000000000000000000000000000000100 Conveniently, port and flags retain the same values across runs. 1799 is the same value seen earlier for domain-port. If we log the bootstrap_port extern (mach_init.h), it is also 1799! Cool! 
// bootstrap_port: 1799 printf ( "bootstrap_port: %i \n " , bootstrap_port ); Putting everything together, we see a reply that is the similar to the one inspected in the debugger earlier, plus now there are no deprecation warnings (ha): // Full: https://gist.github.com/mach-kernel/f05dcab3293f8c1c1ec218637f16ff73 xpc_pipe_t bootstrap_pipe = xpc_pipe_create_from_port ( bootstrap_port , 4 ); xpc_object_t list_request = xpc_dictionary_create ( NULL , NULL , 0 ); // Populate params xpc_dictionary_set_uint64 ( list_request , "subsystem" , 3 ); xpc_dictionary_set_uint64 ( list_request , "handle" , 0 ); xpc_dictionary_set_uint64 ( list_request , "routine" , 815 ); xpc_dictionary_set_string ( list_request , "name" , "com.apple.Spotlight" ); xpc_dictionary_set_uint64 ( list_request , "type" , 7 ); xpc_dictionary_set_bool ( list_request , "legacy" , true ); xpc_dictionary_set_mach_send ( list_request , "domain-port" , bootstrap_port ); xpc_object_t reply = NULL ; int err = xpc_pipe_routine_with_flags ( bootstrap_pipe , list_request , & reply , 0 ); bootstrap_port: 1799 XPC Response: <dictionary: 0x7f837dd044f0> { count = 1, transaction: 0, voucher = 0x0, contents = "service" => <dictionary: 0x7f837dd045e0> { count = 9, transaction: 0, voucher = 0x0, contents = "LimitLoadToSessionType" => <string: 0x7f837dd047c0> { length = 4, contents = "Aqua" } ... There is more to be discussed regarding xpc_pipe_routine and MIG (edit: probably another post, this one is already huge), but at this point we’ve collected enough info to move on. We know what C headers and functions need to be used to make queries against launchd, and we know how to dump queries made by launchctl. Our goal is to focus on getting our minimal C example into a Rust project. As a newcomer, rust-analyzer was tremendously helpful for discovering functions and API surface. Get Rust from rustup. Afterwards, make a new directory, cargo init, then set up bindgen. Include the same headers as in the example C program in wrapper.h – they are in the default search path and require no further setup (find where: xcrun --show-sdk-path). Let’s start by including the generated bindings and putting the type aliases (for the typedef) and function declarations into place: include! ( concat! ( env! ( "OUT_DIR" ), "/bindings.rs" )); extern "C" { pub fn xpc_pipe_create_from_port ( port : mach_port_t , flags : u64 ) -> xpc_pipe_t ; pub fn xpc_pipe_routine_with_flags ( pipe : xpc_pipe_t , msg : xpc_object_t , reply : * mut xpc_object_t , flags : u64 , ) -> c_int ; pub fn xpc_dictionary_set_mach_send ( object : xpc_object_t , name : * const c_char , port : mach_port_t , ); } pub type xpc_pipe_t = * mut c_void ; Things don’t look terribly different. *mut c_void and *const c_char are equivalent to C void* and const char*. On to making the XPC bootstrap pipe and empty dictionary: let bootstrap_pipe : xpc_pipe_t = unsafe { xpc_pipe_create_from_port ( bootstrap_port , 0 ) }; // pub fn xpc_dictionary_create( // keys: *const *const ::std::os::raw::c_char, // values: *mut xpc_object_t, // count: size_t, // ) -> xpc_object_t; // Make an empty dictionary (no keys, no values) let list_request : xpc_object_t = unsafe { xpc_dictionary_create ( null (), null_mut (), 0 ) }; All FFI functions are unsafe: Rust can’t check for memory safety issues in external libs. We need to use unsafe Rust to call C functions and dereference raw pointers. null() and null_mut() give us null *const T and *mut T pointers respectively. 
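As an aside, the bindgen setup mentioned earlier ("make a new directory, cargo init, then set up bindgen") typically boils down to a small build script plus the wrapper.h next to Cargo.toml. The following is a generic sketch of such a build.rs, not this project's actual build script (it assumes bindgen is listed under [build-dependencies]):

// build.rs -- generic bindgen sketch, not the project's actual build script.
use std::env;
use std::path::PathBuf;

fn main() {
    // Re-run the build script when the header list changes.
    println!("cargo:rerun-if-changed=wrapper.h");

    let bindings = bindgen::Builder::default()
        .header("wrapper.h")
        .generate()
        .expect("unable to generate bindings");

    // Write bindings.rs into OUT_DIR so it can be pulled in with include!().
    let out_path = PathBuf::from(env::var("OUT_DIR").unwrap());
    bindings
        .write_to_file(out_path.join("bindings.rs"))
        .expect("couldn't write bindings");
}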
Now, to populate the dictionary: // Make me crash by changing to "subsystem\0" let not_a_cstring : & str = "subsystem" ; let key : CString = CString :: new ( not_a_cstring ) .unwrap (); unsafe { xpc_dictionary_set_uint64 ( list_request , key .as_ptr (), 3 ); } There is some extra work to go from a string slice to a std::ffi::CString . new() automatically null-terminates the string and checks to see that there are no null-bytes in the payload, so it returns a Result<CString, NulError> that must explicitly be handled. Afterwards, we can use as_ptr() on the CString to get the const *c_char expected by xpc_dictionary_set_uint64. Once the dictionary is filled we can attempt the XPC call: // Full: https://gist.github.com/mach-kernel/5c0f78e18def295d7251ffd41083920a let mut reply : xpc_object_t = null_mut (); let err = unsafe { xpc_pipe_routine_with_flags ( bootstrap_pipe , list_request , & mut reply , 0 ) }; if err == 0 { let desc = unsafe { CStr :: from_ptr ( xpc_copy_description ( reply )) }; println! ( "XPC Response \n {}" , desc .to_string_lossy ()) } else { println! ( "Error: {}" , err ) } The goal (as I understand it) is to make an API for safe usages of the bindings. Advice from a friend of mine, and Jeff Hiner’s post have been invaluable resources. I still have a lot of work to do on FFI etiquette! It was suggested to me to move all the bindings to a *-sys crate, so I started with that. Everything revolves around xpc_object_t. I made a struct around it and xpc_type_t (get with xpc_get_type) to make it more convenient to check whether or not to call xpc_int64_get_value vs xpc_uint64_get_value, etc. #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub struct XPCType ( pub xpc_type_t ); unsafe impl Send for XPCType {} unsafe impl Sync for XPCType {} #[derive(Debug, Clone, PartialEq, Eq)] pub struct XPCObject ( pub xpc_object_t , pub XPCType ); unsafe impl Send for XPCObject {} unsafe impl Sync for XPCObject {} I can then make XPC objects by implementing From<T>. impl From < i64 > for XPCObject { fn from ( value : i64 ) -> Self { unsafe { XPCObject :: new_raw ( xpc_int64_create ( value )) } } } We also want to be able to get values out of the XPC Objects. We check that the pointer is indeed an XPC int64, and only call the function if check_xpc_type succeeds (the function returns Result<(), XPCError>, the ? returns Err(XPCError) if there is a mismatch). pub trait TryXPCValue < Out > { fn xpc_value ( & self ) -> Result < Out , XPCError > ; } impl TryXPCValue < i64 > for XPCObject { fn xpc_value ( & self ) -> Result < i64 , XPCError > { check_xpc_type ( & self , & xpc_type :: Int64 ) ? ; let XPCObject ( obj_pointer , _ ) = self ; Ok ( unsafe { xpc_int64_get_value ( * obj_pointer ) }) } } Then, a roundtrip test, and one with a wrong type. This hopefully should keep us from hurting ourselves: #[test] fn xpc_value_i64 () { let xpc_i64 = XPCObject :: from ( std :: i64 :: MAX ); let rs_i64 : i64 = xpc_i64 .xpc_value () .unwrap (); assert_eq! ( std :: i64 :: MAX , rs_i64 ); } #[test] fn xpc_to_rs_with_wrong_type () { let xpc_i64 = XPCObject :: from ( 42 as i64 ); let as_u64 : Result < u64 , XPCError > = xpc_i64 .xpc_value (); assert_eq! ( as_u64 .err () .unwrap (), ValueError ( "Cannot get int64 as uint64" .to_string ()) ); } To be honest, I marked the structs Send and Sync out of convenience, but there are criteria (quoting Jeff’s post): You can mark your struct Send if the C code dereferencing the pointer never uses thread-local storage or thread-local locking. 
This happens to be true for many libraries. You can mark your struct Sync if all C code able to dereference the pointer always dereferences in a thread-safe manner, i.e. consistent with safe Rust. Most libraries that obey this rule will tell you so in the documentation, and they internally guard every library call with a mutex. xpc_type_t seems safe enough: xpc_get_type returns stable pointers that can be checked against externs we can import from our bindings (e.g., this is the xpc_type_t for arrays: (&_xpc_type_array as *const _xpc_type_s)). xpc_object_t is a pointer to a heap allocated value: an integer or string are easier to reason about, but what happens to things like XPC dictionaries? Herein lies a brutal segfault that took a while to figure out. xpc_dictionary_apply takes an xpc_dictionary_applier_t, which is an Objective-C block (a big thanks to the block crate!) that is called for every k-v pair in the dictionary. I used this in order to try to go from an XPC dictionary to HashMap<String, XPCObject>. The xpc_dictionary_apply and related manpages made mention of retain and release, which led to these two functions: xpc_retain, xpc_release. They increase/decrease the reference count of the XPC object (in a manner similar to Rust’s Arc). I tested calling xpc_retain before inserting into the map, thinking the value did not live long enough: let block = ConcreteBlock :: new ( move | key : * const c_char , value : xpc_object_t | { unsafe { xpc_retain ( value ) }; let str_key = unsafe { CStr :: from_ptr ( key ) .to_string_lossy () .to_string () }; let xpc_object : XPCObject = value .into (); map_refcell .borrow_mut () .insert ( str_key , xpc_object .into ()); true }); // https://github.com/SSheldon/rust-block#creating-blocks let block = block .copy (); let ok = unsafe { xpc_dictionary_apply ( object .as_ptr (), &* block as * const _ as * mut _ ) }; This fixed the segfault! Aftewards, it made sense to also implement Drop to clean up after objects we no longer need: impl Drop for XPCObject { fn drop ( & mut self ) { let XPCObject ( ptr , _ ) = self ; unsafe { xpc_release ( * ptr ) } } } All felt very motivating but missed the mark. I littered unsafe code in application logic thinking I was “fixing segfaults” but was approaching the problem incorrectly (and – just because it ran 🤦). Our goal is to provide a safe API: the XPCObject wrapper is not always safe to use (nor the xpc_object_t it carries safe to dereference). There was another problem too: There’s a memory leak, or better yet, about 5k new leaks every 10 seconds. The solution was to be as explicit as possible about which ways XPC object pointers make it into XPCObject. It is probably not wise to play with reference counts for objects we did not make – so the new strategy was to use xpc_copy to get deep copies of XPC objects we wanted to place in Rust structs. A second way in would be via xpc_etc_create functions. Knowing where these places are allows us to do some logging (ps: thanks for ref count offsets Fortinet ): [INFO xpc_sys::objects::xpc_object] XPCObject new (0x7f804f60ea10, string, refs 1 xrefs 0) [INFO xpc_sys::objects::xpc_object] XPCObject new (0x6ba154c65adfac3d, int64, refs ???) [INFO xpc_sys::objects::xpc_object] XPCObject drop (0x7f804f60ea10, string, refs 1 xrefs 0) [INFO xpc_sys::objects::xpc_object] XPCObject drop (0x6ba154c65adfac3d, int64, refs ???) 
[INFO xpc_sys::objects::xpc_object] XPCObject drop (0x7f804f60e7b0, string, refs 1 xrefs 0)

With this, we can figure out if our create/drop counts match:

$ grep 'XPCObject new' log.txt | wc -l
193472
$ grep 'XPCObject drop' log.txt | wc -l
193448

Close enough (24): there were probably some live objects before exiting (?). Nice and flat, that's what we want to see! And much more significantly: no more unsafe in application code.

Not being explicit threw me into another roadblock that also took on the order of days to figure out. This is the same launchctl list XPC dictionary we have been using for all of the examples:

let mut message: HashMap<&str, XPCObject> = HashMap::new();
message.insert("type", XPCObject::from(1));
message.insert("handle", XPCObject::from(0));
message.insert("subsystem", XPCObject::from(3));
message.insert("routine", XPCObject::from(815));
message.insert("legacy", XPCObject::from(true));

let dictionary: XPCObject = XPCObject::from(message);

xpc_pipe_routine with this dictionary would cause a segfault. I logged both the XPC dictionary made in Rust and the one from the earlier example C program. I checked to make sure that I got the routine and type numbers correct, but didn't check the types. Mind you, there exists an XPC function to make signed ints, so it all worked fine until whatever received the message was unable to deserialize the key correctly. Adding as u64 to get a uint64 XPC object was the fix:

message.insert("routine", XPCObject::from(815 as u64));

There is not a whole lot more from here on out. An XPCPipeable trait handles wrapping xpc_pipe_routine and surfacing errors in a Result<XPCObject, XPCError>, including those we get in the XPC response (try one of the earlier examples with a dummy name). xpc_strerror can be used both for the errno returned by the pipe function and for the error codes provided in the dictionary:

<dictionary: 0x7fccf0604240> { count = 1, transaction: 0, voucher = 0x0, contents =
    "error" => <int64: 0x46e15ff7d31ff269>: 113
}

XPCDictionary and XPCShmem were made for these two 'special' types of XPC objects, and a QueryBuilder to avoid repeating code that inserts the same few keys into a dictionary:

let LIST_SERVICES: XPCDictionary = XPCDictionary::new()
    .entry("subsystem", 3 as u64)
    .entry("handle", 0 as u64)
    .entry("routine", 815 as u64)
    .entry("legacy", true);

let reply: Result<XPCDictionary, XPCError> = XPCDictionary::new()
    .extend(&LIST_SERVICES)
    // Specify the domain type, or fall back on requester domain
    .with_domain_type_or_default(Some(domain_type))
    .entry_if_present("name", Some("com.apple.Spotlight"))
    .pipe_routine_with_error_handling();

This looks a lot better than what we started with in the first part. I don't know about idiomatic, but I can live with it. There remains more work to be done: for example, pipe_routine_with_error_handling should ideally be able to take a pipe as an argument instead of blindly using the bootstrap pipe, the XPC* structs have public pointer members, and you can still make an XPCObject from any xpc_object_t. I hope to fix these things in the coming months as I get more free time and learn how to do so properly. We shall move on, but feel free to look at xpc-sys to see the end result.

I used Cursive because the view abstractions were very easy to grok and get started with. Much of the visual layout was inspired by another TUI I use for managing Kubernetes clusters: k9s. I like the omnibox-style interface.
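Before getting into the omnibox plumbing, here is a minimal Cursive skeleton for orientation. This is only a sketch of how a Cursive app is typically wired up; the view names and layout are illustrative placeholders, not the actual launchk widgets.

use cursive::views::{Dialog, LinearLayout, TextView};

fn main() {
    // cursive::default() picks an available terminal backend.
    let mut siv = cursive::default();

    // A vertical LinearLayout as the root container; only one child is
    // focused at a time, which is handy for routing omnibox events later.
    let root = LinearLayout::vertical()
        .child(TextView::new("launchk-style skeleton"))
        .child(TextView::new("service rows would render here"));

    siv.add_layer(Dialog::around(root).title("demo"));
    siv.add_global_callback('q', |s| s.quit());
    siv.run();
}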
It seems reasonable to encode all of the tricky-key-combo bits into one component, then have it send semantically meaningful updates (e.g. a command was submitted). Views can implement OmniboxSubscriber, and OmniboxSubscribedView is kind of a hack so I can go from &mut dyn View to &mut OmniboxSubscribedView (to have on_omnibox available):

pub trait OmniboxSubscriber: View {
    fn on_omnibox(&mut self, cmd: OmniboxEvent) -> OmniboxResult;
}

// This is here so we can do view.as_any_mut().downcast_mut::<OmniboxSubscribedView>()
pub trait Subscribable: View + OmniboxSubscriber {
    fn subscribable(self) -> OmniboxSubscribedView
    where
        Self: Sized,
    {
        OmniboxSubscribedView::new(self)
    }
}

I chose a LinearLayout as my root container. Only one child can be focused at a time in a LinearLayout, which seems reasonable: we invoke on_omnibox for only that child. tokio futures with an interval were used to keep polling the XPC endpoint that returns the list of services (so we can see things as they pop on or off).

Past this point the rest of the challenges were related to launchd. For example, the type key in the XPC message changes with the desired target domain for a given service. The type key is required when both starting and stopping a service. "bits of launchd" was useful and helped clarify which domains each of the types mapped to. To find services, it was easy to query for every type key and return the first match. However, when they are not running, how do we know which one to choose? I was not able to figure this out and settled on a prompt.

Similarly, it would be nice to filter services by whether they are enabled or disabled. launchctl dumpstate includes this information, so I thought it would be easy to do the same as before (grab the info out of an XPC dictionary). The dumpstate endpoint takes an XPC shmem object that is populated with the reply after the call. It took me a little while to understand how to work with shmems, only to finally look inside and find: a giant string. The same one you see when running launchctl dumpstate. Fun!

(lldb) b xpc_pipe_routine_with_flags
(lldb) p printf("%s",(char*) xpc_copy_description($rsi))
<dictionary: 0x100604410> { count = 5, transaction: 0, voucher = 0x0, contents =
    "subsystem" => <uint64: 0x91e45079d2a3988d>: 3
    "handle" => <uint64: 0x91e45079d2a3a88d>: 0
    "shmem" => <shmem: 0x100604630>: 20971520 bytes (5121 pages)
    "routine" => <uint64: 0x91e45079d297888d>: 834
    "type" => <uint64: 0x91e45079d2a3b88d>: 1
(lldb) expr void * $my_shmem = ((void *) xpc_dictionary_get_value($rsi, "shmem"));
(lldb) expr void * $my_region = 0;
(lldb) expr size_t $my_shsize = (size_t) xpc_shmem_map($my_shmem, &$my_region);
(lldb) p $my_shsize
(size_t) $my_shsize = 20971520
(lldb) mem read $my_region $my_region+250
0x103800000: 63 6f 6d 2e 61 70 70 6c 65 2e 78 70 63 2e 6c 61  com.apple.xpc.la
0x103800010: 75 6e 63 68 64 2e 64 6f 6d 61 69 6e 2e 73 79 73  unchd.domain.sys
0x103800020: 74 65 6d 20 3d 20 7b 0a 09 74 79 70 65 20 3d 20  tem = {..type =
0x103800030: 73 79 73 74 65 6d 0a 09 68 61 6e 64 6c 65 20 3d  system..handle =
0x103800040: 20 30 0a 09 61 63 74 69 76 65 20 63 6f 75 6e 74   0..active count
0x103800050: 20 3d 20 35 38 33 0a 09 6f 6e 2d 64 65 6d 61 6e   = 583..on-deman

Some other XPC endpoints (dumpjpcategory) take UNIX fds and are used in a similar manner. Not really knowing how to safely parse the string, or whether I can get structured data out in any other way, I decided to forward the output to a $PAGER.
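Out of curiosity, here is a rough sketch (not the actual launchk code) of what reading that shmem region back into a Rust string might look like, assuming the bindgen-generated bindings expose xpc_shmem_map with the signature used in the lldb session above; shmem_to_string and the import paths are illustrative.

use std::ffi::c_void;
use std::ptr::null_mut;
// Assumes the generated xpc_sys bindings (xpc_object_t, xpc_shmem_map) are in scope.

unsafe fn shmem_to_string(shmem: xpc_object_t) -> String {
    let mut region: *mut c_void = null_mut();

    // xpc_shmem_map maps the region and returns its size in bytes (0 on failure).
    let size = xpc_shmem_map(shmem, &mut region);
    if size == 0 || region.is_null() {
        return String::new();
    }

    // The payload is the same text `launchctl dumpstate` prints, so a lossy
    // UTF-8 copy is enough before handing it off to $PAGER. A real
    // implementation would also unmap the region when finished.
    let bytes = std::slice::from_raw_parts(region as *const u8, size as usize);
    String::from_utf8_lossy(bytes).trim_end_matches('\0').to_string()
}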
Most if not all other requests have responses with useful keys inside an XPC dictionary, so this is far from a complaint! :)

Other 'weirdness' circles around error semantics. For example, on Catalina, invoking xpc_pipe_routine as part of the reload command returns an error (the dialog calls xpc_strerror for a human-readable message). On Big Sur, there is no error response unless the failure is critical. I wonder if it's configurable.

From here on out it was feature work: I tried to focus on stuff I wanted, like search and filtering. I made some TUI components, a TableView out of a SelectView, and the little [sguadl] filter inside the omnibox.

At work, most of my day is spent on web services. My C is terrible; I live for the nursery. To have gotten to a place where everything works feels great! And fights with the borrow checker were all good opportunities to learn how to write better code. I mean it: I am not great at paying attention. It feels so much more accessible to get build errors instead of undefined behavior that can go unnoticed. And honestly, with the stuff in std::sync you don't have to be a genius to attempt a quick fix.

Thanks very much for the work done by others in the links scattered throughout this post; it made it possible for me to try this on my own! I hope to keep up the rhythm of learning to write better Rust code. There is one resounding message, though: the hype is real! :)
39
What everyone knows and what no one knows
The following observation has no scientific value and it may have no value at all. It surprised me, though. It is about something that everyone in a group of young people across the world consistently seems to know, and something else that no one in that group seems to know.

I recently conducted a large number of remote one-on-one interviews of candidates for a new master's program in software engineering. The students are from many countries across four continents and typically have a bachelor degree in computer science, although in many cases from not so well-known universities. Many also have some industry experience as software engineers. Initially, I was using different questions for different interviews, perhaps because of an unconscious fear of intellectual laziness, but I soon realized that this concern was silly; consistency is more important, making it possible to compare candidates. So I developed a medley of staple questions, with some variations to account for diversity (if you spot in the first few minutes that a candidate is ahead of the pack, you can try more advanced probes). The experience was eye-opening as to the quality of degrees worldwide; I became used, for example, to proud CS graduates telling me that Quicksort is O(n).

The answers to two of my standard questions particularly struck me. One is something that everyone in my sample of over 120 interviewees answers correctly; the other something that no one can answer.

The first one follows my request to name some design patterns. If the interviewee hesitates, I will coax him or her a bit. For example: do you know about MVC? Then a light goes on and the answer comes, Model-View-Controller. Not just the expansion of the acronym: when I query further, I get a decent explanation. Well, not necessarily a deep one (I have since found out that if I present a concrete case of software architecture and ask whether a certain part is M, V, or C, the results are not always impressive), but let's not be too harsh; the basic idea is all I wanted in the interviews. Now here is an idea that seems to have made it to the masses, from Ghana to California and in between. I hope Trygve Reenskaug, the inventor of MVC, is proudly reading this.

Another of my questions is about logic. Its statement is very simple: Assume that A implies B. What can we say about `not A' and `not B'? To me, this is about the same as asking "What is the value of 2 + 2?". It is not. In my entire set of interviews, I did not get a single correct answer. It was not a matter of misunderstood words or context; I always took the trouble to explain what this was all about, in as many different ways as needed to make the question clear. I got a few vague and wordy excursions into things that are true and things that are false, but never the clear, straightforward answer. Not once. The closest I heard is that maybe `not A' does not imply `not B' -- along with a few affirmations that maybe it does. (Qualification: I did not use that question with a small number, maybe five, of the top candidates, assuming they would find it trivial. In retrospect I should have, but that could at most have affected the overall result marginally.)

So everyone in the world, or at least that sample of the world, knows MVC. No one knows the elementary rules of logic.
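For reference, the elementary fact the question probes is the contrapositive; in standard notation:

% A implies B is equivalent to: not B implies not A
(A \Rightarrow B) \;\equiv\; (\lnot B \Rightarrow \lnot A)

and, in particular, knowing that A implies B licenses no conclusion about whether `not A' implies `not B'.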
Over the past decade I have seen many computer science departments, including in top institutions, remove the logic course from the curriculum. And MVC is a great concept. But still. Interpretation, if any, left to the reader.

Bertrand Meyer is chief technology officer of Eiffel Software (Goleta, CA), professor and provost at the Schaffhausen Institute of Technology (Switzerland), and head of the software engineering lab at Innopolis University (Russia).
5
How to solve the DevOps talent gap
DevOps is becoming the most critical function in software development but there is a shortage of talent to support needs across technology driven organizations. Applications have evolved and the shift in architecture is now to be more closely intertwined with development than ever. Developers have new requirements that are invented by the evolution toward containerized applications and microservice-based architectures. Instead of just focusing on their code, they now need to be more mindful of how their code relates to the infrastructure that supports their applications. The interconnected web of dependencies that live under the hood are difficult to unravel as new technologies, processes, and systems have been introduced and then retired over the years. When large companies have adopted a plan to move toward modern application architectures, they are subject to the constraints of their existing ecosystems. Startups are unique and often can start developing their applications’ architecture from a blank slate. The irony of containerized applications and microservice architectures is that the same problems arise around DevOps for both startups and large enterprises. The problems exist around dependency management, server maintenance, and support for consistency across environments. Changes in computing, technology, and processes are going to continuously evolve and adapt at a speed that is faster than most teams’ ability to keep up. We can only do our best to increase time to value and accelerate productivity to make sure we remain inside the window of survival. A lack of sufficient DevOps talent should not eliminate the rollout of a strategy to future proof your business. Complementing your existing DevOps team with technologies that add the services and automation in areas that they don’t have bandwidth to reach is how teams can accomplish more with less. Technology companies need to support DevOps, automate their workflows, or stand in their place when there is a void of not having a team in place. The reality is that every company is a technology company in the 21st century, and everyone is going to deal with challenges around bandwidth issues in their DevOps organizations. If the talent pool is so small in the industry, there is a high chance that your pool of hires will also be too small for the needs of your business over time. DevOps hires are so important because not only are they limited but they are the people that determine what the organization should or should not do to streamline development. We are too dependent on people in a field that should allocate investment where it is measurable and within our control. Environments are where the visions and strategies developed by DevOps come to life. Preview environments that have all of the tooling, technology relationships, data, APIs, and all other criteria vital to application performance result in faster development lifecycles. The value of environments that replicate production in pre-production phases of development is obvious but standing up those environments is a strain on time and resources. In many cases, creating these important environments is so time consuming and costly these important steps are skipped, much to the detriment of the entire business. Release complements DevOps teams or can replace the DevOps overhead of smaller organizations through automated execution of workflows. 
We make it easy to run your engineering organization within an advanced DevOps framework without additional headcount or a reduction in resources. We accomplish this by offering a platform for Environments as a Service (EaaS), where we configure your applications to enable the automated creation of isolated environments with each code commit, providing visibility into every change. In this way, organizations gain DevOps capacity without additional headcount, and even gain productivity at scale. On-demand environments take the workload of provisioning and maintaining servers off your engineering team so that they can focus on code instead of spinning up servers. Release offers a hands-off process to run your microservices and apps on Kubernetes (k8s), spinning up environments within a matter of minutes. DevOps can prioritize more important tasks, while engineers can bring software to production with velocity when reproducible environments are accessible to all.
1
Azure AD provisioning, now with attribute mapping, improved performance and more
Howdy folks,

We've made several changes to identity provisioning in Azure AD over the past several months, based on your input and feedback:

Easily map attributes between your on-premises AD and Azure AD.
Perform on-demand user provisioning to Azure AD as well as your SaaS apps.
Significantly improved sync performance in Azure AD Connect.
Manage your provisioning logs and receive alerts with Azure Monitor.

And as in previous months, we continue to work with our partners to add provisioning support to more applications. In this blog, I'll give you a quick overview of each of these areas.

Map attributes from on-premises AD to Azure AD

The public preview of Azure AD Connect cloud provisioning has been updated to allow you to map attributes, including data transformation, when objects are synchronized from your on-premises AD to Azure AD. Check out our documentation to learn more about mapping attributes from AD to Azure AD.

On-demand provisioning of users

We've enabled on-demand provisioning of users to Azure AD and your SaaS apps. This is useful when you need to quickly provision a user into an app, and it is also useful for administrators when they are testing an integration for the first time. See our documentation on on-demand provisioning of users in Azure AD and quickly provisioning a user into an app.

Azure AD Connect with improved sync performance and faster deployment

The latest version of Azure AD Connect sync offers a substantial performance improvement for delta syncs and is up to 10 times faster in key scenarios. We have also made it easier to deploy Azure AD Connect sync by allowing import and export of Azure AD Connect configuration settings. Learn more about these changes in our documentation.

Create custom alerts and dashboards by pushing the provisioning logs to Azure Monitor

You can now store your provisioning logs in Azure Monitor, analyze trends in the data using the rich query capabilities, and build visualizations on top of the data in minutes. Check out our documentation on the integration.

New applications integrated with Azure AD for user provisioning

We release new provisioning integrations each month. Recently, we turned on provisioning support for 8x8, SAP Analytics Cloud, and Apple Business Manager. Check out our documentation on 8x8, Apple Business Manager, and SAP Analytics Cloud.

As always, we'd love to hear any feedback or suggestions you have. Let us know what you think in the comments below or on the Azure AD feedback forum.

Best regards,
Alex Simons (twitter: @alex_a_simons)
Corporate Vice President Program Management
Microsoft Identity Division
6
An essay on property rights as the foundation of all law
By Dale B. Halling, June 11, 2017

"Ultimately property rights and personal rights are the same thing." Calvin Coolidge

In the United States, we tend to study the Constitution to secure and understand our freedoms. This is a bit strange, as our freedom throughout history has been secured mainly by property rights. This was understood by the founders and many others.

There is a "diversity in the faculties of men, from which the rights of property originate…. The protection of these faculties is the first object of government."[1] James Madison, Federalist No. 10

"The reason why men enter into society is the preservation of their property." John Locke

"No other rights are safe where property is not safe." Daniel Webster

"Ultimately property rights and personal rights are the same thing." Calvin Coolidge

"Without property rights, no other rights are possible." Ayn Rand, "Man's Rights," The Virtue of Selfishness, p. 94 (http://aynrandlexicon.com/lexicon/property_rights.html)

"Property rights … are the most basic of human rights and an essential foundation for other human rights."[2] Milton Friedman

Property rights in the United States were a matter of state law for most of its history, with the minor exception of the Fifth Amendment. Thus, to gain a better understanding of how our freedom is secured, we need to study property rights. This is a big subject, and this essay will focus on the historical development and the philosophical foundations of property rights.

The concept of property rights started with some sense of ownership of food and personal possessions among nomadic people. People had the idea of a superior moral claim to the apple they picked or the deer they killed or the clothes they made and wore compared to other people. With the advent of the Agricultural Revolution people began to think they had a superior moral claim to the land they cultivated and the crops grown on this land, which was the beginning of the idea of property rights in land. However, these were not real property rights, because the King or other political body almost always reserved the power to trample people's property rights when it was politically expedient.

In the Middle Ages "property rights" were thought to reside ultimately in the King or the sovereign. Legal realists still hold onto this idea. During the Renaissance legal theorists worked on a rational basis for property rights, starting with Hugo Grotius in the early 1600s. Adam Mossoff has written an excellent paper explaining the historical development of property rights theory, including the major theories today, called What is Property? Putting the Pieces Back Together.[3]

After Grotius, John Locke continued the work of developing a rational theory of property rights. Locke's formulation is that anything in a state of nature (unowned) that someone makes useful results in their having a property right in the item they made useful. So if you shoot a deer you have property rights in the deer, or if you plant olive trees on some ownerless land you have a property right in the land and the trees. This is true according to Locke because you have an exclusive moral claim over yourself (body and mind), and anything you create value in gives you property rights in the item.
This is commonly summarized as having property rights in one's self. It is important to understand that all of law is based on property rights, logically (and historically). Some libertarians have tried to postulate systems where property rights are some sort of contract. You cannot have a contract unless you have an exchange, and you cannot exchange something you do not own. You also need to have property rights over yourself to enter into a contract. Contract law presupposes property rights law, and to reverse the process results in nonsense. Tort law makes no sense without property rights: if you do not own yourself or some property, how can you claim to have been harmed? This is true of all other areas of law also.

Property rights law was developed in common law countries and in the United States along Locke's theoretical formulation for at least a century or more. For instance, in the United States the Homestead Act (of 1862) provided that any adult who had not taken up arms against the U.S. could acquire 160 acres of land by farming and living on the land for five years. The Act made the implicit assumption that the land was in a "state of nature" and that people could obtain property rights by making it more valuable. This is almost an exact formulation of Locke's theory of property rights, except that the land had to be surveyed first and the acquirer had to put in an application.

There are several interesting things about the Homestead Acts. One is that they were first proposed before the U.S. Constitution was ratified, and many other homestead acts were passed after the one in 1862. The Homestead Act of 1862 was clearly passed as part of the politics of the Civil War in the U.S. Another interesting point is the Homestead Act implies that land grants by Kings did not result in valid property rights. For instance, the land grants to George Washington for his military service from the British Crown did not confer valid property rights in the land. Washington had problems with squatters on this land, who seemed to understand that Washington's property rights in this land were invalid since he did nothing to create value in the land.[4]

Another interesting thing about the Homestead Act is that the surveyed plats were separated by roads. There were no taxes to create or maintain these roads, so they were unowned land, land in which no one could have property rights. It is important to note that property rights in land that cannot be accessed make those rights meaningless. An essential element of all property rights in land includes access to and from the land and the rest of the world. This does not mean that the owner of the land cannot exclude people from their land, but it does mean that property rights in land cannot interfere with reasonable travel. This is one of those questions in law where the philosophy lays out the general theory, but the law has to work out some practical realities in which there is no exact answer. In the Homestead Act, they decided that roads had to exist around every square mile block of privately owned land (a one-mile grid). This obviously would have to be modified sometimes for terrain, and another distance or pattern for the roads could have been selected without violating the general principles.
It would also be an abridgement of people’s right to travel if property rights in land could imprison people. People exercised the right to travel over land before there were any property rights in land. Thus property rights in land that unduly impinge on the ability of travel violate other people’s rights. It appears the Romans understood this. In the twelve ancient Roman tablets that set out the law, tablet seven appears to require land owners to maintain the roads. “1. Let them keep the road in order. If they have not paved it, a man may drive his team where he likes.”[5]  Table eight requires “Where a road runs in a straight line, it shall be eight feet, and where it curves, it shall be sixteen feet in width.”[6] Tablet nine requires “When a man’s land lies adjacent to the highway, he can enclose it in any way that he chooses; but if he neglects to do so, any other person can drive an animal over the land wherever he pleases.”[7] The Roman tablet eight also require space between buildings, “A space of two feet and a half must be left between neighboring buildings.”[8] This last law could have been for travel or to keep fires from spreading through the city. Unfortunately, there does not appear any commentary to let us know. Some people have suggested that this ownerless land for roads in the Homesteading Act is inconsistent with Ayn Rand’s Objectivism: “Capitalism is a social system based on the recognition of individual rights, including property rights, in which all property is privately owned.”[9] This mistake is based on a misunderstanding. There is no such thing as property. There are property rights and things in which people may have property rights. In informal language we often use the shorthand property to refer to something in which we or other people have property rights in. Unfortunately, this shorthand results in confusion. Correctly interpreted what Rand’s statement is saying is that governments cannot have property rights in land or anything else, only people can. What the government has is a custodial duty. The government cannot have a moral claim to have made something useful, only individuals can do this. Rand explained it this way with respect to the Homestead Act of 1862: Thus, the government, in this case, was acting not as an owner but as the custodian of ownerless resources who defines objectively impartial rules by which potential owners may acquire them.[10] Rand did not directly address the concept of property rights, however she laid out many of her ideas in two articles in Capitalism: The Unknown Ideal: 1) The Property Status of Airwaves, and 2) Patents and Copyrights. Rand echoes Locke when she explains the origin of property rights, “Any material element or resource which, in order to become of use or value to men, requires the application of human knowledge and effort, should be private property—by the right of those who apply the knowledge and effort.”[11] Rand is stating that because you made/created something valuable you have a moral claim to the item that is greater than other peoples’. Rand’s main refinement over Locke is to make it clear that this includes mental effort (in a way Locke leaves that more ambiguous), “thus the law establishes the property right of a mind to that which it has brought into existence.”[12] One important point that should be clear from this discussion is that dead people cannot have property rights. Property rights are a moral and legal relationship between a person and an item (tangible or intangible). 
A related point is that when someone abandons their property rights by no longer making something useful, then it is ownerless again and therefore in a state of nature. This means that someone else can come in and make the item productive again and therefore acquire property rights in the item. This is a very complicated subject, and covering it in even a cursory way could be a whole book; however, I will point to some examples. In common law there is something called adverse possession, which "is a situation when a person who does not have legal title to land (or real property) occupies the land without the permission of the legal owner" and gains legal title to the land.[13] Another complicated situation where these principles come into play is when a person dies, that is, estates law. A dead person cannot have property rights in anything, so suddenly those items they had property rights in are ownerless. Property rights in land do not go on forever, as many people assume. A detailed discussion of this issue is beyond the scope of this article.

We have talked about how property rights arise, but not what they are. Many people think that their property rights in their land are unlimited: that they go up infinitely into the sky and down to the center of the Earth, and that they can do anything they want on their land. Why do they think this? Did they create value 500 feet below the surface of their land? Did they create value 500 feet into the air above their land? Of course not. The property rights you obtain are related to the value you created.

The most common form of property rights is called "fee simple" in the law. Fee simple allows you (ignoring building codes) to farm/ranch, have a house (building), run a business, etc. on your land. It does not allow you to put a commercial hog sty on your farm next to your neighbor's house. This would violate nuisance laws, which ensure that you have reasonable enjoyment and use of your land. On the other hand, you cannot buy a farm and then build a house next to your neighbor's pig sty and then sue them for nuisance.

In addition, there are other groups of property rights such as mining rights, which come in two varieties, lode and placer. Lode mineral rights are designed to ensure that the person who discovers a vein of, say, gold is the owner of the whole vein. Otherwise it would be easy for other people to say they discovered the obvious other end of the vein and profit at the expense of the true discoverer of the vein. These rights may not include any rights to the surface land above them, while a placer type of mineral rights does. There are also grazing rights, water rights, easements, trademark rights, property rights in chattel, copyrights, patent rights (inventions), trade secrets, etc. All of these property rights are different and come with different rights of action and rules, based on the value that was created.

Property rights are not monolithic, as many people seem to believe. As Adam Mossoff explains in his paper, Why Intellectual Property Rights? A Lockean Justification: "As Locke first explained, property is fundamentally justified and defined by the nature of the value created and secured to its owner … To wit, different types of property rights are defined and secured differently under the law."
Some property rights come with the right to exclude; however, grazing rights do not include a right to exclude unless the person is interfering unreasonably with the grazing rights owner's ability to graze the land. Even with "fee simple" ownership of land, your right to exclude is limited to using reasonable means to exclude people who are interfering with your enjoyment and use of your land. This means you cannot shoot someone for crossing your land.

Property rights are a vast and complex area of law, which this article only touches on. Property rights are the most important area for securing our freedoms, and all law starts with and builds on property rights. The key philosophical foundations of property rights are:

Property rights are the foundation of all law.
Property rights are a moral and legal claim to take action with respect to an object.
Property rights arise when a person creates value.
The rights obtained with property rights depend on the value created; they are not monolithic.

Property rights are the foundation of all our freedoms and much more important than the Constitution in securing our freedoms.

[1] The Economic Principles of America's Founders: Property Rights, Free Markets, and Sound Money, Paul Ermine Potter and Dawn Tibbetts Potter, accessed 4/15/17, http://www.heritage.org/political-process/report/the-economic-principles-americas-founders-property-rights-free-markets-and#_ftnref3
[2] Milton Friedman's Property Rights Legacy, Forbes, Ken Blackwell, accessed 4/15/17, https://www.forbes.com/sites/realspin/2014/07/31/milton-friedmans-property-rights-legacy/#238d1416635d
[3] Mossoff, Adam, What is Property? Putting the Pieces Back Together. Arizona Law Review, Vol. 45, p. 371, 2003. Available at SSRN: https://ssrn.com/abstract=438780 or http://dx.doi.org/10.2139/ssrn.438780
[4] George Washington, Covenanter squatters, http://explorepahistory.com/hmarker.php?markerId=1-A-28F, accessed April 30, 2017.
[5] http://www.historyguide.org/ancient/12tables.html, accessed May 7, 2017.
[6] http://www.constitution.org/sps/sps01_1.htm, accessed May 7, 2017.
[7] http://www.constitution.org/sps/sps01_1.htm, accessed May 7, 2017.
[8] http://www.constitution.org/sps/sps01_1.htm, accessed May 7, 2017.
[9] "What Is Capitalism?" Capitalism: The Unknown Ideal, 19, Ayn Rand Lexicon, http://aynrandlexicon.com/lexicon/capitalism.html, accessed May 7, 2017.
[10] Ayn Rand, Capitalism: The Unknown Ideal, The Property Status of Airwaves, p. 132.
[11] Ayn Rand Lexicon, "The Property Status of the Airwaves," Capitalism: The Unknown Ideal, 122.
[12] Ayn Rand, Capitalism: The Unknown Ideal, Patents and Copyrights, p. 141.
[13] https://en.wikipedia.org/wiki/Adverse_possession, accessed May 7, 2017.
2
Global Freedom Index 2021
8 Mar 2021 World How much individual freedom do people have in each country? Let’s find out. For more maps, follow Landgeist on Instagram or Twitter. Like this map and want to support Landgeist? The best way to support Landgeist, is by sharing this map. When you share this map, make sure that you credit Landgeist and link to the source article. If you share it on Instagram, just tag @Land_geist. On Twitter tag @Landgeist. The Freedom House recently released their annual report ‘Freedom in the World’. 210 countries and territories were ranked on a scale from 1 to 100 based on people’s global freedom. Overall, there are 81 countries that are free, 59 partly free and 54 that are not free. Over the past 15 years, the index shows a worrying trend. The number of free countries is slowly declining and the number of not free countries is slowly increasing. Most of the world’s least free countries and territories are located in Asia and Africa. The world’s freest countries are located mostly in Europe, North America and Oceania. The highest scoring countries are Finland, Norway and Sweden, with a maximum score of 100. The least free country/territory, is Tibet. With a score of only 1. Each country and territory is rated based on 25 indicators. These indicators cover political rights and civil liberties of individuals. the methodology is largely based on the Universal Declaration of Human Rights. The 25 indicators can be grouped into the following subjects: According to the Freedom House, China has had a leading role in the decline of global freedom in 2020: The malign influence of the regime in China, the world’s most populous dictatorship, was especially profound in 2020. Beijing ramped up its global disinformation and censorship campaign to counter the fallout from its cover-up of the initial coronavirus outbreak, which severely hampered a rapid global response in the pandemic’s early days. Its efforts also featured increased meddling in the domestic political discourse of foreign democracies, transnational extensions of rights abuses common in mainland China, and the demolition of Hong Kong’s liberties and legal autonomy. Meanwhile, the Chinese regime has gained clout in multilateral institutions such as the UN Human Rights Council, which the United States abandoned in 2018, as Beijing pushed a vision of so-called non-interference that allows abuses of democratic principles and human rights standards to go unpunished while the formation of autocratic alliances is promoted. Although the overall trend is negative. 2 countries stand out positively in 2020: Malawi and Taiwan. Both countries have improved significantly in 2020 and have shown the resilience of their democracies. Malawi was able to hold new elections, after the constitutional court ruled that fresh elections had to be held in 2020. After evidence of corruption and voter fraud in the 2019 elections. Opposition candidate Lazarus Chakwera won the rerun vote by a comfortable margin, proving that independent institutions can hold abuse of power in check. Taiwan has overcome several challenges in 2020. They’ve been able to repress the coronavirus with remarkable effectiveness. Besides that, they also had to deal with increasing aggression from the Chinese regime. Beijing tried to sway global opinion against Taiwan’s government and deny the success of its democracy. 
This was partially successful: Beijing pressured the World Health Organization to ignore Taiwan's early warnings of human-to-human transmission and to exclude Taiwan from the WHO's World Health Assembly. In early 2020, Taiwanese voters defied politicized disinformation campaigns from China and overwhelmingly re-elected president Tsai Ing-wen. The full list of countries and territories and their scores can be found on the Freedom House website.
1
Web Stories Use Cases for ECommerce
Mobile-first approach, fast loading, and visual commerce are the undisputed leaders among the eCommerce trends in 2021. It seems that modern consumers are so used to social media that online stores are becoming more and more like them. And today, I want to talk about one super successful invention of social networks – stories, which are infiltrating all popular platforms today. Born in Snapchat in 2013, stories captivated all large social platforms in 7 years, including even the business-focused LinkedIn and Slack. Moreover, this format didn’t stop on the social networks and inspired Google to develop a new web technology called Web Stories. Web Stories are a new format of web pages, consisting of successive screens. They can replace each other automatically or via the user’s taps. It includes images, video, audio, gifs, 360 images & videos, explorable by gyroscope; text, shapes, and all those elements can be animated. In general, this is a mixture of a classic presentation with the usual stories from social networks. The difference is that you can share your Web Story anywhere across the web, and it will be live forever. That’s it. However, if you want to study this topic in more detail and see how it might look, take a glance at the Web Stories site from Google and read what their creators say about this format. Now Web Stories are actively gaining popularity among bloggers and news portals. Many analysts call this format a winner in the race for consumer attention. But what does all of that mean for eCommerce? Creating Web Stories can help an online store improve several business metrics at once – traffic from Google, ad campaign conversions, average check, loyalty, and retention of existing customers. And this article is just about how exactly to make it. Let’s start with your product detail pages (PDP). Every online store has PDP, and if you keep your eye on the current trends, you probably already have prepared diversified high-quality visual content. It’s time to turn your images, videos, text into immersive Web Stories and to use them as: Web Story about each of your products will have its own link that you can easily insert into your ad, be it Google Shopping Ads or a sponsored post on social media. Imagine a mobile user clicks on your Facebook ad, and instead of the standard, not always a well-thought-out mobile version of the usual PDP, he finds himself in the immersive world of a Story! Though, why imagine? Click on the following ad example on your smartphone to see how it works 👇 In this case, the Web Story looks so organic and natural that the user might not immediately realize that he has left Facebook. Then, it is up to you: you can lead a potential customer to a standard PDP, or right away to the shopping cart/checkout, or/and push him to share your Story anywhere on the network. Moreover, Web Stories are good as pre-landing pages not only for their immersive effect. They are fast, Google Web Vitals compliant, and designed specifically for mobile gadgets. Have you heard that more than half of all eCommerce sales will most likely be mobile by the end of this year? And these sales will definitely happen in the fastest and most optimized for mobile gadgets online stores. As a programmer, I know firsthand how difficult and time-consuming it can be to optimize an existing, large, and because of that cumbersome store, so… Just let Web Stories simplify your task! 
Embedding Web Stories of your PDP on other pages on your website is a great way to enhance the customer experience, motivate visitors to spend more time on your site, and explore more products. Thus, you can draw more attention to certain products. For example, bestsellers or discounted products. For this, just place their Web Stories on the Home page, in a special section, or on any listing page (category, search results, etc.) And also, you can place PDP Web Stories in the related products section. Draw more attention to them, and thereby increase the average check! Such embedding can be done through the ordinary links/buttons/pictures or the Web Story player. And if we aim at attracting attention, it is better to use the second option. It looks something like this (all the following stories in this article are embedded using the Web Story Player): However, using PDP Web Stories may not only improve the consumers’ experience but also increase the traffic to your website. Firstly, the user’s session duration is a Google ranking factor. The more time consumers spend tapping your Stories, the higher your site climbs in search results. Secondly, Web Stories are separate web pages with their own meta-schemas, which increases your chances of being found on Google. And thirdly, the cherry on top is the opportunity of getting into Google Discover, which recently introduced a special section for Web Stories in USA, India, and Brazil. The lucky ones who successfully enter Google Discover receive an enormous surge in traffic. And as long as the competition in this segment is low, you can easily become such a lucky one! As you can see, once created PDP Web Stories can bring various benefits to an online store, both in conversion and traffic. But we’ll touch upon that later, and now let’s look at other Web Stories use cases for eCommerce. Let’s move to the next level and consider the possibility of creating Web Stories not for one but for several products simultaneously. This bundle/collection thing has long been loved by eCommerce representatives for its ability to increase the average check. After all, whatever the consumer wants to buy, not a single product in the world can exist in a vacuum. Bought a phone – need a charger and headphones; bought a top – need a skirt, and so on. So, why don’t we collect the best visual content on all of these products into one captivating Web Story? And use it for: For this, embed your bundle Web Story on each product page, in the section of related products/you may also like/others also bought. Or even right in the shopping cart, highlighting that the entire bundle will cost less than all these products separately. Again, we can do it with a link, a button, a picture, a Web Story player. Imagine how gorgeous it might look if you prepare an independent image for the bundle and present in a Story a complete look from all these clothes on a model! Of course, collections and bundles in Web Stories format can be used as pre-landing pages for advertising in the same way as PDP. Furthermore, such Web Stories can serve as announcements of a new collection launching, a sale, etc. Again, we can embed them wherever our hearts desire: on the Home page, listing pages, product detail pages… You already know how to embed them. And here’s another idea for consideration – to make a link/button to the Web Story in an email about a new collection/sale to existing customers. It brings us to retention issues, and we will touch on this topic again a bit later. 
Thus, the main feature of Web Stories for multiply goods is a separate screen for each product. From each such screen, we can lead the consumer to a regular PDP or Web Story of this product (if you liked the above case). And from the last screen, the user can go to the bundle/collection page, if there is one, or instantly to the shopping cart with all the added products. However, an online store does not need to have collections/bundles to try out this case. You can use, for example, top products from some category. It isn’t about increasing the average check but can provide a comprehensive visual demonstration of the assortment of certain goods. Just as in the previous examples, we put a link to a specific product on each screen, and from the last screen, we lead the consumer to the category page. This Web Stories group is special in giving users the ability not only to tap screens but also to enter certain data directly into the Story. It is where the real interactivity begins, and by the way, the Web Stories technology creators introduced this opportunity quite recently. Despite the general technical features, eCommerce can use such Web Stories for different purposes. It is no longer a secret for anyone that having positive reviews from consumers on a product page significantly increases its conversion. Unfortunately, most people more willingly write a review when they are disappointed with a purchase. As for positive reviews, we need to kindly ask satisfied customers and make the goods evaluation process as simple as possible. That’s exactly what I did in an example below (please feel free to choose any answers to test how it works): In such an entertaining and simple format, you can ask the customer a few questions about the product, website, delivery; give several options for answers/ratings. Then, send him the link to Web Story via email, messenger, SMS. As you already have this customer ID, you don’t need to force him to register. With a little effort from programmers, you can collect the answers and integrate them into the product page in the best possible way. In this world, nothing can be said to be certain, except death, taxes, and funny personality quizzes. People love them, and eCommerce has already learned how to benefit from this cute addiction. For example, a quiz on choosing a suitable product is becoming more and more popular. We only have to transform it in the Web Stories format, and voila!: Such entertaining quizzes remain attractive to new potential consumers and existing customers either. Therefore, you can boldly use them as pre-landing pages to gain traffic, embed them on website pages to prolong user sessions and raise product conversions, and send links in emails or messages to existing customers for better retention. Since Google continues to develop the Web Stories technology, I dare to assume that the interactive components will be improved and acquire new features shortly. Therefore, eCommerce representatives should be on the lookout. Direct interaction with the consumer is a crucial thing in any sale, isn’t it? Well, it has been a long road since the beginning of my article, so I feel it is my duty to summarize. Web Stories are a new format of web pages from Google, developed for mobile users and replete with visual effects. Now you may be thinking something like, “Well, I got it. Web Stories might be a benefit for my store, but who is going to deal with it? 
We have thousands of products, the content team will go crazy, and programmers always have something to do” And if so, I have something for you. My company, Product Stories, provides services on the integration and automatic generation of Web Stories for eCommerce. So, you are welcome to find out more, or get a quote right here by filling out this form:
53
High-Speed Internet at a Crossroads
By Alvin Powell, Harvard Staff Writer

A year ago, the Gazette spoke with Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) Professor Jim Waldo about the technological side of the pandemic-forced shift to work from home. Waldo said then that the world was experiencing a real-time experiment that would stress test the internet in ways it hadn't been previously. A year later, the Gazette caught up with Waldo, Gordon McKay Professor of the Practice of Computer Science and SEAS' chief technology officer, to see how it went.

GAZETTE: When we spoke a year ago, you described this as an experiment in real time and, in essence, a stress test for the internet. How did the internet do?

WALDO: I've been working with a student on exactly this question. What we realized in going through the paper is that that question doesn't really make sense because the internet itself is not a thing. It is a set of protocols that allow more localized networks to connect to each other and work with each other. Some of them have done quite well, and some of them have done not so well. Certain internet vendors have had a tough time in trying to scale out their offerings. Most internet providers built their service based on a use case [scenario] of delivering entertainment: Download speeds were much higher than upload speeds. But in a world of Zoom conferences, we need something that is more symmetric in download and upload — or at least upload speeds need to be a lot faster than they were when it was mostly Netflix and HBO Max. So that disparity has really come to the fore. Then there have been the usual worries about how we get to the last mile, especially in more rural communities, where the economic incentives of the providers are not very high in offering high-speed internet. One of the interesting things about the infrastructure bill that has been proposed in Congress is it talks a lot about universal high-speed broadband access in the same way that, back in the '30s, the recovery bills from the Great Depression talked about telephone access for everybody.

GAZETTE: When we talk about the difference between providers, where is the dividing line? Are particular parts of the country better or worse, or is the dividing line nation to nation globally?

WALDO: The dividing lines are much more urban versus rural than they are state versus state or region versus region. Some of what we saw was sort of surprising. There was a real downtick in the quality of service for places like Austin, Texas, early on in the pandemic that you would think would have really good internet service, being that it's become sort of a tech hub. But for the most part, cities have done reasonably well, and rural areas have done reasonably badly. So this divide is really economic and population density. It's really straightforward. If you are running a wire, whether it be cable or fiber, it costs the same amount per mile pretty much no matter where you put it. So if you can put it someplace where you can service 100,000 people, it's a lot more economically advantageous than if you're going to be serving 20.
This is one of the outcomes of letting the service providers decide where and at what quality they are going to be providing service.

GAZETTE: Do you feel we're at a point in our society's development, and in the importance of the internet to our day-to-day lives, where government should step in and either create incentives or regulate that this has to happen?

WALDO: I think we're at a decision point. The pandemic certainly brought home to everybody just how important the internet is to day-to-day life. It's been central to education; it's been central to staying in contact with people; it has become the workhorse of this pandemic. I can't even imagine what the pandemic would have been like 40 years ago, before the networks were everywhere. We would have just had to shut down. So as a society, I think we have to decide. We decided almost a century ago that universal telephone service was something that we wanted to have. We need to make that decision about the internet now. Maybe we will decide that it's not something that we have to have, and we can let the free market decide. But I think that would be a mistake, quite frankly, because it will disenfranchise, in an important sense, the ability of a fairly significant chunk of our society to really connect with other parts of it. The urban-rural divide is already pretty stark.

GAZETTE: Were there important questions early in the pandemic that were answered over the last year?

WALDO: We have an answer for whether the infrastructure can deal with huge spikes and consistent spikes. And the answer, surprisingly, is mostly yes. There are pockets where it was unable to, and we know where those pockets are, and we should probably decide that we want to fix those. I think another thing that we discovered is the differential between download and upload speeds isn't important until it's vital. Midway through the last year, I took a look at my local internet plan, and it was a fairly standard 200 down, two to three megabits up. I switched that to an 800 down 15 up and it has made a huge difference to my ability to both teach and stay in contact with my colleagues.

GAZETTE: What were you experiencing before that?

WALDO: Before that Zoom was pretty flaky at times. There would be the standard sort of freezes. My wife and I had to be careful that we weren't both Zooming at the same time. Afterwards, it's just not a question anymore. It just works.

GAZETTE: If everyone did that, would that stress the provider? Or are you using extra bandwidth that they just hadn't sold for some reason?

WALDO: It's clear to me, as I started watching the speeds from my own home, that the actual speed I was getting was very much dependent on how much internet my neighbors were using as well. So, again, we have this infrastructure that was optimized for download, which we can think of as perhaps the internet equivalent of telephone party lines in the last century. Party lines still exist in some places, but not very many because the infrastructure has gotten better. People wanted a more reliable network for voice. I think people would like a more reliable network for internet. One of the other things that has become really clear is that while there may be multiple internet providers, there are very few internet providers at any single place. So we essentially have, at least in regions, de facto monopolies that have very little reason to increase their offerings, at least from the sense of competition.

GAZETTE: What questions have been raised by the past year, as we look at the years to come?
WALDO: Well, we spoke about the fundamental question — whether access to high-speed broadband is something that everyone should have. And symmetric networks, from the standpoint of download and upload speeds, I think, are going to be seen as much more valuable. I also think the original notion of network neutrality — the network just carries bits and doesn't try to distinguish between whether it's carrying video or voice or anything else — has been shown once again to have been a good decision, because we have been using the internet in ways we never expected to be using it and at volumes that we never expected to be using. The regulatory questions are much harder. Is the internet like the telephone system used to be, where it should really be a regulated monopoly because the shared infrastructure is expensive to put in and having multiple infrastructures may not be useful? But if a company is going to be a monopoly in an area, you want to have some way of regulating it for everybody's good. Or are we going to say that the free enterprise system will really work well, in which case we need to open up things like access to telephone poles, which is surprisingly baroque and difficult, so that we can have true competition?

GAZETTE: What does that mean? So, an internet provider who wants to run cable can't put it on telephone poles, or has to ask the phone company?

WALDO: To get something on a telephone pole, you have to coordinate with everyone else who has something on that telephone pole, which is generally four or five different companies, and it becomes nearly impossible to do that. There's a great book by one of my colleagues at the Law School, Susan Crawford, called "Fiber." It's about all of the difficulties that you will have trying to get anything on a telephone pole. It's just eye-opening how difficult it can be to do things.

GAZETTE: It seems like such a mundane thing that nobody really thinks about. But if you can't get your wires where they have to go, you're dead in the water.

WALDO: Everybody thinks of the tech monopolies as being companies like Facebook or Google or Amazon, but the tech monopolies I worry about are Comcast and other base providers. Yes, there are several, but they've each carved out a region. Maybe if you're lucky, you have a choice between two.

GAZETTE: How about innovations? Do you see anything new and exciting on the horizon that's developed over the last year that you may not have without this push from the pandemic?

WALDO: The innovations I've seen have largely been in the areas of how we teach online, pedagogical innovation. About 10 years ago, when Harvard started working with edX, there was a lot of very good discussion among the faculty about how we teach, how we distinguish between these massive online courses and what we do in person. I think that was very healthy for teaching at Harvard. But I think the innovations that have been tried while we were forced online have been just as interesting. When we get back into the classroom, which I will love to do, it's going to be different than it was a year and a half ago, because people have tried new things. I've seen a huge amount of innovation in offerings that try to replicate in-person contact with online contacts. Zoom is an obvious one that everybody went to, but there are lots of casual meeting applications that allow you to go to a space that you can wander around and talk to others. That's an interesting software suite, though I'm not sure it's been as successful as people thought it would be.
But there've been a lot of attempts at getting something like that working. So we've thought a lot more about how casual meetings occur than we did prior to the pandemic. Also, there's going to be a huge amount of innovation in terms of how often we need to be on campus, especially among staff. I run the IT group for the School of Engineering and, quite frankly, we discovered that other than cultural group mechanisms, we can do our work quite well completely remotely. So we will probably be going back to work two days a week at most and then doing a lot of work from home. During the pandemic we had a new hire, a very good system administrator who was based in Brooklyn. He worked for us for four months before he finally moved up here. He moved up here because he wanted to be in this area and not because the job required it.

GAZETTE: A lot has been written about this, but do you see that geographic uncoupling of work actually happening more?

WALDO: I think it's possible, but I don't think it's going to be as large as people think. It's still going to matter that you are in the same area so that, on occasion, you can come into the office and meet physically together. But I think there's going to be much less of the five-days-a-week-on-campus sort of work. People will be able to work two to three days a week at home, without any loss of productivity or loss of culture within their group.

GAZETTE: Was there a particular surprise that you experienced over the last year?

WALDO: We coped much better than we had feared. We also realized how important the casual contact — that is missing — is to the culture of Harvard, the intellectual life, and the teaching. My classes went really well. What I missed were the times that I would just run into a student and we'd talk. Much of the education that we provide is through those casual contacts, and even more important, I think, is the casual contacts between the students themselves.

GAZETTE: Some students were able to experience that, at least in some form.

WALDO: To some extent, but it wasn't the same. Students who have come back on campus say, "Yeah, it's better than being stuck at home or being stuck in an apartment." But it's still not the same as wandering around campus or having a full dining hall where you run into more people than your suitemates or the people who you have consciously invited over. The value of a Harvard education has very little to do with the classes. It has to do with the students that we select and the fact that we put them together for four years, give them some interesting things to think about, and provide enough adult supervision to keep the "Lord of the Flies" thing from happening. They educate each other. That's the real value of a Harvard education, and that's what we can't really replicate online. We've been trying. I did a lot more group exercises in my classes than I usually do, just so the students would have contact with each other outside the classroom.

GAZETTE: When you hear people talk about going back to normal, how much "back to normal" really is possible at this point?

WALDO: I hope it doesn't go entirely back to what used to be "normal." At least for me, I have found some ways of teaching that I'm not going to give up. I started taping what I would usually have given as a lecture, letting the students watch that before class. This notion of a "flipped classroom" has been around for a while, but the last year forced me to do that.
And quite frankly, I hope I don't have to give another live lecture ever again. I'll tape it; I'll have them watch it — I know most of them are going to watch it at 1.5 or 2x speed. I'll sound like Alvin and the Chipmunks, but that's OK. Then we can spend the time in class actually working on problems or discussing some of the issues that I brought up. Again, it's back to the notion that students do best when they educate each other, and I hope that's one thing that doesn't change. I would also hope that we are not the only ones who decide that staff doesn't need to be on site all the time, because we have a traffic system that is designed for 20 to 30 percent fewer people than we currently have on it. If people were working from home, we might have a traffic system that actually worked for us. I've been going back to campus one day a week and the traffic — there are still some jams — is breathtakingly different.

GAZETTE: Well, thank you. There are interesting things going on in a more hopeful time.

WALDO: I am more hopeful. I think sifting through what we can learn from this experience to make a non-pandemic life better is going to take some serious thought. It'll be interesting to watch what comes out. I think another thing that has happened is we've all become a lot more comfortable with change, because we've had to.

Interview was lightly edited for clarity and length.
1
Buddy: The New Jenkins – CI/CD with no config hassle
Create delivery pipelines from 100+ ready-to-use actions: builds, deployments, notifications, shell scripts, SSH commands, and more. Try Buddy in the cloud for instant results, or install it on your own infrastructure as your private CI server – just as you did with Jenkins.
3
Unionized workers are more likely to assert the right to safe, healthy workplace
The Research Brief is a short take about interesting academic work. Unionized workers are more likely than their non-union peers to speak up about health and safety problems in the workplace, according to a just-published, peer-reviewed study I conducted with Jooyoung Yang, who was a Ph.D. student in applied economics at the time of the research. To reach this conclusion, we examined over 70,000 unionization votes from 1985 to 2009 and focused on elections where the tally in favor or against was very small. This allowed us to zero in on the impact of unionization itself on worker behavior. We then compared these workplaces with the number of inspections conducted by state or federal occupational health and safety enforcement agencies that resulted from an employee complaint. We found that unionized workplaces were 30% more likely to face an inspection for a health or safety violation. The likely reasons why, in my view, are that unions can help organized workers learn about their rights, file complaints and provide greater protections against illegal retaliation by employers. [Deep knowledge, daily. Sign up for The Conversation’s newsletter.] The health and safety of workers is always a concern, but the current pandemic makes the issue more important than ever, especially for essential workers in health care, retail and child care centers and schools. But beyond them, all workers – including those with typically safe office jobs – are at increased risk of catching the coronavirus. The costs of providing sufficient protective gear or taking other steps to ensure worker safety can be high, which means some companies have at times resisted doing all they can to protect their employees. What’s more, they are trying to prevent their workers from learning about cases of coronavirus in their workplace and have been lobbying governments for immunity from any liability. That means it’s even more vital that workers are able to raise their voices when they feel that their workplace is unsafe. Our research suggests belonging to a union can play a big role in ensuring those voices are heard. This may also be why we’ve seen more workers going on strike and asserting their rights to safer and healthier workplaces. Currently, I am working on two related follow-up projects. One aims to build and analyze more comprehensive measures of labor rights violations by connecting records from the various federal agencies that protect workers, such as the U.S. Occupational Safety and Health Administration and the National Labor Relations Board. The other studies how workers share information with one another about their employers on Glassdoor and how useful the job search site is to job seekers.
1
Are Amazon Digital Tokens Real?
Amazon Users Are Being Offered Amazon Tokens, but Are They Real?

Internet and e-commerce giant Amazon has confirmed the creation of a native token, but is it available yet? Is the token being offered to some users real?

By Alyssa Exposito, Aug. 26, 2021, Published 11:19 a.m. ET

Reports suggest that Internet and e-commerce giant Amazon is positioning itself to not only accept Bitcoin (BTC) as a form of payment but also create its very own token. In light of this news, some users are being offered the opportunity to purchase Amazon tokens (AMZ). Are the tokens real? Amazon created a stir in Jul. 2021 with a job advertisement seeking candidates for its venture into cryptocurrencies. Although the team at Amazon has been involved in blockchain research since 2019, only recently has the company announced its intentions in the emerging space. Amazon has an estimated 310 million active users in the U.S., holding nearly half of the U.S. e-commerce market, or 5 percent of all e-commerce sales. With millions of users flocking to the site, many are susceptible to attempts to retrieve their information or hack into their accounts. In the Reddit community "AmazonToken," users describe their experiences with the site leading them to purchase AMZ tokens. Unfortunately, many have attempted purchasing what they believed would be AMZ on their login page, only to receive no tokens in addition to losing their funds. While Amazon has moved forward with accepting BTC as payment and discussed releasing its own token, no further details have been released regarding the matter. Some users may have been led astray because Uniswap, a decentralized exchange, has an AMZ token listed, but it's not available to trade. Amazon has not commented recently on its potential token. However, the project seems to be well underway. As one insider commented, "this is a full-on, well-discussed, integral part of the future mechanism of how Amazon will work."

What looks like a fake/scam @amazon token, advertised on @Facebook. pic.twitter.com/xyg18whrJ8 — Andres Wolberg-Stok (@andresws) August 26, 2021 Source: Twitter: @andresws

rBitcoins 🤖: Amazon ‘definitely’ lining up Bitcoin payments and token, confirms insider https://t.co/aWO19SAhsM — Cryptowire - BTC Class of 2013 (@cryptow1re) August 25, 2021 Source: Twitter: @cryptow1re

Inspired by innovation in blockchain and cryptocurrency, Amazon hopes to further engage in the space. The company has been steadily getting its crypto ducks lined up in hopes of developing both its payment services and native token. As one insider commented, "After a year of experiencing cryptocurrency as a way of making payments for goods, it is looking increasingly possible that we’re heading towards tokenization." So, though the AMZ token sales being offered now should be disregarded as a scam, an authentic Amazon token may arrive soon.
1
Time’s A-Tickin’ for the TikTok Deal
With over 800 million users worldwide, 100 million of which reside in the US, TikTok has taken the world by storm. From October 2017 to March 2019, the number of adult American users grew over 5.5 times, launching the app to the top spot as the most downloaded app of 2020. However, with its meteoric rise came increased scrutiny. Government lawmakers, concerned by Bytedance – the parent company of TikTok – and its susceptibility to Beijing’s request to hand over the mountains of data it stores on American users, rallied to action. During the final weeks of his presidency, Trump took a hard stance on the company, claiming that the app was a threat to national security as it controlled too much data on American users. In August of 2020, he signed two executive orders which prohibited US companies from working with Bytedance and threatened to ban TikTok from US app stores unless Bytedance divested its US operations. In response, TikTok began courting partners and confirmed a deal to jointly own the app with Oracle. However, as the Biden administration moved into the White House and put a pause on the case, the deal was reportedly shelved. As of April 2021, it remains to be seen if Biden is simply reviewing the executive orders or if he intends to wholly depart from his predecessor’s China strategy. By the time Trump called for the TikTok ban, US-China tensions had already reached new depths. The administration targeted apps like TikTok and WeChat over their Chinese roots and moved to restrict their US footprints. On August 6th, President Trump signed the first of two executive orders which prevented China-based companies like TikTok from doing business in the US. Under the order, Trump also sought to ban Chinese communications platform WeChat, claiming its connection to the Chinese-owned company Tencent led to concerns over data privacy and, by extension, national security. Days later, the former administration released a second order compelling Bytedance to sell off all of its US operations within the subsequent 90 days, citing the company as a potential threat that could “impair the national security of the United States.” After weeks of courting partners, including major tech companies like Microsoft, it was reported that Oracle, the US software giant, was prepared to purchase and take over TikTok’s US operations for a whopping US$60 billion. According to experts close to the source, TikTok reportedly believed that selling its US holdings to Oracle would allow the company to get on the Trump administration’s “good side,” particularly as Oracle co-founder Larry Ellison publicly supported the former president. However, the exit of Mr. Trump has freed TikTok from the capricious demands of the previous administration, and Biden’s desire to separate himself from the actions of his predecessor have allowed Bytedance to shelve the deal, leading many experts to believe that a TikTok-Oracle deal is permanently off the table. Biden has already begun to confront the enormous challenge of maintaining a tough stance on China as an economic and security challenge while simultaneously developing a more strategic and calculated approach than that of the former administration. Yet, many supporters credit President Trump for handling China in a much more aggressive manner, drawing from his policy toolbox to levy tariffs on Chinese products, limit sensitive US technology exports to China, and impose sanctions on Chinese officials and companies. 
As a small piece of a puzzle, Trump’s executive order requiring the TikTok selloff was a part of his larger approach to police China and attempt to establish dominance over the bilateral relationship. Because this often resulted in direct confrontation, many of Biden’s opponents now expect the President to maintain an equally combative stance to US-China relations and are quick to condemn actions perceived as “giving China a break.” However, many supporters of the Biden administration are biting back, arguing that Mr. Trump’s frenzy of executive orders and sanctions were inconsistent, fragmented, and more often symbolic than actually effective. Furthermore, though he pressed China hard on some fronts, he gave in on others. He delayed sanctioning officials and companies over human rights violations in China’s northwest region of Xinjiang and publicly flattered President Xi Jinping’s authoritative leadership style. Many of his executive actions were either left incomplete or riddled with loopholes, including the WeChat ban and the TikTok deal, and were rarely enforced. Additionally, many were considered unconstitutional, as evidenced by a US federal judge issuing a preliminary injunction against Trump’s TikTok ban in December 2020, citing that the president’s emergency executive order overstepped his authority. All in all, Trump’s tit-for-tat China strategy brought more noise than actual results in the bilateral relationship. Given the TikTok acquisition deal was touted as a major accomplishment by the previous administration, Biden’s reversal may signal a departure of policy from his predecessor. In particular, the current administration may be adopting a more targeted, strategic plan towards China and its various companies with US operations compared to Trump’s comparatively unilateral, aggressive approach. One hundred days into Biden’s presidency, there are reports that the administration is conducting a comprehensive review of Trump’s China policies. During a routine press conference in February, White House spokeswoman Jen Psaki commented that there is no specific timetable for the Biden administration’s review of TikTok and other issues related to Chinese technology companies, and that there are no proactive new steps in the pipelines at the time of the announcement. However, as the administration begins to set foreign policy on China, it appears likely that it does not deem the platform a risk to national security and overturns the decision to sell TikTok’s US operations. The President will need to consider more than politics or ideology when forging China policy. To adequately serve American interests, President Biden must instead consider commercial interests and national security interests at the same time. The administration will face the ambitious responsibility of not only cracking down on China for unfair trade practices but also devising a national strategy that helps mold America’s economic position to better square up against emerging Chinese competition. A key component of this plan includes staffing the administration’s cabinet with officials who have been known to take hard stances on China, including Secretary of State Antony J. Blinken and US Trade Representative Katherine Tai. Still, the cabinet recognizes the potential for cooperation with China on mutually beneficial issues like the coronavirus pandemic and climate change, and the reversal of various low cost Trump-era regulations like the TikTok ban could represent an act of goodwill towards the Chinese government. 
In the complex sphere of US-China politics, a reversal on the decision to require a TikTok selloff could represent one of two intentions. The first, though relatively unlikely given the current rift between US and Chinese officials, is one of an olive branch. By reversing policies deemed low cost to the US but high impact to China, the Biden administration could be priming the stage for more advanced discussions on matters of higher significance. Alternatively, it could simply indicate an overall shift away from Trump’s tit-for-tat approach to China in US foreign policy. For example, when Beijing booted American journalists from China, Trump responded by expelling Chinese journalists in the States. When Washington enacted a 15% tariff on US$110 billion of Chinese products in September 2019, China responded by imposing retaliatory tariffs on oil, soybeans, and other American exports. Instead of perpetuating scuffles like these, the Biden administration is likely seeking to focus on wholesale reforms negotiated both bilaterally and multilaterally through international organizations. Alternatively, there are other possibilities for TikTok’s future in the US. On one hand, it is plausible that Bytedance could stand to benefit from going through with the deal. As it stands, the largest concern threatening TikTok’s presence in the US stems from the potential for its data to be accessed by the Chinese government. However, if Bytedance and its board were to continue with the TikTok-Oracle deal after a thorough review by the White House and the Committee on Foreign Investment in the US (CFIUS), the company could eliminate any lingering security concerns around the TikTok platform going forward. This would inevitably hand control of its US user base over to Oracle, but allow TikTok as an entity to minimize the political risk of its other – both current and future – US-based operations. Another possibility would be to make a deal with the CFIUS in which the company approaches a third party to manage all its US data – thus paving the way to maintain full ownership over TikTok’s US operations. Like the first option, this would appeal to the US government’s main concern with the platform regarding data safety. TikTok has repeatedly maintained that its ownership by the Chinese-run Bytedance does not pose a threat to US national security, as its user data is stored on servers outside of China with the main data centers in the US and Singapore. Still, with the administration and the National Security Council developing a comprehensive plan over the coming months to secure US data from a wide range of potential threats, Chinese apps like TikTok may have to submit to further regulatory concessions and increase transparency with the US government in order to continue operations within the country. Regardless of whichever direction TikTok chooses, it is clear that the Biden administration’s actions over the coming months will set the tone for the future operational landscape of Chinese companies in the US. Pressured by opponents to maintain the aggressive policies of his predecessor, President Biden is expected to formulate a China strategy that puts American interests first and strengthens US competitiveness in the global market. While he maintains that his administration will take a different path than Trump, there is no doubt that heated competition is on the horizon – and TikTok may find itself at the forefront of battle.
2
Researchers make significant advance in using DNA to store data
Researchers Test Microchip for High-Density Synthesis of Archival Data Storage DNA
1
Bacteria Can Tell the Time
Humans have them, so do other animals and plants – now research reveals that non-photosynthetic bacteria too have internal daily clocks that align with the 24-hour cycle of life on Earth. The research answers a long-standing biological question and could have implications for the timing of drug delivery, biotechnology, and how we develop timely solutions for crop protection. Biological clocks or circadian rhythms are exquisite internal timing mechanisms that are widespread across nature enabling living organisms to cope with the major changes that occur from day to night, even across seasons. Existing inside cells, these molecular rhythms use external cues such as daylight and temperature to synchronise biological clocks to their environment. It is why we experience the jarring effects of jet lag as our internal clocks are temporarily mismatched before aligning to the new cycle of light and dark at our travel destination. A growing body of research in the past two decades has demonstrated the importance of these molecular metronomes to essential processes, for example sleep and cognitive functioning in humans, and water regulation and photosynthesis in plants. Although bacteria represent 12% biomass of the planet and are important for health, ecology, and industrial biotechnology, little is known of their 24hr biological clocks. Previous studies have shown that photosynthetic bacteria which require light to make energy have biological clocks. But free-living non photosynthetic bacteria have remained a mystery in this regard. In this international study researchers detected free running circadian rhythms in the non-photosynthetic soil bacterium Bacillus subtilis. The team applied a technique called luciferase reporting, which involves adding an enzyme that produces bioluminescence that allows researchers to visualise how active a gene is inside an organism. They focused on two genes: firstly, a gene called ytvA which encodes blue light photoreceptor and secondly an enzyme called KinC that is involved in inducing formation of biofilms and spores in the bacterium. They observed the levels of the genes in constant dark in comparison to cycles of 12 hours of light and 12 hours of dark. They found that the pattern of ytvA levels were adjusted to the light and dark cycle, with levels increasing during the dark and decreasing in the light. A cycle was still observed in constant darkness. Researchers observed how it took several days for a stable pattern to appear and that the pattern could be reversed if the conditions inverted. These two observations are common features of circadian rhythms and their ability to “entrain” to environmental cues. They carried out similar experiments using daily temperature changes; for example, increasing the length or strength of the daily cycle, and found the rhythms of ytvA and kinC adjusted in a way consistent with circadian rhythms, and not just simply switching on and off in response to the temperature. “We’ve found for the first time that non-photosynthetic bacteria can tell the time,” says lead author Professor Martha Merrow, of LMU (Ludwig Maximilians University) Munich. “They adapt their molecular workings to the time of day by reading the cycles in the light or in the temperature environment.” “In addition to medical and ecological questions we wish to use bacteria as a model system to understand circadian clock mechanisms. The lab tools for this bacterium are outstanding and should allow us to make rapid progress,” she added. 
Implications for this research could be used to address such questions as: is the time of day of bacterial exposure important for infection? Can industrial biotechnological processes be optimised by taking the time of day into account? And is the time of day of anti-bacterial treatment important? “Our study opens doors to investigate circadian rhythms across bacteria. Now that we have established that non-photosynthetic bacteria can tell the time we need to find out the processes in bacteria that cause these rhythms to occur and understand why having a rhythm provides bacteria with an advantage,” says author Dr Antony Dodd from the John Innes Centre. Professor Ákos Kovács, co-author from the Technical University of Denmark adds that “Bacillus subtilis is used in various applications from laundry detergent production to crop protection, besides recently exploiting as human and animal probiotics, thus engineering a biological clock in this bacterium will culminate in diverse biotechnological areas.” The image at the top of the page was taken by Professor Ákos Kovács (DTU) and is a microscopy image of Bacillus subtilis cells containing a promoter-reporter gene fusion; detected reporter activity false coloured to blue.
1
Starting a Business? 6 Ways to Start Your Marketing Campaign
You don't need to sit around and do nothing while your website is being designed or developed; find out what you can do for your marketing campaign.

Are you expecting people to come flooding through the doors of your new business website when you first reveal it to the world? Are you expecting to share your website a few times on social media and hope to blow up and go viral while you're sleeping? Realign your expectations, because it's not quite that easy, and without a marketing campaign, your new business will go nowhere fast. Look, I know your new business is top secret and that no one can know about it until it is finished. But that doesn't mean you can't get anything done to give your new business a head start for when the big reveal finally happens. In fact, there is a lot you can do to establish some domain authority and start your marketing efforts early. Marketing and search engine optimisation (SEO) are very closely linked. While some actions could be seen as digital marketing for your business, the same actions can also be applied to SEO. Let's explore 6 secretive ways to kick-start your campaign using digital marketing and white-hat SEO tactics.

1. Create a Coming Soon Page

So you already bought your website domain name, and all the variations as well, to make sure no one tries to impersonate your brand. But you know what? Your website domain does not need to sit there doing nothing. Despite you wanting to keep your new business secret, you can still utilise your website address by putting up a coming soon page, enabling you to start directing people to your website. A coming soon page, even a simple one with your brand logo and the text "coming soon", is sufficient to create a little bit of mystery around your new business. If you combine this step with others listed below, you will be able to create quite the following before you have even revealed anything about what you do.

2. Start Your Personal Branding Journey

Almost all businesses have someone at the top. When your business is unveiled – that's going to be you. People naturally want to know about successful businesses and who runs them, so what better time than now to start branding yourself and networking with people who would be interested in your product or service?

Focus on you
Guest post around your new business's niche to establish your name
Post regular content on social media
Engage with your audience
Expand your network

Initially, you don't need to reveal big key details about your upcoming business. In fact, by focusing on your personal brand and exposing your strengths (and weaknesses) to people, you can build a brand around yourself and an engaged social following that you can market your business to when the time comes. By being active on social media platforms, you will build a personal connection with your community and an influencer reputation.

3. Start Building SEO Business Citations Early

Search engine optimisation is not something that starts working overnight. SEO is a slow, tactical game that takes time to become effective. We have all heard the story of the tortoise and the hare: those who are slow and methodical will always win the race. Business citations are a white-hat SEO method that will help your local SEO initially, help you rank in search engines for your brand name, and help you build some domain authority for your new website. What do you need to build SEO citations?
A live website
Phone number
Mailing address
Contact email address

You can build your own business citations or use a service like Brightlocal, which will create them for you, provide login details, and ensure all your records match across various platforms.

4. Create Suspense: Podcasts, Infographics & Press Release Build-Up

There are a few key ways you can create suspense and a marketing drive towards your website:

Feature on podcasts to build your personal brand and help solidify yourself as a niche influencer.
Develop niche-related infographics that provide statistics.
Release incremental press releases that give away more information with each subsequent release.

These are just a few ways you can create suspense around your brand and get people hyped.

5. Start a Niche Blog for Your Business

If you are serious about your business, then there is no doubt you have already registered your business on Companies House (or your country's equivalent). This means there is already a public record of your business name on the internet, and likely your business niche as well. If your business will provide a product or service aimed at fitness, for example, then start a blog aimed at fitness and start attracting an audience to your website early. It has the key advantage of establishing your business as an authority in fitness. You can use your content for guest posts and generate powerful backlinks for your website to further establish your website's authority. Blogging is a powerful method for marketing your business and attracting the right audience, but that doesn't mean you have to wait until your business is live. In fact, I recommend all of these steps to my clients whenever they are planning on buying a fresh domain.

6. Start Networking on Social Media

Networking has become a critical focus point for any successful marketing campaign on social media and something you should start utilising as early as possible. This point really builds upon the one I spoke about a few minutes ago – building your own personal brand and networking on various social media websites. Don't try to keep up on every platform; choose one or two social media websites. If you are B2B, then consider using LinkedIn to target your audience. If you are B2C, then you should likely consider other platforms such as Facebook and Instagram initially.

The Takeaway

It can feel like you are useless when your business website is being developed or designed, and sometimes it can be hard to know what you can do to stay productive. I have touched on a few ways above that you can start your business marketing campaign early, but there are still many ways you could achieve this. Are you a digital marketer? How do you start building momentum with new businesses? I'd love to hear from you in the comments below.
110
On Cache Invalidation: Why Is It Hard? (2018)
Many people must have heard this quote (by Phil Karlton) many times: "There are only two hard things in Computer Science: cache invalidation and naming things." Two days ago, Nick Tierney mentioned it again in his post "Naming Things". Since he said he was not sure what cache invalidation meant, and I have a tiny bit of experience here, I want to write this short post to explain why cache invalidation is hard, from my experience.

First of all, the main purpose of caching is speed. The basic idea is simple: if you know you are going to compute the same thing, you may just load the result saved from the previous run, and skip the computing this time. There are two keywords here: "the same thing", and "the saved result". The latter means you are essentially trading (more) storage space for (less) time. That is the price to pay for caching, and also an important fact to be aware of when you use caching (i.e., caching is definitely not free, and sometimes the price can be fairly high).

The tricky thing is "the same thing". How do you know that you are computing the same thing? That is all "cache invalidation" is about. When things become different, you have to invalidate the cache, and do the (presumably time-consuming) computing again.

Implementing caching without considering the invalidation is often simple enough. Here is a simple example of turning a normal function into a function that supports caching (see the sketch below); I used Sys.sleep() only to pretend the function was time-consuming. In the greet() function, Nick is fast and will say "Hello", and Yihui is slow, speaking Chinese. If we have to call this function many times, there is no need to wait if we can save what Nick and Yihui say. We will need a database to store their words, and keys to retrieve the results. The first time you run this function, it will be slow, but for the second time, it will be instant. The toy example shows the basic idea of implementing caching: turn your input into a key, use this key to retrieve the output from a cache database if it exists, otherwise do the computing and save the output to the database with the key.

Of course, toy examples often cannot represent the reality. If Nick goes to Japan, he may speak Japanese. When Yihui is in the US, he should speak English. We need to update the cache database in these cases (invalidating the previously saved results). Now let me talk about a real example in knitr's caching, which should sound similar to the case of Nick being in Japan (or Yihui in the US). For those who care about technical details, knitr caches results using these lines of code and invalidates the cache here.

The basic idea of knitr's caching is that if you did not modify a code chunk (e.g., did not modify chunk options or the code), the result will be loaded from the previous run. The key of a code chunk is pretty much an MD5 hash (via digest::digest()) of the chunk options and the chunk content (code). Whenever you modify chunk options or the code, the hash will change, and the cache will be invalidated. That sounds about correct, right? I have heard unhappy users curse knitr's caching. Some thought it was too sensitive, and some thought it was dumb. For example, when you add a space in an R comment in your code chunk, should knitr invalidate the cache? Modifying a comment certainly won't affect the computing at all (but the text output may change if you show the code in the output via echo = TRUE), but the MD5 hash will change.
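As a rough reconstruction of the toy example described above: greet(), Sys.sleep(), and the Nick/Yihui behaviour come from the text, while the environment-based store and the digest::digest() key are assumptions rather than the original post's code. A minimal R sketch might look like this:

```r
# A toy cache: an environment used as a simple key-value store.
cache_db <- new.env()

greet <- function(name) {
  # Turn the input into a cache key; knitr does something similar with an
  # MD5 hash of the chunk options and chunk content (requires the digest package).
  key <- digest::digest(name)
  # If we have already computed the result for this input, return the saved copy.
  if (exists(key, envir = cache_db)) return(get(key, envir = cache_db))
  # Otherwise do the "slow" computation.
  Sys.sleep(2)  # pretend this is time-consuming
  res <- if (name == 'Nick') 'Hello' else '你好'
  # Save the result under the key before returning it.
  assign(key, res, envir = cache_db)
  res
}

greet('Yihui')  # slow the first time
greet('Yihui')  # instant the second time
```

The second call returns instantly because the key already exists in cache_db; change anything that feeds into the key and the saved result is ignored, which is the invalidation question in miniature.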
Then an example to explain why people thought knitr's caching was dumb: if you read an external CSV file in a code chunk, knitr does not know whether you have modified the data file. If you happen to have updated the data file, knitr won't re-read it by default if you didn't modify chunk options or the code. The cache key does not depend on the external file. In this case, you have to explicitly associate the cache with the external file via the chunk option cache.extra (see the sketch at the end of this post). Since cache.extra is associated with the CSV file, the cache will be invalidated when the file is changed (because the cache key will be different).

Another example is one code chunk using a variable created in a previous code chunk. When the variable is updated in the previous chunk, this chunk's cache should be invalidated, too. This leads to the topic of the dependency structure of code chunks, which can be complicated, but there are some helper functions such as knitr::dep_prev() and knitr::dep_auto() to make it a little easier.

When a code chunk is extremely time-consuming, knitr should be more conservative (not to invalidate the expensive cache unless there have been critical changes). When a code chunk is only moderately slow (e.g., 10 or 20 seconds), the caching probably should be more sensitive. The tricky thing is, it is hard to find the balance. Either direction can offend users.

I said above that the obvious price to pay for caching is storage (either in memory or on disk). However, to make caching work perfectly for you, there is a hidden cost: the cost of understanding caching. This is similar to a situation in our daily life: we may spend a lot of time and energy to save some money. We can only see the money we saved, but ignore the cost of time and emotion. If you don't analyze the two costs, the money you saved may not really be worthwhile. If you don't fully understand how caching works and the conditions for its invalidation, caching could be too sensitive or dumb, and may not serve you well. Some users may be able to quickly understand it, and some may not. If you want speed, you'd better know the traffic rules first, otherwise you may be pulled over.

The full documentation of knitr's caching is in the knitr book "Dynamic Documents with R and knitr (2nd ed)". If you don't have this book, there is a page on knitr's website that contains more information. I don't know what was on Phil Karlton's mind when he said those words, but the above is my experience with caching. The ultimate suggestion I often give to users is that if you feel knitr's caching is too complicated, it is totally fine to use a much simpler caching mechanism, like the one sketched below. In that case, you clearly understand how your caching works. The one and only way to invalidate the cache is to delete results.rds, which is no longer hard at all. If you prefer this mechanism, you may consider using xfun::cache_rds().
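Two code chunks are referenced but not shown above; the following are minimal sketches rather than the original code. To tie a chunk's cache to an external data file, the chunk header can pass something that changes whenever the file changes, e.g. cache.extra = file.mtime('my-data.csv') (an MD5 checksum via tools::md5sum() works as well); the file name here is only an illustration. And the "much simpler caching mechanism" at the end amounts to something like this, where compute_it() stands in for whatever time-consuming computation you have:

```r
# Compute once, save the result to results.rds, and load the saved copy on later runs.
if (file.exists('results.rds')) {
  res <- readRDS('results.rds')
} else {
  res <- compute_it()           # the time-consuming step (placeholder)
  saveRDS(res, 'results.rds')
}
```

Deleting results.rds is the only way to invalidate this cache, which is exactly what makes it easy to reason about.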
30
FOSS for Amateur Radio
September 7, 2021

This article was contributed by Sam Sloniker

Amateur ("ham") radio operators have been experimenting with ways to use computers in their hobby since PCs became widely available—perhaps even before then. While many people picture hams either talking into a microphone or tapping a telegraph key, many hams now type on a keyboard or even click buttons on a computer screen to make contacts. Even hams who still prefer to talk or use Morse code may still use computers for some things, such as logging contacts or predicting radio conditions. While most hams use Windows, there is no shortage of ham radio software for Linux.

Utilities

HamClock, as its name implies, has a primary function as a clock, but it has several other features as well. It shows a world map, and the user can click anywhere on the map to see the current time and weather conditions at that location. It also shows radio-propagation predictions, which indicate the probability that a ham's signals will be received at any particular location on Earth. These predictions are available in numerical form and as map overlays. In addition to propagation predictions, HamClock provides graphs and images indicating solar activity such as sunspots, which strongly affect radio propagation.

Most hams keep logs of all contacts they have made over the radio; this was (and still may be) required by law in some countries. Historically, hams have kept logs on paper, but many now use electronic logging programs. There are several Linux-based, FOSS logging programs, such as FLLog (documentation/download) and Xlog. One logging-related program that is designed to work with other logging software is TQSL, which cryptographically signs confirmations of contacts and sends them to the Logbook of the World (LoTW). The American Radio Relay League (ARRL) uses LoTW verification to issue awards for certain achievements, such as contacting 100 countries or all 50 US states, which previously required submitting postcards (called QSL cards) received from the person contacted from each country or state. Collecting QSL cards is still popular, and they can still be used for awards, as LoTW is completely optional.

Communication tools

Traditionally, in order to communicate, hams have used either continuous wave (CW) to send Morse code or any of a variety of "phone" (voice) modes. The different phone modes all allow two or more radio operators to talk to each other, but they convert audio signals to radio waves in different ways. However, many hams now use digital modes. One of the main benefits of these modes is that they can be decoded from weak signals, allowing more reliable long-range communication compared to CW or phone.

FT8 is the most popular digital mode for ham radio. It is used for structured contacts, typically exchanging call signs, locations, and signal strength reports. FT8 sends short, encoded messages such as "CQ KJ7RRV CN72". In that message, CQ means "calling all stations", KJ7RRV is my call sign, and CN72 is my location on the southern Oregon coast, encoded using the Maidenhead Locator System. FT8 is much slower than most other digital modes—sending the message above takes about 13 seconds—but its slow speed allows it to be extremely reliable, even under poor radio conditions.
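As a small illustration of the Maidenhead encoding mentioned above (this sketch is not from the article; the coordinates are an assumed point on the southern Oregon coast, and R is used only for illustration):

```r
# Compute a 4-character Maidenhead grid locator from latitude/longitude.
maidenhead <- function(lat, lon) {
  lon <- lon + 180                           # shift so both values are non-negative
  lat <- lat + 90
  field <- paste0(LETTERS[lon %/% 20 + 1],   # 20-degree longitude fields A-R
                  LETTERS[lat %/% 10 + 1])   # 10-degree latitude fields A-R
  square <- paste0(floor((lon %% 20) / 2),   # 2-degree longitude squares 0-9
                   floor(lat %% 10))         # 1-degree latitude squares 0-9
  paste0(field, square)
}

maidenhead(42.4, -124.4)  # roughly the southern Oregon coast -> "CN72"
```

Actual FT8 software encodes the call sign and locator into a compact binary payload rather than sending plain text; the sketch only shows the locator arithmetic.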
Radio propagation conditions over the last few years have been relatively poor (although they are currently improving) due to a minimum in the 11-year solar cycle. FT8 has been usable under all but the worst conditions, though it is certainly easier to make contacts when conditions are good. WSJT-X, the original and most popular program for FT8, is FOSS and is available for Linux. Fldigi is another program used for digital modes. Unlike FT8, most of the modes in fldigi can transfer free-form text. The most popular mode included, PSK31, is designed for conversational contacts over long distances. Some other modes are primarily used for transferring files, which is supported well by fldigi. Flamp is a separate program that connects to fldigi; it is used to transfer files over radio by encoding them into a plain-text format that can be decoded by flamp on another computer. If an error occurs in transmission, flamp can detect the error and determine which portion of the file it is in, so the sender can resend only the portion that failed. Flmsg is a program that allows email-like forms to be used with fldigi and (optionally) flamp. A form allows structured spreadsheet-like data to be transferred efficiently, by avoiding the need to transmit common information with every message. Many forms are intended for disaster response; for example, there is an "ICS-216 MEDICAL PLAN" form which is specifically designed for sending information about available ambulances, hospitals, and other emergency medical care resources. Some other forms, such as "ICS-213 GENERAL MESSAGE," mostly contain free-form text and are intended for use when no more-specific form is available. Fldigi, flamp, and flmsg (with its forms), along with a few related programs, are all available at W1HKJ's web site or from SourceForge. Radio modems WSJT-X and fldigi use modems that are completely software-based; they use a computer's sound card to transmit and receive audio signals. These signals are sent and received by a radio using a special device called a radio-sound-card interface. There are schematics available online for these interfaces, although most hams purchase a pre-built one. The SignaLink USB is a popular model that also has a built-in USB sound card, allowing the user to continue using the computer's internal sound card for other purposes. Although the manufacturer does not officially support Linux, many people have successfully used the device without needing to install extra drivers. Another digital mode that is commonly used is packet radio. Most packet networks use AX.25, which is a modified version of X.25 that is designed to be used over ham radio. Linux works well for packet radio, because the kernel's networking stack has native support for AX.25. Although external hardware modems can be used, it is now common to use a computer sound card for packet radio. Dire Wolf, a FOSS packet-radio program for Linux, includes a sound-card modem, as well as some routing features that are not provided by the kernel. Winlink, which is a radio-based email system, is another popular digital radio system. Pat is a Linux-compatible, FOSS Winlink client with a web-based GUI. The sound-card modem for Amateur Radio Digital Open Protocol (ARDOP), which is one of the modes for connecting to Winlink, is available for Linux. Winlink can also be used (over shorter distances) with packet radio. There are other modes for Winlink, as well, but most of them are either deprecated or proprietary, and some are Windows-only. 
FreeDV is a new digital mode with a different purpose than the ones previously mentioned, most of which transfer text in some form. FreeDV is a digital voice mode. It requires two sound cards; as the user speaks into a microphone connected to one, FreeDV uses an open codec called Codec 2 to compress the digital audio, then uses a sound-card modem on the second one to transmit the encoded audio signal over the radio. At the receiver, the same process runs in reverse. FreeDV, under many circumstances, allows more reliable communication than traditional analog phone modes. With analog voice, a weak signal can still be heard, but is difficult to understand. With digital voice, the signal is either clearly audible and intelligible, or it is not heard at all. This means that when a signal is neither strong nor weak, digital voice will usually be clearer and easier to understand. One radio-related device that works with Linux is the RTL-SDR. This is a low-cost software-defined radio receiver, which can be used to receive most radio signals, including some AM broadcast stations, most ham radio signals, shortwave radio stations, marine and aviation communications, many police radios, and more. (Some digital signals can also be received, but most of those outside the ham bands are encrypted.) Some RTL-SDRs cost less than $15, but it is worth spending around $30-40 for a good-quality device. I recommend the ones available at RTL-SDR.com, because some others may not be able to receive AM broadcast, shortwave, and certain ham signals. Becoming a ham For many, getting a ham license is a good way to learn about and experiment with radio technology. At least in the US, any licensed ham can design their own digital mode and use it on the air, as long as it meets certain restrictions and is publicly documented. For others, becoming a ham is a way to help with disaster response. Organizations like the American Red Cross depend on ham radio for communication when internet and cellular infrastructure fails. Yet another reason is simply as a way to meet new people. While the possibility for this is somewhat limited with modes like FT8, which are more "computer-to-computer" than "person-to-person", many hams do publish their email addresses on sites such as QRZ.com, and most are happy to receive emails from people they have contacted on the air. For those interested in getting a ham radio license, there are several resources available. The ARRL's Licensing, Education, and Training web page would be a good place to start if you live in the US. HamStudy.org is an excellent resource for both studying for the test and finding an exam session; it provides study guides for the US and Canadian tests, though its exam finder only lists US sessions. Finally, an Internet search for "ham radio club in [your city/town]" will most likely find a club's web site, which will probably either have contact information or more info on getting a license, or both. Index entries for this article GuestArticles Sloniker, Sam (Log in to post comments) FOSS for amateur radio Posted Sep 8, 2021 18:20 UTC (Wed) by willy (subscriber, #9762) [Link] Another area where ham radio enthusiasts are important is helping organise outdoor events such as adventure races, ultramarathons, orienteering and rogaining. These events often go far into the wilderness where there is no permanent conventional radio connectivity. Hams provide updates from aid stations for those monitoring the progress of participants, and can call in medical aid if needed. 
The hams get to practise their hobby while helping others practise theirs. h3 Posted Sep 8, 2021 20:32 UTC (Wed) by b (subscriber, #153595) [Link] Yes, that is definitely another good use for ham radio! FOSS for amateur radio Posted Sep 9, 2021 8:46 UTC (Thu) by metan (subscriber, #74107) [Link] There are also other very interesting FOSS projects out there for instance the firmware for NanoVNA as well as TinySA are both released under apache license and both are build on the top of ChibiOS. These two device already made a small revolution in the radio amateur world since everyone can have quite capable vector analyzer and spectrum analyzer for a very cheap price. Before these two appeared you either couldn't easily measure or you had to ask someone who had the equipment. Now you can carry a vector analyzer in your pocket and even measure antenna out in the field. h3 Posted Sep 9, 2021 15:44 UTC (Thu) by b (subscriber, #153595) [Link] Yes, those are definitely good examples of FOSS go ham radio! I have a NanoVNA and it works very well. FOSS for amateur radio Posted Sep 9, 2021 15:13 UTC (Thu) by downey (guest, #117086) [Link] It's not software, but a related good resource I've found for tracking new and existing software and related FOSS development in amateur radio is the "Linux in the Ham Shack" podcast and website: https://lhspodcast.info/ h3 Posted Sep 9, 2021 15:46 UTC (Thu) by b (subscriber, #153595) [Link] Thank you for that link! It does look like a good resource. FOSS for amateur radio Posted Sep 9, 2021 15:45 UTC (Thu) by jebba (guest, #4439) [Link] There's also the somewhat related, 100% awesome project, SatNOGS. Lots of hams involved with that, afaict: https://satnogs.org/ h3 Posted Sep 9, 2021 15:48 UTC (Thu) by b (subscriber, #153595) [Link] Wow, that looks really interesting! I'll definitely do some research on that. FOSS for amateur radio Posted Sep 10, 2021 0:06 UTC (Fri) by pabs (subscriber, #43278) [Link] Much of the software discussed is available in Debian's ham radio blend for easy installation: https://www.debian.org/blends/hamradio/ https://wiki.debian.org/DebianHams h3 Posted Sep 10, 2021 5:27 UTC (Fri) by b (subscriber, #153595) [Link] That looks interesting! I run Arch on my laptop, but I might try that in a VM. h3 Posted Sep 10, 2021 6:42 UTC (Fri) by b (subscriber, #43278) [Link] Note that the live images are very outdated, so you probably want a regular Debian install and then install hamradio-all or one of hamradio-* with apt. h3 Posted Sep 10, 2021 17:07 UTC (Fri) by b (subscriber, #153595) [Link] Okay, I'll do it that way. FOSS for amateur radio Posted Sep 10, 2021 23:39 UTC (Fri) by gerdesj (subscriber, #5446) [Link] I took a quick look at the Debian lists and tried a few searches using yay -Ss. The AUR seems well populated with hamradio packages so why not stay local for now. Failing that, whip the compiler out and install random stuff to /usr/local! h3 Posted Sep 11, 2021 1:07 UTC (Sat) by b (subscriber, #153595) [Link] I tried to install WSJT-X with paru, and it tried to recompile gcc. h3 Posted Sep 11, 2021 3:21 UTC (Sat) by b (subscriber, #153595) [Link] I did install Fldigi on it, but I couldn't find Flmsg or Flamp in any repos. I'll probably try to figure out how to add them to the AUR. I might also try to write some kind of HamClock installer and put it in the AUR. (HamClock has its own update system, so I don't think directly adding it as a package would work.) 
Posted Sep 11, 2021 4:56 UTC (Sat) by b (subscriber, #43278) [Link]

Debian generally deals with internal updater systems by disabling them, so they don't stand in the way of the software being packaged. HamClock has another weird thing according to Debian folks; you have to compile a new binary for each display resolution you want.

Posted Sep 11, 2021 6:09 UTC (Sat) by b (subscriber, #153595) [Link]

I will probably get around the size issue by packaging each size separately. I'll probably make them all conflict with each other to prevent confusing situations with multiple HamClocks.

Posted Sep 13, 2021 19:29 UTC (Mon) by b (subscriber, #153595) [Link]

HamClock is now in the AUR. The packages are hamclock, hamclock-big, hamclock-bigger, and hamclock-huge.

FOSS for amateur radio
Posted Sep 13, 2021 5:19 UTC (Mon) by LtWorf (guest, #124958) [Link]

Maybe the authors would accept patches to fix this insanity?

FOSS for amateur radio
Posted Sep 17, 2021 2:28 UTC (Fri) by N0NB (guest, #3407) [Link]

Not specifically mentioned, but relied upon by many of the fine projects listed, is Hamlib. Due to time constraints I have moved away from patch integration and such, which has been ably taken over by Mike, W9MDB. I do handle releases, and we just released Hamlib 4.3.1 earlier this week. Truly, it's kind of hard to grasp that I've been with the project for over 20 years! The project itself was started by Frank Singleton, VK3FCS, in July 2000; shortly thereafter Stephane Fillod, F8CFE, joined the project and did much development work in the early days.

Hamlib is the result of many contributors around the globe. While not nearly at the scale of the Linux kernel, the collaborative ethos put in place by the kernel project has been a successful model for Hamlib, thanks to many hams willing to tinker with their radios. In fact, there has been very little vendor support except for publicly available documentation; many polite requests for additional information have been quietly ignored, and in other cases outright refused, while proprietary projects appear to receive such documentation.

Another project that scratches another itch of mine is Tlf, a contest logger. No, it is not at the level of N1MM, but it does boast some impressive capability that I learned a bit more about by recently editing its manual page. Tom, DL1JBE, is the Tlf project lead. Work continues in an effort to allow Tlf to be used for more events -- ham radio contest organizers are certainly inventive, especially state QSO parties! More users and more contributors are always welcome.

The email Linus sent just over 30 years ago may have seemed almost an afterthought at the time. Over the three decades since, the shock wave it created has continued to reverberate throughout the computing landscape. As important as the Linux kernel is technically, Linus also showed us the way toward collaborative development. The bazaar, indeed.

Posted Sep 22, 2021 3:02 UTC (Wed) by b (subscriber, #153595) [Link]

Yes, Hamlib is definitely a good example! Tlf looks interesting. I've been looking for a good Linux-based logger, so I'll probably try Tlf.

73 de KJ7RRV
5
Cops in Miami, NYC arrest protesters from facial recognition matches
Law enforcement in several cities, including New York and Miami, have reportedly been using controversial facial recognition software to track down and arrest individuals who allegedly participated in criminal activity during Black Lives Matter protests months after the fact. Miami police used Clearview AI to identify and arrest a woman for allegedly throwing a rock at a police officer during a May protest, local NBC affiliate WTVJ reported this week. The agency has a policy against using facial recognition technology to surveil people exercising "constitutionally protected activities" such as protesting, according to the report. "If someone is peacefully protesting and not committing a crime, we cannot use it against them," Miami Police Assistant Chief Armando Aguilar told NBC6. But, Aguilar added, "We have used the technology to identify violent protesters who assaulted police officers, who damaged police property, who set property on fire. We have made several arrests in those cases, and more arrests are coming in the near future." An attorney representing the woman said he had no idea how police identified his client until contacted by reporters. "We don't know where they got the image," he told NBC6. "So how or where they got her image from begs other privacy rights. Did they dig through her social media? How did they get access to her social media?" Similar reports have surfaced from around the country in recent weeks. Police in Columbia, South Carolina, and the surrounding county likewise used facial recognition, though from a different vendor, to arrest several protesters after the fact, according to local paper The State. Investigators in Philadelphia also used facial recognition software, from a third vendor, to identify protestors from photos posted to Instagram, The Philadelphia Inquirer reported. New York City Mayor Bill de Blasio promised on Monday the NYPD would be "very careful and very limited with our use of anything involving facial recognition," Gothamist reported. This statement came on the heels of an incident earlier this month when "dozens of NYPD officers—accompanied by police dogs, drones and helicopters" descended on the apartment of a Manhattan activist who was identified by an "artificial intelligence tool" as a person who allegedly used a megaphone to shout into an officer's ear during a protest in June. The ongoing nationwide protests, which seek to bring attention to systemic racial disparities in policing, have drawn more attention to the use of facial recognition systems by police in general. Repeated tests and studies have shown that most facial recognition algorithms in use today are significantly more likely to generate false positives or other errors when trying to match images featuring people of color. Late last year, the National Institute of Standards and Technology (NIST) published research finding that facial recognition systems it tested had the highest accuracy when identifying white men but were 10 to 100 times more likely to make mistakes with Black, Asian, or Native American faces. There's another, particularly 2020 wrinkle thrown in when it comes to matching photos of civil rights protesters, too: NIST found in July that most facial recognition algorithms perform significantly more poorly when matching masked faces. A significant percentage of the millions of people who have shown up for marches, rallies, and demonstrations around the country this summer have worn masks to mitigate against the risk of COVID-19 transmission in large crowds. 
The ACLU in June filed a complaint against the Detroit police, alleging the department arrested the wrong man based on a flawed, incomplete match provided by facial recognition software. In the wake of the ACLU's complaint, Detroit Police Chief James Craig admitted that the software his agency uses misidentifies suspects 96 percent of the time. IBM walked away from the facial recognition business in June. The company also asked Congress to pass laws requiring vendors and users to test their systems for racial bias and have such tests audited and reported. Amazon echoed the call for Congress to pass a law while asking police to take a year off from using its Rekognition product in the hope that Congress acts by next summer. Clearview in particular—used in Miami—is highly controversial for reasons beyond the potential for bias. A New York Times report from January found that the highly secretive startup was scraping basically the whole Internet for images to populate its database of faces. Facebook, YouTube, Twitter, Microsoft, and other firms nearly all sent Clearview orders to stop within days of the report becoming public, but the company still boasts it has around 3 billion images on hand for partners (mostly but not exclusively law enforcement) to match individuals' pictures against. The company is facing several lawsuits from states and the ACLU, while individuals are seeking class-action status.
1
Create a desktop app from any website
nativefier/nativefier
1
Bitcoin to $4M USD by 2025
As of June 2023 Bitcoin has a market cap of 0 and it is trading at around $. This makes Bitcoin the world's 1st largest crypto project. These are our Bitcoin price predictions for Bitcoin's future. June 2023 Bitcoin (BTC) to USD predictions. At the start of June 2023 the price will be around $0.0000 USD. A Maximum price of $0.0000, minimum price of $0.0000 for June 2023. The average Bitcoin price for the month of June 2023 is $0.0000. Bitcoin price forecast at the end of June 2023 $0.0000, change for June 2023 14%. July 2023 Bitcoin (BTC) to USD predictions. At the start of July 2023 the price will be around $0.0000 USD. A Maximum price of $0.0000, minimum price of $0.0000 for July 2023. The average Bitcoin price for the month of July 2023 is $0.0000. Bitcoin price forecast at the end of July 2023 $0.0000, change for July 2023 11%. August 2023 Bitcoin (BTC) to USD predictions. At the start of August 2023 the price will be around $0.0000 USD. A Maximum price of $0.0000, minimum price of $0.0000 for August 2023. The average Bitcoin price for the month of August 2023 is $0.0000. Bitcoin price forecast at the end of August 2023 $0.0000, change for August 2023 11%. September 2023 Bitcoin (BTC) to USD predictions. At the start of September 2023 the price will be around $0.0000 USD. A Maximum price of $0.0000, minimum price of $0.0000 for September 2023. The average Bitcoin price for the month of September 2023 is $0.0000. Bitcoin price forecast at the end of September 2023 $0.0000, change for September 2023 11%. October 2023 Bitcoin (BTC) to USD predictions. At the start of October 2023 the price will be around $0.0000 USD. A Maximum price of $0.0000, minimum price of $0.0000 for October 2023. The average Bitcoin price for the month of October 2023 is $0.0000. Bitcoin price forecast at the end of October 2023 $0.0000, change for October 2023 -13%. November 2023 Bitcoin (BTC) to USD predictions. At the start of November 2023 the price will be around $0.0000 USD. A Maximum price of $0.0000, minimum price of $0.0000 for November 2023. The average Bitcoin price for the month of November 2023 is $0.0000. Bitcoin price forecast at the end of November 2023 $0.0000, change for November 2023 10%. December 2023 Bitcoin (BTC) to USD predictions. At the start of December 2023 the price will be around $0.0000 USD. A Maximum price of $0.0000, minimum price of $0.0000 for December 2023. The average Bitcoin price for the month of December 2023 is $0.0000. Bitcoin price forecast at the end of December 2023 $0.0000, change for December 2023 3%. January 2024 Bitcoin (BTC) to USD predictions. At the start of January 2024 the price will be around $0.0000 USD. A Maximum price of $0.0000, minimum price of $0.0000 for January 2024. The average Bitcoin price for the month of January 2024 is $0.0000. Bitcoin price forecast at the end of January 2024 $0.0000, change for January 2024 6%. February 2024 Bitcoin (BTC) to USD predictions. At the start of February 2024 the price will be around $0.0000 USD. A Maximum price of $0.0000, minimum price of $0.0000 for February 2024. The average Bitcoin price for the month of February 2024 is $0.0000. Bitcoin price forecast at the end of February 2024 $0.0000, change for February 2024 7%. March 2024 Bitcoin (BTC) to USD predictions. At the start of March 2024 the price will be around $0.0000 USD. A Maximum price of $0.0000, minimum price of $0.0000 for March 2024. The average Bitcoin price for the month of March 2024 is $0.0000. 
Bitcoin price forecast at the end of March 2024 $0.0000, change for March 2024 -2%. April 2024 Bitcoin (BTC) to USD predictions. At the start of April 2024 the price will be around $0.0000 USD. A Maximum price of $0.0000, minimum price of $0.0000 for April 2024. The average Bitcoin price for the month of April 2024 is $0.0000. Bitcoin price forecast at the end of April 2024 $0.0000, change for April 2024 7%. May 2024 Bitcoin (BTC) to USD predictions. At the start of May 2024 the price will be around $0.0000 USD. A Maximum price of $0.0000, minimum price of $0.0000 for May 2024. The average Bitcoin price for the month of May 2024 is $0.0000. Bitcoin price forecast at the end of May 2024 $0.0000, change for May 2024 -3%. June 2024 Bitcoin (BTC) to USD predictions. At the start of June 2024 the price will be around $0.0000 USD. A Maximum price of $0.0000, minimum price of $0.0000 for June 2024. The average Bitcoin price for the month of June 2024 is $0.0000. Bitcoin price forecast at the end of June 2024 $0.0000, change for June 2024 14%. July 2024 Bitcoin (BTC) to USD predictions. At the start of July 2024 the price will be around $0.0000 USD. A Maximum price of $0.0000, minimum price of $0.0000 for July 2024. The average Bitcoin price for the month of July 2024 is $0.0000. Bitcoin price forecast at the end of July 2024 $0.0000, change for July 2024 7%. August 2024 Bitcoin (BTC) to USD predictions. At the start of August 2024 the price will be around $0.0000 USD. A Maximum price of $0.0000, minimum price of $0.0000 for August 2024. The average Bitcoin price for the month of August 2024 is $0.0000. Bitcoin price forecast at the end of August 2024 $0.0000, change for August 2024 7%. September 2024 Bitcoin (BTC) to USD predictions. At the start of September 2024 the price will be around $0.0000 USD. A Maximum price of $0.0000, minimum price of $0.0000 for September 2024. The average Bitcoin price for the month of September 2024 is $0.0000. Bitcoin price forecast at the end of September 2024 $0.0000, change for September 2024 15%. October 2024 Bitcoin (BTC) to USD predictions. At the start of October 2024 the price will be around $0.0000 USD. A Maximum price of $0.0000, minimum price of $0.0000 for October 2024. The average Bitcoin price for the month of October 2024 is $0.0000. Bitcoin price forecast at the end of October 2024 $0.0000, change for October 2024 9%. November 2024 Bitcoin (BTC) to USD predictions. At the start of November 2024 the price will be around $0.0000 USD. A Maximum price of $0.0000, minimum price of $0.0000 for November 2024. The average Bitcoin price for the month of November 2024 is $0.0000. Bitcoin price forecast at the end of November 2024 $0.0000, change for November 2024 -1%. 
Bitcoin Price Prediction For 2023
Date            Price    Minimum  Maximum  Average  Monthly Change%
June 2023       $0.0000  $0.0000  $0.0000  $0.0000  14%
July 2023       $0.0000  $0.0000  $0.0000  $0.0000  11%
August 2023     $0.0000  $0.0000  $0.0000  $0.0000  11%
September 2023  $0.0000  $0.0000  $0.0000  $0.0000  11%
October 2023    $0.0000  $0.0000  $0.0000  $0.0000  -13%
November 2023   $0.0000  $0.0000  $0.0000  $0.0000  10%
December 2023   $0.0000  $0.0000  $0.0000  $0.0000  3%

Bitcoin Price Prediction For 2024
Date            Price    Minimum  Maximum  Average  Monthly Change%
January 2024    $0.0000  $0.0000  $0.0000  $0.0000  6%
February 2024   $0.0000  $0.0000  $0.0000  $0.0000  7%
March 2024      $0.0000  $0.0000  $0.0000  $0.0000  -2%
April 2024      $0.0000  $0.0000  $0.0000  $0.0000  7%
May 2024        $0.0000  $0.0000  $0.0000  $0.0000  -3%
June 2024       $0.0000  $0.0000  $0.0000  $0.0000  14%
July 2024       $0.0000  $0.0000  $0.0000  $0.0000  7%
August 2024     $0.0000  $0.0000  $0.0000  $0.0000  7%
September 2024  $0.0000  $0.0000  $0.0000  $0.0000  15%
October 2024    $0.0000  $0.0000  $0.0000  $0.0000  9%
November 2024   $0.0000  $0.0000  $0.0000  $0.0000  -1%
December 2024   $0.0000  $0.0000  $0.0000  $0.0000  13%

Bitcoin Price Prediction For 2025
Date            Price    Minimum  Maximum  Average  Monthly Change%
January 2025    $0.0000  $0.0000  $0.0000  $0.0000  3%
February 2025   $0.0000  $0.0000  $0.0000  $0.0000  10%
March 2025      $0.0000  $0.0000  $0.0000  $0.0000  -6%
April 2025      $0.0000  $0.0000  $0.0000  $0.0000  11%
May 2025        $0.0000  $0.0000  $0.0000  $0.0000  -12%
June 2025       $0.0000  $0.0000  $0.0000  $0.0000  7%
July 2025       $0.0000  $0.0000  $0.0000  $0.0000  2%
August 2025     $0.0000  $0.0000  $0.0000  $0.0000  13%
September 2025  $0.0000  $0.0000  $0.0000  $0.0000  3%
October 2025    $0.0000  $0.0000  $0.0000  $0.0000  14%
November 2025   $0.0000  $0.0000  $0.0000  $0.0000  7%
December 2025   $0.0000  $0.0000  $0.0000  $0.0000  11%

Bitcoin Price Prediction For 2026
Date            Price    Minimum  Maximum  Average  Monthly Change%
January 2026    $0.0000  $0.0000  $0.0000  $0.0000  -3%
February 2026   $0.0000  $0.0000  $0.0000  $0.0000  2%
March 2026      $0.0000  $0.0000  $0.0000  $0.0000  5%
April 2026      $0.0000  $0.0000  $0.0000  $0.0000  14%
May 2026        $0.0000  $0.0000  $0.0000  $0.0000  -12%
June 2026       $0.0000  $0.0000  $0.0000  $0.0000  1%
July 2026       $0.0000  $0.0000  $0.0000  $0.0000  2%
August 2026     $0.0000  $0.0000  $0.0000  $0.0000  1%
September 2026  $0.0000  $0.0000  $0.0000  $0.0000  -4%
October 2026    $0.0000  $0.0000  $0.0000  $0.0000  -1%
November 2026   $0.0000  $0.0000  $0.0000  $0.0000  10%
December 2026   $0.0000  $0.0000  $0.0000  $0.0000  11%

Bitcoin Price Prediction For 2027
Date            Price    Minimum  Maximum  Average  Monthly Change%
January 2027    $0.0000  $0.0000  $0.0000  $0.0000  3%
February 2027   $0.0000  $0.0000  $0.0000  $0.0000  -7%
March 2027      $0.0000  $0.0000  $0.0000  $0.0000  6%
April 2027      $0.0000  $0.0000  $0.0000  $0.0000  14%
May 2027        $0.0000  $0.0000  $0.0000  $0.0000  -1%
June 2027       $0.0000  $0.0000  $0.0000  $0.0000  14%
July 2027       $0.0000  $0.0000  $0.0000  $0.0000  -7%
August 2027     $0.0000  $0.0000  $0.0000  $0.0000  1%
September 2027  $0.0000  $0.0000  $0.0000  $0.0000  -3%

This Bitcoin forecast has not been reviewed by a professional and should not be used for making any final Bitcoin financial decisions! Bitcoin's past performance does not guarantee future results!
1
A beginner’s guide away from scanf (2017)
This document is for you if you started to learn programming in C. Chances are you follow a course, and the method to read some input you were taught is to use the scanf() function.

So, what is wrong with scanf()? Nothing. And, chances are, everything for your use case. This document attempts to make you understand why. So here's the very first rule about scanf():

Rule 0: Don't use scanf(). (Unless you know exactly what you do.)

But before presenting some alternatives for common use cases, let's elaborate a bit on the "knowing what you do" part. Here is a classic example of scanf() use (and misuse) in a beginner's program. As you probably know, %d is the conversion for an integer, so this program works as expected -- until the input isn't a number. Oops. Where does the value 38 come from? The answer is: this could be any value, or the program could just crash. A crashing program in just two lines of code is quite easy to create in C. scanf() is asked to convert a number, and the input doesn't contain any numbers, so scanf() converts nothing. As a consequence, the variable a is never written to, and using the value of an uninitialized variable in C is undefined behavior.

Undefined behavior in C

C is a very low-level language, and one consequence of that is the following: nothing will ever stop you from doing something completely wrong. Many languages, especially those for a managed environment like Java or C#, actually stop you when you do things that are not allowed, say, access an array element that does not exist. C doesn't. As long as your program is syntactically correct, the compiler won't complain. If you do something forbidden in your program, C just calls the behavior of your program undefined. This formally allows anything to happen when running the program. Often, the result will be a crash or just output of "garbage" values, as seen above. But if you're really unlucky, your program will seem to work just fine until it gets some slightly different input, and by that time, you will have a really hard time spotting where exactly your program is undefined. Therefore, avoid undefined behavior by all means! On a side note, undefined behavior can also cause security holes. This has happened a lot in practice.

Now that we know the program is broken, let's fix it. Because scanf() returns how many items were converted successfully, the next obvious idea is just to retry the "number input" in case the user entered something else. The result, on input like "abc", is an endless stream of prompts: stooooop! Ok, we managed to interrupt this madness with Ctrl+C, but why did that happen?

Rule 1: scanf() is not for reading input, it's for parsing input.

The first argument to scanf() is a format string, describing what scanf() should parse. The important thing is: scanf() never reads anything it cannot parse. In our example, we tell scanf() to parse a number, using the %d conversion. Sure enough, abc is not a number, and as a consequence, abc is not even read. The next call to scanf() will again find our unread input and, again, can't parse it.

Chances are you find some examples saying "let's just flush the input before the next call to scanf()". Forget about this idea immediately, please. You'd expect this to clear all unread input, and indeed, some systems will do just that. But according to the C standard, flushing an input stream is undefined behavior, and this should now ring a bell. And yes, there are a lot of systems that won't clear the input when you attempt to flush stdin. So, the only way to clear unread input is by reading it. Of course, we can make scanf() read it, using a format string that parses any string. Sounds easy.
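As a minimal C sketch of the two situations described above (the variable names, prompts, and messages are my own assumptions, not necessarily the article's exact listings), the naive read and the never-ending retry loop look roughly like this:

#include <stdio.h>

int main(void)
{
    int a;

    /* Naive version: the return value of scanf() is ignored. On input
       like "abc", nothing is converted, a stays uninitialized, and
       printing it is undefined behavior (hence the mystery value 38).
       scanf("%d", &a);
       printf("You entered %d\n", a); */

    /* "Obvious" retry loop: on input like "abc" this spins forever,
       because scanf() never reads what it cannot parse, so the same
       "abc" is found again on every iteration. */
    while (scanf("%d", &a) != 1) {
        printf("Please enter a number!\n");
    }
    printf("You entered %d\n", a);
    return 0;
}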
Let's consider another classic example of a beginner's program, trying to read a string from the user. As %s is for strings, this should work with any input -- but with a long enough name, we now have a buffer overflow. You might get Segmentation fault on a Linux system, any other kind of crash, maybe even a "correctly" working program, because, once again, the program has undefined behavior. The problem here is: %s matches any string, of any length, and scanf() has no idea when to stop reading. It reads as long as it can parse the input according to the format string, so it writes a lot more data to our name variable than the 12 characters we declared for it.

Buffer overflows in C

A buffer overflow is a specific kind of undefined behavior resulting from a program that tries to write more data to an (array) variable than this variable can hold. Although this is undefined, in practice it will result in overwriting some other data (that happens to be placed after the overflowed buffer in memory) and this can easily crash the program. One particularly dangerous result of a buffer overflow is overwriting the return address of a function. The return address is used when a function exits, to jump back to the calling function. Being able to overwrite this address ultimately means that a person with enough knowledge about the system can cause the running program to execute any other code supplied as input. This problem has led to many security vulnerabilities; imagine being able to make, for example, a webserver written in C execute your own code by submitting a specially tailored request...

So, here's the next rule:

Rule 2: scanf() can be dangerous when used carelessly. Always use field widths with conversions that parse to a string (like %s).

The field width is a number preceding the conversion specifier. It causes scanf() to consider a maximum number of characters from the input when parsing for this conversion. Let's demonstrate it in a fixed program. We also increased the buffer size, because there might be really long names. There's an important thing to notice: although our name has room for 40 characters, we instruct scanf() not to read more than 39. This is because a string in C always needs a 0 byte appended to mark the end. When scanf() is finished parsing into a string, it appends this byte automatically, and there must be space left for it.

So, this program is now safe from buffer overflows. Let's try something different: enter a name containing a space, and only part of it is read. Well, that's ... outspoken. What happens here? Reading some scanf() manual, we would find that %s parses a word, not a string; for example, I found the following wording: "Matches a sequence of non-white-space characters". A white-space in C is one of space, tab (\t) or newline (\n).

Rule 3: Although scanf() format strings can look quite similar to printf() format strings, they often have slightly different semantics. (Make sure to read the fine manual)

The general problem with parsing "a string" from an input stream is: where does this string end? With %s, the answer is at the next white-space. If you want something different, you can use %[: we could change the program so that anything up to a newline will be parsed into our string. It might get a bit frustrating, but this is again a program with possible undefined behavior; see what happens when we just press Enter. Here's another sentence from a scanf() manual, from the section describing the [ conversion: "The usual skip of leading white space is suppressed."
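A hedged sketch of the field-width fix discussed above (only the prompt and greeting text are assumed; the 40-byte buffer and the %39s width come from the description):

#include <stdio.h>

int main(void)
{
    char name[40];

    printf("What's your name? ");
    /* %39s reads at most 39 characters, leaving room for the
       terminating 0 byte that scanf() appends automatically. */
    if (scanf("%39s", name) == 1) {
        printf("Hello %s!\n", name);
    }

    /* Swapping the conversion for "%39[^\n]" would read everything up
       to the newline instead of a single word, but that conversion
       does not skip leading whitespace, so it fails when the user
       just presses Enter. */
    return 0;
}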
With many conversions, scanf() automatically skips whitespace characters in the input, but with some, it doesn't. Here, our newline from just pressing Enter isn't skipped, and it doesn't match our conversion that explicitly excludes newlines. The result is: scanf() doesn't parse anything, and our name remains uninitialized.

One way around this is to tell scanf() to skip whitespace: if the format string contains any whitespace, it matches any number of whitespace characters in the input, including no whitespace at all. Let's use this to skip whitespace the user might enter before entering his name. Yes, this program works and doesn't have any undefined behavior*), but I guess you don't like very much that nothing at all happens when you just press Enter, because scanf() is skipping it and continues to wait for input that can be matched.

*) actually, this isn't even entirely true. This program still has undefined behavior for empty input. You could force this on a Linux console by hitting Ctrl+D, for example. So, it's again an example of code you should not write.

There are several functions in C for reading input. Let's have a look at one that's probably most useful to you: fgets(). fgets() does a simple thing: it reads up to a given maximum number of characters, but stops at a newline, which is read as well. In other words: it reads a line of input. This is the function signature: char *fgets(char *str, int size, FILE *stream); There are two very nice things about this function for what we want to do: it will never write more characters than the given size allows, and it reads a whole line, so it doesn't leave part of the line unread. So let's rewrite this program again. I assure you this is safe, but it has a little flaw: the greeting ends up with a stray newline in it. Of course, this is because fgets() also reads the newline character itself. But the fix is simple as well: we use strcspn() to get the index of the newline character if there is one and overwrite it with 0. strcspn() is declared in string.h, so we need a new #include.

There are many functions for converting a string to a number in C. A function that's used quite often is atoi(); the name means anything to integer. It returns 0 if it can't convert the string. Let's try to rewrite the broken example 2 using fgets() and atoi(). atoi() is declared in stdlib.h. Well, not bad so far. But what if we want to allow an actual 0 to be entered? We can't tell whether atoi() returns 0 because it cannot convert anything or because there was an actual 0 in the string. Also, ignoring the extra x when we input 15x may not be what we want. atoi() is good enough in many cases, but if you want better error checking, there's an alternative: strtol(), with the signature long strtol(const char *str, char **endptr, int base); This looks complicated. But it isn't. Now let's use this instead of atoi() (note it returns a long int), making use of every possibility to detect errors. This looks really good, doesn't it?

If you want to know more, I suggest you read up on similar functions like atof(), strtoll(), strtod() etc.

So, can you still use scanf() safely? Yes, you can. Here's a last rule:

Rule 4: scanf() is a very powerful function. (and with great power comes great responsibility ...)

A lot of parsing work can be done with scanf() in a very concise way, which can be very nice, but it also has many pitfalls and there are tasks (such as reading a line of input) that are much simpler to accomplish with a simpler function. Make sure you understand the rules presented here, and if in doubt, read the scanf() manual precisely.
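The fgets()/strcspn()/strtol() combination described above, as a self-contained sketch (buffer size, prompts, and error messages are my own choices):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char buf[64];
    char *end;
    long value;

    printf("Enter a number: ");
    if (!fgets(buf, sizeof buf, stdin)) {
        return 1;                    /* EOF or read error */
    }
    buf[strcspn(buf, "\n")] = 0;     /* overwrite the newline, if any, with 0 */

    errno = 0;
    value = strtol(buf, &end, 10);
    if (end == buf) {
        printf("not a number: %s\n", buf);
    } else if (*end != 0) {
        printf("trailing characters after the number: %s\n", end);
    } else if (errno == ERANGE) {
        printf("number out of range\n");
    } else {
        printf("you entered %ld\n", value);
    }
    return 0;
}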
That being said, here's an example of how to read a number with retries using scanf(). It's not as nice as the version using strtol() above, because there is no way to tell scanf() not to skip whitespace for %d -- so if you just hit Enter, it will still wait for your input -- but it works and it's a really short program.

For the sake of completeness, if you really really want to get a line of input using scanf(), of course this can be done safely as well. Note that this final example of course leaves input unread, even from the same line, if there were more than 39 characters until the newline. If this is a concern, you'd have to find another way -- or just use fgets(), making the check easier, because it gives you the newline if there was one.
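For reference, a hedged sketch of the two scanf()-only variants this closing section describes (the "%*[^\n]" discard of a bad line and the leading blank before %39[^\n] are my assumptions about how to realize the described behavior):

#include <stdio.h>

int main(void)
{
    int a;
    int rc;

    /* Retry until a number is parsed. "%*[^\n]" discards the rest of
       an unparsable line; the EOF check avoids looping forever when
       the input ends. %d skips whitespace, so just hitting Enter
       keeps waiting, as described above. */
    while ((rc = scanf("%d", &a)) != 1) {
        if (rc == EOF) {
            return 1;
        }
        scanf("%*[^\n]");
        printf("Please enter a number!\n");
    }
    printf("You entered %d\n", a);

    /* Reading (at most 39 characters of) one line with scanf(): the
       leading blank skips left-over whitespace, including the newline
       still unread after the number above. Anything beyond 39
       characters on the line stays unread. */
    char line[40];
    if (scanf(" %39[^\n]", line) == 1) {
        printf("You entered '%s'\n", line);
    }
    return 0;
}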
3
Joe Biden gets a win on global taxes at his first G20 as President
President Joe Biden’s first day at the Group of 20 Summit began Saturday with the President achieving one of his core objectives for the global conference – an endorsement of a 15% global minimum tax from world leaders. The tax is a chief priority of Biden’s that the White House believes would end the global race-to-the-bottom on corporate tax rates. The new rule will be formalized when the leaders release a final G20 communiqué on Sunday, when the summit ends. All leaders of the G20 came out in support for a global minimum tax during the summit’s first session, a senior US administration official said. “The President emphasized the importance of this historic deal during his intervention,” the official said, referring to Biden’s turn to speak during the meeting. “The President also mentioned that while we don’t see eye to eye on every issue, we can tackle shared interests.” Each individual nation must pass its own version of the tax, and it may take some time to implement worldwide. One hundred thirty-six nations agreed to such a tax in October, and Saturday’s endorsement from 20 of the world’s largest economies is seen as a step toward worldwide implementation. Italian Prime Minister Mario Draghi, the leader of this year’s G20, said in remarks at the summit’s start that the agreement was proof of the power of multilateralism. “We reached a historic agreement for a fairer and more effective international tax system,” Draghi said, adding, “These results are a powerful reminder of what we can achieve together.” The measure would tax large multinational companies at a minimum rate of 15% and require them to pay taxes in the countries where they do business. The Biden administration breathed new life into the global initiative earlier this year and secured the support of the G7 countries in June, paving the way for a preliminary deal in July. “In our judgment, this is more than just a tax deal. It’s a reshaping of the rules of the global economy,” a senior administration official said. Aspects of Biden’s recently unveiled spending framework would enact part of the global minimum tax scheme, though the fate of that measure remains uncertain as Democrats haggle over timing. Biden administration officials have downplayed the effect that Democratic infighting has on Biden’s ability to rally foreign leaders. “These world leaders really are sophisticated. They understand. There’s a complicated process in any democracy to do anything as ambitious as we’re pursuing in our domestic agenda,” the senior administration official said. “These are multigenerational investments and, of course, we’re trying to reform the tax code to pay for it. And so, you know, I think there’s going to be a broad understanding that takes time.” While the day started off with a win for Biden, he’s facing a more skeptical global audience than he found the last time he trekked to Europe to meet with world leaders. Divisions within Biden’s own political party are threatening to derail his entire economic agenda back home, and Biden himself has acknowledged the credibility of the United States and the future of his presidency are on the line. Despite urging lawmakers to give him a legislative win to tout on the world stage – especially the climate change measures that would give his presence at next week’s United Nations Climate Summit extra weight – Biden has shown up in the Eternal City without a done deal. 
Added to that complication are the questions coming from some nations about Biden’s commitment to working cooperatively on global issues in the wake of the US’ chaotic withdrawal from Afghanistan. The President arrived at the summit site Saturday morning, stepping from his car and greeting Draghi. Biden posed for a family photo with the G20 leaders, along with Italian medical workers who joined the heads of state on the platform. World leaders also spent Saturday afternoon discussing the Covid-19 pandemic, global supply chain problems, high energy prices and combating the climate crisis, among other topics. Biden spoke during the meeting and “reminded G20 Leaders that new pandemics can arise any time so it is important that we strengthen global health systems and do more to create the global health security infrastructure to make sure we are prepared against the next pandemic,” the official said. “The President stressed the need for balanced, well supplied, and competitive global energy markets so we don’t undermine this critical moment of economic recovery,” the official said. The official said Biden also “underscored his commitment to ending the global pandemic and securing an inclusive global economic recovery, including by supporting developing countries through debt relief.” A separate senior administration official said on Friday Biden would stop short of getting directly involved in OPEC decisions about ramping up supply: “We’re certainly not going to get involved with the specifics of what happens within the cartel, but we have a voice and we intend to use it on an issue that’s affecting the global economy.” “There are major energy producers that have spare capacity,” the official said. “And we’re encouraging them to use it to ensure a stronger, more sustainable recovery across the world.” Energy policy was top of mind for the President in a meeting with German Chancellor Angela Merkel and Vice-Chancellor Olaf Scholz, Merkel’s likely successor. Per a senior administration official, Biden “made the point that we need to see adequate supply of energy in this moment as we make the long term transition to a carbon free economy” during the sideline meeting. The President is expected to hold additional bilateral meetings with world leaders while in Rome, though the White House has yet to make any firm announcements.
1
I got zero detections on virus total
ggezy, Oct 27, 2021

[Photo shows zero detections]

What is VirusTotal? In a few words, it's both a static and dynamic malware analysis platform that uses YARA-based rules to statically identify malware and a sandbox (the "Jujubox") to dynamically analyze it. People try to bypass VirusTotal in order to deploy malware on their target systems, either to get some sort of valuable information out or simply to encrypt files and demand a ransom payment.

Generic bypasses? There are typically a lot of ways to do this, but if you make malware that is new and unique instead of copying others, you get rewarded with low detections. When people make malware public they say: "DO NOT UPLOAD TO VIRUS TOTAL", which implies that if their malware gets just a single detection, all the other antivirus vendors will get alerted and the executable will get flagged as malicious.

NGROK, with one single detection. That number above is only going up: this file was scanned 3 hours ago, and all those green "Undetected" antivirus vendors will be alerted and given the file sample. The number of detections will go up over time, and eventually each antivirus vendor will have a copy of the file and a set of rules to identify it. Antivirus companies pay VirusTotal for rule lists identifying common malware samples uploaded to the service.

The bypass method
Today, we will be using NodeJS and pkg to compile a FUD executable. This sounds funny, but it actually works. Here's what we do:

Step one
Download NodeJS if you haven't already and run the .msi installer. If you are on Windows, download the 64-bit .msi installer.

Step two
MAKE SURE YOU TICK THIS BOX. Without it, for some reason the tools required for pkg to work properly won't install. Don't ask why, just tick the box. Carry on with the installation as normal. Tick the box for your own good.

Step three
When your installation is complete, there's one more thing required to do. You need to install pkg: npm i pkg -g

Step four
Make some malicious JavaScript program and compile it: pkg index.js -o test.exe

Step five
Watch the world burn. Upload it to VirusTotal and watch your number of detections go down to absolutely zero.

I found this trick while unpacking YouTube malware myself. Out of interest I uploaded it to VirusTotal and I saw a crazy zero detections. Well there you go! Thank you for reading my article. Have a nice day!
1
The Garmin Hack Was a Warning
It’s been over a week since hackers crippled Garmin with a ransomware attack, and five days since its services started flickering back to life. The company still hasn’t fully recovered, as syncing issues and delays continue to haunt corners of the Garmin Connect platform. Two things, though, are clear: It could have been worse for Garmin. And it’s only a matter of time before ransomware’s big game hunters strike again. By this point, the world has seen a few large-scale meltdowns stem from ransomware-style attacks, where hacker groups encrypt sensitive files and shake down the owners for money. In 2017, WannaCry swept the globe before intrepid hacker Marcus Hutchins found and activated its kill switch. That same year, NotPetya caused billions of dollars of damage at multinational corporations like Maersk and Merck, although the ransomware aspect turned out to be a front for a vicious data-wiper. Time appears to have emboldened some hackers, however, as large companies take their place on the list of popular targets, alongside hospitals and local governments. Recent victims include not just Garmin but Travelex, an international currency exchange company, which ransomware hackers successfully hit on New Year’s Eve last year. Cloud service provider Blackbaud—relatively low-profile, but a $3.1 billion market cap—disclosed that it paid a ransom to prevent customer data from leaking after an attack in May. And those are just the cases that go public. “There are certainly rather large organizations that you are not hearing about who have been impacted,” says Kimberly Goody, senior manager of analysis at security firm FireEye. “Maybe you don’t hear about that because they choose to pay or because it doesn’t necessarily impact consumers in a way it would be obvious something is wrong.” Bigger companies make attractive ransomware targets for self-evident reasons. “They’re well-insured and can afford to pay a lot more than your little local grocery store,” says Brett Callow, a threat analyst at antivirus company Emsisoft. But ransomware attackers are also opportunistic, and a poorly secured health care system or city—neither of which can tolerate prolonged downtime—has long offered better odds for a payday than corporations that can afford to lock things down. The gap between big business defenses and ransomware sophistication, though, is narrowing. “Over the last two years, we’ve seen case after case of vulnerable corporate networks, and the rise of malware designed for the intentional infection of business networks,” says Adam Kujawa, a director at security firm Malwarebytes Labs. And for hackers, success breeds success; Emsisoft estimates that ransomware attackers collectively took in $25 billion last year. “These groups now have huge amounts to invest in their operations in terms of ramping up their sophistication and scale,” Callow says. Even ransomware attacks that start without a specific high-profile target in mind—who knows what a phishing campaign might turn up?—have increasingly focused on spotting the whales in the net. One actor associated with Maze ransomware, FireEye’s Goody says, specifically sought to hire someone whose sole job would be to scan the networks of compromised targets to determine not only the identity of the organization but its annual revenues. The Garmin incident proves especially instructive here. The company was reportedly hit by a relatively new strain of ransomware called WastedLocker, which has been tied to Russia’s Evil Corp malware dynasty. 
For much of the past decade, the hackers behind Evil Corp allegedly used banking-focused malware to pilfer more than $100 million from financial institutions, as outlined in a Department of Justice indictment last year. In 2017, Evil Corp began incorporating Bitpaymer ransomware into its routine. After the indictment, it apparently retooled and set its sights much higher.
1
Getting Started with Graphics Shaders and RealityKit
At last we can add a geometry or surface shader to an Entity in your RealityKit scene with the release of RealityKit 2, coming with iOS 15/macOS 12! A shader is code that is passed to the renderer to be run at specific stages in the rendering pipeline. The two shaders we now have in RealityKit are geometry and surface shaders. Both of these are shown in the sample code: Building an Immersive Experience with RealityKit.

A geometry shader (GeometryModifier) alters the position of the geometry's vertices. In the sample provided by Apple, this is used to add a wavy effect to the seaweed. Previously the only way to do this would be to bake an animation into a USDZ file, which means you would need to re-create the USDZ every time you tweak the animation (rather than changing a parameter in your shader), and all the seaweed in the scene would look extremely uniform or need their own USDZ animations.

A SurfaceShader, on the other hand, alters the material applied to a mesh. A surface shader alters the shading of each pixel of a material as it is rendered on the screen. These can take into consideration models around or behind them, and can be used to make interesting effects such as glass warping the object behind it, or transitioning between colours. If you want to play around with shaders outside of RealityKit to gain a better understanding, I'd encourage you to check out the following links: there are a lot of resources out there, as shaders are by no means a new technology.

When creating any kind of shader in RealityKit, you must use a Metal file. Let's make a basic looping animation shader to apply to a Cube. For this example, the logic is as follows: to make an oscillating motion, we will use a sinusoidal wave such as a sine or cosine. With a cube, there are typically 4 vertices in the positive y axis and 4 below. These four 3D points are each one of: [-0.5, 0.5, 0.5], [0.5, 0.5, 0.5], [-0.5, 0.5, -0.5], [0.5, 0.5, -0.5]. Notice that all the y values are positive 0.5 (above the x-z plane). From a top-down view, this is what our cube's four top coordinates will be doing; see the "Shader Cube Top" graph on Desmos (www.desmos.com).

Each top point on the cube has a starting position of (±0.5, ±0.5) in the X-Z plane, and we want to move it between that position and the mirrored one. Based on the above, the offset of our point in the X and Z world-space coordinates changes between +0.5 and -0.5, depending on the sign of our starting point.

Let's take a look at a basic example of a geometry shader now. Place the shader code in a new Metal file in your Xcode project. This geometry shader will give us the desired output, but there are a few steps needed to apply this to a model in our RealityKit scene.

// Create the cube
let cubeModel = ModelEntity(
    mesh: .generateBox(size: 1),
    materials: [SimpleMaterial(color: .red, isMetallic: false)]
)

// Fetch the default metal library
let mtlLibrary = MTLCreateSystemDefaultDevice()!
    .makeDefaultLibrary()!

// Fetch the geometry modifier by name
let geomModifier = CustomMaterial.GeometryModifier(
    named: "simpleStretch", in: mtlLibrary
)

// Take each of the model's current materials,
// and apply the geometry shader to it.
cubeModel.model?.materials = cubeModel.model?.materials.map {
    try! CustomMaterial(from: $0, geometryModifier: geomModifier)
} ?? [Material]()

In the above code snippet, we fetch the default Metal Device with MTLCreateSystemDefaultDevice(). The default device will be the GPU, as all these shaders by default run on the GPU (which is what you want!).
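As a rough sketch of the oscillation being described (my own parameterization, offered as an assumption rather than what the post's shader actually computes), a top corner with rest position (x_0, z_0) can be swung between its start and its mirrored position with a cosine:

x(t) = x_0 \cos(\omega t), \qquad z(t) = z_0 \cos(\omega t)

so the offset the geometry modifier would apply is

\Delta x(t) = x_0\,(\cos(\omega t) - 1), \qquad \Delta z(t) = z_0\,(\cos(\omega t) - 1),

which is zero at t = 0 and carries a corner starting at +0.5 across to -0.5 (and back) as time advances, giving the looping motion.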
Once that cube is added to the scene, we will get a result like this: the top of the cube sways from side to side while the bottom stays in place.

In addition to a geometry modifier, RealityKit also offers a way to alter the rendered colour on a material. In this second example, we'll take a simple plane made up of 100x100 vertices and make it look like a slowly waving body of water. To create the base geometry I'm generating my own geometry using the techniques outlined in my previous post. The method to create it is included in the RealityGeometries package.

import RealityGeometries

var oceanMat = SimpleMaterial(color: .blue, isMetallic: false)
oceanMat.metallic = .float(0.7)
oceanMat.roughness = 0.9

let modelEnt = ModelEntity(
    mesh: try! .generateDetailedPlane(
        width: 2, depth: 2, vertices: (100, 100)
    ),
    materials: [oceanMat]
)

Currently all we see is a basic plane with the basic UIColor.blue material. Let's add some waves to it with a geometry shader. The geometry shader here looks more complex, but it was created by a bit of trial and error. The two values xPeriod and zPeriod dictate the distance between waves in the x and z axes. I set them to be non-constant, varying based on time and their distance in the x and z plane. Without this the waves would be far too uniform. xOffset and zOffset are the sizes of the offsets in x and z at a given position, based once again on time and position in the x and z axes. I would encourage you to play around with these values if it seems alien. Once again, here's how to apply that to our material:

// Fetch the default metal library
let mtlLibrary = MTLCreateSystemDefaultDevice()!.makeDefaultLibrary()!

let geometryShader = CustomMaterial.GeometryModifier(
    named: "waveMotion", in: mtlLibrary
)

modelEnt.model?.materials = modelEnt.model?.materials.map {
    try! CustomMaterial(from: $0, geometryModifier: geometryShader)
} ?? [Material]()

We can see based on the edges here that there are waves happening, but because the lighting is so uniform in there we can't properly see any movement in the middle. The next shader we will apply to this geometry is a surface shader. The logic will be very simple: depending on the height of the point on this mesh, we will set the base colour to a value moving from a nice ocean blue up to white. Here's the shader I've ended up with.

The value of params.geometry().model_position().y ranges from -maxAmp to +maxAmp. So I need to add maxAmp, then divide by maxAmp * 2, in order to get a value ranging from 0 to 1. I raise this result to the power of 8, making it non-linear; this means that only the higher values will ramp up quickly to white. To apply both the geometry and the surface shader to our model, we must apply them at the same time, like so:

// Fetch the default metal library
let mtlLibrary = MTLCreateSystemDefaultDevice()!
    .makeDefaultLibrary()!

// Fetch the "waveMotion" geometry modifier
let geometryShader = CustomMaterial.GeometryModifier(
    named: "waveMotion", in: mtlLibrary
)

// Fetch the "waveSurface" surface shader
let surfaceShader = CustomMaterial.SurfaceShader(
    named: "waveSurface", in: mtlLibrary
)

// Apply both to the material
modelEnt.model?.materials = modelEnt.model?.materials.map {
    try! CustomMaterial(
        from: $0,
        surfaceShader: surfaceShader,
        geometryModifier: geometryShader
    )
} ?? [Material]()

This has been a very brief introduction to all of the things that are possible using RealityKit shaders. I hope you find some inspiration from these examples, and that they help you when implementing shaders in your own applications.
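As a footnote, the height-to-colour mapping described above can be written out as a formula (the linear blend is an assumption standing in for whatever mix the shader actually uses):

t = \left(\frac{y + \mathrm{maxAmp}}{2\,\mathrm{maxAmp}}\right)^{8},
\qquad
\mathrm{colour}(y) = (1 - t)\,\mathrm{oceanBlue} + t\,\mathrm{white},

with y = params.geometry().model_position().y ranging over [-maxAmp, +maxAmp], so t stays in [0, 1] and only the tallest crests approach white.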
For more information follow me here on Medium, Twitter, or GitHub, as I'm frequently posting new content on those platforms, including open-source Swift packages specifically for RealityKit! Also leave some claps if you're feeling excited by WWDC's new features in RealityKit this year!
4
Geneva adopts the highest minimum wage in the world at $25 an hour
Voters in Geneva, Switzerland, have agreed to introduce a minimum wage in the canton that is the equivalent of $25 an hour – believed to be the highest in the world. According to government data, 58% of voters in the canton were in favor of the initiative to set the minimum wage at 23 Swiss francs an hour, which was backed by a coalition of labor unions and aimed at “fighting poverty, favoring social integration, and contributing to the respect of human dignity.” While Switzerland has no national minimum wage law, Geneva is the fourth of 26 cantons to vote on the matter in recent years after Neuchâtel, Jura and Ticino.

“This new minimum wage will apply to about 6% of the canton’s workers as of November 1st,” Geneva State Counselor Mauro Poggia told CNN in a statement. Communauté genevoise d’action syndicale, the umbrella organization of unions in Geneva, described the result as “a historic victory, which will directly benefit 30,000 workers, two-thirds of whom are women.” The decision was also praised by Michel Charrat, president of the Groupement transfrontalier européen, an association of workers commuting between Geneva and nearby France. Charrat told The Guardian that the coronavirus pandemic “has shown that a certain section of the Swiss population cannot live in Geneva,” and argued that the new minimum wage is “the minimum to not fall below the poverty line and find yourself in a very difficult situation.” Charrat didn’t respond to a CNN request for comment.

The Geneva Council of State, the local executive branch, said in an opinion against the measure that the new minimum wage would be “the highest in the world.” The Swiss system of direct democracy calls on voters to exercise their right four times a year, and allows citizens to collect signatures to introduce “popular initiatives” to be enacted. “On two occasions in the past, initiatives to set a mandatory minimum wage in Geneva had been submitted to the population and rejected,” said Poggia, who is in charge of the Department of Security, Labor and Health for the Geneva canton. The two previous votes took place in 2011 and 2014, and in the latest case, it was a national referendum to introduce an hourly minimum wage of 22 Swiss Francs, which found 76% of voters were opposed. RELATED: Swiss voters reject $25 minimum wage

“On 27 September, a new vote on this subject was finally accepted, for a salary of 23 Swiss Francs per hour, or slightly more than 4,000 Swiss Francs per month for an activity of 41 hours per week,” Poggia added. That’s roughly $4,347 per month. While a $25 per hour minimum wage might look staggering from the perspective of the United States, where the federal minimum wage is $7.25 an hour, context is key. Geneva is the 10th most expensive city in the world, according to The Economist Intelligence Unit’s 2020 Worldwide Cost of Living Survey. The roughly 4,000 Swiss francs workers will now earn puts them slightly above the poverty line of 3,968 Swiss francs for a household of two adults and two children younger than 14, as estimated by the Swiss Federal Statistical Office in 2018.

Switzerland is among the wealthiest nations in the world, but it wasn’t shielded from the damaging impact of the coronavirus pandemic on its economy. Overall, the Swiss government’s economic experts group expects the adjusted Swiss GDP to fall by 6.2% in 2020, and average unemployment to be around 3.8%, the deepest economic slump since 1975.
Michael Grampp, Deloitte’s chief economist in Switzerland, said he believed the coronavirus pandemic had an impact in determining how many voters were in favor of passing the minimum wage initiative. Low income workers in the service sector were the most affected by the lockdown measures put in place in Switzerland. “I think many people realized how many people are working in these sectors. It’s not like everyone here is working for a bank or a chocolate factory. We also have a broad service sector that was hit hard due to the lockdown,” Grampp told CNN. “It definitely helped push the vote towards almost 60%,” he added. Grampp believes more cantons will enact minimum wage legislation. But Poggia said he doesn’t believe the pandemic had a significant impact on the vote. “Compared to other countries, given the strong social security coverage in Switzerland, the economic effects of Covid are currently being contained, even though job losses are already occurring in the sectors that have been directly affected, such as tourism, hotels and restaurants,” he said. Those job losses are forcing people to seek help. Mile-long lines at free food distributions in Geneva made headlines worldwide, and they continue to take place, according to Charlemagne Hernandez, the co-founder of Caravane De Solidarité, an activist group in Geneva that has kickstarted the distributions in the city, mostly through donations. Hernandez told CNN the group he now works for, the non-profit Colis du Coeur, helped an estimated 6,000 to 9,000 people each week during the summer, distributing bags of fresh produce and dry goods sourced through donations, the official Geneva food bank, and various charitable groups. Colis du Coeur is continuing food distribution through the winter. Hernandez said he believes the adoption of the minimum wage initiative in Geneva was “necessary,” as unemployment represents an existential threat for so many low income workers in the city. “It will boil down to not having enough to eat,” he said. Geneva is known as the humanitarian capital of the world because of the presence of so many international organizations and UN offices focusing on humanitarian affairs. Hernandez said solidarity in the city “is much stronger these days than usual,” as people respond to calls for donations in great numbers, helping the food distributions to continue. To those who are skeptical about poverty being an issue in a wealthy country such as Switzerland, Hernandez invites people not to judge. “I come from a slum in Manila originally, so it’s true, it’s not the same kind of poverty, but if you’re hungry, you’re hungry. That’s a benchmark you cannot deny,” Hernandez said. Correction: This story has been updated to reflect Hernandez's affiliation with Colis du Coeur, and to clarify the role of Colis du Coeur and other charitable organizations involved in food distributions in Geneva.
6
Is Aging a Disease?
We’re living longer, but not necessarily better. As the population over 65 in the United States is projected to double by 2060 — with one in five residents of retirement age — so will the number of Americans needing long-term care services. A new study suggests targeting aging itself — rather than individual diseases associated with it — could be the secret to combatting many health care costs traditionally associated with getting older.

“People don’t think about aging as something that is treatable or should be treated like a disease,” said David Sinclair, co-director of the Paul F. Glenn Center for Biology of Aging Research at Harvard Medical School and one of the authors of the study. “But it is a disease. It’s just a very common one.”

As we get older, there are certain complications we’re more likely to develop as a result of senescence — the process of deterioration with age — itself. Aging — biological changes over time that lead to decay and eventually death — increases the risk of chronic ailments like Type 2 diabetes, heart disease, cancer and Alzheimer’s disease. As average life expectancy increased throughout the 20th century — and is slated to rise another six years by 2060 — the impact of these age-associated diseases has become more pronounced.

The traditional medical approach has been to treat diseases as they appear. A rising field known as “geroscience” instead asks the question: What if we could extend the number of years we’re healthy, rather than simply expand our number of years?

“Instead of practicing health care in this country, we’re practicing sick care — or what I call ‘whack-a-mole medicine,’” said Sinclair, a biologist who focuses on epigenetics, which studies how behaviors and environments impact a person’s gene expression. “Medical research is moving towards not just putting Band-Aids on the symptom of disease, but getting at the major root cause of all major diseases — which is aging itself.”

By focusing on health interventions that aim to delay the frailty and disability that comes with age, experts in the field attempt to slow — and in the future, even reverse — the biological realities of aging. The new research, published on July 5 in Nature Aging, looked at the potential economic impact of such an approach. The study compared current disease-based interventions to a test scenario using Metformin — a diabetes drug that appears to protect against age-related diseases, but is currently not approved for over-the-counter use — as a hypothetical aging intervention that would increase the “healthspan” as well as the lifespan.

Researchers used the “Value of Statistical Life” model, a methodology popular among government agencies and economists, to place a monetary value on improvements in health and aging. The results were hard to overlook. Researchers found that increasing “healthy” life expectancy by just 2.6 years could result in an $83 trillion value to the economy.

“It would reduce the incidents of cancer, dementia, cardiovascular disease and frailty,” Sinclair said.
“In total, we’re spending 17 percent of everything we generate on health care – and largely that’s spent in the last year of life.” Currently, a person who turns 65 in the next few years will spend anywhere from $142,000 to $176,000 on average on long-term care during their lifetime, according to a recent report commissioned by the U.S. Department of Health & Human Services. Fifteen percent of Americans over 65 will live with at least two disabilities by 2065, the same report found, further increasing the need for assistance in daily living. Most of this will be paid out-of-pocket by family members or seniors themselves — Medicare doesn’t cover long-term care, and Medicaid only kicks in when a person becomes impoverished. Interventions designed to create slower, healthier aging could have large benefits because there’s a feedback loop, authors of the new study argue: The more successful a society is in ensuring its residents can stay healthy as they grow old, the greater the demand for — and economic payoff from — subsequent age-related innovations. “People have an interest in spending whatever they have to spend a few more years with their family,” Sinclair said. “And that will only increase the longer we live.” Sinclair has become a polarizing figure in the scientific community for his tendency to hype his own work publicly and make grand promises about the potential rosy future such research can bring about. The founder of eight biotech companies and a longtime champion of resveratrol, a controversial red wine drug at the center of an ongoing debate over its possible anti-aging effects, he’s been called as good a salesman as a scientist. At the same time, his work continues to be published in world-renowned academic journals, and research on longevity is considered an increasingly legitimate field — largely thanks to his pioneering contributions. Florida is no stranger to the search for the fountain of youth — it led to Ponce de Leon’s exploration of the state in 1513, after all. And while the concept of “curing aging” might seem lofty, recent advances suggest the power to curtail some of its negative effects on our biology may be within reach. Researchers at the Mayo Clinic have shown that a certain drug cocktail can remove senescent cells in older mice, increasing their lifespan and delaying a cluster of age-related diseases by over a month. Early studies on humans have shown similar tentative promise. Metformin is also about to undergo a series of clinical trials to study its efficacy as an anti-aging treatment in humans. In December of last year, Sinclair’s lab at Harvard published a study in which they partially restored vision in aging mice by reprogramming their gene expression. Hailed as a possible way to reverse one of the more painful side effects of aging — vision loss — this more radical work is moving ahead: the researchers said they will begin similar trials on primates this fall, and on humans the following year. For some, these developments hint at larger aspirational goals: Scientists who study biology have yet to discover proof that death is inevitable. How long can we live, should age-related advances continue? And ethically, should ‘curing’ aging really be the aim? Isn’t growing older and dying a normal part of life? When asked if a limit exists, Sinclair was coy. “I don’t know,” he said. “But what I do know is that young people don’t get sick as often. If we could literally stay as fit as a 30-year-old forever, what would go wrong?
“We said that cancer and heart disease were ‘natural’ 100 years ago,” Sinclair added. “Now, would you accept if a doctor said you had a lump in your throat and dismissed it as natural? So why do we accept it for aging?”
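The “Value of Statistical Life” figure quoted above is easier to follow with a quick back-of-envelope calculation. The sketch below is purely illustrative and is not the model from the Nature Aging paper: the population and per-healthy-life-year values are hypothetical placeholders, chosen only to show how a modest per-person gain multiplies into an economy-scale number.

```python
# Illustrative back-of-envelope VSL-style calculation (hypothetical inputs,
# not the actual model or numbers used in the Nature Aging study).

def aggregate_value_of_healthy_years(population: int,
                                     value_per_healthy_life_year: float,
                                     added_healthy_years: float) -> float:
    """Total economic value if everyone gains `added_healthy_years` of healthy
    life, with each year valued at `value_per_healthy_life_year`."""
    return population * value_per_healthy_life_year * added_healthy_years

if __name__ == "__main__":
    US_POPULATION = 330_000_000        # rough US population (assumption)
    VALUE_PER_HEALTHY_YEAR = 100_000   # hypothetical dollars per healthy life-year
    ADDED_YEARS = 2.6                  # the gain discussed in the article

    total = aggregate_value_of_healthy_years(US_POPULATION,
                                             VALUE_PER_HEALTHY_YEAR,
                                             ADDED_YEARS)
    print(f"Illustrative aggregate value: ${total / 1e12:.1f} trillion")
    # With these placeholder inputs the result is roughly $86 trillion.
    # The point is not the exact figure but how per-person values scale to trillions.
```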
134
India proposes law to ban cryptocurrencies, create official digital currency
January 30, 2021 05:57 pm | Updated 05:59 pm IST - MUMBAI The Centre plans to introduce a law to ban private cryptocurrencies such as bitcoin and put in place a framework for an official digital currency to be issued by the central bank, according to a legislative agenda listed by the government. The law will "create a facilitative framework for creation of the official digital currency to be issued by the Reserve Bank of India [RBI]," said the agenda, published on the Lower House website on January 29. The legislation, listed for debate in the current parliamentary session, seeks "to prohibit all private cryptocurrencies in India, however, it allows for certain exceptions to promote the underlying technology of cryptocurrency and its uses," the agenda said. In mid-2019, a government panel recommended banning all private cryptocurrencies, with a jail term of up to 10 years and heavy fines for anyone dealing in digital currencies. The panel has, however, asked the government to consider the launch of an official government-backed digital currency in India, to function like bank notes, through the Reserve Bank of India. The RBI had in April 2018 ordered financial institutions to break off all ties with individuals or businesses dealing in virtual currency such as bitcoin within three months. However, in March 2020, the Supreme Court allowed banks to handle cryptocurrency transactions from exchanges and traders, overturning a central bank ban that had dealt the thriving industry a major blow. Governments around the world have been looking into ways to regulate cryptocurrencies, but no major economy has taken the drastic step of placing a blanket ban on owning them, even though concerns have been raised about the misuse of consumer data and the possible impact on the financial system.
3
List of cultural heritage institutions with open access collections
COUNTRY INSTITUTION INSTITUTION TYPE INSTITUTION WEBSITE INSTITUTION WIKIDATA ADMISSION FEE IN DOMESTIC CURRENCY CURRENCY ISO CODE CONVERTED ADMISSION FEE € OPEN ACCESS SCOPE DATE OF 1ST OPEN ACCESS INSTANCE LICENCE/RIGHTS STATEMENT FOR DIGITAL SURROGATES OF PUBLIC DOMAIN OBJECTS CLOSEST EQUIVALENT LICENCE/STATEMENT (IF NON-STANDARD) RIGHTS POLICY OR TERMS OF USE LICENCE/RIGHTS STATEMENT FOR METADATA (IF STATED) GITHUB API(S) DATA SOURCE(S) DATA SOURCE(S) Argentina Academia Argentina de Letras Library http://aal.edu.ar https://www.wikidata.org/wiki/Q2057876 ARS Some eligible data Public Domain Mark https://commons.wikimedia.org/wiki/Category:Files_from_Academia_Argentina_de_Letras Argentina Archivo General de la Nación Argentina Archive http://www.agnargentina.gob.ar https://www.wikidata.org/wiki/Q2860529 ARS Some eligible data Public Domain Mark https://commons.wikimedia.org/wiki/Category:Files_provided_by_Archivo_General_de_la_Naci%C3%B3n_Argentina Argentina Archivo Histórico de la Provincia de Buenos Aires Archive https://www.amigoslevene.com https://www.wikidata.org/wiki/Q18581036 ARS Some eligible data Public Domain Mark https://commons.wikimedia.org/wiki/Category:Files_from_the_Archivo_Hist%C3%B3rico_de_la_Provincia_de_Buenos_Aires Argentina Archivo Histórico de San Luis Archive http://www.archivohistorico.sanluis.gov.ar https://www.wikidata.org/wiki/Q56746875 ARS All eligible data 'Free and unrestricted access' CC0 http://www.archivohistorico.sanluis.gov.ar/AHAsp/Paginas/Foto.asp?Tipo=1&CarpetaVId=10&Page=1&FotoId=15043 Argentina Biblioteca Feminaria Library http://www.tierra-violeta.com.ar https://www.wikidata.org/wiki/Q18592746 ARS Some eligible data Public Domain Mark https://commons.wikimedia.org/wiki/Category:Files_from_Biblioteca_Feminaria Argentina Biblioteca Nacional de Maestros Library http://www.bnm.me.gov.ar https://www.wikidata.org/wiki/Q5727816 ARS All eligible data Public Domain Mark http://www.bnm.me.gov.ar/catalogos Argentina Museo Casa Rosada Museum https://www.casarosada.gob.ar/la-casa-rosada/museo https://www.wikidata.org/wiki/Q6034386 EUR Some eligible data CC BY-SA https://commons.wikimedia.org/wiki/Category:Files_provided_by_the_Museo_del_Bicentenario Argentina Museo Nacional de Bellas Artes Museum https://www.bellasartes.gob.ar https://www.wikidata.org/wiki/Q1848918 ARS Some eligible data Public Domain Mark https://commons.wikimedia.org/wiki/Category:Files_provided_by_the_Museo_Nacional_de_Bellas_Artes Aruba Biblioteca Nacional Aruba Library http://www.bibliotecanacional.aw https://www.wikidata.org/wiki/Q631808 AWG Some eligible data Public Domain Mark https://commons.wikimedia.org/wiki/Category:Media_contributed_by_Biblioteca_Nacional_Aruba Australia ABC Open Archives Archive http://www.abc.net.au/archives/openarchives.htm https://www.wikidata.org/wiki/Q781365 AUD Some eligible data CC BY-SA http://www.abc.net.au/archives/openarchives.htm https://commons.wikimedia.org/wiki/Category:Files_from_the_Australian_Broadcasting_Corporation Australia Australian National Maritime Museum Museum http://www.anmm.gov.au https://www.wikidata.org/wiki/Q844329 AUD Some eligible data No known copyright restrictions No Known Copyright https://www.flickr.com/commons/usage/ https://www.flickr.com/services/api/ https://www.flickr.com/photos/anmm_thecommons Australia Australian National University University http://www.anu.edu.au https://www.wikidata.org/wiki/Q127990 AUD Some
eligible data CC BY https://anulib.anu.edu.au/research-learn/copyright/copyright-and-digitisation http://troveconsole.herokuapp.com https://trove.nla.gov.au/search/category/images?keyword=nuc%3A%22ANU%3AIR%22&l-rights=Free & https://trove.nla.gov.au/search/category/images?keyword=nuc%3A%22ANU%3AHITL%22&l-rights=Free Australia Australian War Memorial Museum http://www.awm.gov.au https://www.wikidata.org/wiki/Q782783 AUD Some eligible data No known copyright restrictions No Known Copyright https://www.flickr.com/commons/usage/ https://www.flickr.com/services/api/ https://www.flickr.com/photos/australian-war-memorial Australia Blue Mountains Library (Local Studies) Library https://library.bmcc.nsw.gov.au https://www.wikidata.org/wiki/Q28219999 AUD Some eligible data No known copyright restrictions No Known Copyright https://www.flickr.com/commons/usage http://troveconsole.herokuapp.com https://www.flickr.com/photos/blue_mountains_library_-_local_studies Australia Eyre Pensinsula Railway Preservation Society Other http://www.eprps.org.au https://www.wikidata.org/wiki/Q85294488 AUD Some eligible data No known copyright http://troveconsole.herokuapp.com https://trove.nla.gov.au/search/category/images?keyword=nuc%3ASEPR&l-rights=Free Australia Libraries Tasmania Library https://libraries.tas.gov.au https://www.wikidata.org/wiki/Q6458744 AUD Some eligible data No known copyright http://www.tas.gov.au/stds/codi.htm http://troveconsole.herokuapp.com https://trove.nla.gov.au/search/category/images?keyword=nuc%3ATSL&l-rights=Free Australia Museum of Applied Arts and Sciences Museum http://maas.museum/powerhouse-museum https://www.wikidata.org/wiki/Q2107035 AUD Some eligible data No known copyright restrictions No Known Copyright https://maas.museum/copyright https://github.com/museumofappliedartsandsciences https://www.flickr.com/services/api/ https://www.flickr.com/photos/powerhouse_museum Australia Museums Victoria Museum https://museumsvictoria.com.au https://www.wikidata.org/wiki/Q500890 $0.00 AUD €0.00 All eligible data 2015 Public Domain Mark https://collections.museumvictoria.com.au/items/253032 CC0 https://github.com/museumsvictoria https://collections.museumvictoria.com.au Australia National Library of Australia Library https://www.nla.gov.au https://www.wikidata.org/wiki/Q623578 AUD Some eligible data Public Domain Mark https://www.nla.gov.au/about-this-site/copyright https://github.com/nla https://www.nla.gov.au/collections Australia Northern Territory Library Library https://ntl.nt.gov.au https://www.wikidata.org/wiki/Q7059036 AUD Some eligible data CC BY https://nt.gov.au/page/copyright-disclaimer-and-privacy http://troveconsole.herokuapp.com https://trove.nla.gov.au/search/category/images?keyword=nuc%3AXNLS&l-rights=Free Australia Queensland State Archives Archive https://www.qld.gov.au/recreation/arts/heritage/archives https://www.wikidata.org/wiki/Q7271010 AUD All eligible data Public Domain Mark https://www.flickr.com/photos/queenslandstatearchives Australia Queensland University of Technology University https://www.qut.edu.au https://www.wikidata.org/wiki/Q1144750 AUD Some eligible data CC0 https://www.qut.edu.au/additional/copyright http://troveconsole.herokuapp.com https://trove.nla.gov.au/search/category/images?keyword=nuc%3A%22QUT%3ADC%22&l-rights=Free Australia Royal Australian Historical Society Other http://www.rahs.org.au https://www.wikidata.org/wiki/Q7373750 AUD Some eligible data No known copyright restrictions No Known Copyright 
http://www.rahs.org.au/about-rahs/rahs-copyright-statement/ https://www.flickr.com/services/api/ https://www.flickr.com/photos/royalaustralianhistoricalsociety Australia State Library of New South Wales Library http://www.sl.nsw.gov.au https://www.wikidata.org/wiki/Q6133813 AUD Some eligible data No known copyright restrictions No Known Copyright https://www.flickr.com/commons/usage/ https://www.flickr.com/services/api/ https://www.flickr.com/photos/statelibraryofnsw Australia State Library of Queensland Library http://www.slq.qld.gov.au https://www.wikidata.org/wiki/Q2050653 AUD Some eligible data Public Domain Mark http://www.slq.qld.gov.au/home/copyright http://onesearch.slq.qld.gov.au Australia State Library Victoria Library http://www.slv.vic.gov.au https://www.wikidata.org/wiki/Q1200052 AUD Some eligible data CC BY http://www.slv.vic.gov.au/contribute-create/open-data https://github.com/statelibraryvic/opendata/find/master https://www.data.vic.gov.au Australia Swinburne University of Technology Library Library https://www.swinburne.edu.au/library https://www.wikidata.org/wiki/Q73786272 AUD Some eligible data Public Domain Mark https://www.swinburne.edu.au/copyright-disclaimer http://troveconsole.herokuapp.com https://trove.nla.gov.au/search/category/images?keyword=nuc%3A%22VSWT%22&l-rights=Free Australia Tasmanian Archive and Heritage Office Archive https://www.libraries.tas.gov.au https://www.wikidata.org/wiki/Q61095292 AUD Some eligible data No known copyright restrictions No Known Copyright http://www.linc.tas.gov.au/tasmaniasheritage/browse/pictorial/flickr-commons https://www.flickr.com/services/api/ https://www.flickr.com/photos/107895189@N03 Australia Trove Digital Library Other https://trove.nla.gov.au https://www.wikidata.org/wiki/Q18609226 AUD Some eligible data No known copyright https://trove.nla.gov.au/about/policies/copyright http://troveconsole.herokuapp.com https://trove.nla.gov.au/search/category/images?keyword=nuc%3A%22ANL%3ADL%22&l-rights=Free Australia University of Melbourne (Melbourne History Workshop) University http://www.unimelb.edu.au https://www.wikidata.org/wiki/Q319078 AUD Some eligible data CC BY https://unimelb.edu.au/disclaimer http://troveconsole.herokuapp.com https://trove.nla.gov.au/search/category/images?keyword=nuc%3A%22VU%3AMHW%22 Australia University of Tasmania (Library Open Repository) University http://www.utas.edu.au https://www.wikidata.org/wiki/Q962011 AUD Some eligible data CC BY https://www.utas.edu.au/copyright-statement http://troveconsole.herokuapp.com https://trove.nla.gov.au/search/category/images?keyword=nuc%3A%22TU%3AOR%22&l-rights=Free Austria Akademie der bildenden Künste Wien University http://www.akbild.ac.at https://www.wikidata.org/wiki/Q414219 EUR Some eligible data Public Domain Mark https://www.akbild.ac.at/Portal/bibliothek/repositorium/arepository_UserkonzeptundNutzungsbedingungen.pdf https://repository.akbild.ac.at Austria Belvedere Museum https://www.belvedere.at https://www.wikidata.org/wiki/Q303139 €13.00 EUR €13.00 All eligible data CC BY-SA https://www.belvedere.at/en/opencontent https://digital.belvedere.at/objects/images?filter=openContentProgram%3Atrue Austria Herbarium, Karl-Franzens-Universität Graz University https://botanik.uni-graz.at https://www.wikidata.org/wiki/Q59340662 EUR Some eligible data CC BY-SA https://www.uni-graz.at/en/imprint/ CC0 https://pro.europeana.eu/page/api-rest-console 
https://www.europeana.eu/portal/search?f%5BDATA_PROVIDER%5D%5B%5D=University%20of%20Graz%2C%20Institute%20of%20Plant%20Sciences%20-%20Herbarium%20GZU&view=grid Austria Institut für Botanik, Universität Wien University http://herbarium.univie.ac.at https://www.wikidata.org/wiki/Q165980 EUR Some eligible data CC BY-SA http://herbarium.univie.ac.at/ CC0 https://pro.europeana.eu/page/api-rest-console https://www.europeana.eu/portal/search?f%5BDATA_PROVIDER%5D%5B%5D=Natural+History+Museum%2C+Vienna+-+Herbarium+W&per_page=72&q=&view=grid Austria Oberösterreichische Landesbibliothek Library https://www.landesbibliothek.at https://www.wikidata.org/wiki/Q1439945 EUR Some eligible data CC BY-SA http://digi.landesbibliothek.at/viewer/ CC0 https://pro.europeana.eu/page/api-rest-console https://www.europeana.eu/portal/search?f%5BDATA_PROVIDER%5D%5B%5D=Ober%C3%B6sterreichische+Landesbibliothek&per_page=72&q=Ober%C3%B6sterreichische+Landesbibliothek&view=grid&f%5BREUSABILITY%5D%5B%5D=open Austria Österreichische Nationalbibliothek Library https://www.onb.ac.at https://www.wikidata.org/wiki/Q304037 EUR Some eligible data Public Domain Mark http://www.bildarchivaustria.at/Pages/Informationen/AGBs.aspx CC0 https://pro.europeana.eu/page/api-rest-console https://www.europeana.eu/portal/search?f%5BDATA_PROVIDER%5D%5B%5D=%C3%96sterreichische+Nationalbibliothek+-+Austrian+National+Library&f%5BREUSABILITY%5D%5B%5D=open&per_page=72&q=%C3%96sterreichische+Nationalbibliothek&view=grid&f%5BREUSABILITY%5D%5B%5D=restricted Austria Universität für Musik und darstellende Kunst Graz University https://www.kug.ac.at https://www.wikidata.org/wiki/Q875147 EUR Some eligible data Public Domain Mark https://www.kug.ac.at/en/library/portal/open-access.html CC0 https://pro.europeana.eu/page/api-rest-console https://www.europeana.eu/portal/search?q=DATA_PROVIDER%3A%22Bibliothek+der+Universit%C3%A4t+f%C3%BCr+Musik+und+darstellende+Kunst+Graz%22&f%5BREUSABILITY%5D%5B%5D=open&view=grid Austria Vorarlberger Landesbibliothek Library http://www.vorarlberg.gv.at https://www.wikidata.org/wiki/Q1339543 EUR Some eligible data CC BY https://pid.volare.vorarlberg.at/Sammlung/Ansichtskarten.aspx CC0 https://pro.europeana.eu/page/api-rest-console https://www.europeana.eu/portal/search?f%5BREUSABILITY%5D%5B%5D=restricted&per_page=72&q=Vorarlberg+State+Library+&view=grid&f%5BREUSABILITY%5D%5B%5D=open Austria Wienbibliothek im Rathaus Library https://www.wienbibliothek.at https://www.wikidata.org/wiki/Q2568831 EUR Some eligible data CC BY-SA https://www.digital.wienbibliothek.at/ CC0 https://pro.europeana.eu/page/api-rest-console https://www.europeana.eu/portal/search?f%5BDATA_PROVIDER%5D%5B%5D=Wienbibliothek+im+Rathaus&per_page=72&q=&view=grid&f%5BREUSABILITY%5D%5B%5D=open Belgium FelixArchief Archive http://www.felixarchief.be https://www.wikidata.org/wiki/Q29016912 EUR Some eligible data CC0 https://felixarchief.antwerpen.be/nieuwspagina/Wikimedia CC0 https://commons.wikimedia.org/wiki/Category:Images_from_FelixArchief Belgium Groeningemuseum Museum https://www.visitbruges.be/nl/groeningemuseum https://www.wikidata.org/wiki/Q1948674 EUR Some eligible data CC0 https://commons.wikimedia.org/wiki/Category:Images_from_the_Groeningemuseum Belgium Horta Museum Museum http://www.hortamuseum.be https://www.wikidata.org/wiki/Q1995367 EUR Some eligible data CC0 https://commons.wikimedia.org/wiki/Category:Images_from_the_Victor_Horta_Museum Belgium Industriemuseum Museum https://www.industriemuseum.be https://www.wikidata.org/wiki/Q2245203 EUR Some 
eligible data Public Domain Mark https://www.industriemuseum.be/nl/disclaimer https://www.industriemuseum.be/nl/collecties-landing Belgium Jakob Smitsmuseum Museum https://www.jakobsmits.be https://www.wikidata.org/wiki/Q2595341 EUR Some eligible data CC0 https://commons.wikimedia.org/wiki/Category:Images_from_Jakob_Smitsmuseum Belgium King Baudouin Foundation Other https://www.kbs-frb.be https://www.wikidata.org/wiki/Q2780100 EUR Some eligible data CC BY-SA https://commons.wikimedia.org/wiki/Category:Images_from_the_King_Baudouin_Foundation Belgium Koninklijke Bibliotheek van België Library https://www.kbr.be https://www.wikidata.org/wiki/Q383931 EUR Some eligible data CC0 https://www.kbr.be/en/legal-information#Intellectual%20property https://commons.wikimedia.org/wiki/Category:Images_from_the_Royal_Library_of_Belgium Belgium Letterenhuis Museum https://www.letterenhuis.be https://www.wikidata.org/wiki/Q3813695 EUR Some eligible data Public Domain Mark https://www.letterenhuis.be/nl/pagina/reproducties https://commons.wikimedia.org/wiki/Category:Images_from_AMVC_Letterenhuis Belgium Liberas (Het Liberaal Archief) Archive http://www.liberas.eu https://www.wikidata.org/wiki/Q2745365 EUR Some eligible data No known copyright restrictions No Known Copyright http://www.liberaalarchief.be/LiberaalArchief-NoKnownCopyrightRestrictions.pdf https://www.flickr.com/services/api/ https://www.flickr.com/photos/142575440@N02/ Belgium MoMu - Fashion Museum Antwerp Museum https://www.momu.be https://www.wikidata.org/wiki/Q2799943 EUR Some eligible data CC BY-SA https://commons.wikimedia.org/wiki/Category:Images_from_MoMu_-_Fashion_Museum_Province_of_Antwerp Belgium Museum Plantin-Moretus Museum https://www.museumplantinmoretus.be https://www.wikidata.org/wiki/Q595802 €8.00 EUR €8.00 All eligible data CC BY http://search.museumplantinmoretus.be/static/opendata CC0 https://pro.europeana.eu/page/api-rest-console http://search.museumplantinmoretus.be/static/opendata Belgium Plantentuin Meise Other https://www.plantentuinmeise.be https://www.wikidata.org/wiki/Q3052500 EUR Some eligible data CC BY-SA https://www.plantentuinmeise.be/en/informatie/Disclaimer https://pro.europeana.eu/page/api-rest-console https://www.europeana.eu/portal/search?f%5BDATA_PROVIDER%5D%5B%5D=Meise+Botanic+Garden&f%5BREUSABILITY%5D%5B%5D=open Belgium Prentenkabinet, Universiteit Antwerpen University https://www.uantwerpen.be https://www.wikidata.org/wiki/Q47885560 EUR Some eligible data CC0 https://www.uantwerpen.be/en/about-uantwerp/mission-vision/terms-of-use/ https://commons.wikimedia.org/wiki/Category:Print_Room_of_the_University_of_Antwerp Belgium Royal Museum of Fine Arts Antwerp Museum https://www.kmska.be https://www.wikidata.org/wiki/Q1471477 EUR Some eligible data CC0 https://commons.wikimedia.org/wiki/Category:Images_from_the_Royal_Museum_of_Fine_Arts_Antwerp Belgium Universiteitsbibliotheek Gent Library https://lib.ugent.be https://www.wikidata.org/wiki/Q611001 EUR All eligible data CC BY-SA https://lib.ugent.be/en/info/open CC0 https://pro.europeana.eu/page/api-rest-console https://lib.ugent.be/info/exports Brazil Arquivo Nacional Archive http://www.arquivonacional.gov.br https://www.wikidata.org/wiki/Q2860546 BRL Some eligible data Public Domain Mark http://www.arquivonacional.gov.br/br/estudos-de-usuario https://pt.wikipedia.org/wiki/Wikip%C3%A9dia:GLAM/Arquivo_Nacional Brazil Biblioteca Brasiliana Guita e José Mindlin Library https://www.bbm.usp.br https://www.wikidata.org/wiki/Q18500412 BRL All eligible data Public 
Domain Mark https://www.bbm.usp.br/normas https://digital.bbm.usp.br/handle/bbm/1 Brazil Museu da Imigração, São Paulo Museum http://museudaimigracao.org.br https://www.wikidata.org/wiki/Q3854482 BRL Some eligible data CC BY-SA https://commons.wikimedia.org/wiki/Category:Media_contributed_by_the_Immigration_Museum_of_the_State_of_S%C3%A3o_Paulo?uselang=pt Brazil Museu de Anatomia Veterinária da Faculdade de Medicina Veterinária e Zootecnia da USP Museum http://mav.fmvz.usp.br https://www.wikidata.org/wiki/Q10333736 BRL Some eligible data CC BY-SA http://mav.fmvz.usp.br/en/ https://commons.wikimedia.org/wiki/Category:Museum_of_Veterinary_Anatomy_(FMVZ_USP) Brazil Museu de Zoologia da Universidade de São Paulo Museum http://www.mz.usp.br https://www.wikidata.org/wiki/Q4193904 R$0.00 BRL €0.00 All eligible data CC BY-SA https://commons.wikimedia.org/wiki/Category:Collections_of_the_Museu_de_Zoologia_da_Universidade_de_S%C3%A3o_Paulo Brazil Museu do Homem do Nordeste Museum http://www.fundaj.gov.br https://www.wikidata.org/wiki/Q10333902 R$4.00 BRL €0.74 All eligible data Public Domain Mark http://villadigital.fundaj.gov.br Brazil Musica Brasilis Museum http://musicabrasilis.org.br https://www.wikidata.org/wiki/Q25427665 BRL Some eligible data CC BY-SA http://musicabrasilis.org.br/partituras Brazil Senado Federal do Brasil Other http://www.senado.leg.br https://www.wikidata.org/wiki/Q2119413 BRL Some eligible data No known copyright restrictions No Known Copyright http://www12.senado.gov.br/noticias/arquivo-de-imagens-no-flickr-commons https://www.flickr.com/services/api/ https://www.flickr.com/photos/senadothecommons Bulgaria NALIS Foundation Library http://www.nalis.bg https://www.wikidata.org/wiki/Q27163637 BGN Some eligible data Public Domain Mark http://www.nalis.bg/ CC0 https://pro.europeana.eu/page/api-rest-console https://www.europeana.eu/portal/search?f%5BDATA_PROVIDER%5D%5B%5D=NALIS+Foundation&per_page=72&q=&view=grid&f%5BREUSABILITY%5D%5B%5D=open Bulgaria Държавна агенция „Архиви (Bulgarian Archives State Agency) Archive https://www.archives.government.bg https://www.wikidata.org/wiki/Q12279299 BGN Some eligible data Public Domain Mark https://commons.wikimedia.org/wiki/Category:Images_from_the_Bulgarian_Archives_State_Agency Bulgaria Национална библиотека „Св. св. Кирил и Методий“ (SS. 
Cyril and Methodius National Library) Library http://www.nationallibrary.bg https://www.wikidata.org/wiki/Q631641 BGN All eligible data Public Domain Mark http://www.nationallibrary.bg/wp/?page_id=1444&lang=en CC0 https://pro.europeana.eu/page/api-rest-console https://www.europeana.eu/portal/search?f%5BCOUNTRY%5D%5B%5D=Bulgaria&f%5BREUSABILITY%5D%5B%5D=open&per_page=72&q=&view=grid&f%5BREUSABILITY%5D%5B%5D=restricted Bulgaria Регионална библиотека "Пенчо Славейков" (Pencho Slaveykov Regional Library) Library http://www.libvar.bg https://www.wikidata.org/wiki/Q74907546 BGN All eligible data CC0 http://catalog.rodina-bg.org/absw/abs.htm CC0 https://pro.europeana.eu/page/api-rest-console https://www.europeana.eu/portal/search?f%5BCOUNTRY%5D%5B%5D=Bulgaria&f%5BREUSABILITY%5D%5B%5D=open&per_page=72&q=&view=grid&f%5BREUSABILITY%5D%5B%5D=restricted Bulgaria Регионална библиотека “Любен Каравелов” (Luben Karavelov Regional Library) Library https://www.libruse.bg https://www.wikidata.org/wiki/Q52379897 BGN Some eligible data CC0 https://www.libruse.bg/ CC0 https://pro.europeana.eu/page/api-rest-console https://www.europeana.eu/portal/search?f%5BCOUNTRY%5D%5B%5D=Bulgaria&f%5BREUSABILITY%5D%5B%5D=open&per_page=72&q=&view=grid&f%5BREUSABILITY%5D%5B%5D=restricted Bulgaria Централна библиотека на БАН (Central Library of the Bulgarian Academy of Sciences) Library http://cl.bas.bg https://www.wikidata.org/wiki/Q12298877 BGN Some eligible data Public Domain Mark http://cl.bas.bg/index_html?set_language=en CC0 https://pro.europeana.eu/page/api-rest-console https://www.europeana.eu/portal/search?f%5BCOUNTRY%5D%5B%5D=Bulgaria&f%5BREUSABILITY%5D%5B%5D=open&per_page=72&q=&view=grid&f%5BREUSABILITY%5D%5B%5D=restricted Cameroon Doual'art Gallery http://doualart.org https://www.wikidata.org/wiki/Q1051789 XAF Some eligible data CC BY-SA https://commons.wikimedia.org/wiki/Category:Doual%27art Canada Archives of the Law Society of Ontario Archive https://lso.ca https://www.wikidata.org/wiki/Q6503109 CAD Some eligible data No known copyright restrictions No Known Copyright https://lso.ca/about-lso/osgoode-hall-and-ontario-legal-heritage/collections-and-research https://www.flickr.com/services/api/ https://www.flickr.com/photos/lsuc_archives/ Canada Bibliothèque et Archives nationales du Québec (BAnQ) Archive https://www.banq.qc.ca/accueil https://www.wikidata.org/wiki/Q39628 CAD All eligible data Public Domain Mark https://www.banq.qc.ca/outils/declarations_droits_licences/ http://numerique.banq.qc.ca https://commons.wikimedia.org/wiki/Category:Files_uploaded_by_Bibliothèque_et_Archives_nationales_du_Québec Canada Canada Agriculture and Food Museum Museum https://ingeniumcanada.org/cafm https://www.wikidata.org/wiki/Q4212098 $12.00 CAD €8.30 All eligible data CC0 https://ingeniumcanada.org/archives/terms-and-conditions Open Government Licence - Canada 2.0 http://data.techno-science.ca/en Canada Canada Aviation and Space Museum Museum https://ingeniumcanada.org/casm https://www.wikidata.org/wiki/Q1031932 $13.00 CAD €8.99 All eligible data CC0 https://ingeniumcanada.org/archives/terms-and-conditions Open Government Licence - Canada 2.0 http://data.techno-science.ca/en Canada Canada Science and Technology Museum Museum https://ingeniumcanada.org/cstm https://www.wikidata.org/wiki/Q1163464 $13.00 CAD €8.99 All eligible data CC0 https://ingeniumcanada.org/archives/terms-and-conditions Open Government Licence - Canada 2.0 http://data.techno-science.ca/en Canada Cloyne Pioneer Museum and Archives Museum 
http://pioneer.mazinaw.on.ca https://www.wikidata.org/wiki/Q61058427 CAD Some eligible data No known copyright restrictions No Known Copyright http://pioneer.mazinaw.on.ca/flickr_statement.php https://www.flickr.com/services/api/ https://www.flickr.com/photos/cdhs Canada Congregation of Sisters of St. Joseph in Canada Other http://www.csjcanada.org https://www.wikidata.org/wiki/Q64595220 CAD Some eligible data No known copyright restrictions No Known Copyright http://www.csjcanada.org/csj-archives https://www.flickr.com/services/api/ https://www.flickr.com/photos/csj_canada_archives Canada Deseronto Archives Archive https://deserontoarchives.ca https://www.wikidata.org/wiki/Q64604954 CAD Some eligible data No known copyright restrictions No Known Copyright https://www.flickr.com/commons/usage/ https://www.flickr.com/services/api/ https://www.flickr.com/photos/deserontoarchives Canada Dundas Museum & Archives Museum http://www.dundasmuseum.ca https://www.wikidata.org/wiki/Q64604994 CAD Some eligible data No known copyright restrictions No Known Copyright https://www.flickr.com/commons/usage/ https://www.flickr.com/services/api/ https://www.flickr.com/photos/dundasmuseum Canada Galt Museum & Archives Museum https://www.galtmuseum.com https://www.wikidata.org/wiki/Q5519354 CAD Some eligible data No known copyright restrictions No Known Copyright https://www.flickr.com/commons/usage/ https://www.flickr.com/services/api/ https://www.flickr.com/photos/galt-museum Canada Halifax Municipal Archives Archive https://www.halifax.ca/about-halifax/municipal-archives https://www.wikidata.org/wiki/Q61064490 CAD Some eligible data No known copyright restrictions No Known Copyright https://www.halifax.ca/about-halifax/municipal-archives/services/copying-services https://www.flickr.com/services/api/ https://www.flickr.com/photos/halifaxarchives Canada Hamilton Public Library Library http://www.hpl.ca/local-history https://www.wikidata.org/wiki/Q5645111 CAD Some eligible data No known copyright restrictions No Known Copyright http://www.hpl.ca/articles/use-images https://www.flickr.com/services/api/ https://www.flickr.com/photos/hpllocalhistory Canada Huron County Museum & Historic Gaol Museum http://huroncounty.ca/museum https://www.wikidata.org/wiki/Q66305614 CAD Some eligible data No known copyright restrictions No Known Copyright http://huroncounty.ca/museum/online_collections.php https://www.flickr.com/services/api/ https://www.flickr.com/photos/huroncountymuseum Canada Library and Archives Canada Library http://www.bac-lac.gc.ca https://www.wikidata.org/wiki/Q913250 CAD Some eligible data CC BY https://www.bac-lac.gc.ca/fra/a-notre-sujet/propos-collection/Pages/numerisation-bac.aspx https://commons.wikimedia.org/wiki/Category:Images_from_Library_and_Archives_Canada Canada Musée McCord Museum http://www.musee-mccord.qc.ca https://www.wikidata.org/wiki/Q1128578 CAD Some eligible data No known copyright restrictions No Known Copyright https://www.flickr.com/commons/usage/ https://www.flickr.com/services/api/ https://www.flickr.com/photos/museemccordmuseum Canada Nova Scotia Archives Archive https://archives.novascotia.ca https://www.wikidata.org/wiki/Q7064104 CAD Some eligible data No known copyright restrictions No Known Copyright https://www.flickr.com/commons/usage/ https://www.flickr.com/services/api/ https://www.flickr.com/photos/nsarchives Canada Provincial Archives of Alberta Archive http://culture.alberta.ca/paa https://www.wikidata.org/wiki/Q2860566 CAD Some eligible data No known copyright 
restrictions No Known Copyright http://culture.alberta.ca/paa/about/flickr.aspx https://www.flickr.com/services/api/ https://www.flickr.com/photos/alberta_archives Canada UBC Library Digitization Centre University https://www.ubc.ca https://www.wikidata.org/wiki/Q391028 CAD Some eligible data No known copyright restrictions No Known Copyright http://digitalcollections.library.ubc.ca/cdm/about https://www.flickr.com/services/api/ https://www.flickr.com/photos/ubclibrary_digicentre Canada University of Victoria Libraries University http://www.uvic.ca/library https://www.wikidata.org/wiki/Q50280985 CAD Some eligible data No known copyright restrictions No Known Copyright http://www.uvic.ca/library/featured/collections/disclaimer-flickr.php https://www.flickr.com/services/api/ https://www.flickr.com/photos/128520551@N04 Canada Vancouver Public Library Library http://www.vpl.ca https://www.wikidata.org/wiki/Q1376408 CAD Some eligible data No known copyright restrictions No Known Copyright https://www.vpl.ca/find/cat/C393/C393 https://www.flickr.com/services/api/ https://www.flickr.com/photos/99915476@N04 Chile Biblioteca Nacional de Chile Library http://www.bibliotecanacional.cl https://www.wikidata.org/wiki/Q2901485 CLP Some eligible data CC BY-SA https://commons.wikimedia.org/wiki/Category:Media_contributed_by_the_Library_of_the_National_Congress_of_Chile Chile Memoria Chilena Other http://www.memoriachilena.cl https://www.wikidata.org/wiki/Q15059439 CLP All eligible data In Memoria Chilena you can find works of great heritage value that are part of the Common Cultural Heritage. Such creations can be used by anyone, provided that the paternity and integrity of the work is respected.' Public Domain Mark http://www.memoriachilena.cl/602/w3-article-123838.html#i__w3_ar_acercade_cuerpo_1_123838_Propiedad20intelectual20y20derechos20de20autor http://www.memoriachilena.cl Chile Museo de la Memoria y los Derechos Humanos Museum http://ww3.museodelamemoria.cl https://www.wikidata.org/wiki/Q6940940 CLP Some eligible data CC BY-SA https://commons.wikimedia.org/wiki/Category:Media_contributed_by_the_Museo_de_la_Memoria_y_los_Derechos_Humanos Chile Museo Nacional de Historia Natural de Chile Museum http://www.mnhn.cl https://www.wikidata.org/wiki/Q3064279 CLP Some eligible data CC BY http://www.dibam.cl/portal/Contenido/Institucional/29742:TERMINOS-Y-CONDICIONES-DE-USO https://sketchfab.com/MNHNcl Croatia Hrvatska akademija znanosti i umjetnosti University http://info.hazu.hr https://www.wikidata.org/wiki/Q1264085 HRK Some eligible data Public Domain Mark http://dizbi.hazu.hr/?terms CC0 https://pro.europeana.eu/page/api-rest-console https://www.europeana.eu/portal/search?f%5BDATA_PROVIDER%5D%5B%5D=Croatian+Academy+of+Sciences+and+Arts&f%5BREUSABILITY%5D%5B%5D=open&per_page=72&q=&view=grid&f%5BREUSABILITY%5D%5B%5D=restricted Croatia Nacionalna i sveučilišna knjižnica u Zagrebu Library http://www.nsk.hr https://www.wikidata.org/wiki/Q631375 HRK Some eligible data Public Domain Mark https://digitalna.nsk.hr/pb/?projekt CC0 https://pro.europeana.eu/page/api-rest-console https://digitalna.nsk.hr/pb Denmark Arbejdermuseet Museum https://www.arbejdermuseet.dk https://www.wikidata.org/wiki/Q3365660 DKK Some eligible data Public Domain Mark https://www.arbejdermuseet.dk/viden-samlinger/arbejderbevaegelsens-bibliotek-arkiv/vilkaar-benyttelse-fotos/ https://www.arbejdermuseet.dk Denmark Den Hirschsprungske Samling Gallery http://www.hirschsprung.dk https://www.wikidata.org/wiki/Q2982867 95.00 kr. 
DKK €12.75 All eligible data Public Domain Mark https://da.wikipedia.org/wiki/V%C3%A6rker_i_Den_Hirschsprungske_Samling http://www.hirschsprung.dk Denmark Det Kongelige Bibliotek Library https://www.kb.dk https://www.wikidata.org/wiki/Q867885 DKK Some eligible data Public Domain Mark https://commons.wikimedia.org/wiki/Category:Media_donated_by_Statsbiblioteket
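The survey data above is a flattened export of a spreadsheet whose column headers appear at the start of the block (country, institution, type, website, Wikidata item, admission fee, licence statements, APIs, data sources). As a rough sketch of how such a dataset could be queried once exported to CSV, the snippet below filters it for institutions publishing digital surrogates under a Public Domain Mark or CC0 statement; the file name and the exact header strings are assumptions, not something published with the survey.

```python
# Minimal sketch: filter an open-access GLAM survey exported as CSV.
# The file name "open_glam_survey.csv" and the exact column headers are
# assumptions for illustration; adjust them to match the real export.
import csv
from collections import Counter

def public_domain_institutions(path: str):
    """Yield (country, institution) pairs whose digital surrogates of
    public-domain objects carry a Public Domain Mark or CC0 statement."""
    open_licences = {"Public Domain Mark", "CC0"}
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            licence = row.get(
                "LICENCE/RIGHTS STATEMENT FOR DIGITAL SURROGATES OF PUBLIC DOMAIN OBJECTS",
                "").strip()
            if licence in open_licences:
                yield row["COUNTRY"], row["INSTITUTION"]

if __name__ == "__main__":
    rows = list(public_domain_institutions("open_glam_survey.csv"))
    print(f"{len(rows)} institutions publish under PDM/CC0")
    print(Counter(country for country, _ in rows).most_common(5))
```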
2
The Shifting Market for PostgreSQL
PostgreSQL has been around in some form since 1986, yet somehow keeps getting younger and hipper with each year. Startups like Timescale have found old-school PostgreSQL to be key to building their new-school database products, joining companies like EDB in deepening PostgreSQL’s popularity. In fact, EDB just celebrated its 44th consecutive quarter of rising annual recurring revenue. That’s 11 years of PostgreSQL paying the bills (and growing the number of bills EDB can afford to pay). As steady as PostgreSQL has been, however, its progress hasn’t been linear. I recently spoke with EDB CEO Ed Boyajian, now in his 13th year at the helm of the company, who talked about the essential ingredients to PostgreSQL’s rise. The first? Developers. Yes, developers, who keep evolving PostgreSQL to meet new needs in the cloud, even as they optimize it to handle their oldest, on-premises requirements. Over the years, the market has dallied with NoSQL and NewSQL and every other shape of database imaginable. We’ve also gone through various shades of self-managed data center hosting to public clouds and everything in between. Through it all, developers have embraced PostgreSQL, even as the PostgreSQL community has shaped and reshaped the database to fit emerging workloads. Early on, EDB tried to bludgeon Oracle with a compatibility layer that allowed applications to run on PostgreSQL but think they were running Oracle Database. It’s what EDB was known for in its early days. But according to Boyajian, it’s not really the primary reason enterprises adopt PostgreSQL. He says about a third of EDB’s business is net new customers, half of which are enterprises looking to migrate off another database, usually Oracle. The other half? It’s for greenfield applications. This shift toward new application development, and away from Oracle replacements, may be accelerating for PostgreSQL. “This has changed dramatically over time,” said Boyajian. “It has paralleled the shift in who makes database decisions, as that’s moved more and more to the hands of developers and business units.” Give enterprise IT the say and perhaps they continue to fumble along with Oracle (or whatever their legacy database choice happens to be), pushing it into new workloads because, well, that’s what they’ve always done. But let developers decide and you start to see all sorts of different options, from MongoDB to PostgreSQL and a host of other options. Ask tens of thousands of developers which databases they most love, as Stack Overflow did, and PostgreSQL is topped only by Redis. One reason developers love PostgreSQL is that its core development community has focused so much on improving its ease of use. EDB and other corporate and individual contributors have spent years making “Postgres easier to experience and to consume,” says Boyajian. “This has been a priority for the company, helping new users get comfortable and get into the Postgres ecosystem faster and easier.” The more the PostgreSQL community has done this, the more adoption has shifted from Oracle migrations to greenfield, new application development. Intriguingly, this doesn’t always mean cloud. Sure, vendors like AWS (my employer), Crunchy Data, Microsoft, Instaclustr, Google Cloud, and others have done great work to make PostgreSQL easier to use as a managed service. But EDB has managed those 44 quarters of growth without the slightest wisp of a cloud service. 
“Our experience tells us it’s still largely in a traditional data center context” that enterprises run PostgreSQL, Boyajian says. This doesn’t mean “traditional” in the, well, traditional context. It could be a VMware virtualization environment, or containerized applications running on-premises, or a self-managed PostgreSQL instance on Microsoft Azure or Amazon EC2. According to Boyajian, some customers miss the flexibility and control they might normally have with their applications if running the database as a cloud service. “If you’ve got to fix something, you can’t fix it in the database anymore, and must fix it in the app. And that’s a challenge.” As such, he continues, “We’re seeing the pendulum swing away from that a bit.” So will EDB never offer a cloud service? Actually, they already do, through a partnership with Alibaba. Over time, Boyajian expects his customers to push EDB deeper into the cloud. For now, however, the company is doing well by catering to the hefty market of developers who still want to twist the knobs on their database. It’s one of the most impressive things about PostgreSQL. Decades into what should have been PostgreSQL’s dotage, developers keep reimagining what it can be, and for whom.
3
Decrypted movie, Satoshi Nakamoto kidnapped and tortured by the NSA
Featured user review (10/10): This movie is a riot. What's up with the people who are giving this movie 2 and 3 stars? It is absolutely funny and humorous (rather than humerous!). I pretty much wet myself when I was watching it. Buck is a blast, and I love Beth and all the other characters. And it's deep too. Maybe not everyone's smart enough to get that the first time round, but it's pretty obvious if you watch and listen at the same time. It's a great movie, and that's why I've given it ten stars. (kattycitrine, Nov 5, 2021)
25
Why modern cars feel so lifeless to drive
In almost every regard, new cars are better than they've been at any time in their history. They're safer than they used to be—though that is less true for women. Powertrains, particularly battery electric ones, are more powerful and more efficient, which helps to compensate for the extra weight of that added safety equipment. Vehicles are far more reliable, at least for their first 100,000 miles, and even cheap cars come with standard equipment that would seem like science fiction to drivers from just a few decades ago. They ride better; they stop better—so everything's great, right? The problem is that modern cars almost invariably feel a bit boring to drive. The issue is more acute the longer you've been driving, as you might expect, since the cause is technological progression—specifically, power steering. For much of the car's existence, steering was entirely unassisted. The driver turns the wheel connected to a steering column that, through links and pivots and usually a gear, turns the front wheels in either direction. That setup was marvelous for feedback, but it wasn't great in terms of the effort required to turn the wheel, particularly at lower speeds. Drivers of a certain age will tell you that unassisted steering is the purest way to drive—and therefore the best. I am sympathetic to this argument, up to a point. Steering became more of an issue as cars got heavier and front tires got wider, so cars gained hydraulic power-assisted steering to compensate. Hydraulic pistons reduce the effort required to steer the front wheels, and there's not much inertia, so the steering system still communicates forces from the front wheels back through the column to the driver's hands. The problem is that running a hydraulic system draws enough power to take a noticeable toll on fuel efficiency. These days, we have compact and powerful electric motors that can assist in turning the front wheels. There are fewer moving parts, there are no hydraulic lines or fluid to worry about, and the systems are getting cheaper. Being electrically controlled means you can even accommodate features like lane-keeping assistance or autosteering. The downside is that the motors are also pretty good at filtering out the road forces that would otherwise travel back up from the front wheels to the steering wheel.
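To make that filtering effect concrete, here is a toy signal-processing sketch of my own; it is not from the article and is not a real electric power steering algorithm. Road feedback is modeled as a slow cornering load plus fast surface "chatter", and the assist path is stood in for by a simple first-order low-pass filter with an arbitrary cutoff, which is enough to show how the high-frequency component gets attenuated before it reaches the driver's hands.

```python
# Toy model: how an assist stage can attenuate high-frequency road feedback.
# Illustrative first-order low-pass filter only, not a real EPS controller.
import math

def road_force(t: float) -> float:
    """Slow cornering load plus fast road-texture 'chatter' (arbitrary units)."""
    return math.sin(2 * math.pi * 0.5 * t) + 0.3 * math.sin(2 * math.pi * 25 * t)

def low_pass(samples, dt: float, cutoff_hz: float):
    """First-order low-pass filter, standing in for the filtering effect
    of an electrically assisted steering path."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

if __name__ == "__main__":
    dt = 0.001
    ts = [i * dt for i in range(2000)]        # two seconds of "driving"
    raw = [road_force(t) for t in ts]
    felt = low_pass(raw, dt, cutoff_hz=2.0)   # arbitrary 2 Hz cutoff (assumption)
    print(f"peak-to-peak at the wheel: raw {max(raw) - min(raw):.2f}, "
          f"filtered {max(felt) - min(felt):.2f}")
    # The filtered signal keeps the slow cornering load but loses most of the
    # fast chatter, which is the 'lifeless' feel the article describes.
```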
4
Digital Trails Wind Up in the Police’s Hands
Michael Williams’ every move was being tracked without his knowledge—even before the fire. In August, Williams, an associate of R&B star and alleged rapist R. Kelly, allegedly used explosives to destroy a potential witness’s car. When police arrested Williams, the evidence cited in a Justice Department affidavit was drawn largely from his smartphone and online behavior: text messages to the victim, cell phone records, and his search history. The investigators served Google a “keyword warrant,” asking the company to provide information on any user who had searched for the victim’s address around the time of the arson. Police narrowed the search, identified Williams, then filed another search warrant for two Google accounts linked to him. They found other searches: the “detonation properties” of diesel fuel, a list of countries that do not have extradition agreements with the US, and YouTube videos of R. Kelly’s alleged victims speaking to the press. Williams has pleaded not guilty. Data collected for one purpose can always be used for another. Search history data, for example, is collected to refine recommendation algorithms or build online profiles, not to catch criminals. Usually. Smart devices like speakers, TVs, and wearables keep such precise details of our lives that they’ve been used both as incriminating and exonerating evidence in murder cases. Speakers don’t have to overhear crimes or confessions to be useful to investigators. They keep time-stamped logs of all requests, alongside details of their location and identity. Investigators can access these logs and use them to verify a suspect’s whereabouts or even catch them in a lie. It isn’t just speakers or wearables. In a year where some in Big Tech pledged support for the activists demanding police reform, they still sold devices and furnished apps that allow government access to far more intimate data from far more people than traditional warrants and police methods would allow. A November report in Vice found that users of the popular Muslim Pro app may have had data on their whereabouts sold to government agencies. Any number of apps ask for location data, for say, the weather or to track your exercise habits. The Vice report found that X-Mode, a data broker, collected Muslim Pro users’ data for the purpose of prayer reminders, then sold it to others, including federal agencies. Both Apple and Google banned developers from transferring data to X-Mode, but it’s already collected the data from millions of users. The problem isn't just any individual app, but an over-complicated, under-scrutinized system of data collection. In December, Apple began requiring developers to disclose key details about privacy policies in a “nutritional label” for apps. Users “consent” to most forms of data collection when they click “Agree” after downloading an app, but privacy policies are notoriously incomprehensible, and people often don’t know what they’re agreeing to. An easy-to-read summary like Apple’s nutrition label is useful, but not even developers know where the data their apps collect will eventually end up. (Many developers contacted by Vice admitted they didn’t even know X-Mode accessed user data.) The pipeline between commercial and state surveillance is widening as we adopt more always-on devices and serious privacy concerns are dismissed with a click of “I Agree.” The nationwide debate on policing and racial equity this summer brought that quiet cooperation into stark relief. 
Despite lagging diversity numbers, indifference to white nationalism, and mistreatment of nonwhite employees, several tech companies raced to offer public support for Black Lives Matter and reconsider their ties to law enforcement. Amazon, which committed millions to racial equity groups this summer, promised to pause (but not stop) sales of facial-recognition technology to police after defending the practice for years. But the company also noted an increase in police requests for user data, including the internal logs kept by its smart speakers. Google’s support for racial equity included donations and doodles, but law enforcement agencies increasingly rely on “geofence warrants.” In these cases, police request data from Google or another tech company on all the devices in the area near an alleged crime around the time it occurred. Google returns an anonymized list of users, which police narrow down, then send a subsequent request for data on suspects. As with keyword warrants, police get anonymized data on a large group of people for whom no tailored warrant has been filed. Between 2017 and 2018, Google reported a 1,500 percent increase in geofence requests. Apple, Uber, and Snapchat also have received similar requests for the data of a large group of anonymous users. Civil rights organizations have called on Google to disclose how often it fulfills these geofence and keyword requests. A magistrate judge in a Chicago case said the practice “ensures an overbroad scope” and questioned whether it violates Fourth Amendment protections against invasive searches. Similarly, a forensic expert who specializes in extracting data from IoT devices like speakers and wearables questioned whether it was possible to tailor a search. For example, while investigating data from a smart speaker, data might link to a laptop, then to a smartphone, then to a smart TV. Connecting these devices is marketed as a convenience for consumers, but it also has consequences for law enforcement access to data. These warrants allow police to rapidly accelerate their ability to access our private information. In some cases, the way apps collect data on us turns them into surveillance tools that rival what police could collect even if they were bound to traditional warrants. The solution isn’t simply for people to stop buying IoT devices or for tech companies to stop sharing data with the government. But “equity” demands that users be aware of the digital bread crumbs they leave behind as they use electronic devices and how state agents capitalize on both obscure systems of data collection and our own ignorance.
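To picture what a geofence request boils down to in practice, the sketch below filters a set of anonymized location records to those falling inside a bounding box and time window, roughly the shortlist-then-narrow-down flow described above. It is purely illustrative: the field names and example records are invented, and this is not Google's or any other vendor's actual system.

```python
# Illustrative geofence-style filter over anonymized location records.
# Field names and data are hypothetical; no vendor's real system is shown.
from dataclasses import dataclass

@dataclass
class LocationPing:
    anon_id: str      # anonymized device identifier
    lat: float
    lon: float
    timestamp: int    # Unix epoch seconds

def geofence_hits(pings, lat_min, lat_max, lon_min, lon_max, t_start, t_end):
    """Return anonymized IDs seen inside the box during the time window."""
    return sorted({
        p.anon_id
        for p in pings
        if lat_min <= p.lat <= lat_max
        and lon_min <= p.lon <= lon_max
        and t_start <= p.timestamp <= t_end
    })

if __name__ == "__main__":
    pings = [
        LocationPing("device-a", 41.8800, -87.6300, 1_600_000_100),
        LocationPing("device-b", 41.8810, -87.6290, 1_600_000_500),
        LocationPing("device-c", 40.7128, -74.0060, 1_600_000_200),  # outside the box
    ]
    hits = geofence_hits(pings, 41.87, 41.89, -87.64, -87.62,
                         1_600_000_000, 1_600_001_000)
    print(hits)
    # -> ['device-a', 'device-b']: the kind of anonymized shortlist that
    #    investigators would then try to narrow down with follow-up requests.
```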
1
Software Testing Tips for Proper Quality Assurance
Among other things, one of the main reasons projects fail is poor quality, which makes QA during the development process very important. The main role of running quality assurance tests is to assure product quality, as the name suggests, and to keep small mistakes from slipping through the gaps, where they can otherwise lead to greater losses. Having said that, it’s nearly impossible to develop software that is free of bugs or errors. Despite having expert software quality assurance engineers, even popular applications such as Facebook and Twitter crash at times and the entire world goes bonkers. Even a few days ago, #whatsappoutage was trending all over Twitter when nearly a million people around the world reported problems with Instagram and WhatsApp services. There are a number of issues currently affecting Facebook products, including gaming streams. Multiple teams are working on it, and we'll update you when we can. — Facebook Gaming (@FacebookGaming) March 19, 2021 Even though QA is necessary, developers should also know these software testing tips and should be an active part of software quality assurance planning. Many companies require software engineers to run unit tests or other automated code-based tests at the least. But the main reason for having separate QA engineers is that it saves time that developers can otherwise spend on coding. According to sources, 35% of companies sometimes involve non-testers in the software testing process, but 55% of companies still use software testers for the vast majority of testing. What is Software QA or Quality Assurance? QA is a quality management system and one of the most important steps in the entire software development life-cycle. During this stage, experts thoroughly test the quality of the software and its code to identify bugs (errors), after which they send feedback in the form of a documented report directly to the development team so that the identified bugs or issues can be fixed before release. The QA team works closely with the development teams to ensure the high-quality delivery of the application. But some friction between the two teams is inevitable. Software Quality Assurance Principles SQA principles differ between companies and projects, but below are some of the main attributes against which QA engineers should test the product: Functionality: Are all the features of the application functional? Reliability: Is the software resilient? How long does it take to function again in case of failure? Usability: Is it user-friendly? Are the functions easy for users to understand? Efficiency: Is the speed of the software above par? Maintainability: How stable is the software after modifications have been applied? Portability: How easily can the software be installed? Will it easily adapt to changes? Software Quality Assurance Process Come up with a testing strategy: the application that needs to be tested, which software quality assurance methodologies to use, which features to focus on, which software quality assurance activities to follow, which tools to use, the testing schedule, and the like. In short, this stage includes the complete plan and functional/non-functional requirements gathering. After requirement gathering, the second stage is to plan the test cases. It includes the actions that QA experts will take to ensure the functionality of the application. Tools such as TestRail or Zephyr can help engineers write test cases.
After the test cases have been prepared, the team runs them and makes sure that the solution is developed properly and meets all the requirements from a technical perspective. After identifying a bug, the QA expert is obligated to report the bug responsibly through a bug tracking system. Many companies use Jira for this purpose. Through Jira, developers can track task updates in real time without missing out on quality improvements in the software testing process. Developers are specifically responsible for this last step: after fixing the issue, the developer marks it as fixed. The bug should not be considered resolved until it has been verified, so QA experts should re-run the application for final verification. Software Testing Tips for Quality Assurance Software quality assurance testing should not be considered a final step in the entire development process. According to Lauma Fey of AOE, ‘It is one step in the ongoing process of agile software development. Testing takes place in each iteration before the development components are implemented. Accordingly, software testing needs to be integrated as a regular and ongoing element in the everyday development process.’ Let Go of the Common QA Misconceptions Many companies restrict the scope of quality assurance best practices and methodologies even in the most strategic environments. Some common misconceptions about QA are: that testing is only for bug detection; that testing alone can resolve all the bugs and enhance product quality; and that automated testing means lower costs. Software testing strategies for software development are very important, since QA is not just about bug identification. It’s so much more than that. The main objective is to deliver a seamless customer experience by gathering sufficient information about the improvements and then providing that information to the relevant teams, in this case, the development team. Now, let’s see what QA is all about: Feature Implementation: Proves that the particular feature or functionality has been implemented as requested by the stakeholders. Feature Correctness: A report gathering evidence that the new functionality works correctly and can be released, and that the existing features still work correctly with no risk of failure. Software Quality Information: Evidence about the stability, reliability, performance, usability, and security of the application. Defect Identification: Checking for defects in the application and taking corrective measures. Defect Prevention: Testing can result in a significant decrease in errors in the future. Automated Testing Over Manual Testing Any Day This is one of the most important software testing tips. Both manual and automated SQA methodologies have their pros and cons, yet automated testing is preferred. In manual testing, QA experts put themselves in the place of end users and manually test all the features by running the application on a mobile phone to identify defects and ensure a smooth customer experience. Popular manual testing methods include black-box testing, white-box testing, unit testing, system testing, integration testing, and acceptance testing. Whereas in automated testing, automated tools are utilized to ensure the smooth running of the software.
Some of the popular automation testing tools are HP QTP (Quick Test Professional)/UFT (Unified Functional Testing), Selenium, LoadRunner, IBM Rational Functional Tester, SilkTest, TestComplete, WinRunner, WATIR, etc. Benefits of Automated Testing: quick identification of bugs and faster feedback; faster process improvement; improved test accuracy and application quality, since the human factor is eliminated; and reduced costs compared with repeated manual test runs, which are too costly to scale. Now that we have covered some of the software testing tips, let's look at why QA is important. Saves Time and Money: QA catches bugs during the development phase, which saves time. Moreover, identifying bugs after the application has been deployed can be costly. Product Quality: QA ensures the quality, reliability, and stability of the product. Moreover, quality control checks the functionality and performance of the software product. Security: Customers always want products that they can trust. Therefore, businesses need to keep customers' sensitive data protected at all costs. Quality assurance testing does help with your product's security: you can perform penetration or vulnerability testing to identify loopholes. Product Reputation: The quality of your software is very important for your brand image. Faulty products can lead to negative reviews that damage the reputation of your company and lead to a higher customer turnover rate. A well-tested product, on the other hand, protects your brand image. Customer Satisfaction: To satisfy and better serve your customers, make sure your product fulfills their needs. Customers can only be satisfied if your product delivers. Quality assurance gives your customers exactly what they want: a high-quality, fast product.
9
Director of China's CDC names lab leak as one of four possible Covid origins
Mar 27, 2021 - Chinese officials brief diplomats on possible COVID-19 origins Illustration: Aïda Amer/Axios Chinese officials briefed diplomats in Beijing on Friday on four possible ways the coronavirus arrived in Wuhan, AP reports. Why it matters: The briefing comes ahead of the release of the World Health Organization's report on the virus' origin, and "is based on a visit earlier this year by a WHO team of international experts to Wuhan," the AP writes. "The experts worked with Chinese counterparts, and both sides have to agree on the final report. It’s unclear when it will come out," according to AP. Details: Feng Zijian, deputy director of China’s Center for Disease Control and Prevention, identified the four possible origins: A bat carrying the virus infected a person. A bat infected a mammal who then gave it to a person. The virus came from shipments of cold or frozen food. It leaked from a Wuhan laboratory that was researching viruses. Experts said it is most likely that the virus originated from the two animal routes or from the cold food shipment, adding that a "lab leak was viewed as extremely unlikely," AP notes. The big picture: "The debate over the origins of the coronavirus has been ongoing since the start of the pandemic, causing rising tensions between the U.S. and China," Axios' Zachary Basu reports. What's next: WHO said on Friday that the report was finalized and was currently getting fact-checked and translated. “I expect that in the next few days, that whole process will be completed and we will be able to release it publicly,” WHO expert Peter Ben Embarek said, per AP.
1
Design Problem: Footer placement at page or content bottom, whichever is lower
This is one of those commonly occurring nags in web development which I've solved several times before but still have to scavenge the googles and stack-overflows each time I run into it. That's why I've decided to document the simple solution to it in this brief article. What happens is that if you position your footer div and fix it at the bottom of the page (position: fixed; bottom: 0; width: 100%), it will work great on shorter content pages (where you don't have to scroll). But on longer pages, instead of moving to the bottom of the content, it will be stuck there at the viewport bottom like an idiot! The above situation can be seen in action in this fiddle where multiple "lorem ipsum" blocks (<p> elements) are placed to simulate content growth. You'll find that the footer works flawlessly when the content is short (only 1-2 "lorem ipsum" blocks) but gets stuck at the viewport bottom as you keep adding blocks and they extend beyond the viewport height! On the other hand, instead of positioning your footer, if you just let it be (this is what about 90% of coders initially do), you have another problem. Your footer will now be placed correctly on longer content pages where you must scroll, but on shorter pages it will be left hanging in the middle of the page where your content ends, as shown in this fiddle. There could be multiple approaches to solve this problem. I personally prefer the old-school method which is quite simple and easy to understand. Besides, it doesn't require adding any blank HTML element like "#offset" or "#placeholder" above your footer. All it requires is that all your HTML elements above the footer be wrapped up inside one container div element. So, the body should be structured something like this:

BODY
..div.container
....header1,
....article1,
......p,
....etc, etc.
..footer

Then all you have to do is set your div.container's minimum height to the viewport's height minus the footer's height. Assuming your footer's height is 55px, all you have to do is:

div.container { min-height: calc(100vh - 55px); }

You can see a working demo of this in this fiddle. Even as you start adding more and more "lorem ipsum" paragraph elements, the footer will always be placed at the "right" place irrespective of other elements' positioning and content size! This is what you'd call a "properly placed footer".
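For reference, here's a self-contained sketch of the same approach that you can paste into a single file and open in a browser. The 55px footer height, the colours and the placeholder content are just assumptions for the demo; adjust them to your own layout.

<!DOCTYPE html>
<html>
<head>
<style>
  body { margin: 0; } /* no default body margin, so the heights add up exactly */
  /* wrapper for everything above the footer */
  div.container { min-height: calc(100vh - 55px); }
  /* note: the footer is NOT position:fixed; it simply follows the container */
  footer { height: 55px; background: #333; color: #fff; }
</style>
</head>
<body>
  <div class="container">
    <h1>Page title</h1>
    <p>Lorem ipsum...</p> <!-- add or remove paragraphs to test both cases -->
  </div>
  <footer>I sit at the page bottom or the content bottom, whichever is lower</footer>
</body>
</html>

With little content the container still stretches to fill the viewport and pushes the footer to the bottom edge; with lots of content the container grows past the viewport and the footer simply follows it.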
1
NOWUT Assembly Language
The current version of the compiler is NOWUT 0.30 (MD5: 061141f2948fa2026a2863ed04a32be4) and the documentation is here. NOWUT is the name of a programming language that was created in 2018, and it also refers to the self-hosted compiler which implements the language. The compiler is offered under the terms of the GPL. It currently targets 16- and 32-bit x86 CPUs, 68000, SH2, and MIPS. A separate linker (LINKBIN) is included to handle several output formats, while Win32 programs can be built using the third party GoLink. The earliest, pre-release version of the compiler was written in FreeBASIC. It took about 40 hours to translate the FB code into NOWUT, reimplement necessary console and file I/O functions, and squash bugs until it could build a matching copy of itself. The resulting executable was also 10 times quicker and one third the size. The goal of NOWUT is to be easier than pure assembly language but without unnecessary abstractions. It keeps the straightforward aspects of assembly: At the same time, it removes certain burdens: The compiler design aims more for simplicity than for omniscience. It won't find your bugs or optimize your code. The built-in assembler allows for manual optimization of speed-critical routines. The compiler archive includes these examples: Several other programs written in NOWUT have been released separately:
1
Google Pay to launch digital bank accounts in 2021
Late last year, we heard our first inklings that Google was looking to get deeper into the financial world by partnering directly with banks. Today, Google and their first bank partners are announcing the arrival of digital bank accounts in Google Pay. This morning, eight banks in the US, including BBVA and BMO, have announced that they are partnering with Google to offer digital-first bank accounts directly in the Google Pay app. As explained by BBVA, the financial side of these accounts will be managed by the partner banks, and “Google will provide the front-end, intuitive user experiences and financial insights.” According to another partner bank BMO, the Google Pay app will offer built-in budgeting tools and “financial insights” exclusive to Google’s digital bank accounts. Unfortunately, we still have a bit of waiting to do, as these accounts and their exclusive Google Pay features don’t launch until sometime next year. Google’s vice president of Payments Ecosystems, Felix Lin, shared the company’s excitement to begin these partnerships to offer better tools to our “new generation.” Google is excited to work with BBVA USA in enabling a digital experience that is equitable for all and meets the evolving needs of a new generation of customers. We believe that we can use our technology expertise to benefit users, banks and the entire financial ecosystem. Google also gave us a statement explaining that today’s announcement is an extension of their original ventures into digital bank accounts from Citi and SFCU, to six additional bank partners. We had confirmed earlier that we are exploring how we can partner with banks and credit unions in the US to offer digital bank accounts through Google Pay, helping their customers benefit from useful insights and budgeting tools, while keeping their money in an FDIC or NCUA-insured account. We are excited that six new banks have signed up to offer digital checking and savings. In addition to Citi and Stanford Federal Credit Union, we will also be working with Bank Mobile, BBVA USA, BMO Harris, Coastal Community Bank, First Independence Bank, and SEFCU. Interestingly, another recent Google Pay rumor was that Google would launch a branded debit card to be managed in the Google Pay app. So far, though, we’ve not gotten any confirmation that these digital bank accounts will have a special Google-branded credit card. Add 9to5Google to your Google News feed. You’re reading 9to5Google — experts who break news about Google and its surrounding ecosystem, day after day. Be sure to check out our homepage for all the latest news, and follow 9to5Google on Twitter, Facebook, and LinkedIn to stay in the loop. Don’t know where to start? Check out our exclusive stories, reviews, how-tos, and subscribe to our YouTube channel
3
What We Know About the New Coronavirus Variant Now Spreading
63
Upgrade Your SSH Keys
Whether you're a software developer or a sysadmin, I bet you're using SSH keys. Pushing your commits to GitHub or managing your Unix systems, it's best practice to do this over SSH with public key authentication rather than passwords. However, as time flies, many of you are using older keys and not aware of the need to generate fresh ones to protect your privates much better. In this post I'll demonstrate how to transition to an Ed25519 type of key smoothly, why you would want this and show some tips and tricks on the way there. Tl;dr: Generate any new key with ssh-keygen -o -a 100 -t ed25519, specify a strong passphrase and read further if you need a smooth transition. If you've created your key using software released before 2013 with the default options it's probably insecure (RSA < 2048 bits). Even worse, I've seen tweeps, colleagues and friends still using DSA keys (ssh-dss in OpenSSH format) recently. That's a key type similar to RSA, but limited to 1024 bits size and therefore recommended against for a long time. It's plainly insecure and refused for valid reasons in recent OpenSSH versions (see also the changelog for 7.0). 😬 The sad thing about it is that I see posts on how to re-enable DSA key support rather than moving to a more secure type of key. Really, it's unwise to follow instructions to change the configuration for PubkeyAcceptedKeyTypes or HostKeyAlgorithms (host keys are for a later post). Instead, upgrade your keys! Compare DSA with the technology of locks using keys like this one. You wouldn't want this type of key to unlock your front door, right? $ for keyfile in ~/.ssh/id_*; do ssh-keygen -l -f "${ keyfile }" ; done | uniq You're probably thinking… "I'm using my key for a long time, I don't want to change them everywhere now." Valid point, but you don't have to! It's good to know you can have multiple keys on your system and your SSH client will pick the right one for the right system automatically. It's part of the SSH protocol that it can offer multiple keys and the server picks the one your client will have to prove it has possession of the private key by a challenge. See it in action adding some verbosity to the SSH connect command (-vvv). Also if you're using an SSH agent you can load multiple keys and it will discover them all. Easy as that. Most common is the RSA type of key, also known as ssh-rsa with SSH. It's very compatible, but also slow and potentially insecure if created with a small amount of bits (< 2048). We just learned that your SSH client can handle multiple keys, so enable yourself with the newest faster elliptic curve cryptography and enjoy the very compact key format it provides! Ed25519 keys are short. Very short. If you're used to copy multiple lines of characters from system to system you'll be happily surprised with the size. The public key is just about 68 characters. It's also much faster in authentication compared to secure RSA (3072+ bits). Generating an Ed25519 key is done using the -t ed25519 option to the ssh-keygen command. Ed25519 is a reference implementation for EdDSA using Twisted Edward curves (Wikipedia link). When generating the keypair, you're asked for a passphrase to encrypt the private key with. If you will ever lose your private key it should protect others from impersonating you because it will be encrypted with the passphrase. To actually prevent this, one should make sure to prevent easy brute-forcing of the passphrase. 
The OpenSSH key generator offers two options to resist brute-force password cracking: using the new OpenSSH key format and increasing the number of key derivation function rounds. It slows down the process of unlocking the key, but this is also what prevents efficient brute-forcing by a malicious user. I'd say experiment with the number of rounds on your system. Start at about 100 rounds. On my system it takes about one second to decrypt and load the key once per day using an agent. Very much acceptable, imo. With ssh-keygen use the -o option for the new RFC4716 key format and the use of a modern key derivation function powered by bcrypt. Use the -a <num> option for <num> amount of rounds. Actually, it appears that when creating an Ed25519 key the -o option is implied. The OpenSSH manpages are not really explanatory about the 'new' format. I found this article pretty useful: "new openssh key format and bcrypt pbkdf" on www.tedunangst.com. Protip: use the same passphrase on all of your key types and profit with more convenience. (See also Multi-key aware SSH client.)

$ ssh-keygen -o -a 100 -t ed25519
Generating public/private ed25519 key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/gert/.ssh/id_ed25519.
Your public key has been saved in /home/gert/.ssh/id_ed25519.pub.
The key fingerprint is:
SHA256:[...] gert@hostname
The key's randomart image is:
[...]

Note the line 'Your identification has been saved in /home/gert/.ssh/id_ed25519'. Your current RSA/DSA keys are next to it in the same ~/.ssh folder. As with any other key you can copy the public key in ~/.ssh/id_ed25519.pub to target hosts for authentication. All keys available on default paths will be autodetected by SSH client applications, including the SSH agent via ssh-add. So, if you were using an application like ssh/scp/rsync before like...

$ ssh user@host

it will now offer multiple public keys to the server and the server will request proof of possession for a matching entry for authentication. And your daily use of the ssh-add command will not change; it will autodiscover the Ed25519 key:

$ ssh-add
Enter passphrase for /home/gert/.ssh/id_rsa:
Identity added: /home/gert/.ssh/id_rsa (gert@hostname)
Identity added: /home/gert/.ssh/id_ed25519 (gert@hostname)

It not only discovered both keys, it also loaded them by entering a single passphrase (because it's the same)! We've reached a very important goal now. Without any change to your daily routine we can slowly change the existing configuration on remote hosts to accept the Ed25519 key. In the meantime the RSA key will still work. Great, right!? If you're afraid this will change your key, don't worry. The private part of your keypair is encrypted with a passphrase which only exists locally on your machine. Change it as often as you like. This is recommended to prevent abuse in case the key file gets into the wrong hands. Repeat for all your key files to ensure the new key format with 100 bcrypt KDF rounds:

$ ssh-keygen -f ~/.ssh/id_rsa -p -o -a 100

Using Ed25519 will (and should) work in most situations by now, but legacy systems may not support it yet. The best fallback for this is a strong RSA keypair. While the OpenSSH client supports multiple RSA keys, it requires configuration/command line options to specify the path, so it's rather error-prone. Instead, I'd recommend upgrading your existing key in-place to keep things simple once this is done.
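For completeness, here is a minimal, hypothetical sketch of what that per-host configuration could look like if you did decide to keep a separate legacy RSA key around; the host name and key path are made up for the example.

# ~/.ssh/config -- pin an old RSA key to a single legacy host (hypothetical)
Host legacy-server.example.com
    IdentityFile ~/.ssh/id_rsa_legacy
    IdentitiesOnly yes   # offer only the keys listed here to this host

It works, but every such entry is one more thing to maintain, which is exactly why upgrading the default key in place is the simpler route.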
Depending on the strength (key size) of your current RSA key you can migrate urgently or comfortably. In case you have a weak RSA key still, move it out of the way from the standard path and generate a new one of 4096 bits size: $ mv ~/.ssh/id_rsa ~/.ssh/id_rsa_legacy $ mv ~/.ssh/id_rsa.pub ~/.ssh/id_rsa_legacy.pub $ ssh-keygen -t rsa -b 4096 -o -a 100 If you are using an agent, manually point it to all your keys: $ ssh-add ~/.ssh/id_rsa ~/.ssh/id_rsa_legacy ~/.ssh/id_ed25519 Once you are finished the transition on all remote targets you can go back to convenience and let it autodiscover your new RSA and Ed25519 keys; simply omit the keyfile arguments. Support is available since OpenSSH 6.5 and well adopted in the Unix world OSs for workstations. Ubuntu 14.04+, Debian 8+, CentOS/RedHat 7+ etc. all support it already. (If you have details about Mac OS X please drop a line, couldn't find it with a quick search). Some software like custom desktop key agents may not like the new keys for several reasons (see below about the Gnome-keyring for example). Launchpad gained support for Ed25519 keys since February 16, 2022. Thanks Colin Watson for pointing this out in the comments! Gerrit Code Review gained support since 2017 with the 2.14 release. PuTTY on Windows? See below. The Gnome-keyring, as used in Ubuntu Unity at least, fails to read the new RFC4716 format keys but reports success. It's bugged. More details here in my AskUbuntu Q&A post. I'd recommend disabling the Gnome keyring for SSH agent use and use the plain OpenSSH agent instead. Sorry, I'm not using PuTTY, but make sure to upgrade first. This page suggests Ed25519 support since a late-2015 version according to a wishlist item. Generally speaking, I'm not too excited with the speed of implementation of security features in it. We've taken some steps, important ones, but it's far from ultimate security. When dealing with high assurance environments I would strongly discourage key usage like described in this post as this holds the unencrypted private key in memory. Instead, use hardware security (smart cards) to avoid leaking keys even from memory dumps. It's not covered in this post, mainly because it requires a hardware device you need to buy and secondly because the limitations are device dependent. A nice cute solution would be to make use of your TPM already built-in your PC probably, but that would definitely deserve another post. I'm planning on writing some more on how to harden SSH a bit more; custom host keys, custom DH moduli, strong ciphers (e.g. chacha20-poly1305) and secure KeyExchange/MACs. For now this is a great resource already: https://stribika.github.io/2015/01/04/secure-secure-shell.html For SSH host key validation, please see my other article: SSH host key validation done right – strict yet user-friendly. Want to share some ideas? Post it below in the comments. Love my post? Please share it. 🔑 Upgrade your SSH keys! (blog) Use Ed25519, about a transition and other tips & tricks. https://t.co/KDY2Ufh5FC — Gert van Dijk ⚠️ (@gertvdijk) September 23, 2016
4
Did the Big Bang happen at a point?
384 $\begingroup$ TV documentaries invariably show the Big Bang as an exploding ball of fire expanding outwards. Did the Big Bang really explode outwards from a point like this? If not, what did happen? general-relativity cosmology big-bang black-hole-thermodynamics $\endgroup$ 5 557 +150 $\begingroup$ The simple answer is that no, the Big Bang did not happen at a point. Instead, it happened everywhere in the universe at the same time. Consequences of this include: The universe doesn't have a centre: the Big Bang didn't happen at a point so there is no central point in the universe that it is expanding from. The universe isn't expanding into anything: because the universe isn't expanding like a ball of fire, there is no space outside the universe that it is expanding into. In the next section, I'll sketch out a rough description of how this can be, followed by a more detailed description for the more determined readers. A simplified description of the Big Bang Imagine measuring our current universe by drawing out a grid with a spacing of 1 light year. Although obviously, we can't do this, you can easily imagine putting the Earth at (0, 0), Alpha Centauri at (4.37, 0), and plotting out all the stars on this grid. The key thing is that this grid is infinite$^1$ i.e. there is no point where you can't extend the grid any further. Now wind time back to 7 billion years after the big bang, i.e. about halfway back. Our grid now has a spacing of half a light year, but it's still infinite - there is still no edge to it. The average spacing between objects in the universe has reduced by half and the average density has gone up by a factor of $2^3$. Now wind back to 0.0000000001 seconds after the big bang. There's no special significance to that number; it's just meant to be extremely small. Our grid now has a very small spacing, but it's still infinite. No matter how close we get to the Big Bang we still have an infinite grid filling all of space. You may have heard pop science programs describing the Big Bang as happening everywhere and this is what they mean. The universe didn't shrink down to a point at the Big Bang, it's just that the spacing between any two randomly selected spacetime points shrank down to zero. So at the Big Bang, we have a very odd situation where the spacing between every point in the universe is zero, but the universe is still infinite. The total size of the universe is then $0 \times \infty$, which is undefined. You probably think this doesn't make sense, and actually, most physicists agree with you. The Big Bang is a singularity, and most of us don't think singularities occur in the real universe. We expect that some quantum gravity effect will become important as we approach the Big Bang. However, at the moment we have no working theory of quantum gravity to explain exactly what happens. $^1$ we assume the universe is infinite - more on this in the next section For determined readers only To find out how the universe evolved in the past, and what will happen to it in the future, we have to solve Einstein's equations of general relativity for the whole universe. The solution we get is an object called the metric tensor that describes spacetime for the universe. But Einstein's equations are partial differential equations, and as a result, have a whole family of solutions. To get the solution corresponding to our universe we need to specify some initial conditions. The question is then what initial conditions to use. 
Well, if we look at the universe around us we note two things: if we average over large scales the universe looks the same in all directions, that is it is isotropic if we average over large scales the universe is the same everywhere, that is it is homogeneous You might reasonably point out that the universe doesn't look very homogeneous since it has galaxies with a high density randomly scattered around in space with a very low density. However, if we average on scales larger than the size of galaxy superclusters we do get a constant average density. Also, if we look back to the time the cosmic microwave background was emitted (380,000 years after the Big Bang and well before galaxies started to form) we find that the universe is homogeneous to about $1$ part in $10^5$, which is pretty homogeneous. So as the initial conditions let's specify that the universe is homogeneous and isotropic, and with these assumptions, Einstein's equation has a (relatively!) simple solution. Indeed this solution was found soon after Einstein formulated general relativity and has been independently discovered by several different people. As a result the solution glories in the name Friedmann–Lemaître–Robertson–Walker metric, though you'll usually see this shortened to FLRW metric or sometimes FRW metric (why Lemaître misses out I'm not sure). Recall the grid I described to measure out the universe in the first section of this answer, and how I described the grid shrinking as we went back in time towards the Big Bang? Well the FLRW metric makes this quantitative. If $(x, y, z)$ is some point on our grid then the current distance to that point is just given by Pythagoras' theorem: $$ d^2 = x^2 + y^2 + z^2 $$ What the FLRW metric tells us is that the distance changes with time according to the equation: $$ d^2(t) = a^2(t)(x^2 + y^2 + z^2) $$ where $a(t)$ is a function called the [scale factor]. We get the function for the scale factor when we solve Einstein's equations. Sadly it doesn't have a simple analytical form, but it's been calculated in answers to the previous questions What was the density of the universe when it was only the size of our solar system? and How does the Hubble parameter change with the age of the universe?. The result is: The value of the scale factor is conventionally taken to be unity at the current time, so if we go back in time and the universe shrinks we have $a(t) < 1$, and conversely in the future as the universe expands we have $a(t) > 1$. The Big bang happens because if we go back to time to $t = 0$ the scale factor $a(0)$ is zero. This gives us the remarkable result that the distance to any point in the universe $(x, y, z)$ is: $$ d^2(t) = 0(x^2 + y^2 + z^2) = 0 $$ so the distance between every point in the universe is zero. The density of matter (the density of radiation behaves differently but let's gloss over that) is given by: $$ \rho(t) = \frac{\rho_0}{a^3(t)} $$ where $\rho_0$ is the density at the current time, so the density at time zero is infinitely large. At the time $t = 0$ the FLRW metric becomes singular. No one I know thinks the universe did become singular at the Big Bang. This isn't a modern opinion: the first person I know to have objected publically was Fred Hoyle, and he suggested Steady State Theory to avoid the singularity. These days it's commonly believed that some quantum gravity effect will prevent the geometry from becoming singular, though since we have no working theory of quantum gravity no one knows how this might work. 
So to conclude: the Big Bang is the zero time limit of the FLRW metric, and it's a time when the spacing between every point in the universe becomes zero and the density goes to infinity. It should be clear that we can't associate the Big Bang with a single spatial point because the distance between all points was zero so the Big Bang happened at all points in space. This is why it's commonly said that the Big Bang happened everywhere. In the discussion above I've several times casually referred to the universe as infinite, but what I really mean is that it can't have an edge. Remember that our going-in assumption is that the universe is homogeneous i.e. it's the same everywhere. If this is true the universe can't have an edge because points at the edge would be different from points away from the edge. A homogenous universe must either be infinite, or it must be closed i.e. have the spatial topology of a 3-sphere. The recent Planck results show the curvature is zero to within experimental error, so if the universe is closed the scale must be far larger than the observable universe. $\endgroup$ 32 90 $\begingroup$ My view is simpler and observational. Observations say that the current state of the observable universe is expanding: i.e. clusters of galaxies are all receding from our galaxy and from each other. The simplest function to fit this observation is a function that describes an explosion in four-dimensional space, which is how the Big Bang came into our world. There are experts on explosive debris that can reconstruct the point where the explosion happened in a three-dimensional explosion. In four dimensions the function that describes the expansion of space also leads to the conclusion that there is a beginning of the universe from which we count the time after the Big Bang. The BB model has survived, modified to fit the observation of homogeneity (quantum fluctuations before 10-32 seconds) and the observation that the expansion we measure seems to be accelerating (the opening of the cone in the picture) Source Note that in the picture the "Big Bang" zero points is "fuzzy". That is because, before 10-32 seconds where it is expected that quantum mechanical effects dominate, there is no definitive theory joining both general relativity and quantum mechanics. There exists an effective quantization of gravity but the theory has not come up with a solid model. Thus extrapolating with a mathematical model -- derived from completely classical equations -- to the region where the "origin" of the universe was where we know a quantum mechanical solution is necessary, is not warranted. Take the example of the potential around a point charge. The classical electrodynamic potential goes as $\frac{1}{r}$, which means that at $r=0$ the potential is infinite. We know though that, at distances smaller than a Fermi, quantum mechanical effects take over: even though the electron is a point charge, no infinities exist. Similarly, one expects that a definitive quantized gravity unified with the other forces model will avoid infinities, justifying the fuzziness at the origin shown in the picture of the BB. In conclusion, in the classical relativistic mechanics' solution of the Big Bang, there was a "beginning point singularity" which as the universe expanded from the four-dimensional explosion, is the ancestor in the timeline of each point in our present-day universe. 
The surface of a balloon analogy is useful: the points of the two-dimensional surface can be extrapolated to an original "point" when the blowing expansion starts, but all points were there at the beginning. The need for a quantum mechanical solution for distances smaller than 10^-32 demanded from the extreme homogeneity of the Cosmic Microwave Background radiation confirms that quantum mechanical effects are needed for the beginning, which will make the beginning fuzzy. Physicists are still working on the quantization of gravity to extrapolate to what "really happened". Addendum by Gerold Broser There are two further illustrations: Nature Research: Box: Timeline of the inflationary Universe , nature.com, April 15, 2009 (source: nature.com) Kavli Institute for Particle Astrophysics and Cosmology (KIPAC): Inflation , Stanford University, July 31, 2012 Edit since a question has been made a duplicate of the above: Was the singularity at the big bang a black hole? [duplicate] Black hole singularities come from the solutions of general relativity, and in general describe very large masses which distort spacetime , and have a horizon, after which nothing comes out and everything ends up on the singularity, the details depending on the metric used . You see above in the history of the universe image that the description in the previous sentence does not fit the universe. Galaxies and clusters of galaxies are receding from each other which led to the Big Bang model, and what more, the expansion is accelerating as seen in the image. So the Big Bang mathematics do not follow the black hole mathematics. $\endgroup$ 15 34 $\begingroup$ The answer is that we don't know. Why? Because the theory of gravity which we have and use, GR, has a singularity. Things which should be finite in a physical theory, like the density, become infinite. And theories with a singularity are simply wrong, they need a modification, and this modification is necessary not only at the singularity itself, but already in some environment of this singularity. Moreover, we already know for independent reasons that a modification is necessary: Because if one looks at times $10^{-44}$ s after the singularity, quantum gravity becomes important, which is an unknown theory. And we have also empirical evidence that the most trivial model based on well-established theories (GR with SM for matter) fails: It is the so-called horizon problem. It requires, for its solution, some accelerated expansion in the very early universe. One can propose models which lead to such an expansion based on particle theory, theories usually named "inflation" (imho very misleading, as I explain here), but they usually use speculative extensions of the SM like GUTs, supersymmetry, strings and so on. So, even the details of a particle theory which would give inflation are unknown. So, while big bang theory is well established, if one thinks about everything having been as dense as inside the Sun, and I would say reliable if as dense as inside a neutron star, we have not much reason to believe that the theories remain applicable for much higher densities, and certainly not for the density becoming infinite. From a purely mathematical point of view one cannot tell anything about the singularity itself too. If one considers, for example, the metric in the most usual FLRW coordinates $ds^2=d\tau^2-a^2(\tau)(dx^2+dy^2+dz^2)$, then the singularity would be a whole $\mathbb{R}^3$. 
The limit of the distance between the points would be zero (which is the reason why one usually prefers the picture with a point singularity). On the other hand, the limit of what one point which moves toward the singularity can causally influence in its future remains (without inflation) a small region, which in no way tends to cover the whole universe, which corresponds much better with a whole $\mathbb{R}^3$ space singularity. $\endgroup$ 0 20 $\begingroup$ In addition to what the others have said, let me explain a simple analogy for the expansion of the universe. Consider a balloon, the surface of which is considered as the universe. Let's draw dots on the balloon which symbolize galaxies. Now, blow the balloon. All the galaxies will start to separate from each other. Now suppose you are on one of the galaxies. You will observe all the galaxies moving away from you, and you would be led to the conclusion that you are on the center of the universe. This is what every galaxy would observe. Thats why there is no center for the universe's expansion. I hope you enjoyed my analogy. $\endgroup$ 3 18 $\begingroup$ The explosion that you have seen is actually 4 dimensional representation of the universe. If we are representing universe in 4D then big bang had happened at a point and is expanding as a hollow sphere. But in 3D the big bang should have happened in every point of the universe and is expanding into every direction. This interpretation is using Friedman model of universe. $\endgroup$ 1 11 $\begingroup$ [Editorial note: This answer was meant to be a comment to @good_ole_ray's comment to John Rennie's answer but the 600 characters comment limit...you know.] Re "galaxies seem to be moving away from a common center" "common center" is more appropriate than one may think at first sight. Sure, it's not that kind of center 99 % of the people understand as such: a single point surrounded by other points with the outermost points in ideally equal distance to the center, i.e. things known as sphere, ball, orb, globe or bowl, hollow or not doesn't matter. The center I'm talking about here is so 'common', in the meaning of 'joint', because all existing points in our universe are this center. It's easier to understand if one imagines the young universe, rather tiny at the beginning. It then looked more like a point as we know it from our day-to-day life. But it evolved, it expanded and it expanded in a way that between any two points (or units of space) another point (or unit of space) arose. Such "pushing" the former two points (or units of space) apart from each other. And this happens since 13.7 bn years, at any point of the universe so that the points that was one once are many now. Or in other words: any of the points is now far, far away from each of the other points that were at the same position once. But they're still the center because they once were the center. This, their, property hasn't changed because they didn't move because of a proper motion but because new space arose between them. And why is this? Because the Big Bang wasn't an explosion in the common sense. Since there was no space into which something could have exploded into. Space, and time, for that matter, only started to exist with the Big Bang. It also happens slowly on a small scale. 
The latest value of the Hubble parameter is $71_{-3.0}^{+2.4} \frac {km}{Mpc \cdot s}$ which is rather small on a small scale (if one considers an AU [~150m km] to be small – but compared to astronomical dimensions that's even tiny anyway): $$ 1 \space Mpc = 3.09 \cdot 10^{22} \space m $$ $$ 1 \space AU = 1.5 \cdot 10^{11} \space m $$ So the (theoretical) increase of the average distance between sun and earth due to the universe's expansion can be calculated to $$ v_{\Delta{AU}} = 3.44 \cdot 10^{-7} \space \frac {m}{s} = 10.86 \space \frac {m}{yr}. $$ But since that happened for such a long time the former small scale became large scale everywhere but in the vicinity of our galaxis (or, to be precise: in the vicinity of any [subjective] observation point in the universe). And, be aware of that this applies only to space itself. It does not mean that the earth drifts actually away from the sun, or that you move away from your beloved ones constantly, and vice versa. Remember, there's gravity, the weakest of the four fundamental interactions according to its factors $$ m_1 \cdot m_2 \cdot \frac {1}{r^2} $$ but the most relentless when it comes to masses. "galaxies seem to be moving away from a common center" also isn't true for all galaxies observed from any observation point. The spectral lines of the Andromeda galaxy, for instance, are blueshifted. That means it is close enough to us that it's proper motion towards us is greater than the drift away from us caused by the universe's expansion: Andromeda ($ 300 ± 4 \frac {km}{s} $) ←---------------------------- ⊙ ␣  ---→ Expansion speed at 2.5m ly, the distance of Andromeda (~$ 54.42 \frac {km}{s} $) Legend:   - ≙ $ 10 \frac {km}{s} $ [Final editorial note: Well, this was a little bit more than 600 characters.] P.S.: @good_ole_ray I hope you have the opportunity to read this before it is flagged as not appropriate, or even worse, because it doesn't really address the original question. $\endgroup$ 6 $\begingroup$ Our best theory for modelling cosmology is GR. Now, the equations of GR support either a bounded or unbounded universe. To decide between them would mean setting certain boundary conditions. Einstein himself originally chose an unbounded, static universe because that he felt reflected the cosmological assumptions at the time: space is infinite and barely changes. To achieve this, in 1917 barely two years after duscovering GR, he introduced a new term into GR, the cosmological constant. This produced a cosmological pressure that counteracted gravity resulting in a static universe. Friedmann, however, in 1922, assuming the homogeneity and isotropy of space, showed that then GR implied that the spatial metric must have constant curvature, and so was either a sphere (the 3d surface of a 4d ball), a hyperbolic space or flat. The latter two spacetimes are unbounded but the first is bounded. He also showed that these spacetimes were dynamic and so either contracting or expanding in time, or some combination of the two and derived an equation for the scale factor. Einstein, however, was unwilling to accept Friedmann's vision of an evolving universe and dismissed his work. Now, in 1912 Vesto Slipher had discovered that the light from galaxies were red-shifted implying that they were all receding from the stand-point of earth and at varying speeds. At that time, they were not known to be galaxies and in fact, the entire universe was thought to just consist of the Milky Way. 
There had been earlier suggestions that the universe may be much larger than supposed, primarily by Kant who published such a supposition in 1755 in his General History of Nature and Theory of the Heavens. It was Hubble, by calibrating distances using Cepheid variables, who showed a decade later that these astronomical bodies were much too far away to be part of the Milky Way and were galaxies in their own right. Suddenly, the universe had grown a great deal larger. And then in 1929, combining his observations with that of Slipher derived what was once called the Hubble law, but is now called the Hubble-Lemaitre law, linking together the distance of a star from earth and the amount of red-shift its light had shifted by. It turned out that Hubble's discovery had already been anticipated by a Belgian priest and theoretical physicist, Lemaitre, two years earlier in 1927 in his paper, A Homogeneous Universe of Constant Mass and Increasing Radius Accounting for the Radial Velocity of Extragalactic Nebulae. In this work, Lemaitre expanded upon Friedmann's cosmology, though his work was done independently, in essence by choosing the expanding spherical Friedmann metric. Einstein, still holding on to his vision of a static universe, also dismissed this work too, saying "your calculations are correct but your physics is atrocious". It was to Lemaitres theory, especially after he also theorised 'a primeval atom' from which the universe sprang from, that Fred Hoyle dismissively called 'the Big Bang Theory', a name that stuck. Now, at the time of the Big Bang, all distances shrink to zero and Lemaitre's spherical universe shrinks to a point, a point of infinite density and temperature. Its easy to see in this picture that tge Big Bang happened everywhere all, all at once simply because everywhere is just a point. Moreover, this also is suggestive of spacetime itself being 'created'. Whilst Lemaitre himself chose a closed universe - the surface of a sphere, Friedmann showed that an open universe was possible, either flat or a hyperbolic hyperboloid. Is a Big Bang here also possible, a time when distances between points approached zero and where density and temperature approached infinity? Well, yes: take an infinite expanse of space with a certain fixed density of mass and halve the distances, then the density will cube. By iteration, we see the density rapidly increase towards infinity. Thus, even in an open universe, where spacetime extends out infinitely, it is possible to have a Big Bang. In this case, it began everywhere, and all at once. But what does this mean for the topology of spacetime? Have we somehow squeezed an infinite spacetime into a point? No. There is a topological property called compactness that doesn't rely on a metric (sometimes called a geometry, because to do geometry requires measuring distances and angles and it is precisely a metric that enables this). A sphere is compact but both hyperbolic hyperboloid and flat space are non-compact. However, at the time of the Big Bang, or to be more precise, as we approach it, the distances between all points approach zero. So geometrically, it looks as though this spacetime approaches a point, but in fact, it does not. No matter how close the points are, if we go out far enough, which we can in an open spacetime, we will find the distances between points become appreciable. It's only at the time of the Big Bang does the metric go to zero and states that all points have zero distance between them. 
And so it is geometrically a point, whilst at the same time being non-compact. This is bizarre. And what it really points to is the likelihood of new physics here. Moreover, we should recall that GR does not deal with degenerate metrics; in fact, metrics by definition are non-degenerate. $\endgroup$ 1
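As a compact restatement of the scaling arithmetic several answers above rely on (nothing here goes beyond what they already state): with the matter density scaling as the inverse cube of the FLRW scale factor, halving all distances raises the density eightfold, and the density diverges as the scale factor goes to zero.

$$ \rho(t) = \frac{\rho_0}{a^3(t)}, \qquad a \to \tfrac{a}{2} \;\Rightarrow\; \rho \to 2^3 \rho = 8\rho, \qquad \rho \to \infty \ \text{as}\ a \to 0 $$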
2
Diodes part 4: Glowing ionic diodes recreated and my ionic thesis research
published: Sunday 27 December 2020 modified: Wednesday 27 January 2021 author: Hales markup: textile In 2019 I did my undergraduate thesis on the home-made ionic diodes that I’ve previously blogged about, with the hope of finding a way that some form of homemade (low-technology) ionic amplifying device could be devised. I didn’t succeed, but I found out about a whole pile of cool stuff. For those not familiar: if you can make certain types of amplifying device (eg inverters) then you are most of the way to making devices that can do any computation. Traditional IC technologies, like the many families of CMOS, are very performative but are a completely unapproachable technology for most students and experimenters ($$$$$+). You simply can’t “make your own IC at home” — the closest I have seen are people making some individual transistors (after lots of effort, investment in chemicals and furnaces). You can also solder off-the-shelf parts onto a PCB to make an “IC”, but this provides minimal educational value for actual IC development and has a wildly different list of tradeoffs. The devices I looked at have absolutely garbage performance (speeds in single-digit Hz at best, lifetime in minutes) but also had some mild promises of achieving transistor-like functionality. I still have hope that someone will devise a method of making home-technology ionic integrated circuits, but I never cracked the grail of transistor-like devices. My full thesis PDF goes into great detail of a variety of topics and experiments. In this blog post I’m just going to summarise some more of the interesting things and show pretty pictures. If any links to PDFs or other resources die (especially the 1920’s magazines): please tell me, I have backup copies of everything. There are lots of errors and informalities in my thesis, my passion isn’t in formal writing and I’m a one man band. Nonetheless I will appreciate it if you take the time to read my stuff and provide discussion, criticism or feedback. Keep in mind that many of my peers did their thesis entirely in simulation, I was lucky enough to be able to negotiate a topic that was much wilder and open-ended. I managed to get the aluminium foil and bicarb soda diodes to glow! Homemade kitchen-ingredient photon throwers, no fancy equipment required. In this first image I have aluminium foil tape glowing inside a pool of electrolyte in a drinking glass. (i) shows with power off, (ii) shows with 100VAC (50Hz) applied: Very interestingly: my whitish glow colour was completely different to all of the literature which claimed it should be green. The full circuit is two back-to-back aluminium electrolyte diodes (see full thesis PDF), with the second electrode not within frame of the photos above. Lifetime was measured in hours if operated correctly, however they would slowly dim. I adapted this from a drinking glass to a planar style with the goal of imitating IC-style construction. Aluminium foil, sticky tape and a clear plastic slide (aka ‘overhead transparency slide’): Note the slightly different colours on the exposed sections of aluminium. These are the oxide/hydroxide layers that grow after you put a drop of electrolyte (water + bicarb soda) on the contacts and slowly apply an AC voltage (starting from 0V, finishing somewhere around 100VAC). This planar style also glowed reliably once I had some parameters down (mainly purer water). Here is a photo taken through a cardboard tube, with a little bit of light leaking in around the edges. 
Again (i) is power off and (ii) is with 100VAC+ applied: I have previously only been able to find a single photo of this phenomena. I was pretty chuffed when it worked, more so when I found it repeatable :) Mentions of this glow are otherwise mostly limited to 1920’s magazines like (page 48 of PDF) Radio News for August 1926 ‘Chemical Condensers of Large Capacity’ by Clyde J Fitch and (page 29 of PDF) QST 1921 September ‘Operating Notes on Electrolytic Rectifiers’ by Roy Atkinson. I made (and destroyed) a lot more of these devices than shown here. They have many limitations: See my full thesis PDF for more chemistry, operations, theory & history of these devices. I was really disappointed by a lot of sources. Not just random websites or magazine articles: published papers didn’t seem to be much better. Ionic devices were mostly abandoned in the mid 1900’s due to the advent of more promising solid-state technologies (like germanium & silicon), so prior material on ionic devices is very sparse. In recent years some more research has picked up, but many published papers and websites that initially show promise end up being useless for my goals, eg: Chapter 5 of my report is dedicated to questioning research and claims around ionic transistors. If you have a good grasp of discrete transistor logic and discrete amplifiers then this may be amusing to read: I go into a lot of detail of what is and isn’t useful in a transistor-like-device if you want to make logic circuits or amplifiers: “This device is likely a transformer” is about as passive-aggressive insulting as I think I’ve ever been to a totally imaginary device. Nonetheless it’s very important to properly contemplate & criticise all of these fictional examples, otherwise you end up accidentally convincing yourself that your have created a useful amplifier (like I did at one point with one of my experiments). I was really eager to try and recreate an experiment by Letaw and Bardeen (minus the massive quantities of mercury, which for some reason my supervisor didn’t want me to have). Sadly in the end I realised that their own data didn’t support their device actually being a useful or functional amplifier in any way (section 5.7.1). I was pretty miffed. It’s always worth considering that your current path isn’t the best path. A lot of traditional transistor designs simply don’t work at a macro-manageable scale, they need micron-thin insulators or active regions. I explore scaling several of these ideas up to macro (‘kitchen-technology’) scale in section 6.2 of my report. Interestingly: ionic transistor research (at the nano scale) has yielded working devices over the past few decades, so there are some less mainstream ideas worth trying to scale up from there too. Chapter 8 of my thesis report outlines some crazier ideas to try. Resonant tunnel-diode logic concepts could potentially make logic circuits out of the ionic diodes I already have (they have some funky behaviour when driven to higher voltages, including some hysteresis). Bubble-control transistors are an idea where gas-bubble positions are abused to modulate fluid conductivity, ie turn the ‘bad’ electrolysis I get on my contacts into something useful): Ideally I would avoid this whole wet-state-electronics concept, but it was the most promising path I had thanks to the working ionic diodes I had previously developed. 
Making a single functional diode was possible with a (relatively unreactive) coin, sticky-tape, aluminium foil, bicarb soda and water: The above shows an over-saturated solution (white). My later tests used much more controlled concentrations, so the fluids appear as transparent water instead: Interdigitated gold electrodes were made on clear slide transparency (ie the clear A4-sized pieces of plastic you print onto for overhead projectors). The fingers are made of gold-leaf foil (very cheap) and double-sided tape. Cheap kapton clone tape was used as an insulator. Ordinary clear sticky-tape works just as well, but I found photo examples were easier to make with the coloured kapton: Note that the two sets of gold contacts shown above are slightly different colours. The photographed specimen had been used under DC conditions, which affected each electrode differently. I didn’t have automated sampling equipment in my primary lab (ie my home), so the actual test jigs looked a lot more primitive then you might expect: I also followed some tangents into making the other components of ICs, such as resistors, using easily available materials and methods that were somewhat analogous to actual IC masking processes. Here is an example using a mixture of tube-silicone (used on gutters & bathrooms) and powdered graphite (used as lock lubricant) to make resistors: These resistors were pressure sensitive, so I imagine they could make interesting sensors. Depending on graphite concentration and track geometry I was able to get anything from hundreds of ohms to megaohms: Mal-forming the oxide/hydroxide layers on aluminium contacts would lead to poisoned contacts: Finally my attempts to recreate the glow were not initially successful. I instead recreated what some of the early 1920’s authors described as a failure mode called ‘fireworks’ (from bad/impure ingredients): A few weeks ago I woke up in the morning and realised with a bit of shock “Hey I never put this stuff on my website, did I?”. Woops. Eventually I might get back to doing some experiments in this area, but for now I’m snowed under with projects and other things I want to write about. Hopefully someone else tries to recreate the glowing effect and shares their results. If someone can achieve a different colour again or success with lower driving voltages then it would give more clues as to the method of light production. I still dream of one day being able to lay down some materials by hand (or similar) on a flat sheet of plastic to create a collection of functioning transistor/amplifier/logic devices. So far I’ve had a lot more success than I expected with my diode steering circuits and I was quite shocked to find non-linear resistors were dead easy to make ; but a ‘transistor-like-device’ is a still a very big step.
7
Amazon to investigate allegations of discrimination following petition
close Amazon on Friday confirmed that it is investigating allegations of discrimination at the company's cloud-computing subsidiary, Amazon Web Services (AWS). The tech giant is hiring an independent, women-owned/led investigative firm to look into concerns from employees working for AWS Professional Services, or ProServe, and will begin an assessment of the team, Amazon confirmed to FOX Business on Friday. It is also launching a series of women’s leadership circles so female employees can share their experiences. The investigation and assessment come after The Washington Post first reported that at least 550 AWS ProServe employees signed an internal petition accusing AWS of having "an underlying culture of systemic discrimination, harassment, bullying and bias against women and under-represented groups." DEMOCRATS BLAST JEFF BEZOS' SPACE TRIP, DEMAND HE PAY ‘FAIR SHARE’ OF TAXES The petition also alleges that the company's efforts to investigate discrimination claims are "not fair, objective or transparent," according to the newspaper, which is owned by Amazon founder Jeff Bezos. Amazon shared an email that AWS CEO Adam Selipsky and Amazon CEO Andy Jassy sent to the petition's authors. "I share your passion for ensuring that our workplace is inclusive and free of bias and unfair treatment," the email reads. "I can tell you we are committed to that outcome, as well as to specifically investigating any incident or practice that is inappropriate. I understand you are aware that, given the nature of the concerns here, we have retained an outside firm to investigate and understand any inappropriate conduct that you or others may have experienced or witnessed." FEDS SUE AMAZON TO ‘FORCE’ RECALL OF HAZARDOUS PRODUCTS; AMAZON SAYS RECALL WAS ALREADY DONE Selipsky and Jassy added that the investigative firm is "experienced and objective," and that Selipsky would personally review their findings. Former AWS ProServe employee Cindy Warner published an open letter to Selipsky and Jassy on Friday, saying she had been terminated from her role on June 30 after filing a lawsuit against the cloud-computing company. Warner mentioned the petition in her letter and said she is "honored and humbled that the petition identifies [her] as one of the people whose reports and experiences spurred [petitioners] to call for change in ProServe and AWS." GET FOX BUSINESS ON THE GO BY CLICKING HERE "It quickly became clear, as explained in my complaint, that our unit had a major sexism problem," Warner wrote in her letter. "Female team members would approach me on a near-daily basis asking for guidance on how to handle persistent discrimination and bullying by male managers and coworkers. The only other female L8-level employee at the time regularly came to me in tears, searching for ways to stop the abuse by her male colleagues." The former AWS employee, who started working for the ProServe team in February of 2020, specifically named her "immediate boss, Todd Weatherby…and other white male leaders in ProServe" who she says were "effectively ignoring persistent issues of harassment and discrimination while putting up inclusive workplace window dressing." CLICK HERE TO READ MORE ON FOX BUSINESS An Amazon spokesperson said the company "conducted a thorough investigation into Ms. Warner’s complaints as soon as she made them." "To date, we have found her allegations to be unsubstantiated," the spokesperson said. 
"Amazon does not tolerate discrimination or harassment in any form and any situation, and employees are encouraged to raise concerns to any member of management or through an anonymous ethics hotline with no risk of retaliation. When an incident is reported, we investigate and take proportionate action, up to and including termination."
3
How to upgrade Linux to FreeBSD remotely via SSH
How to upgrade Linux to FreeBSD remotely via SSH? The answer is simple: download and use my script. Time runs fast, MBR evolved to GPT and BIOS to UEFI, and I followed in the footsteps of the good old Depenguinator 2.0 and 3.0 to bring some Christmas magic to the end of 2021. This is a very early draft, the script's abilities are very limited, and there are a lot of things to implement and many bugs to fix, but it runs fine on my laptop and successfully installs FreeBSD 13.0 over the default CentOS Linux 8 on a VirtualBox machine.

Notes

The script is extremely dangerous: it can easily ruin your OS, data and life. Do not run it in production or on any system that has value. You have been warned!
A network connection is required: linux2free.sh downloads files from the Internet.
Currently the linux2free.sh script supports UEFI boot only. Sorry, no MBR scheme; perhaps someday I'll add it (or not).
Only Red Hat-based Linux distributions are supported.
The resulting FreeBSD system is very minimalistic. It uses simple custom startup scripts to bring up network interfaces and start sshd; you have to configure the system and install additional packages yourself.
linux2free creates a boot EFI partition and a small ZFS filesystem for FreeBSD to start up from, but the rest of the space formerly used by Linux has to be redistributed manually.

Installation

sudo dnf upgrade -y
reboot
wget https://raw.githubusercontent.com/mezantrop/linux2free/master/linux2free.sh && sudo sh linux2free.sh

TODO

Allow root to ssh in remotely
Set default router
Support more Linux distributions
Make the code better (oh, there are plenty of things to do! See the TODO remarks over the script body)

History

2021.12.25 v0.1 Mikhail Zakharov Initial version
2021.12.26 v0.2 Mikhail Zakharov SSH root login, default route, resolver

Don't hesitate to enhance, report bugs or call me, Mikhail Zakharov zmey20000@yahoo.com
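Because the script supports UEFI boot only and targets Red Hat-based distributions, a small pre-flight check on the remote box is worth running first. Here is a minimal sketch in Python (it is not part of linux2free.sh); it only relies on two standard Linux facts: /sys/firmware/efi exists only when the kernel booted via UEFI, and Red Hat-based distributions ship an /etc/redhat-release file.

import os
import sys

def booted_via_uefi() -> bool:
    # /sys/firmware/efi is populated only when the running kernel booted via UEFI.
    return os.path.isdir("/sys/firmware/efi")

def is_redhat_based() -> bool:
    # RHEL, CentOS, Fedora and friends all ship this file.
    return os.path.isfile("/etc/redhat-release")

if __name__ == "__main__":
    problems = []
    if not booted_via_uefi():
        problems.append("not booted via UEFI (MBR/BIOS boot is unsupported)")
    if not is_redhat_based():
        problems.append("no /etc/redhat-release (only Red Hat-based distributions are supported)")
    if problems:
        sys.exit("Do NOT run linux2free.sh: " + "; ".join(problems))
    print("Basic prerequisites look OK; remember the script will destroy the existing Linux system.")

If either check fails, stop there; nothing in the sketch makes the conversion itself any safer.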
2
Scientists locate likely origin for the dinosaur-killing asteroid
An artist's illustration of the asteroid impact that killed the dinosaurs. (Image credit: Esteban De Armas/Shutterstock)

Sixty-six million years ago, a mountain-size asteroid slammed into Earth just off the coast of Mexico's Yucatán Peninsula, dooming the dinosaurs and leading to their extinction. The collision was cataclysmic, triggering tsunamis that swamped vast swaths of coastline and firestorms that may have raged across the entire globe. The impact also blasted huge amounts of dust and vaporized rock into the air, which, along with the soot from all those fires, blocked the sun for long stretches. To make matters worse, much of that vaporized rock was rich in sulfur, generating sulfuric acid aerosols that rained back down on Earth and acidified the oceans.

These and other associated haymakers wiped out three-quarters of all species on Earth, including the non-avian dinosaurs, in an event known as the K-T mass extinction. (Extensive volcanism in the Deccan Traps region of Western India and long-term climate change likely contributed to the carnage, experts say, but the cosmic impact was the killing blow.)

The asteroid that did all this damage is long gone, almost entirely destroyed in its kamikaze strike. But scientists have been able to take its measure to some extent over the past few decades. Here's what we know about the famous dino-killer.

Dino-killing asteroid: Was it an asteroid or comet?

A team led by Nobel Prize-winning physicist Luis Alvarez proposed the death-from-above hypothesis for the K-T extinction in 1980, after noticing that 66-million-year-old clays around the globe sport far more of the rare metal iridium than the layers above and below them. The iridium was likely delivered by an impactor that hit Earth at that time, the scientists reasoned. A decade later, another research team identified the place where the space rock hit, spotting a buried crater about 90 miles (150 kilometers) wide straddling the Yucatán Peninsula and the Caribbean Sea. Further investigation revealed that the crater, named Chicxulub, formed 66 million years ago, and that iridium levels seem to decrease at greater distances from its center.

What type of object slammed into Earth that fateful day 66 million years ago? Alvarez and his team suspected it was an asteroid, and that remains the general consensus. It's not unanimous, however; some researchers think a comet blasted out Chicxulub Crater. In a February 2021 paper, for example, Harvard astrophysicists Amir Siraj and Avi Loeb argued that comets are the best match with the geochemical evidence, which indicates that the impactor had a carbonaceous chondrite composition. (Carbonaceous chondrites are a relatively rare type of dark, primitive meteorite containing high levels of carbon and minerals that were altered by water, among other characteristics.)

Siraj and Loeb also calculated that about one-fifth of all long-period comets — icy wanderers that take more than 200 years to complete one orbit — break apart when they pass close to the sun, generating lots of fragments. "This population could increase the impact rate of long-period comets capable of producing Chicxulub impact events by an order of magnitude," Siraj and Loeb wrote in the study, which was published in the journal Scientific Reports.
"This new rate would be consistent with the age of the Chicxulub impact crater, thereby providing a satisfactory explanation for the origin of the impactor." Their paper spawned a response by Arizona State University researcher Steve Desch and colleagues. Desch and his team pushed back against Siraj and Loeb's geochemical arguments, saying that comets match only with a certain class of carbonaceous chondrite known as CI — and that's not the fingerprint the impactor left behind. "In fact, careful consideration of the geochemical evidence strongly favors a CM or CR carbonaceous chondrite and rules out a cometary impactor," Desch et al. wrote in their paper, which was published in June 2021 in the journal Astronomy & Geophysics. Siraj and Loeb have written a response to that response, contributing to a debate that has been going on for 40 years and may continue for decades into the future. That being said, Siraj and Loeb are in the minority here. "The broad consensus is in favor of an asteroid impactor," Desch and his team wrote in their paper. Related: Photos of asteroids in deep space Dino-killing asteroid: How big was it? This NASA graphic shows the location of the Chicxulub crater from the asteroid impact that doomed the dinosaurs. (Image credit: NASA/JPL-Caltech/David Fuchs/Wikimedia Commons) The dimensions of Chicxulub Crater — about 90 miles wide by 12 miles (20 km) deep — give us a rough idea of the impactor's size. For example, Siraj and Loeb calculated that the incoming object was likely about 4.3 miles (7 km) wide. But that's probably not the number you're most familiar with, because it assumes the impactor was a piece of a long-period comet. Asteroids in the main belt between Mars and Jupiter orbit more slowly than long-period comets, so they must be bigger to gouge out a hole in the Earth the size of Chicxulub. Folks in the asteroid camp think the impactor was about 6.2 miles (10 km) in diameter. Asteroid or comet fragment, the space rock was big enough to spur one of Earth's six known mass extinctions. (Humanity is currently living through, and is primarily responsible for, the sixth one.) Dino-killing asteroid: Where did it come from? The Chicxulub asteroid orbited between Mars and Jupiter before hitting Earth. (Image credit: NASA/McREL) Related stories: — Asteroid impact, not volcanic activity, killed the dinosaurs, study finds — Fiery meteor that doomed the dinosaurs struck at 'deadliest possible' angle — How did birds survive the dinosaur-killing asteroid? In 2007, a team of scientists led by Bill Bottke of the Southwest Research Institute (SwRI) in Colorado proposed an origin story for the dino-killing asteroid — a massive collision in the innermost portion of the main asteroid belt about 160 million years ago, which broke apart a 106-mile-wide (170 km) space rock called Baptistina. "Fragments produced by the collision were slowly delivered by dynamical processes to orbits where they could strike the terrestrial planets," the researchers wrote in the 2007 study, which was published in the journal Nature. "We find that this asteroid shower is the most likely source (>90% probability) of the Chicxulub impactor that produced the Cretaceous-Tertiary (K-T) mass extinction event 65 [million years] ago." (The date of the impact and extinction has recently been revised, to 66 million years ago.) 
In 2011, however, observations by NASA's Wide-field Infrared Survey Explorer spacecraft more or less ruled out the "Baptistina" hypothesis by showing that the in-space collision likely occurred just 80 million years ago or so. That didn't give the newly spawned fragments enough time to move into a position from which one of them could be gravitationally nudged onto an Earth-crossing trajectory, researchers said.

We still don't know exactly where the dino-killing asteroid came from; its parent body remains a mystery. But a 2021 study narrows its native neighborhood down a bit. In that paper, SwRI scientists David Nesvorný, Bottke and Simone Marchi used computer models to better understand the asteroid population and powerful impacts like the one that caused the K-T mass extinction. They determined that the dinosaur killer was an asteroid that formerly resided in the main belt, likely in its outer portions.

Nesvorný and his team also calculated that about half of all Earth-impacting objects more than 3 miles (5 km) wide are dark carbonaceous asteroids. And they found that objects at least 6 miles wide probably hit our planet just once every 250 million to 500 million years. So — fingers crossed! — another one shouldn't head our way for a long time.

Mike Wall is the author of "Out There" (Grand Central Publishing, 2018; illustrated by Karl Tate), a book about the search for alien life.
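A minimal numerical sketch of the comet-versus-asteroid size argument above, using assumed illustrative values only: a rocky asteroid of roughly 2,600 kg per cubic meter arriving at about 20 km/s, versus a long-period comet of roughly 600 kg per cubic meter arriving at about 60 km/s. None of these figures are measured properties of the Chicxulub impactor; they are typical textbook numbers. Equating the kinetic energy of a 10 km asteroid with that of the faster, less dense comet gives a comet diameter of roughly 8 km, in the same ballpark as the ~7 km estimate from the comet camp.

import math

def diameter_for_energy(energy_j, density_kg_m3, speed_m_s):
    # Invert E = 1/2 * m * v^2 with m = (pi/6) * rho * d^3 to solve for diameter d.
    return (12.0 * energy_j / (math.pi * density_kg_m3 * speed_m_s ** 2)) ** (1.0 / 3.0)

# Reference case: a 10 km rocky asteroid (assumed ~2600 kg/m^3) arriving at ~20 km/s.
asteroid_d, asteroid_rho, asteroid_v = 10_000.0, 2600.0, 20_000.0
impact_energy = 0.5 * (math.pi / 6.0) * asteroid_rho * asteroid_d ** 3 * asteroid_v ** 2

# Same impact energy delivered by a low-density long-period comet (assumed ~600 kg/m^3) at ~60 km/s.
comet_d = diameter_for_energy(impact_energy, density_kg_m3=600.0, speed_m_s=60_000.0)
print(f"Equivalent comet diameter: {comet_d / 1000:.1f} km")  # prints roughly 7.8 km

The scaling is d proportional to (rho * v^2)^(-1/3), so the comet's factor-of-three speed advantage more than offsets its lower density, and it can be noticeably smaller while delivering the same energy.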