2,900 | Challenges for Educators in 2021
Photo by Ketut Subiyanto from Pexels
Technology is making it easier for students to learn.
Edtech will be the most widely used in class.
Online classes will be more common, and the students will be able to learn at a pace that works for them.
There are always challenges, but that’s what makes what we do so rewarding!
The Biggest Challenges
I think one of the biggest challenges is making sure that students are getting everything they need out of an experience so they can succeed later down the road when they’re going into college or careers or whatever they decide to do after high school. Another challenge is making sure all students have access to opportunities, especially those who come from underprivileged backgrounds and might not otherwise have access to these opportunities. That’s why giving back and making sure all kids get a chance at success is important to me because it has changed my life for the better.
My Vision for the Future
I want every student to have the same opportunity that I had and to be able to go to a great school that gives them not only a great education but also has their best interest at heart. It’s important that we are all striving for the same goal: for students to be successful in life. I want every child to have access to all of the opportunities that they need to succeed in life.
My Advice to Educators?
Don’t ever stop learning. There is always something new going on in education, and there are always new ways to use technology to make learning better and more effective. Learning how to use technology well is important because it will not only help you teach but also help you connect with students in a different way so they’re more engaged and motivated.
About the Author
I am an educator with over 3 years of experience in product management, technology leadership, startups, angel investing and EdTech. I'm the Founder of Cudy Technologies (www.cudy.co), an EdTech startup helping teachers teach better and students learn better using videos and interactive activities. If you are a teacher or student, sign up for free at https://cudy.co/sg/register to start teaching and learning better today!
You can connect with me on Linkedin (https://www.linkedin.com/in/alexanderlhk) and let me know that you are a reader of my Medium posts in your invitation message. | https://medium.com/@alexanderlhk/challenges-that-face-educators-in-2021-9cde4c98f7ef | ['Alexander Lim'] | 2020-12-22 23:24:35.734000+00:00 | ['Edtech News', '2021 Trends', 'Education Technology', 'Education', 'Edtech'] |
2,901 | Data Journalism Crash Course #2: Open Data

With the evolution of access to public data and the greater transparency of governments, media professionals today are experiencing an important change: they have moved from an environment with a certain scarcity of public data to one where, on the Internet, there is far more data than they could analyze by themselves. Every day, more data are opened and collected by movements of think tanks and academics, civic entrepreneurs, or even obtained by reporters through requests under Access to Information Laws (although not always without a battle). Currently, it is even possible to monitor responses to requests, either through social initiatives or through official search engines provided by some governments under national open data policies, some of which maintain a special panel for monitoring compliance with the law by federal agencies.
In this scenario, data journalism has become stronger than ever and is the trend for the future. At a time of polarization, when the press faces distrust, this method of producing news gains even more importance: it finds exclusive facts and keeps alive the press's role as 'guardian of society'. It can light the way for us, bringing evidence to light in an accessible way and contributing to a greater and more qualified understanding of reality through an approach closer to science and further from the spectrum of opinion and the dispute of narratives.
Data are considered Open Data when anyone can freely access, use, modify, and share them for any purpose, subject to, at most, requirements aimed at preserving their origin and openness.
This is usually satisfied by publishing the data in an open format and under an open license.
Open data are also guided by the three laws and eight principles.
The three laws
The so-called three “laws” of open data are not laws in the literal sense, enacted by any state. In short, they are a set of tests to assess whether data can be considered open. They were proposed by David Eaves, a public policy expert, open data activist, and public policy speaker at the Harvard Kennedy School of Government. The laws are:
If the data cannot be found and indexed on the Web, it does not exist;
If it is not open and available in a machine-understandable format, it cannot be reused; and
If a legal device does not allow its replication, it is not useful.
The laws were proposed for open government data, but it can be said that they apply to open data in general, even outside government environments. For example, in private companies, civil society organizations, and international bodies. The World Bank, for example, makes open data available.
Did you know?
Data can also be opened voluntarily by private organizations, for several reasons. In recent years, experts have discussed the opening of data by the private sector for actions that benefit the public interest, the so-called “data collaboratives”.
The eight principles
In 2007, a 30-person working group met in California to define the principles of open government data. They reached a consensus on the following 8 principles:
Complete: All public data must be available. Data is electronically recorded information, including, but not limited to, documents, databases, transcripts, and audiovisual recordings. Public data is data that is not subject to valid privacy, security, or access control limitations, as regulated by statutes.
Primary: The data must be published in the form collected at the source, with the finest possible granularity, and not in an aggregated or transformed form.
Current: Data is made available as quickly as necessary to preserve its value.
Accessible: The data is made available to the widest possible audience and for the most varied purposes possible.
Machine processable: The data is reasonably structured to allow automated processing.
Non-discriminatory access: The data is available to everyone, without the need for identification or registration.
Non-proprietary formats: The data is available in a format over which no entity has exclusive control.
Free licenses: The data is not subject to restrictions by copyright, trademark, patent, or trade secret regulations. Reasonable restrictions on privacy, security, and access control may be permitted in the manner regulated by statutes.
Also, the Californian group stated that compliance with these principles needs to be verifiable and a person should be designated as the responsible contact for the data.
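To see what "machine processable" and "non-proprietary formats" mean in practice, here is a small Python sketch that pulls a public indicator from the World Bank's open data service mentioned in the reading list below. The endpoint pattern follows the World Bank's documented v2 API; treat the specific indicator code and field names as illustrative.

```python
# Fetch a machine-processable open dataset: total population (indicator
# SP.POP.TOTL) for all countries in 2019, as JSON from the World Bank API.
import json
import urllib.request

URL = ("https://api.worldbank.org/v2/country/all/indicator/SP.POP.TOTL"
       "?format=json&date=2019&per_page=300")

with urllib.request.urlopen(URL) as resp:
    # The response is a two-element JSON array: [paging metadata, records].
    metadata, records = json.load(resp)

for row in records[:5]:
    if row["value"] is not None:
        print(row["country"]["value"], row["value"])
```

Because the data comes back in an open, structured format over a documented interface, anyone can repeat, extend, or audit this query, which is exactly what the principles above are meant to guarantee.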
What governments do not do when it comes to open data, communities, civil society organizations and the media do.
Disseminating information
The idea of disclosing public servants’ salaries is relatively new. In the United States, during 2008, several groups of civil servants and unions protested when The Sacramento Bee newspaper decided to publish this information, revealing the wages of 460,309 employees, including teachers, police officers, and firefighters.
Until then, this disclosure was seen as a security risk and an invasion of privacy. Although some governments are reluctant to approach the problem differently, both small towns and national governments understand the value of opening up information to society and have taken enormous steps, making as much information available to citizens as possible, which helps to build trust between taxpayers and government officials.
In many parts of the world, governments on different continents are adopting the principles invoked by the international Open Government Partnership. It is a global effort: the world leaders who support this initiative are well aware that people are demanding more transparency and more participation in public affairs, and are therefore designing tools to make their management more transparent and effective.
Argentina, despite having asked to be part of this Alliance, was not accepted, because all efforts to comply with a Law on Access to Public Information have failed. Making this legal instrument available is a condition for joining OGP.
On this page: http://www.opengovpartnership.org/members it is possible to see which countries are part of the Partnership and which have expressed a desire to join it, as of the date of access.
IF YOU WANT TO KNOW MORE:
Charalabidis, Y., Zuiderwijk, A., Alexopoulos, C., Janssen, M., Lampoltshammer, Th.J., Ferro, E. The World of Open Data: Concepts, Methods, Tools and Experiences. Springer International Publishing. 2018.
Davies, Tim / Walker, Stephen B. / Rubinstein, Mor / Perini, Fernando (Editors). The State of Open Data: Histories and Horizons. African Minds. 2019. https://www.idrc.ca/en/book/state-open-data-histories-and-horizons
Data.gov. https://www.data.gov/
European Union Open Data Portal. https://data.europa.eu/euodp/en/home
Open Data Handbook: Guides, case studies and resources for government & civil society on the “what, why & how” of open data. https://opendatahandbook.org/
Open Government Partnership. https://www.opengovpartnership.org
The Global Open Data Index. https://index.okfn.org/
The Open Data Barometer. https://opendatabarometer.org
World Bank Open Data from The World Bank: Data. https://data.worldbank.org/
| https://medium.datadriveninvestor.com/data-journalism-crash-course-2-open-data-f02c2a9108d6 | ['Deborah M.'] | 2020-10-31 19:03:49.037000+00:00 | ['Open Data', 'Data Science', 'Journalism', 'Data', 'Technology'] |
2,902 | App Library V2.0 Is Live!

We're so excited to announce that the App Library V2.0 beta is live! It's been an amazing journey for us to get here over the past few years, and we're very happy to finally be launching our App Library and seeing our dreams for a Dapp marketplace come true. Thank you to all our Morpheus Labs community, friends and supporters for your encouragement and feedback every step of the way; we couldn't have gotten here without you.
Overview of Morpheus Labs SEED
Together we turn ideas into opportunities
Consolidating multiple blockchain technologies and experiment environments onto one platform minimises the switching costs associated with various platforms, applications, and providers. Morpheus Labs SEED provides corporates, financial institutions and government entities easy access to a platform that enables them to develop, test and manage blockchain applications using different blockchain protocols supported by the platform to achieve rapid prototyping, cost efficiency and a fail-safe environment. The platform supports distributed hosting for blockchain network nodes and off-chain applications while providing a centralised platform service for developing, managing and regulating blockchain networks. Ultimately, SEED empowers anyone to partake in this revolutionary technology.
Morpheus Labs SEED’s AppLibrary is a crowdsourced innovation marketplace that caters to the needs of businesses and individuals alike who can source for ideas and applications on the blockchain. A review system is also in place where developers, midsize and large companies can publish their applications for other users to download and customize them for their use. With the ability to curate industrial solutions on the platform, clients cut short a lot of time to find a suitable use case to customise a solution for their businesses.
The Morpheus Labs Team has already partnered with a few blockchain providers and will work with interested partners to handpick and list comprehensive blockchain-based solutions in our App Library over the next 3–6 months, with the aim of helping enterprises bridge the gap to implementing blockchain solutions in their businesses from beginning to end: to innovate, build, test, market, and distribute your Dapps and grow businesses.
V2.0 Beta Launch Features
Users can download and deploy dApps not only in a workspace (development and customization use) but also in runtime (production use)
Automatic deployment of blockchain networks and dynamic connection of dApps to the underlying network
Get access to promote any dApp in AppLibrary and create deployments using Use Case templates. Stay tuned: the Solution Centre will be revealed soon!
Types of application to publish in our App Library
The following are the steps to publishing dApps in our App Library, depending on which of the three types of application a user wants to publish: Source Code, Compiled Code and External.
1. Source code: Suitable for development and testing versions (aka dev)
This type includes all types of workspaces where the source code will be used to run the application. These applications are typically those you may use as proof of concept, samples and those developed by you. Typically, these applications will start in dev mode.
2. Compiled code: Suitable for staging and production versions (aka production)
“Compiling” is more specific, and almost invariably refers to a process that takes source code as its input, and outputs something runnable, typically machine code for either a physical or virtual machine
This type includes all types of applications typically built for production. These applications are typically those you may let other users download and use without disclosing the source code. Typically these applications will start in production mode. Vendors can publish applications in the "compiled" deployment type, for example to advertise, or to offer freemium or for-purchase apps.
3. External: Suitable for featuring your application with AppLibrary
This type includes all types of applications running outside of the Morpheus Labs SEED platform. The benefit of publishing external applications in AppLibrary is directly related to how good the application is at attracting more ML SEED users to use it and extend it. Cross-referral and marketing, as well as showcasing the top-ranked app, are applicable for external applications.
Get everything you need for your blockchain-based applications before they become available to customers. Morpheus Labs SEED gives you all the tools, resources and support you need.
Want to join the beta? Just visit https://docs.morpheuslabs.io/ today to get started right now, or contact us to learn more about Morpheus Labs SEED. | https://medium.com/morpheus-labs/app-library-v2-0-is-now-live-199958723f5 | ['Morpheus Labs Team'] | 2020-11-26 09:59:33.236000+00:00 | ['Dapps', 'Blockchain', 'Defi', 'Technology'] |
2,903 | When Will We See The Flying Cars?

I am not a car-savvy person, but I'm a big fan of German cars, and every time I have to drive one of those, it's a heavenly experience. On the other end, American cars feel like they were designed in the 1980s and are not due for an update until the 2030s. The famous proverb for the luxury German car brand is absolutely true:
“There is no substitute for a Porsche — except another Porsche”
The original German car models’ seemingly timeless design remains immediately recognizable; only connoisseurs can spot the subtle differences.
P.S: I drive a 2011 Volkswagen Polo Trendline and I couldn’t be happier with that beauty.
Transformers and Need for Speed Era
I grew up playing ‘Need for Speed’ and watching ‘Transformers’ cartoons. ‘NFS Most Wanted’ is my all-time favourite amongst the entire series. It has the perfect mix of vehicles, and it’s the first game with cops, and I loved that chase. I’ve put so many hours into that game, trying to make my way up “The Blacklist 15,” escaping from cops and racking up bounty from every pursuit I get into.
Bumble Bee, a cute old yellow 1967 Volkswagen Beetle from Transformers, was my childhood love — a scout later turned into a warrior and later on a police officer after the war against Decepticons.
Photo by Arseny Togulev on Unsplash
When I was in school, I always wondered if it would be possible to have cars that will have their own brains and transform into a superhero when something bad happens on this earth? For me, an alien invasion was the maximum I could have thought about, and I really believed that Decepticons exist in the real world, and my neighbour's uncle’s car is one of them. But as I grew up, my perception changed.
The Inspiring Hooptie Movies
In college, movies like Fast and Furious and the James Bond series introduced me to an entirely different world of ‘cars and technology,’ which changed my thinking process. Suddenly, cartoons and video games started making sense. To explore more, I started reading about the super-advanced engines, sensors and compared them with everything I saw in those movies.
I noticed that some great minds and companies are already working on these revolutionary products and are on the verge of massive automotive technology breakthroughs.
Photo by Henrik Hjortshøj on Unsplash
In the same year, I got a chance to work in my college team called "Fateh" (Fateh is a Hindi word which means "Victory"), where we designed Formula racing cars, and I got hands-on experience driving that 'beast' which weighed less than 500 kgs and could reach 100 kph in under four seconds. It felt like I was playing 'NFS' with no cops (good for me, though), and it was a surreal experience.
Technological advancement in the automotive sector has not been limited to designing super-fast engines or luxurious cars. Nowadays, autonomous vehicles are a hot topic among automobile makers, and you can see new models better than the previous ones at 'Auto Expo' events across the globe. Various car manufacturers like Tesla and tech giants like Google are in this race. They are trying to develop fully autonomous self-driving cars that will assume all driving functions from the driver.
Rise of AI and IoT in Automotive Sector
Even though the developments have been made in this area, a fully independent vehicle is still to be developed, and if I can get a chance to drive an ‘NFS car’ in real life, I am sure that this milestone is not too far. Along with different proximity sensors and cameras, cars are now integrated with IoT systems to reduce human error and make driving more comfortable and safer.
Photo by Timur M on Unsplash
Introducing IoT technology is the 'cherry on the cake': it has given rise to semi-autonomous cars that partly control vehicle operations to avoid accidents and reduce the driver's load. Will you trust a driverless car? Well, in my opinion, they might lack moral judgment during accidents, but there are many other optimistic perspectives from which to investigate it.
For example, it can disrupt the public transportation system, as it will eliminate complicated and expensive operational steps like background checks of new cab drivers and daily drug testing, which will definitely reduce the harassment cases reported against cab companies that are all over the news nowadays.
In my 5 years of professional experience, I was fortunate enough to work for automotive companies like Autoninja and Cardekho, where I witnessed the shocking and unprecedented changing trends in the automobile industry, which is now one of the fastest-growing markets for IoT-based solutions.
As per reports, by 2020, more than 250 million cars are expected to be connected, highlighting the impacts of IoT in the automotive industry. Drivers worldwide can expect their vehicles to become ‘smartphones on wheels,’ as the Internet of Things is already proving car connectivity to be the most promising futuristic technology. That day is not too far when we’ll see flying cars over our homes, just like Star Wars and drive like a Mandalorian. | https://medium.com/@simranjotsinghsran/the-flying-cars-c89280ba8c51 | ['Simranjot Singh'] | 2020-12-17 09:28:55.252000+00:00 | ['Cars', 'Autonomous Cars', 'Technology', 'Self Driving Cars', 'Flying Cars'] |
2,904 | Covid-19 is a personal boon!

Alright, alright! Let me clarify myself before you call me selfish, ego-centric and what not!
Hear me first please!
The world is in a sorry state because of the CoronaVirus pandemic, affecting hundreds of countries, millions of lives and billions of dollars in commerce. In my country, daily wagers are travelling thousands of miles by foot to reach back to their native so at least they will have the bare minimum to survive. So yes! Covid-19 is a global curse and every country in the world is pushing itself to keep the damage to the minimum till vaccines become available.
But I prefer to look on the positive side of the story. A little introduction about my work: I am a full-stack developer with 5+ years of experience, working across technologies to bring the best web and mobile experiences to your device! Great, right? The problem: I live in one of the most densely populated cities in the world, the dream city of Mumbai. Born and brought up here, I always knew the city inside out to its stretches, and currently my home and workplace are at the extreme points of the city.
This means I travel 3 hrs both ways in packed trains morning and evening, draining most of my energy during the time. It is so bad that sometimes I do not even get to open my bag and check my cellphone. Oh! Did I also mention cracking my Macbook Air screen while getting into the crowded train last year? It is that bad.
Exactly Like this!
Because of this daily routine of long-distance travel, I am never a 100% productive at work (obviously) and have almost no energy returning home in the evening. Don’t get me wrong, my workplace and family are completely happy with this little superman effort that I put in every day, but it felt incomplete.
I have been following the rise of Covid-19 since last December and even after early and good preventive measures, it has managed to reach almost all known countries and in my opinion for a very long period (until, of course, the magic vaccine arrives).
Since the mid of March, most of the city is under complete lockdown which means that an advisory was circulated that everyone needs to WFH (Work From Home) until further notice. Many companies started firing employees left, right and centre. I was fortunate not to be a part of that list.
Quickly enough, I realized that I would not be travelling 3 hours a day every day! See, a personal boon! I immediately started setting out new schedules for myself. The outside world was deserted, I call it a silent apocalypse. But inside of me, there was a new little excitement about happily missing the 9:14 am jam-packed local train.
I felt so much weight lifted off my shoulders, just by the thought of not having to go through that tiring routine again. At home, I knew there would be many distractions and my productivity would never be able to match the one in office. At least, I thought I knew. And I had a really good reason for it. Lovely parents, a caring wife, and a 2-year-old always wanting to cling on to my calls and laptop are certainly a blessing, but also a distraction.
I quickly understood that I need to set apart some 'ME' time to be undistracted. 4 AM is that time for me. I find great peace being an early bird and getting most of my chores done before everyone is even awake. I compensate by going back to sleep around 10 in the night.
Early morning, I easily get 3 hours of that undistracted time in which I can complete the majority of my office work. The only points that I leave are the time for meetings, discussion with my coworkers, and other miscellaneous work-related activities.
Now, now! I have almost a full day for myself to do all the things I was dying to do when I had no time! See, Covid-19 is a personal boon! I don't find my baby a distraction anymore. He is the one whom I can play with whenever I want. Hell! I have even started tutoring him at home since all playschools are shut as well. I have time to listen to my parents' stories while watching all the albums, to pray my Namaaz 5 times a day, to cook with my wife, and to repair all the kinds of stuff in the house that have needed my attention for years!
It is an obvious point that we all want this pandemic to diminish ASAP, but a little part of me wants this WFH to continue forever! Impossible, but I am living my dream until it lasts! | https://medium.com/@arbaazdossani/covid-19-is-a-personal-boon-177b06345389 | ['Arbaaz Dossani'] | 2020-05-26 09:58:15.118000+00:00 | ['Lockdown', 'Time', 'Co Vid', 'Productivity', 'Technology'] |
2,905 | We Love Spatial on iOS and Android, Now Billions of You Can Too

Spatial's iOS app in use
This week Spatial launches our mobile application on iOS and Android. This release marks a huge moment for Spatial as well as for our users. As a platform that prides ourselves on ease of use & accessibility, we feel compelled to provide a way to participate meaningfully with the most ubiquitous devices: smartphones and tablets.
The app works in both AR and VR — you can toggle between both fully interactive experiences. Spatial is cross-platform, a buzzword often mentioned that is hugely significant to what makes Spatial unique at its core. The ability to participate in the same meeting regardless of what device you are on or have access to makes a huge impact for driving adoption.
Tapping the top right corner of the mobile app can transition you from being able to see the virtual environment, to being able to walk around your physical space with all of the materials that populate the virtual world.
Mobile offers our users the ability to see and utilize Spatial from a new and unique perspective. Unlike our web app, users who join on mobile show up as their full avatars in the space. They can manipulate content, add notes, and even take photos that are sent directly into the space while the app is in use. Being able to utilize all of Spatial's features from a phone allows companies to include all of their clients and internal colleagues in meetings that utilize cutting-edge technology. Often, AR/VR trials die due to a lack of available headsets. Spatial of course offers a useful meeting platform for the times, which now requires only a totally reasonable ask of colleagues at any level: please download this app.
2020 has been a huge year for Spatial; it is hard to believe, but we only launched on the Quest, our 1st VR platform, this year. Since then we have seen an overwhelming amount of user growth. Of course the pandemic played no small role in increasing the need for remote work solutions. In May we launched publicly for free after a 1,000% increase in demand due to Covid. We've seen a 130X increase in daily active users since then. Now, almost a year into the pandemic and our VR support, one can lie on their couch without a headset on at all, staring at their phone (a familiar position for most of us), and join a virtual reality meeting as a photorealistic avatar of themselves, moving around with slight finger motions.
Not only is the mobile app easy to use, it also is incredibly high resolution! For example, on iPhones and iPads you can see every detail of the experience in AR or VR with magnificent clarity. Our users at DS Smith work on packaging design in Spatial. After trying the beta version of the iOS app on their iPads they elected to have their organization join team meetings that way, even though they had access to headsets. They even easily cast their phones to larger screens in the office for demo purposes. As an international company already equipped with iPads they immediately recognized the impact iOS would have on the trajectory of their AR/VR initiatives.
Furthermore, mobile enables organizations like the United Nations Development Program to be able to suggest Spatial as a means of utilizing AR/VR technology for their many programs helping underserved areas around the world. Alejandro Pacheco from UNDP Colombia describes how “People living in left-behind territories have traditionally inherited obsolete technology or nothing at all. Covid-19 has shown to what extent technology and connectivity can be an accelerator or one of the most significant equality generators”. If a technology is only available to the elite who can afford the hardware, its impact will be limited. Spatial’s mobile app is a step towards solving that problem, a hurdle for all emerging technology.
Now any of the billions of people in the world who have an iOS or Android device (with ARKit or ARCore) can try having their meetings in augmented or virtual reality. They can see what it is like to use this technology before making the commitment of purchasing hardware for their teams. And we anticipate that trying mobile will certainly encourage people to upgrade their experiences to an AR or VR headset. In our experience, it already has.
As I mentioned before, this is a moment, and a significant one. For Spatial and for anyone trying to utilize AR or VR to make a difference at their companies and in the world. Bold claim, maybe, but wait and see what we have up our sleeves in 2021.
Apple App Store: Download
Google Play Store: Download
2,906 | Learn Solidity 01: Writing your first Smart Contract

Photo by Zoltan Tasi on Unsplash
I have been fascinated by blockchain technology for so many reasons, which I will not discuss in this article. However, it’s not a new topic that cryptocurrency is the future of money, and NFTs are doing incredibly impressive for digital artists.
So what is my aim with this article? I got interested in the blockchain space, and after doing a lot of research, I couldn't find enough learning guides like you would for, say, Data Science, which is pretty standard. You should understand that the reason for this is a shortage of Solidity programmers (about 200,000 developers in the world) compared to other popular programming languages. Therefore, there aren't enough tutorials out there. Hence, I will be writing a series of articles on learning the language, and you can learn from this to become a blockchain developer.
This is the first article of the series, and it will be on writing your first smart contract with Solidity. There are many concepts that you might not be familiar with yet, and that’s perfectly okay. I will do my best to break down each concept into bits as we move along the series, and you can do more research from there.
To get started, here are a few questions that you should be asking; What is a Smart Contract? What is Solidity? How does the blockchain work?
Follow along as I provide answers to these questions below.
What is a Smart Contract?
A “Smart Contract” is simply a program that runs on the Ethereum blockchain. It is a collection of code (its functions) and data (its state) that resides at a specific address on the Ethereum blockchain. Think of a smart contract as an account on the Ethereum network that is not controlled by a user and runs as a program. However, a user can interact with a smart contract by submitting transactions that execute a function defined on the smart contract.
The way smart contracts ensure that transactions occur in real life is much more complex, but here is a simple explanation to understand how things work.
Photo by Cedrik Wesche on Unsplash
Imagine that this article is for sale, and I am the seller, you the reader is the buyer, and you are buying this article by using an affiliate link you saw on the internet. We can deploy a smart contract for this transaction, and here’s how the exchange might occur.
You, the reader, create a transaction that sends the price of this article to the seller in exchange for a copy of this article. Just as you would in a normal bank transaction, you provide my address as an input when you submit the transaction to the smart contract's address. The smart contract then carries out this transaction by verifying that you have enough money in your wallet and that I also have a copy of this article ready for sale. It deducts the money from your wallet, sends it to my wallet, tells me to send you a copy of the article, and finally sends the affiliate commission to the owner of the affiliate link after deducting that from my wallet.
The above is how smart contracts can facilitate transactions between users who do not trust each other, and it is used in the real world to carry out much more complex transactions.
This leads us to the following question: What is Solidity?
What is Solidity?
We already established the fact that Smart Contracts are software programs. For Ethereum, you can write smart contracts using a statically-typed language called Solidity, which is the most widely used smart contract programming language at the moment. Solidity is a full-featured language for writing general-purpose smart contracts, and it looks like JavaScript.
How does the blockchain work?
To answer the last question, which covers an important concept to know, I will share a good video by Anders Brownworth that explains it: How does the blockchain work?
Watch here: an elementary visual introduction to the concepts behind how the blockchain works.
Now that you have a good understanding of the concepts above, let's see how you can write your first smart contract.
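Below is a minimal, hypothetical first contract modelled on the article-purchase scenario described earlier. The contract name, variable names, and the 10% affiliate commission are illustrative assumptions on my part, not a canonical implementation.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Minimal sketch of the article-purchase flow described above.
contract ArticleSale {
    address payable public seller;
    address payable public affiliate;
    uint256 public price; // asking price of the article, in wei

    constructor(address payable _affiliate, uint256 _price) {
        seller = payable(msg.sender); // the deployer acts as the seller
        affiliate = _affiliate;
        price = _price;
    }

    // The buyer sends exactly the asking price. The contract verifies the
    // amount, pays the affiliate commission, and forwards the rest to the
    // seller. Delivery of the article copy itself happens off-chain.
    function buy() external payable {
        require(msg.value == price, "incorrect amount sent");
        uint256 commission = msg.value / 10; // 10% affiliate commission
        affiliate.transfer(commission);
        seller.transfer(msg.value - commission);
    }
}
```

Once deployed, a single call to buy() with the exact asking price performs the whole split payout in one transaction, which is the behaviour the purchase story above describes.

| https://medium.com/coinmonks/learn-solidity-01-writing-your-first-smart-contract-528cad29ba99 | ['Adesiji Blessing'] | 2021-07-19 12:08:22.003000+00:00 | ['Blockchain Technology', 'Solidity', 'Smart Contracts', 'Ethereum', 'Blockchain'] |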
2,907 | Making sense of the RAFT Distributed Consensus Algorithm — Part 1

The goal of The Series
There are many articles on Raft; however, most of them are short, theoretical, and not detailed. Consensus is an area where shortcut analysis does not work, as the topic is pretty hard to understand. While reading the Raft paper and many articles, I found out that having no code and not playing with a Raft simulator makes it even harder to understand.
So this series of articles intends to be more natural and lucid from an engineering perspective: it thoroughly explains concepts and core algorithms with code, and simulates some important use cases. It also answers several questions along the way when needed to make concepts clear.
I would like to encourage you not to rush and take enough time to think through different concepts, algorithms and use cases as you go through this series.
The Problem
The Oxford dictionary defines consensus as:
an opinion that all members of a group agree with
Human face consensus issues all the time — going for a lunch with a group of office friends? You must decide a time & most probably at least one of your friends opposes the time because he has a production deployment going on. Another friend might join you 15 minutes late as he has a long overlapping meeting & you feel lonely without him :| If majority of your friends are busy & don’t agree on a time, your plan gets cancelled & you re-plan tomorrow or so.
The same is true in distributed systems as well — you have a group of servers which need to agree on some state for a given piece of data. To make things worse, the servers are typically spread across geography.
Why do we need consensus
Most of us have used a relational database like MySQL or Oracle at least once in our programming life. When you INSERT or UPDATE a value & then perform a SELECT on it, you get the latest value back — this happens typically because we use a single machine to contain all our data. Now imagine you have a huge amount of data partitioned across 10 machines. For better availability of data, you have enabled replication of data. Say a piece of data is replicated across 3 machines & you want to query the data from any part of the world because your app is global, so a query can reach any suitable replica for that piece of data. Now, what if you INSERT or UPDATE a piece of data, but when you do SELECT, you don't get back the latest value — basically your write request is served by machine W while the read request is served by a replica R. Unfortunately, say, not getting the latest value is unacceptable in your use case. To resolve this problem, we need a mechanism through which multiple servers would agree on a value & irrespective of whichever machine serves your SELECT request, you get the same result back every time. In short, you need a coherent view of your distributed system so that it behaves as if only a single machine is serving all requests. That's where we need consensus.
If you want to build a strongly consistent distributed system ( CP system in terms of CAP theorem ), you need to have consensus.
Raft to the Rescue
Raft (Replicated & Fault Tolerant) is an algorithm / protocol which solves this problem. It came out of a PhD thesis by Diego Ongaro, advised by John Ousterhout, at Stanford University in 2014. Raft is designed to be easily understandable; its predecessors, Paxos & Multi-Paxos, are very well-known consensus algorithms but are known to be very difficult to understand, maybe only a handful of people in the world understand them properly — at least this is what the authors of Raft claim. If you follow through this entire series & understand most of it, then probably you understand Raft, and the authors' claim stands true.
There is no standard implementation of Raft; it just defines several steps to achieve consensus in a fault-tolerant way. There are hundreds of implementations of Raft already for different use cases. Most engineers won't need to implement any consensus algorithm in their lifetime, but it does not hurt to understand the heart of distributed systems. You'll see in this series how consensus is a hard problem to solve, and you'll get a view of the critical edge cases that arise in distributed systems all the time, which will definitely ignite your thought process & help you become a better system designer.
We won’t discuss any mathematical correctness of the algorithm, however, using different use cases we’ll see how Raft actually works. We’ll also discuss few important algorithms in details with code for a better understanding.
Q. How is Raft implemented?
A. Raft is typically implemented as a module inside a service, like a distributed database or an etcd-like distributed key-value store. Raft itself is not implemented as a service or microservice. It just works like a background process in the system.
Prerequisite Concepts
Before we get into the details of Raft, understand the following concepts well. We’ll discuss these concepts enough for a fair understanding & we’ll use the terminologies throughout this series thereafter.
Quorum
If your distributed system has N nodes, you need at least (N/2) + 1 nodes to agree on a value — basically you need majority (more than 50%) votes for consensus, just like any country's constitutional election. The majority vote ensures that when (N/2) + 1 nodes are running & responding, at least one node contains the latest value for a given piece of data across read & write requests, even if there is a network partition or other failure in the system.
Q. How many node failures we can tolerate when we have a quorum based system with N nodes?
A. If N is odd, we can endure N/2 node failures (using integer division). If N is even, we can endure (N/2) - 1 node failures.
Following is a simple table to state the fact:

N | Majority | Node failures tolerated
1 | 1 | 0
2 | 2 | 0
3 | 2 | 1
4 | 3 | 1
5 | 3 | 2
6 | 4 | 2
Q. Should you choose an even number or odd number for N in production?
A. Consider N = 4: according to the above table, the majority required is 3 & you can tolerate only 1 node failure. For N = 5, the majority is still 3, but you can tolerate 2 node failures. So from a failure-handling perspective, an even number of nodes does not add much value, as fault tolerance is no better than with one fewer node. So it's better to choose an odd number of nodes in production at the expense of a little higher cost — you can tolerate more node failures just in case it's a bad day for you.
Q. What is the worst value for N in production?
A. If N = 1 or N = 2 & you lose a node, you lose the whole system, since you practically can't tolerate any node failure at all. In fact, for N = 2, you have actually doubled your single points of failure in the system — if any node goes down, your whole system is down. So choose an odd value N ≥ 3 in production.
Q. What is a good value for N in production?
A. The figure obviously depends on your estimation of data, bandwidth, throughput & cost requirement. However, 5 seems to be a good number as you can manage 1 node failure & 1 node can be down for maintenance ( total 2 nodes down ) while 3 nodes are still up & running.
Q. What happens if majority of nodes is unavailable?
A. Your system may stop responding completely, depending on how you configure read & write use cases. Typically writes stop completely, but available nodes may still serve read requests in case you design read requests to be eventually consistent.
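The quorum arithmetic above is a one-liner; here is a small Python sketch that reproduces the table:

```python
# Majority & fault tolerance for an N-node quorum (integer division).
def majority(n: int) -> int:
    return n // 2 + 1

def failures_tolerated(n: int) -> int:
    # n // 2 for odd n, (n // 2) - 1 for even n
    return n - majority(n)

for n in range(1, 7):
    print(n, majority(n), failures_tolerated(n))
```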
Node States
Raft nodes can be in three states: Leader, Follower & Candidate. We'll see in a later section how node transition happens. For now just remember the fact that Raft is a strongly leader-based consensus protocol. The leader is the source of truth. Logs always flow from the leader to the followers.
Log
This is not a regular log file that you use in your application for information & debugging purposes. However, the concept is more or less the same. A log is a disk-based file where objects called log entries are usually appended sequentially in the form of binary data.
Committed & Uncommitted Log
A log entry is committed only when it gets replicated by the majority nodes in the cluster. A committed log never gets overridden. A committed log is durable & eventually gets executed by all the nodes in the Raft cluster.
If a client command / log entry is not yet replicated to the majority of the cluster nodes, it’s called uncommitted log. Uncommitted logs can be overridden in a follower node.
State Machine
Don’t get scared by this term. State machines can be really complex in nature. Typically it means — depending on an input fed to the system, the state of a data (key) changes. In Raft context, think as if this is just like a module which stores the final agreed value for a key. Each node has its own state machine. Raft has to make sure whatever log entry is committed, they get eventually applied to the state machine which works as a source of truth for the data in memory. For fault tolerance, the state machine can be persisted as well.
Term
A term represents a time period through which a node acts as a leader, the concept is based on logical time (not a global time) — it’s just a counter managed by every node individually. Once a term terminates, another term starts with a new leader. Even though at a given point in time, terms across nodes may differ, Raft has a mechanism to sync & converge them to the same value.
The term is also called a lease or leader lease; it's just another name for the same thing.
RPC
Just like your Facebook mobile app communicates with Facebook server through REST API on top of HTTP, nodes participating in Raft communicate with each other using Remote Procedure Call (RPC) on top of TCP. This protocol is suitable for communication across data centres, internal systems & services (not user facing products or services).
Raft uses two different RPC requests. At a high level:
RequestVote (RV): When a node wants to become a leader, it asks other nodes to vote for it by sending this request.
When a node wants to become a leader, it asks other nodes to vote for it by sending this request. AppendEntries (AE): Through this message, a leader asks the followers to add an entry to their log file. The leader can send empty message as well as a heartbeat indicating to the followers that it’s still alive.
How it works
For our explanation, we’ll use a 5 node cluster.
Raft works on the concept of distributed log replication. In order to gain consensus on some state for a log entry, Raft cluster has to choose its leader first, then majority of the followers mimic exact logs from the leader — this ensures logs across nodes are in the same order as the leader. At a time, there would be only one active leader in the system (unless there is a network partition that could cause existence of multiple leaders or possibly no leaders at all) & it’s the source of truth for all logs.
Raft separates leader election from the log replication process. Without a leader, Raft can't function, so leader election becomes a mandatory step in the absence of a leader.
Q. What are the major advantages of a leader-based system?
A. The system becomes simple to understand & operate when the abstraction is based on a leader. Clients typically interact through the leader, & the leader takes care of important decision making and the metadata state of the system.
Q. What are the major disadvantages of a leader-based system?
A. The leader becomes a single point of failure. So the system should be able to quickly react and choose another leader when the current one fails. Also, since all client interactions happen through the leader, the system might become slower at scale.
Some Raft Design Decisions
Let's look at a few important design decisions which are very core to the protocol.
Random Timeout
Raft uses the concept of a random Election Timeout — the amount of time a follower waits until becoming a candidate (see Figure 3 for more details on state transition). When a cluster bootstraps, every node picks a random timeout between 150 & 300 milliseconds inclusive for itself & starts counting down the timeout. There are now 2 possibilities:
1. Before the node times out, it receives a message from another node — it might be a heartbeat or log replication message from the leader, or a voting request from another peer. In this case, the timeout gets reset & the countdown starts again.
2. The node does not receive any message at all during the timeout.
Q. Why to choose random timeout?
A. Imagine all of the nodes have a fixed timeout. In the absence of a leader, they all time out at the same time & there would be no guarantee of leader election, since the process can repeat multiple times or indefinitely, with all nodes starting to count down the same timeout value again. So randomization helps here. In case the leader is still undecided, the process starts again with a new set of random timeouts across nodes & eventually we would have a leader. It's highly unlikely that we won't have a leader chosen after a trial or two.
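Here is a small sketch of how such a randomized election timer could look; the threading-based implementation is an illustrative choice, not how any particular Raft library does it:

```python
# Randomized election timer: pick a fresh timeout in [150 ms, 300 ms]
# on every reset; any valid message from a peer should reset it.
import random
import threading

class ElectionTimer:
    def __init__(self, on_timeout):
        self.on_timeout = on_timeout  # e.g., become candidate & start election
        self.timer = None

    def reset(self):
        if self.timer is not None:
            self.timer.cancel()
        timeout = random.uniform(0.150, 0.300)  # seconds, re-picked each time
        self.timer = threading.Timer(timeout, self.on_timeout)
        self.timer.start()

# Usage sketch: call reset() at startup and again whenever the node
# receives a heartbeat, an AppendEntries, or a RequestVote message.
```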
Term Lifetime
When there is no leader in the cluster & a node X times out, it initiates a new election term, increments its term to T by adding 1 to previous term’s value. Just to remind you — a term is a local counter managed by all the nodes. There are again 2 cases here:
If X is elected as the new leader, term T continues, i.e., all the new log entries added to the leader X & thereafter are propagated to the followers with term T.
If X loses the election, a new election begins with a new term U where U > T.
So pictorially, a terms graph looks like below:
Figure 2: Terms, Courtesy: Raft extended paper
In the above diagram, term 1 starts when the cluster starts up. A leader is elected for term 1 & normal operations like log replication, heartbeat continues till the term 1 ends. The leader dies. Node X increases its term to 2 , gets elected as the new leader & term 2 continues. Now X also dies at some point, some other node Y starts the election process, increases the term to 3 , but unfortunately the election fails to choose a leader resulting in term 3 termination. And the process goes on.
We’ll discuss three major sections in this series of articles:
1. Basic leader election (the first leader election) when the cluster starts up, nodes are fresh & no log is present yet in the system.
2. The Raft log replication process.
3. Leader election when nodes already have some logs.
The First Leader Election
As mentioned earlier, a node can be in different states depending on the cluster situation. Let’s look at the following diagram to understand the state transition.
Figure 3: Node states, Courtesy: eli.thegreenplace.net
Each node starts from the Follower state. Once the election timeout gets over, it enters the Candidate state — it means the node is now an eligible candidate to become a leader.
Once a candidate gets clear majority votes, it enters the Leader state.
If there is no clear winner during the election process, the candidate times out again, remains in the Candidate state & a new election begins.
If a candidate gets a message from a newly elected leader, it steps down and becomes a Follower.
If there is a network partition, the current leader might get disconnected from the majority of the cluster; the majority then selects a new leader. When the old leader comes back, it discovers that a new leader has already been elected with a higher term, so the old leader steps down & becomes a Follower.
Q. What happens when a cluster starts up?
A. All the nodes start up with random timeout, empty logs & begin counting down. The following figure explains it:
Figure 4: Cluster startup
The black thick perimeter around the nodes represents time out. Note that the perimeter lengths are different representing different timeout values for each node.
Steps
Each node is initialized with term = 1.
S4 times out first.
S4 starts a new election process & increments its local term's value by 1.
S4 votes for itself & sends out RequestVote messages to all other nodes in the cluster.
Figure 5
All other nodes receive the request. They first reset their local term to 2, since their current term is lower, & grant their vote for the request. Their election timeout is now reset, as shown below.
Figure 6
S4 gets a clear majority (a + inside a smaller green circle in the above diagram means an affirmative incoming vote) & becomes the leader. The thick black perimeter around S4 in the following figure indicates that it has become the leader of the cluster.
Figure 7
Now S4 starts sending AppendEntries messages to all other nodes.
There is something called the Heartbeat Timeout as well, which should be configurable in the system. The leader keeps sending empty AppendEntries messages at intervals specified by the heartbeat timeout, indicating that it's still alive, so that the cluster does not unnecessarily initiate another leader election process.
Followers acknowledge each AppendEntries message.
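A leader-side heartbeat loop could look like the sketch below; the 50 ms interval and the send_append_entries helper are illustrative assumptions (the interval just has to be well below the election timeout):

```python
# Leader sketch: send AppendEntries to every follower each heartbeat
# interval; empty entries act as the heartbeat.
import time

HEARTBEAT_INTERVAL = 0.050  # seconds, must be << election timeout

def leader_loop(node):
    while node.state == "LEADER":
        for peer in node.peers:
            node.send_append_entries(peer, entries=[])  # assumed RPC helper
        time.sleep(HEARTBEAT_INTERVAL)
```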
Q. How many votes a node can give for a term?
A. Each node can vote only once per term.
Q. What happens if the leader behaves maliciously & does not send heartbeat?
A. No heartbeat possibly means that there is currently no leader in the cluster. If any leader intentionally stops sending heartbeat even though it’s alive, it triggers unnecessary leader election process & overloads the system.
Q. How do you choose the election timeout?
A. You should follow the following formula while choosing the election timeout:
Broadcast time << Election timeout << Minimum Time Between Failures (MTBF) of a node
where a << b means a is an order of magnitude less than b.
Broadcast time: the average time for a node to send RPCs & receive responses in parallel from all nodes.
MTBF: any node can fail for one reason or another. MTBF defines the minimum time between two such consecutive failures for a node.
Depending on the infrastructure, broadcast time can be from 0.5 ms to 20 ms. MTBF can be several months or more. You should choose an election timeout which abides by this criterion. Anything between 10 ms and 500 ms should be good for the election timeout. However, as mentioned earlier, 150 ms to 300 ms is ideal.
No Clear Majority Case
When multiple candidates time out at the same time, the situation becomes very non-deterministic. In the following diagram, nodes S1, S2, S4, S5 time out almost at the same time & reach out to other nodes for votes. Now, since 4 out of 5 nodes are competing for the leader position, no one would get a majority, as they have already voted for themselves in the new term.
Figure 8
As it can be seen in the below diagram, S5 gets an extra vote from S3 & no one is the winner here. So all the nodes timeout & the same voting process starts again.
Figure 9
Quick Summary
Raft helps to build a strongly consistent distributed system.
Raft relies on Quorum concept to ensure consensus from the majority in a cluster.
Odd count of nodes makes a better cluster than even count of nodes as it has more fault tolerance.
Nodes can be in one of these three states at a time: Leader, Follower & Candidate.
Raft is a leader-oriented system. A leader must exist in the cluster before any operation can happen.
Each node maintains its own state machine & log on disk.
Committed logs are persistent & they eventually get replicated to all the nodes in the cluster.
Logs always flow from the leader to the other nodes.
Committed logs are finally applied in state machine.
A term is a monotonically increasing counter which starts with a new leader election. Every node individually manages its own local term. Raft makes sure to eventually converge the term’s value to a single value across nodes.
Raft uses RPC calls / messages for vote request & log replication purpose.
Random election timeout generally helps to elect the leader with a trial or two, it prevents indefinite loop of voting process.
When a cluster bootstraps, one of the node times out & asks others for votes. Upon granted vote by the majority, it becomes the leader.
If no leader is elected, the election process is retried.
We have introduced ourselves to the basics of Raft & the very first leader election process. In the next part of this series, we’ll discuss Raft replication in details which is very critical to the protocol.
Stay tuned!
2,908 | Are MOOCs not as 'open' as we think?

Imagine this … it's 2001 and you go to the local library to find some information to learn about an important new topic. But you discover that most of the books are not written in the local language you know. The best content is in an unfamiliar foreign language and hence not accessible to you!
This would not have been a problem 20 years ago, because the local library mostly stocked information in the local language — and you did not know what else was out there. The local information was all you had, and you had to do with it. But now, this is no longer the case. With access to a simple device and stable internet connection, you can access so much more information from all across the world — the wealth of knowledge is at your finger-tips! But there is only one problem, what if you do not understand the language of this vast content?
MOOCs (Massive Open Online Courses) are platforms that provide free education to the masses. MOOCs bring together experts in a field of study, free online resources and interested learners all across the world. Most MOOCs are produced and teach their content in English. According to Shah (2017), more than half of the 9,600 MOOCs available are produced in English. More specifically, 80% of the courses currently advertised on the Coursera website are in English.
Keeping with that trend, English makes up 60.3% of content on the internet and continues to be the dominant scholarly language. So, if English has always dominated content on the internet and academia — then why are the majority of MOOCs being produced in English such a problem?
MOOCs claim and represent a change in the way we view education. No longer is education restricted by proximity, income, race, gender, class etc. These accessibility factors are encapsulated in their mission statements edX: “everyone, everywhere” and Coursera: “anyone, anywhere”.
People in developing countries are in theory the greatest benefactors of MOOCs. But we risk reinforcing western and imperialistic notions of English superiority as the dominance of English creates inequality in who is able to gain knowledge and skills from MOOC courses, as a learners proficiency in English is the main predictor of course completion on such platforms.
This amounts to significant barriers to entry for those who are not proficient in English.
As a result, the MOOC claim to serve ‘everyone’ and ‘anyone’ diminishes as large chunks of the developing world population are excluded from the use of MOOCs. According to Langdon Winner technology can empower some people and disempower others, legislating for us. The democratization of education and subsequent social mobility that MOOCs aim to provide is undercut if it is not reaching those who would benefit the most.
Learners from native English-speaking backgrounds or countries that have adopted English into their education curriculums are the ones who are most likely to benefit from MOOCs. This interactive graph shows the spread of English around the world. In 142 countries (blue) English is a mandatory part of public education, illustrating the global pull of English.
So, presented with Collingridge’s infamous ‘dilemma of control’, should more countries adopt English into their curriculums/societies? Or should we resist the global pull of English?
A neo-liberal perspective would argue that the emphasis of English is logical — given its dominance in the higher education market. It also gears students for the global labour market. However, this view was relevant when access to education was limited and scarce. Keen learners would adapt accordingly, ie. travel abroad, learn a new language, learn new cultural norms etc. to access what was only available in a few parts of the world.
But now, MOOCs are democratizing education and making it widely available to people everywhere in the world. We should not let this important development be limited by factors such as language which are in our control.
Local languages, embody a society’s culture and heritage — to be robbed of that is to lose a “language’s memory bank as well as its conceptual frameworks”. This reduces the diversity of our global knowledge and results in a monolinguistic knowledge base — that is self-perpetuating. To resist the global pull of English and cater to a larger set of people — MOOCs should be accessible to non-English speaking populations as well.
This can be achieved in two ways, through the translation of existing MOOCs or the creation of new MOOCs in a local language. The translation approach is a more cost-effective and ‘quick fix’ method of increasing an existing MOOC’s global reach. However cultural influences of the English-speaking academic culture remain, as teaching methods, practices, and concepts reinforce Western knowledge as ‘normative models’. On the other hand, national content production not only requires time, knowledge and funding but the dominance of established English MOOCs may overshadow smaller nationally tailored MOOCs.
For example “various ‘regional’ MOOC platforms have emerged since 2015, supporting other countries and languages: e.g., XuetangX (China), MiríadaX (Spain), MéxicoX (Mexico), France Université Numérique (France), EduOpen (Italy), ThaiMOOC (Thailand), SWAYAM (India), and Edraak (Jordan)”. While this is a step in the right direction, these platforms are still noted as ‘regional’ or ‘country-specific’ whereas the much larger and English-speaking USA and UK MOOC providers are deemed ‘global’. This reinforces the facade that to be ‘global’ is to be Western.
So, who’s responsibility is it to ensure that MOOCs are truly accessible to all? Do MOOC platforms bare the weight to translate and/or provide national MOOCs? Or is it the government’s responsibility to boost the education sector by encouraging local academics to produce MOOCs for national audiences?
What does the future look like?
Despite the dominance of English in academia and on the internet, the most widely spoken language in the world is Chinese, with 1197 million speakers. This, coupled with China being the fastest growing economy and a dominant player in the advancement of AI it is no surprise that China has taken off in the EdTech space. In 2013, Coursera Zone, by Coursera was launched where Coursera’s existing courses would be offered, but combined with Chinese language discussion forums and other Chinese language course materials. Similarly, XuetangX was a modification of OpenedX and the platform was tailored to the local Chinese speaking community, offering a community and channel for local content production. According to Andrew Ng, co-founder of Coursera “Almost any way you slice it, China is the №1 country in terms of the potential for growth and impact on students”.
So, If China continues to dominate in AI and create advanced cutting-edge content in Chinese — will that disadvantaged the English-speaking world? | https://medium.com/@anahitaparikh/are-moocs-not-as-open-as-we-think-fec135384394 | ['Anahita Parikh'] | 2020-12-07 09:07:33.329000+00:00 | ['Education Technology', 'Inequality', 'Language', 'Technology', 'Edtech'] |
2,909 | How I Built the Fastest E-commerce Store for a Home Decor Brand [PART 3] | Welcome Back! This is the third part of a series of exciting articles I will be writing about how I built the fastest E-commerce for a home decor brand named “Aprakrta”.
This article will help you with what you need to know about the development and tools I used to build Aprakrta.com. Topics covered in this chapter are :
Basic steps to implement before building a webpage
Which Js framework concepts are most important to learn?
List of useful Libraries or NPM packages you must know about
Top 3 development lessons I learned through this project!
Before starting anything, First prepare a step by step plan on how you are going to approach project development. This might seem boring but actually does save a lot of your time in the long run.
Now, Suppose you want to build a Homepage of your Web Application!
Visit various web applications that are in the context of your application domain and make a list of 5 things that you liked the most. It might be UI related or any specific tweak or hack you noticed. Remember, This is only for building the mental model of your proposed homepage, You are definitely not supposed to copy. The next step is where you will utilize this mental model to start prototyping. Now, finalize UI mockup ( Prototype ) and then start developing. If you are planning to design while developing its gonna cost you a lot of time and effort. This video by Google engineers will definitely help- Sketching and Paper Prototyping Understanding SEO concepts before developing a webpage will certainly help. If you are facing any issue with UI related concept or terminology which is eating your brain out. Just relax, It happens! Don’t crawl the whole of the internet searching for the solution. Instead, Understand the concept well, Write it down and give it some time and witness a solution magically appearing in your mind. To understand any web development related concept well, I highly recommend you to visit- https://developer.mozilla.org/en-US/docs/Learn/JavaScript Also, Make sure the CSS class naming is understandable. ‘Heading-text-1’ is a much better way to do than ‘Headingtext1’. In short, try to implement better coding practices if not the best.
Which Js framework concepts are most important to learn?
Store ( Vue- Vuex, React-Redux Angular- Ngrx )
Store is where state management of your web application happens. State management makes the state of your app clear and definite in the form of a data structure that you can read from and write to. Basically, Store is your centralized data storage from where you can pass the data to different components.
2. Lifecycle Hooks
A modern Js framework creates a component, renders it, creates and renders its children, checks it when its data-bound properties change and destroy it before removing it from the DOM. All of these above phases are defined as lifecycle hooks. You can add your custom code as per your applications’ needs to each lifecycle hook.
We have different Lifecycle Hooks defined for different Frameworks: Angular- https://angular.io/guide/lifecycle-hooks React- https://reactjs.org/docs/state-and-lifecycle.html Vue- https://vuejs.org/v2/guide/instance.html
//Example- For Vue created()
{ doSomethingCool(); }
List of useful Libraries or NPM packages you must know about:
Be it hosting, database or authentication firebase got you covered. I highly recommend you to visit Firebase. As they say, it is indeed a comprehensive mobile development platform.
2. Workbox
Workbox is a set of libraries and Node modules that make it easy to cache assets and take full advantage of features used to build Progressive Web Apps. Writing Service Workers used to be a complex task. Thanks to Workbox, Now It’s not!
3. Prerender-spa-plugin
Server-Side Rendering of Apps is mainly used for better search results ranking but that introduces again the same problems that were valid for PHP, ASP, JSP, (and such) sites are valid for server-side rendering today. It’s slow, breaks fairly easily, and is difficult to implement properly. To solve this issue, We can use Pre-rendering. Prerendering is basically firing up a headless browser ( a web browser without a graphical user interface ), loading your app’s routes, and saving the results to a static HTML file. You can then serve it with whatever static-file-serving solution you were using previously. The best thing, you can achieve pre-rendering in your app is by using this plugin.
4. Element UI
A Flexible and Beautiful UI framework. Their grid system, forms, and buttons will surely convince you to use it. Here are some examples from Aprakrta App. Material-UI is also a great option but some elements were such a pain to use. | https://medium.com/hackernoon/how-i-built-the-fastest-e-commerce-store-for-a-home-decor-brand-part-3-5b6bbff1fc6b | ['Swanand Kadam'] | 2019-07-16 05:23:02.900000+00:00 | ['JavaScript', 'Web Development', 'Technology', 'Programming', 'Hackernoon Top Story'] |
2,910 | Shure MV7 Podcast Microphone Review | The OG, Legendary Shure SM7B has been the gold standard in broadcasting for 44 years, damn that is a long time. It now has a little sister, the Shure MV7, which will make podcasters and gamers thrilled.
It started with the SM5 broadcasting microphone — a dynamic boom microphone and quickly found a home in radio and film studios in 1966.
According to John Born (Project Manager, Shure Inc), “A group of Shure acoustical engineers were given the SM57 cartridge element (Unidyne III) and asked, without restrictions on size or cost, to make it better. And they went nuts.” That’s probably the reason John refers to the SM7B “as SM57 on steroids.”
It took over 30 years in the industry before the SM7 eventually found its way into recording studios. Michal Jackson used the SM7 to record Thriller.
In a 2012 interview, David Rochman-(Shure’s Corporate Public Relations Manager) asked John to explain why the SM7 created so much buzz 36 years after its introduction. Here’s what he said:
“A combination of things have probably accounted for this consistent spike in popularity. Maybe it just takes this long for a mic to gain acceptance. Some of it has to do with. emergence podcasting — there’s an appetite for a high quality voiceover mic. And some of it has to do with Michael’s Jackson’s death — everyone was talking about his recordings and how they were made. Then there’s the fact that this is a $350 microphone that has beaten studio microphones costing ten times as much in microphone shoot-outs. It’s finally getting the recognition it deserves.”
The SM7B is now $399 plus the cost of a Cloudlifter, $159, and don’t forget an audio interface at $165. It’s easy to see why the $249 price tag for the MV7 is so appealing.
MV7 or SM7B: which one is right for you? Obviously, there are a few differences between the two microphone, visit Shure’s website for the nitty gritty details. | https://medium.datadriveninvestor.com/shure-mv7-podcast-microphone-review-64bb717b7f9c | ['Derek Oxley'] | 2020-12-03 16:46:04.595000+00:00 | ['Podcasting', 'Technology', 'Gaming', 'Voiceovers', 'Podcast'] |
2,911 | Cracking AI, Technology’s Black Box of Opportunity | A Two-part Series on AI — Its Role Today and Opportunities Tomorrow
Business leaders all over the world can be forgiven for pushing their teams to “get into AI” without really knowing what that means. Who wouldn’t want to outpace disruption using intelligent machines that can solve problems and innovate all on their own? That’s how AI works, right? If you’re not sure of the answer to that question, here’s your chance to find out.
With this two-part blog series, we are cracking this tempting and long-hyped black box of mystery. The goal is to provide business leaders with insights to support strategic planning around AI, starting in this blog with an examination of how AI is used today and the kinds of business problems it can and can’t solve. In blog two, we take those insights and forecast the biggest AI opportunities ahead, outline challenges business leaders should expect and offer a timeline for when AI will be in widespread use. Ready to get smart about AI? Read on.
AI: What Is It?
Simply put AI or artificial intelligence is a field computer science that works to give technology the ability to take-on repetitive tasks that require human intelligence and behavior, such as analysis, consideration and decision making. AI algorithms are programmed to analyze large amounts of data and (here’s the intelligence part) make decisions and learn based on the data.
Traditionally software is programmed to take a specific action based on the anticipated outcomes. For example, in Fintech, an online loan application can advance in the approval process if certain requirements are met. If not, a request for more information is sent. The software isn’t thinking or solving a problem, it’s just programmed to fulfill a specific function.
According to The Brookings Institute, “Artificial intelligence algorithms are designed to make decisions, often using real-time data. They are unlike passive machines that are capable only of mechanical or predetermined responses.” With AI, massive amounts of data pour in from many sources. The algorithm learns from that data, identifies what is important and what can be ignored and uses the essential data to act.
How and Where Is AI Used Today?
Today, thousands of companies across numerous industries are using Deep Learning and Machine Learning to improve their customer journeys, revolutionizing the way they interact with customers to deliver more compelling experiences. Meal delivery services like Uber Eats and DoorDash are everyday examples of AI in action. The underlying algorithms take in massive amounts of real-time data (time, weather, date, season, restaurant offers, etc.) and provide targeted offers, deals and menu options to individual consumers to boost engagement and sales. The tech is learning and engaging without a human programmer architecting the next step.
Where Else Do We see AI in Action?
In addition to enhancing the customer experience, AI is easing human workloads and optimizing performance, safety and security across many industries. Here are a handful of examples of AI at work today:
● Finance: In finance, AI is widely used for security purposes, detecting issues like credit card fraud and money laundering. It’s also widely used in stock market trading, compiling complex, real-time points to analyze and execute trades. It’s so widespread in fact that it’s changed the industry and its workforce. Goldman Sachs’ trading desk in New York City once had more than 600 traders. Now it has just two, but it’s software engineering team has ballooned.
● Self-Driving Cars: AI is used in the development of self-driving cars. Its algorithms are essential for identifying road risks in real-time and predicting hazards with weather and traffic data.
● Media Streaming: Innovators in media like Netflix and Hulu use algorithms that monitor audience behavior and engagement to shape real-time user experiences and make programming choices.
● Retailers: B2C retailers use AI to provide targeted coupons and offers based on a complex mix of customer behavior, calendar, weather and inventory data. They also use AI-driven chatbots and virtual assistants to support customers through the shopping process and customer service journey.
● Technology Development: AI is fueling product development across the tech sector as cognitive services, biometrics and speech, face and language recognition all become fundamental aspects of computers, phones and digital home assistants.
● Education: Education providers are using AI to develop learning AR/VR technologies that advance and adapt to individual learners in real-time. As students master skills or encounter challenges, the technology adapts and changes to meet their learning needs.
Up Next: AI Opportunities Ahead
With AI driving next-level innovation across so many industries, it is no wonder business leaders are determined to get in on the action. If these examples have you eager to begin your AI adventure, stay tuned for part two in this blog, which we will release in the next few weeks. Don’t miss this examination of where the biggest opportunities and challenges are with AI right now and insights on how to strategically take advantage of AI’s vast analytical capabilities. Stay tuned. | https://medium.com/@marina-perla/cracking-ai-technologys-black-box-of-opportunity-91671bc5d94a | ['Marina Perla'] | 2020-12-01 02:56:01.963000+00:00 | ['Technology News', 'Technology Trends', 'AI', 'Artificial Intelligence', 'Recruiting'] |
2,912 | How to Structure Your AI Consulting Service | In recent years, I offered artificial intelligence or AI consulting services to several companies. Most of them suffered from a lack of AI knowledge. The lack of knowledge may create opportunities for experts to provide AI consulting services; however, it creates many challenges to defining project requirements and deliverables. I went through most of these challenges, and I want to share my experience dealing with them. If you want to offer or need an AI consulting service, I highly recommend reading this article.
Business Understanding — Step 0
In this step, you must determine the objectives and requirements of the project. You must identify what the customer wants to accomplish and how the customer wants to measure its success. Plus, you must define the required resources and project requirements. In short, you must aim to answer “What does the business need?”.
After defining the business objectives, you must determine technical goals, i.e., what success looks like from a technical perspective. A technical requirement plan must be created in this step, describing the required technologies and tools.
Then, you must conduct a cost-benefit analysis. I designed a framework to conduct the cost-benefit analysis in AI projects called the expectation-complexity framework. You can learn more about this framework here: If You Consider Using AI In Your Business, Read This.
Data Understanding — Step 1
In this step, you must identify the data fields needed to train machine learning or ML models. Then, you must write a plan for the customer to collect an initial set of data. The initial data is needed to explore and analyze. Many unknowns such as quality and sufficiency would be clarified during this process. In short, you must aim to answer, “What data do we have/need? Is it clean? Is it enough?”
You must identify required data, create a data collection plan, and provide tips to analyze the data. Anything that can help the customer to accomplish the project goals.
Then, you must help the customer to create a plan for large-scale data collection. You must document all the quality issues and surface properties of data in this step. For example, the relationships among the data must be determined. Visualization always helps to dig deeper into the data. You can read more here: On the Path to Conducting a Large Scale Data Collection.
Data Preparation — Step 2
In this step, you must decide on the required data to conduct large-scale data collection. The quality of data must also be recorded during the data collection. A large-scale data collection is an expensive process; so, you must make a plan to start conducting it. In short, you must aim to answer, “How does the company collect and organize the data?”
The final dataset must become clean before going to the next step. A common practice to clean or cure data is to correct, impute, or remove erroneous values. This is often the lengthiest step in the process. Without proper data curation, the project will encounter with “garbage-in, garbage-out” scenario.
Without a proper data curation, the project will encounter with “garbage-in, garbage-out” scenario.
The AI team may need to drive new attributes or features from the raw data to construct new data, a.k.a., feature engineering. According to model architecture, you may also need to combine the existing data and reformat them. For example, in many applications, string values are converted to numeric values. That helps utilize mathematical operations on textual data. A famous example of data reformatting is the Word2Vec model that is regularly used in text processing. You can read more about Word2Vec models here: Word2Vec Models are Simple Yet Revolutionary.
Model Training — Step 3
In this step, you must train and assess various ML models with different algorithms (e.g., random forest, XGBoost, or deep learning). That helps determine the best modeling techniques for the problem. The selected models will need further tunning and evaluation anyway. In short, you must aim to answer, “What modeling techniques should be used?”
No model can solve all the problems. An ML model that fits problems with tabular data may not work for those with image data and vice versa. Plus, an ML model that fits problems with small datasets may not work for problems with large datasets. And, many more!
Many people think model training or building is the most important part of an AI project. This is not true, at least, anymore.
Many people think model training or building is the most important part of an AI project. This is not true, at least, anymore. Using a large set of tools and libraries, such as SciKitLearn library, the model building is summarized into a few lines of code. The AI team must compare multiple models against each other and interpret their results based on domain knowledge and performance metrics. You can learn more about the story of an ensemble classifier that wins its competition here: The Story of an Ensemble Classifier Acquired By Facebook.
Model Evaluation — Step 4
In this step, you must extensively evaluate models and identify a model that meets the business requirements. Step 3 includes a series of evaluation tasks; however, its focus was mostly on standard performance metrics. Most industrial problems require problem-specific metrics, and standard performance metrics are no longer sufficient for them. In short, you must aim to answer, “Which model best meets the business objectives?”
Many industry problems are constrained by business and technical requirements other than machine learning criteria. For example, a business case that needs responses in less than a second must ensure using low-computation techniques.
You must constantly review the work accomplished using an experiment management system that helps you summarize results and identify mistakes if any.
To build an AI product, you must train a large number of models with different parameter configurations. These models are trained using a training dataset that evolves. The performance metrics can also be changed according to various business requirements. Nevertheless, you must manage this complex process and identify the best model that meets the business objectives. You must use an experiment management system to manage the evaluation process. You can learn more about the experiment management system here: Why Experiment Management is the Key to Success in Data Science.
Model Deployment — Step 5
In this step, you must create a thorough plan for deployment, monitoring, and maintenance. Most companies and experts look down at this step; however, that may generate issues during the operational phase. The best model must be deployed on the cloud to let customers access it. In short, you must aim to answer, “How do stakeholders access the results?”
A model is not useful unless the customer can access its results, it becomes updated to address unpredicted issues, and it would comply with the customer's technical and business requirements. For example, cloud computing services can become costly especially for ML models that are computationally intensive.
A model is not useful unless the customer can access its results, it becomes updated to address unpredicted issues, and it would comply with the customer’s technical and business requirements.
In the end, you must review the whole process by conducting a project retrospective. You must constantly review the process by answering these questions: [1] what went well, [2] what could have been better, and [3] how to improve in the future. If you want to be successful in building AI products, you must learn about common mistakes. You can learn about four common mistakes to build an ML product here: Build An ML Product — 4 Mistakes To Avoid.
The Last Words
We offer turnkey solutions for AI projects. If you need a consulting service for your AI projects, we are more than happy to help you. You can reach me through LinkedIn, Quora, and Twitter. | https://towardsdatascience.com/how-to-structure-your-ai-consulting-service-d1acb0c8a7d8 | ['Pedram Ataee'] | 2020-12-19 21:02:38.586000+00:00 | ['Data Science', 'Consulting', 'Machine Learning', 'Artificial Intelligence', 'Technology'] |
2,913 | The Power of Off | We can all agree technology has many advantages.
To list a few, technology promotes education, helps keep us safe, provides a closer reach to those who were once out of reach, saves lives, keeps us connected with instantaneous communications, and most importantly, allows a virtual window for some (you know who you are) to peek in on an ex-boyfriend or ex-girlfriend…just in case you find yourself curious as to how they’re doing.
We can all agree the world is a better place thanks to technology.
However, in light of all of the advancements, “Houston, we have problem.”
According to Nancy Colier, author of The Power of Off: The Mindful Way to Stay Sane in a Virtual World, the way we use technology is negatively affecting the relationship we have with ourselves.
“Our lives are filled with more possibilities than ever before to connect, consume, and discover-all good things-but in the face of these possibilities, we are also feeling less connected, less centered and less satisfied. The digital age is both an age of too much and not enough.”
“The average person checks their smartphone 190 times per day or every 5 minutes. We are bingeing on technology as if we were at a cruise ship buffet, using it to maintain a constant state of distraction and entertainment, and ultimately, to escape the present moment, and ourselves.”
“Technology is a powerful tool for communication, and yet the way we are using it and the authority we are giving it are also making it into a powerful impediment to our sense of presence and awareness.”
“We’re using it as an addiction. The only difference between technology addiction and other addictions is that we have all drunk the Kool-Aid; we’re all in on this one.”
The Power of Off offers compelling insights as to how we can “raise consciousness at a time when our society is undergoing an epidemic of unconsciousness.”
With the myriad of possibilities now available as a result of this technology, Colier imparts there is an opportunity for all users to begin to “nurture depth even as shallowness threatens to become the norm.”
BJB: Where can we begin to change the relationship with how we use technology?
NC: We want to start to shift our response.
When a thought or an impulse arises that says, “Oh, I have ten minutes. I could shop for shoes or search the internet,” rather than giving into the thought and following the impulse, we can flip the impulse. We can ask the questions, “What would I have to feel if I didn’t follow this thought or impulse? What is happening right here in my experience that I want to get away from?”
Rather than giving into impulsive driven aspects of ourselves, which take us deeper into unconsciousness, the impulses can become “red flags” which asks, “What’s happening here that’s making me want to go there?” Once we flip these impulses, they become opportunities and pointers to our own awareness.
The important thing is to identify the thought as something separate from the behavior.
Once we realize that we can question if that thought is a habit, or if that thought is taking me away from discomfort, we start to get curious about the thought itself.
That’s the practice of mindfulness.
BJB: What is “technical anesthesia”?
NC: The way we’re living with our technology is creating a kind of anesthetized life where we’re not present.
We’ve taken on mindfulness as this really interesting concept, and we’re using it to build our brand. It’s more about our identity. What we’re actually doing is leaving the moment. We’re literally staring at a screen and not feeling our feet on ground, not tasting the apple we’re eating, or not joining the friend we’re with in company.
Either we’re directly leaving the moment as we stare at our screen, or we’re at the art museum taking selfies of ourselves looking at the art; only to then post on social media that we’re a cultural person.
The cost of the ways we are now using technology is the direct experience of our life as it’s happening.
I consider this a “technical anesthesia” when we’re not awake to what’s happening as it’s actually happening. We are so busy using life to prove we have a life or build our brand that we’re missing out on life itself.
BJB: What is the importance of privately digesting an experience?
NC: Let’s say if you open the door for a woman with a stroller, for example, and you had a moment of sweetness that forms from simply being kind to another human being. In other times you might have walked down that street and spent some time alone owning that moment, and letting it steep inside.
Now we immediately post that along with hashtag gratitude. We then wait for the meaning of that event to come back to us as determined by the “likes” and external validation that tells us whether it mattered, and if we matter. We’re thinking we’re the most important thing in the universe but we’re also thinking we are of no validity unless we are publicly validated.
We’re relating to ourselves as if we were a vacuum. It’s a paradox.
What is also changing are our basic values. Our values used to respect mastery, or experience or brilliance. Now what we value in culture is popularity and fame. Given that fame is our most valuable asset everything revolves around that.
If I put something online that could make me an internet sensation for a moment, this makes me “valuable” if I don’t have an inner sense of what I value.
What’s happening with a lot of the millennial and younger generation is they are confusing this popularity or number of ‘likes’ with their inherent human value, and also I think with what will make a meaningful life.
Often times we’ll see before an idea has had time to bloom and fully develop into something of quality, we immediately go into, ‘How do I market it? What’s the elevator pitch? What’s the platform? What will it bring me?’ in a way like we’ve never seen before in history. The packaging of our message or who we are, or the valuable stuff is forced to happen so quickly that it happens at the expense of maturing to become something. | https://medium.com/thrive-global/the-power-of-off-breaking-up-the-dysfunctional-relationship-with-technology-addiction-85a5c0e82f78 | ['Bridgitte Jackson-Buckley'] | 2020-09-19 02:42:45.588000+00:00 | ['Mindfulness', 'Well Being', 'Technology', 'Unplug', 'Addiction'] |
2,914 | The BioShock That Wasn’t | The BioShock That Wasn’t
Hidden nuggets from the original pitch document
I love BioShock. It has one of the best — perhaps the best — twist in video game history. The series is notorious for the indelible marks it leaves on the human psyche. Few games have compelling stories, let alone at the business end of a shotgun. Masterful combat coupled with a gripping narrative set in morose environments remains a recipe for success, just as it was in 2007. But a pitch document from 2002 reveals that BioShock was a very different game from the one that hit consoles and PCs five years later.
This isn’t a knock on how the game turned out, because it was incredible. It’s merely an exercise to see what could have been, as well as possibly shedding some light on what the next BioShock game could be.
Source: Irrational Games Archive.
It’s interesting to see that the document mentions Carlos Cuello, a different protagonist altogether minus the underwater city backdrop that would become instantly recognizable to gamers: Rapture. There’s plenty of stuff here that would eventually find its way into other shooters and features that would revitalize the genre even in 2020. Fortunately, a good deal of the facets mentioned so kindly make an appearance in the game, some in full force and others in spirit. Here’s the side of BioShock that never saw the light of day; for better or worse.
An entirely different setting
“What is the measure of a man?
Is it the hands and feet?
The eyes and ears?
Or is it the Holy Spirit that animates him?
If the body is lost, but the soul is saved, is that anything less than a victory?” - excerpt from Irrational Games’ BioShock pitch document
A remote island. A religious cult. A princess in a cultists’ castle.
BioShock’s initial setting doesn’t sound like the most intriguing premise you’ve been to until you take into consideration that this is an Irrational Games title. The pitch document lists audio logs (they are just as prominent in the final version of the game) that speak of untold terrors that rocked the Isla de Salvacion’s shores. Religious sermons that forced aquatic life to beach themselves on the sand à la Death Stranding and humans undergoing physiological changes that would grant them terrible power reeked of suspense and intrigue. If a document could make one shiver, you can imagine how the game could have turned out. It clearly had potential.
But would I have picked a lush island over Rapture? Certainly not. It’s a decision that I am more than happy with. The eerie depths of a submerged dystopian metropolis certainly served BioShock’s ambitions well. While I’m not saying that Irrational Games couldn’t have made a remote island just as iconic, Rapture possessed a sense of claustrophobia and unrest that few games could ever hope to capture. It served as a setting that furthered the story just as much as the narrative did. The looters and genetic horrors that populated its deserted corridors won’t leave your headspace in a hurry.
Need I remind you of the Big Daddies and the Little Sisters?
BioShock 2. Source: Irrational Games.
Story-driven multiplayer
In addition to a rock-solid single-player component, a story-driven multiplayer mode was also in the works. While details are sparse as to what it may have contained, clues can be found in the game’s sequel. BioShock 2 featured a multiplayer mode created with the help of Digital Extremes, known for their work on the Unreal titles right from the nineties. While Irrational Games had MMO-esque proposals that promised scale and grandeur, Digital Extremes wisely toned down its aspirations to keep things realistic in terms of feasibility.
Unfortunately, this eschewed narrative components for a shooter that lacked the poise and depth of its competitors. Poorly balanced weapons and a wonky matchmaking system buried the series’ multiplayer aspirations. BioShock Infinite tossed out its prequel’s multiplayer and instead focused on the series’ greatest strength: the campaign.
Had Irrational Games got their way with BioShock’s multiplayer element, who knows what eccentric take may have been bestowed upon us?
Escape from Tarkov’s absurd gun customization. Source: PCGamer.
Gun mods
While plenty of games today feature an insane level of gun modification (Escape from Tarkov and Loadout come to mind), I can’t think of a game that let you mix and match gun components back in 2002. The crux of the system Irrational Games intended to craft would work on bonuses and tradeoffs; a tenet not uncommon in multiplayer shooters today. Weapons would have had the ability to be modified with all sorts of unique properties, from a full-auto shotgun to a chain-lightning tazer pistol.
The document even refers to snipers that spew acid-coated rounds, magnetic grenades that could pull robots in (eerily reminded me of Halo 4), and a silent rail gun. An arsenal worthy of the game’s aspirations, no doubt. And just like the end result, the developers wanted players to know that science had its limits. Meddling with your fashionable weaponry would make them unstable in combat situations and I’d hate to be on the business end of one of these denizens of destruction.
The regular archetypes of these weapons would be scattered across BioShock’s world for players to discover, akin to most recent narrative-driven outings. The pitch mentions resources of some kind that would let one apply said gun modifications, but they weren’t specific as to whether it would involve players raking up currencies or hunting for parts. All in all, it would have certainly made for a fine addition to an already impressive game.
Portal 2. Source: Valve.
Change the world
Tinkering with the game world isn’t exactly new; games that gravitate towards puzzles have implemented such systems in the past. But while things like fog and lighting merely serve aesthetic purposes in shooters, Irrational Games steps things up. Terminals peppered across the campaign let you meddle with the weather itself. This doesn’t just cause some cool visual effects; it drastically alters the playing field as well, affecting both friend and foe.
Dabble in humidity control to raise fog that can aid stealth encounters or increase the oxygen level in an area to give your explosives an extra kick. Metal flooring can be magnetized to slow down robotic foes but drags your bullets down with them. The best part? Grenades are effectively sticky bombs now. The possibilities sound too good to be true. And they’re just getting started.
Flip gravity around, ionize the air, or flood/electrify the room. The choice is yours. But be wary: every change affects enemies differently. Robots may be impervious to some attacks while temperature changes would affect cold-blooded and warm-blooded foes differently. Amp up the heat enough and thermal security scans won’t pick you up. But cold-blooded foes get a speed boost to counter said threat. It’s a cohesive system that gels well with a game that promises unique combat situations that can be adjusted on the fly.
BioShock’s basic yet fundamental gene modification system. Source: GameStar.
Genetic modification
Among the features on this list, this is perhaps the only one that had been implemented in BioShock’s final build, albeit in a limited manner. DNA-altering substances were all the rage in Rapture, luxuries that the wealthy stowed away; buried but not forgotten. BioShock players could use Plasmid serums to modify their bodies, granting them potent abilities like hurling fireballs or whipping up electric storms. But Irrational Games had intended to take things even further: biogenetic manipulation.
The cult was to possess unorthodox tech that would let one sacrifice their humanity to turn into an enhanced killing machine. Advanced genotypes could be retrieved from terminals on the map to augment the player’s malleable human form. The genotypes specified in the document point toward aquatic traits that could radically reconstruct enemy confrontations. The crustacean genotype would have made the character a tank with a meaty hard shell in addition to a crab claw for encounters up-close. If you’d rather have echo-electrical awareness and the occasional electric attack, consider the electric eel genotype. You could level these traits up as well, letting you prepare yourself based on the encounter ahead.
Just as the other features mentioned, benefits come with flaws as well. Switching to a hydrozoan (jellyfish) genotype might grant you gelatinous and invisible skin coupled with poison on contact, but robots could still spot you with their infrared vision. It’s rather disappointing to note that this innovative take on body modification didn’t make the cut. The game’s combat could have been all the more memorable. It sounds like it could have stood toe-to-toe against CyberPunk 2077’s futuristic augmentations, a game that showed up 13 years later.
BioShock gave the world the chills back in 2007. Source: 2KGames.
Parting shot
Irrational Games struck a delicate balance between bizarre and business with their original vision for BioShock and it’s a shame that not everything made it. Considering that a new game is in the works, I hope some of these features seep into their next outing. Gizmos that let you tinker with the environment and the player’s physical self have the potential to revamp sandbox titles as we know them. And few developers can do it better than Irrational Games. I wouldn’t trade Rapture for an island even if you asked nicely.
Nonetheless, BioShock is a crowning achievement in video game design, a narrative experience every video game enthusiast must discover for themselves. The twist is one that only a video game can offer. An absolute masterclass in storytelling that merits its own article all by itself. No other medium remotely comes close to delivering what I felt when the game sprung its trap. I suggest playing it before the next one shows up.
Would you kindly? | https://medium.com/super-jump/the-bioshock-that-wasnt-3f0b19331cc3 | ['Antony Terence'] | 2020-09-13 06:17:37.193000+00:00 | ['Technology', 'Gaming', 'Features', 'History', 'Culture'] |
2,915 | Upcoming Technology Radar review — Late 2020 Since Thoughtworks published Technology Radar in October, I now have been able to get time to do my usual review in two parts. Tools, Techniques in part 1… | Upcoming Technology Radar review — Late 2020
Since Thoughtworks published Technology Radar in October, I now have been able to get time to do my usual review in two parts. Tools, Techniques in part 1 and Languages & Frameworks, Platforms in part 2. Please stay tuned.
If you wish to explore the Radar yourself including the themes, you can do here:
Lastly, if you are interested to build your own Radar for internal reference — which I think you must — you can follow this link: | https://medium.com/cloudweed/upcoming-technology-radar-review-late-2020-e2c4c3ab26a1 | ['Karthick Thoppe'] | 2020-12-16 20:41:01.209000+00:00 | ['Software', 'Engineering', 'Future Technology', 'Software Engineering', 'Technology'] |
2,916 | 10 Tips on Writing a Proper Dockerfile | 10 Tips on Writing a Proper Dockerfile
Writing a proper Dockerfile is not too difficult
Image via IMGBIN.com
There are some tips and tricks to write a proper Dockerfile. And for the most part, writing a “proper” Dockerfile is simple, though not easy.
Without further ado, let’s dive right in to the topic.
1. The most lightweight base image possible
Right from the very beginning, you’d wanna start building from the most lightweight base image possible. Something usually related to the alpine docker image.
For example for running a python web application inside a docker container, I’d start my Dockerfile with something like the following:
FROM python:3.8.2-alpine
This makes your final image much smaller and desirable.
2. Place your most static commands at the top
Any command that is susceptible to the least changes possible in the future, should be placed at the top of your Dockerfile. That way you can take advantage of caching layers of docker, which is the act of using the result of previous builds in the current one.
For example, I would put the following lines just below my FROM line.
chmod +x /usr/local/bin/dumb-init && \
apk update && \
apk add curl RUN wget https://dumb-init-url -O /usr/local/bin/dumb-init && \chmod +x /usr/local/bin/dumb-init && \apk update && \apk add curl LABEL maintainer="Meysam Azad"
LABEL company="My awesome company" ARG service_workdir=/app
ARG service_port=8000
ARG username=webapp
ARG data_path=/data
ARG wheelhouse_directory=/wheelhouse
No matter what happens in the future, I am sure that these lines will most probably never change. These are my “static” lines and are placed at the top of my Dockerfile.
The rest of the commands would normally be placed below these, and should always be the commands that change more often in each build.
3. Never ever run your app with superuser
Just because your application is running inside a docker container, which is a isolated entity with it’s own file system and specifications, doesn’t mean you can ignore years of Linux security best practices.
That being said, the following lines would normally come after my “static” lines.
RUN adduser -D ${username} && \
mkdir -p ${data_path} && \
chown ${username} ${data_path} ...
# somewhere along the way
USER ${USERNAME} # initialized below
3 points worth mentioning here:
I have taken advantage of docker build arguments username from the above. These arguments are configurable upon every build. These commands are valid inside an alpine image, so you’d better find your own commands instead of memorizing these lines. You can see that these lines are also “static” lines and would never change except for the times that a build argument have changed. You can call these lines “semi-static”.
4. Use ENV and ARG instead of hard-coded values
As you saw previously, we made use of ARG command. And next, it’s cousin is ENV . You would totally use these combination to take your docker image to the next level by making it much more configurable for every use case.
For example these would be my ENV lines.
ENV WHEELHOUSE_DIR=${WHEELHOUSE_DIR:-$wheelhouse_directory}
ENV SERVICE_WORKDIR=${SERVICE_WORKDIR:-$service_workdir}
ENV SERVICE_PORT=${SERVICE_PORT:-$service_port}
ENV USERNAME=${USERNAME:-$username}
ENV DATA_PATH=${DATA_PATH:-$data_path}
I have used :- syntax in above lines which means: “if the argument on the left side is empty, use the value provided on the right side”. And in the above cases, all the right side arguments are variables holding a value from previous ARG lines.
5. Make use of docker multi-stage feature
This feature is a practical one. You can write commands on different stages, and run them on different occasions: dev , test or prod . You can also use this feature if you want to build on different versions. Like on A/B testing for example.
For example I would start my first line of docker like this:
FROM python:3.8.2-alpine AS base
And then, down the line when I’m ready to start the application, I would write something similar to this:
FROM base AS test
# some commands to run application on testing environment
... FROM base AS dev
# other commands to run application on development environment
... FROM base AS prod
# yet another set of commands
...
Now every time you want to build your application, you could either use:
docker build --target dev -t my-cool-image .
or use the following in a docker-compose file:
build:
context: .
target: dev
6. Copy contents within different stages if needed
Suppose you’re trying to run a web app inside a container. And let’s say your application needs to be compiled, or something similar.
You don’t need to have both of compile and run phase in the same stage, and rather you’d fetch dependencies and compile on one stage, and then run the binary executable output on another.
For example when I want to run a python app and there are some external dependencies involved, I would download those dependencies in one stage:
FROM base as download_wheels RUN apk add gcc musl-dev linux-headers COPY --chown=${USERNAME} setup.py ./ RUN pip wheel -w ${WHEELHOUSE_DIR} .
And then I would copy these dependencies to my final stage to be able to run the app:
COPY --from=download_wheels --chown=${USERNAME} \
${WHEELHOUSE_DIRECTORY} ${WHEELHOUSE_DIRECTORY} ... RUN pip install --no-index -f ${WHEELHOUSE_DIRECTORY} .
You can see that I have also changed the owner of the copied file to my own desirable user with --chown .
7. Always put a health check
You wouldn’t know if your application is running in full-power just by simply using docker ps . Because this command is only capable of telling if a container is running or not, and not able to figure if your application has been stuck somewhere, or whether it is fully operational and ready to service.
I would normally add a line like this for my web application:
HEALTHCHECK \
--interval=10s \
--timeout=5s \
--start-period=10s \
--retries=5 \
CMD curl localhost:${SERVICE_PORT}/v1/ \
|| exit 1
Everything is self explanatory I hope, but let’s not be presumptuous.
We try to run the command curl localhost:${SERVICE_PORT}/v1/ every 10 seconds, waiting 5 second for it to respond, starting 10 seconds after the container is started, and retrying 5 times if it fails. Otherwise just return a non-zero status code, informing the docker service that we’re not feeling good.
The endpoint /v1/ for me, is usually an endpoint which checks if all the other dependencies are also up and running. Like RabbitMQ, Redis or any other service that my app is interacting with.
8. Expose your ports if it’s meant to be
In a typical web application, you would run your app in some port, and so my advice to you is to expose that. This is desirable because when someone else is working with your image and inspects it using docker inspect they would easily find out which port to communicate.
It’s pretty obvious but just for reference:
EXPOSE ${SERVICE_PORT}
9. Also expose your working directory
Has it ever occurred to you that you have pulled a docker image from the registry, and without any knowledge about the image, you had to go through docker inspect to figure out it’s port and it’s data directory. Like postgres for example which I almost always forget where it stores its data, right before I check it using docker inspect postgres .
That’s why exposing your working directory is important and desirable. Just like exposing your ports, other people, or even yourself when coming back to it after sometime, would need to see the working directory (or data directory) of an image Therefore try to do this in your image where possible:
WORKDIR ${DATA_PATH}
10. Have different ENTRYPOINT and CMD
This is mainly because, on a regular day job, you would do the following whenever you feel the need:
docker exec -it my-container bash
This command will be executed after the ENTRYPOINT and so you would place something powerful in there.
For me I would normally use something like this:
ENTRYPOINT ["dumb-init", "--"]
CMD ["sh", "entrypoint.sh"]
Notice how I used brackets [ and ] . This is most desirable. Because if you use this notion instead:
CMD sh entrypoint.sh
docker would run ["sh", "-c", "sh", "entrypoint.sh"] which is very unnecessary if you ask me.
Dockerfile
If you want to get your hands on the complete Dockerfile, this is for you. Enjoy!
Conclusion
That’s it guys. In this article I have shared with you one of the coolest Dockerfile that I’ve come up with. I hope you enjoyed and got something out of it.
I’m also hoping to see your cool Dockerfile sometimes.
Acknowledgement
If you liked the above content, follow me as I plan to write regularly. Which I would say feels pretty good to be doing. So stay tuned and feel free to take a look at my other works as well. | https://medium.com/skilluped/10-tips-on-writing-a-proper-dockerfile-13956ceb435f | ['Meysam Azad'] | 2020-05-06 21:35:28.132000+00:00 | ['Technology', 'Coding', 'Data Science', 'Programming', 'Digital Life'] |
2,917 | The Increasing Need For Traceability In Pharma Supply Chain | As the current pharma supply chain is challenged by the lack of visibility and a number of new compliance regulations, most companies seek enhanced traceability. Pharmaceutical companies must adopt tracking technology to ensure product integrity and achieve global compliance.
Drug counterfeiting is a well-known and documented problem affecting human lives, but also the pharmaceutical industry reputation and ROIs. Within the United States, the FDA has enforced and implemented the Drug Supply Chain Security Act to protect consumers against counterfeit drugs. Act to protect consumers against falsified medicinal products. The Global Traceability Standard for Healthcare (GTSH) defines international process standard, explains and sets minimum requirements for all stakeholders in the healthcare supply chain. However, full implementation as a cohesive solution of global traceability standards is still in progress. Blockchain can be a comprehensive, all-inclusive solution which saves lives and dollars seamlessly in the future.
Critical support for combating counterfeit medicines and vaccines are substantial investments. Indeed, in the folding carton sector, anti-counterfeiting and safety procedures will continue to play a major role. The traceability solutions are one of the four biggest technological developments set to transform the market by 2022, according to the report by the design office of Smithers Pira titled “The Future of Folding Cartons to 2022.”
Indeed, when a drug package can be identified, the risk of receiving a fake product is reduced drastically. Increasing traceability procedures should help a huge worldwide counterfeiting industry, estimated at more than $200 billion worldwide in 2017 by the World Economic Forum.
In the pharmaceutical industry, serialization should help to monitor the supply chain and fight against the spread of fraudulent drugs and vaccines. The process can also bring additional benefits, including a more precise view of inventories with a better history of the origin and quality of the goods purchased. These advantages may also apply to other sectors facing counterfeiting hazards.
Serialization is a chance to modernize current traceability systems. This win-win model would significantly enhance action against parallel markets by controlling the supply chain and applying best practices driven by data. To ensure compliance with customer requirements and eliminate fake products from the global market, the pharmaceutical supply chain should be transformed with the largest track-and-trace network. | https://medium.com/@healthcarebusinessreview/the-increasing-need-for-traceability-in-pharma-supply-chain-e388ad5b31c2 | ['Healthcare Business Review'] | 2020-12-24 06:03:34.907000+00:00 | ['Supply Chain', 'Healthcare', 'Technology', 'Pharmaceutical', 'Technews'] |
2,918 | Choose your priorities and ruthlessly eliminate the rest | Originally published on JOTFORM.COM
Recently, I stood in my closet deciding whether an old gym T-shirt brings me joy.
I was following the advice of organization expert Marie Kondo, whose bestselling book “The Life-Changing Magic of Tidying Up,” has swept the world.
According to Kondo, we can tidy up our homes, and our lives, by deciding whether each of our possessions sparks joy — and getting rid of anything that doesn’t.
In a world that seems determined to make us consumers, there’s a contrasting wave of minimalism — people desperately want to cut the non-essential. Freed of excess clutter, people find joy, inspiration and, of course, their stuff.
The same is true of the workplace, and I don’t mean just cleaning your desk (though that’s one of my six daily habits). I mean taking a hard look at your bigger goals, choosing your top priorities, and ruthlessly eliminating everything else.
Just as paring down your domestic life can make you happier, narrowing your focus at work will lead to both increased satisfaction and productivity.
Don’t just take my word for it: some of today’s greatest minds are firm believers in the magic of tidying up your priorities.
The power of saying ‘no’
“Innovation is saying ‘no’ to 1,000 things.” — Steve Jobs, Apple founder
Let’s face it: it feels good to say “yes” — to be the person who always delivers. But what often divides successful from very successful people is their ability to say “no.”
Writes Greg McKeown, author of the New York Times bestseller Essentialism:
“Not just haphazardly saying no, but purposefully, deliberately, and strategically eliminating the nonessentials.”
Indiscriminately pursuing more diffuses our efforts and ultimately derails us from achieving our larger goals.
Bestselling author and Food Network host Ina Garten has amassed an estimated net worth of more than $40 million, according to Forbes.
Her recipe for success? At a Forbes Women’s Summit, Garten explained that if an offer doesn’t align with her brand’s vision and values, she will turn it down.
When I first started JotForm, without venture capital or investors, I wore every hat. I spent my days extinguishing customer service fires, leaving me little time to focus on growing the business. I was doing so much — and yet I felt like I was spinning my wheels.
Once I began hiring an amazing team of engineers and customer service specialists, I was able to focus on my priorities. I delegated customer service issues. And JotForm began to take off.
Though I learned a lot from those early days, they made me realize:
It’s not about doing more — it’s about doing more of what moves the needle for you and your company.
Consider the difference between speed and velocity: speed measures distance over time, while velocity measures displacement, distance covered in a specific direction, over time. When measuring speed, you can still be running in circles; velocity captures actual progress. We want to focus on velocity, not speed.
The pros of sticking to your priorities
Building a brand means being relentlessly selective about the opportunities you accept.
If you choose and stick to your priorities, you’ll be more productive on a daily basis, too. I’ve written before about the perils of multitasking: Shifting between tasks can consume as much as 40% of our productive time.
That’s why, in my daily work, I always have three concrete priorities. Currently, they are:
1. Hiring really great people
2. Creating quality content
3. Equipping our users to work more productively
These priorities guide everything I do. If a project or an opportunity doesn’t match one of them, I say no. I remove unnecessary deliberation from the situation and eliminate distractions so I can make real progress.
Economists and psychologists have found that too many options can even paralyze people or push them into making illogical decisions.
In a clever experiment, Columbia University professor Sheena Iyengar set up a booth to sample jams at a California market. Every few hours, she alternated between offering 24 jams and six jams.
Iyengar found that 60 percent of customers were drawn to the large assortment, while only 40 percent stopped by to check out the small one. However, 30 percent of people who sampled from the small assortment bought jam, while only 3 percent of those faced with two dozen flavors purchased a jar.
As Professor Iyengar explained, “people might find more and more choice to actually be debilitating.”
It’s no surprise that choosing a focus makes people happier, too. A study of over 4,000 working professionals found that sticking to a daily “MIT” (most important task) was correlated with higher levels of energy and happiness.
On a larger scale, a priority surplus can hurt your company’s bottom line. In a recent survey of 1,800 global executives, 64% said they have too many conflicting priorities. The researchers found that as an executive team’s priority list grew, the company’s revenue growth declined relative to its peers.
In sum: too many goals can sabotage your momentum.
How to choose your priorities
Setting your priorities demands a game plan, and there are numerous lists, matrices and charts to help you get started.
The Eisenhower matrix, for example (named after the industrious former US president), is a popular tool that identifies priorities based on importance and urgency. Tasks that are neither important nor urgent are eliminated.
Urgency, however, can be a slippery slope. Too often we fall into the trap of hyper-focusing on the time-sensitive.
And it’s understandable — we feel instant gratification when we knock out a task, rather than saving it for a later date or taking the time to delegate. That’s why “inbox zero” is such an attractive concept. It’s satisfying to feel like you’re working from a clean slate.
Less important tasks also expand to fill the time. That’s why we need to focus on what’s essential.
For example, my son recently started kindergarten, and I decided to pick him up at 5 pm each day. Yet, I was used to staying late. If I had an overflowing inbox, I’d often stay until 8 pm to clear it out. Now that I have to leave the office at 4:45 sharp, I’m better at choosing and completing my top priorities.
If you still find yourself prioritizing time-sensitive rather than essential tasks, you’re not alone.
In a series of studies published in the Journal of Consumer Research, people typically chose to complete tasks with very short deadlines, rather than tasks with less pressing deadlines that were just as easy and promised a bigger reward.
But as I said before, when we focus on urgency, we end up putting out fires all day, instead of accomplishing what matters most.
Does that mean we should forget urgency entirely?
Not exactly. But I’ve found that I can delegate the lion’s share of urgent items.
I tend to agree with the experts who recommend choosing your priorities based on contribution and passion — the objectives that motivate us and give us a greater sense of purpose. And I would add importance to that list.
To choose your priorities, ask yourself:
1. What’s most important to me, considering my (or my organization’s) broader goals?
2. Where can I make the highest contribution, based on my organization’s needs and my personal experience and abilities?
3. What inspires me?
Next, taking a page from Warren Buffett’s strategy, write down 25 goals, then circle your top 3–5. Whether you choose three (like me) or five (like Buffett), it’s important to ignore the other items at all costs.
You don’t have to abandon your other goals altogether — but set them aside until you can give them your full, undivided attention, perhaps at a later date.
Looking back at my priorities, hiring great people is critical. I won’t delegate that responsibility to anyone else — especially when we’re hiring for key positions.
My goal of equipping users to work more productively falls between contribution and importance. Creating quality content, however, is my passion. Focusing on this priority energizes me to work more fervently on the others.
If something is not a priority, I either delegate if it’s urgent, or avoid it completely.
Building priorities into your daily routine
Periodically, we need to step back and re-evaluate our priorities. Once you’ve chosen yours, it’s important to build them into your daily routine.
That means blocking out uninterrupted time to work on your focus areas: turn off alerts, set up email auto-replies, and create quiet zones. Fiercely protect your workflow.
As Greg McKeown suggests, if you don’t prioritize your time, someone else will.
For many people, mornings are their prime time for deep work. I spend the first two hours of every workday writing out my thoughts and ideas, as they relate to my priorities. I don’t book meetings during this period and I wouldn’t think of checking emails.
Finally, communicate your intentions with your colleagues, so they know what to expect from you (and when they shouldn’t expect anything from you).
You may feel uncomfortable sharing your newly-prioritized work life with a manager, who has other plans for your time. But if you clearly explain how your goals align with the company’s big-picture agenda (or your manager’s agenda), chances are they’ll respect your commitment.
Tidy up your priorities and see what happens. It can truly be a game-changer. | https://medium.com/swlh/choose-your-priorities-and-ruthlessly-eliminate-the-rest-37d48f5a8b2 | ['Aytekin Tank'] | 2019-08-26 12:09:25.870000+00:00 | ['Innovation', 'Leadership', 'Life Lessons', 'Business', 'Technology'] |
2,919 | Back to basics: Decoding Audio Modems with Audacity | Hello there! It has been a while since my last post. If you have been following me lately, you know that in my last stories I explored what you can do with a very simple Software-Defined Radio (SDR) dongle, like the ones based on the RTL2838 chipset that sell for about $25 at your favorite online store. We learned how to use it to listen to commercial radio, intercept airplane and pager messages (yes, pagers are still used nowadays by some professions) and to transmit radio signals using a Raspberry Pi.
I also recently bought a HackRF One, so in order to make good use of it, I had the ambition to learn some GNU Radio and create my own radio protocol… With this whole free-software ecosystem for encoding and decoding radio waves, how hard could it be?
Well, it turns out it is indeed pretty hard if you don’t have a background in signal processing and are just a curious geek such as myself 😅. So I had to take a step back and get back to basics: instead of transmitting directly over the electromagnetic medium, I started exploring the transmission of data by sound.
What are audio fax modems exactly?
If you are from my generation or older, do you remember the sound of the early Internet connecting to the AOL servers, emitting a screeching sound from what seemed to be another world? That was your 56k modem.
The typical sound of an audio modem connection (Source: absorbtions blog)
So, a modem is the device your computer uses to communicate with the other computers of the world: in order to transmit digital bytes over an analog medium, one has to use digital modulation. Modulating a signal means enriching it with information in such a way that a message can be extracted (demodulated) by the recipient. A modem does just that: it is actually an abbreviation of “Modulator/Demodulator”.
Morse Code Modulation
The simplest example of that would be Morse code: imagine you and a friend are connected by a conductive wire. You could exchange messages just by turning the current on and off. In the case of Morse code, you both agree that there are two “symbols” that can be used to transmit “letters”, which assemble into human-understandable “words”: a short pulse (•) and a long pulse (−):
Morse code is a simple form of modulation
Great, we just modulated the word HELLO into our on-and-off electric signal!
Now, considering that Morse code was invented in the 1830s, you can imagine it is not quite suitable for modern computers: while Morse code was made to be easily translatable into human-readable letters, our computers need bits and bytes.
Digital Modulation
In order to transmit computer-understandable bytes instead of human-understandable letters, you could agree that those pulses represent 0s and 1s; your “words” then become bytes when the bits are arranged eight by eight, suitable for processing by a computer.
Congratulations: you just turned your Morse code machine into a digital modem! Here is what it would look like to modulate the ASCII representation of the letter “H” (the first letter of our HELLO message) in Audacity using this invented modulation technique. The letter H is represented in hexadecimal notation as 0x48, so in binary it is represented by 01001000:
Morse code used to transmit digital data (the ASCII representation of “H”)
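A minimal sketch of the same idea in Python (my own illustration, not part of the original post): expand each character of a message into its bits to get the on/off pulse train pictured above.

def byte_to_bits(byte, msb_first=True):
    # Expand one byte into its 8 bits, e.g. ord('H') = 0x48 -> 01001000.
    bits = [(byte >> i) & 1 for i in range(7, -1, -1)]
    return bits if msb_first else bits[::-1]

def on_off_keying(message):
    # 1 = signal on, 0 = signal off, one time slot per bit.
    return [bit for ch in message for bit in byte_to_bits(ord(ch))]

print(byte_to_bits(ord("H")))  # [0, 1, 0, 0, 1, 0, 0, 0]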
Waves as a medium
Now, in the real world, it is not practical to have a signal that has to rely on “silence” on the medium you share with your friend: It does not scale to have a dedicated hard-wire for every destination you would want to reach (which is what the early telegraph systems and very early phone systems were actually doing).
What we will want to use for more flexibility are wave frequencies: instead of alternating between “noise” and “no noise” to isolate the signal, we will use something more like “this sound” and “that sound”, usually expressed as single-frequency tones.
Let’s look at the same data, but this time we use tones of 1 hertz to represent zeros and 5 hertz to represent ones:
Frequency modulation used to transmit the ASCII representation of “H” — Generated using GeoGebra
Alright, we now have a frequency-modulated signal, better known under its acronym “FM”, the same modulation technique used in analog form to send sound to your car radio! Now, since we used it to transmit a digital signal instead of an analog one, we call it a Frequency Shift Keying (FSK) signal, meaning that we shift frequencies to represent different symbols/bits.
This is just a toy example to illustrate the principle: if these were sound waves, they would be inaudible to the human ear, which is typically able to hear sounds from 20 Hz to 20 kHz. But we now have a model realistic enough for us to understand a real-life legacy audio modem 📠!
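Here is a small Python sketch of such an FSK modulator (my own illustration; the 8 kHz sample rate and the audible 1,000/2,000 Hz tone pair are assumptions, chosen to be hearable rather than the 1 Hz and 5 Hz of the toy figure):

import numpy as np

SAMPLE_RATE = 8000            # samples per second (assumed)
BAUD = 8                      # symbols per second
F_SPACE, F_MARK = 1000, 2000  # tone for a 0 / tone for a 1 (assumed)

def fsk_modulate(bits):
    # Emit one constant-frequency tone per bit.
    n = SAMPLE_RATE // BAUD
    t = np.arange(n) / SAMPLE_RATE
    tone = {0: np.sin(2 * np.pi * F_SPACE * t),
            1: np.sin(2 * np.pi * F_MARK * t)}
    return np.concatenate([tone[b] for b in bits])

signal = fsk_modulate([0, 1, 0, 0, 1, 0, 0, 0])  # the letter 'H', MSB first

A real modem would keep the phase continuous across symbol boundaries; restarting each tone at phase zero is crude, but good enough to hear the two tones alternate.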
Realistic Example
A Bell-103 modem using Audio FSK modulation through the phone lines
So, I wanted to test a realistic example, and after playing with GNU Radio for a while with mixed results, I found a really neat piece of software called minimodem that can transmit and receive all kinds of real-life Audio FSK protocols. The output of the software can be a WAV file, ready to be read (or rather “listened to”) on the receiving end of the transmission.
I generated a sample that encodes the “HELLO” message in ASCII and sends it using Audio FSK at a rate of 8 baud. The baud rate is the number of “symbols” sent per second, and since in this case the modem encodes only the symbols “1” and “0”, it is equivalent for us to the bit rate, but keep in mind that this will not always be the case.
After you have installed minimodem on your favourite distro, here is the code to use for generating the very simple message we are to analyse:
echo “HELLO” | minimodem --tx --ascii --startbits 0 --stopbits 0.0 -f minimodem_hello.wav 8
(Warning: This might be LOUD in some headphones 🔊)
Now, what does that look like when opened with Audacity, an open-source audio visualization and manipulation software?
The default view is not super useful to analyze realistic modem communications
Well, that is not super useful; the slight difference between the tones, even though audible when listening to it, cannot easily be seen in the waveform. But we are in luck: Audacity can also represent a wave in a time-frequency view instead of the classic time-amplitude one. To do so, it uses a Fast Fourier Transform (FFT), a mathematical transform frequently used in signal processing that also has plenty of other very useful applications, which I will keep for another time 😏.
A Fast Fourier Transform is used to shift from a time-amplitude view to a frequency view. — Source: https://commons.wikimedia.org/wiki/File:FFT-Time-Frequency-View.png
In Audacity, there is what is called the Spectrogram View, which uses the aforementioned Fast Fourier Transform to display the frequencies detected at each point in time. Let’s try:
A second track has been added and switched to the spectrogram view; Now we can clearly see the two tones used in the modem communication.
Oh, that looks promising! Logically, the first letter to be transmitted should be “H”, which again is represented in ASCII as 0x48, i.e. 01001000. Let’s zoom in and check:
Zoomed into the first byte
Bingo! What I discovered with this analysis is that the least significant bit is sent first for each byte. That can seem counter-intuitive in this left-to-right visualization, but it is a common convention in serial transmission: the bit sent first is the least significant one of each byte.
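We can automate what we just did by eye. Below is a crude, non-coherent demodulator sketch in Python (mine, not minimodem’s actual algorithm): slice the recording into one window per symbol, correlate each window against the two expected tones, and keep whichever carries more energy. The sample rate, baud rate, and tone frequencies are assumptions that must match the recording; the LSB-first byte order follows the observation above.

import numpy as np

def fsk_demodulate(signal, sample_rate=8000, baud=8, f_space=1000, f_mark=2000):
    # One window per symbol; pick the tone with more energy in that window
    # (a single-bin DFT per candidate frequency, i.e. a crude correlator).
    n = sample_rate // baud
    t = np.arange(n) / sample_rate
    probe0 = np.exp(-2j * np.pi * f_space * t)
    probe1 = np.exp(-2j * np.pi * f_mark * t)
    bits = []
    for i in range(0, len(signal) - n + 1, n):
        window = signal[i:i + n]
        bits.append(1 if abs(np.dot(window, probe1)) > abs(np.dot(window, probe0)) else 0)
    return bits

def bits_to_text(bits, lsb_first=True):
    chars = []
    for i in range(0, len(bits) - 7, 8):
        group = bits[i:i + 8]
        if lsb_first:  # reverse to MSB-first before assembling the byte
            group = group[::-1]
        chars.append(chr(int("".join(map(str, group)), 2)))
    return "".join(chars)

Fed the output of the modulator sketch from earlier (which sent its bits MSB first), bits_to_text(fsk_demodulate(signal), lsb_first=False) prints “H” back out.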
There are other interesting options we can set on the spectrogram view. For example, the scale on the right indicates that our two interesting tones lie between 1,000 Hz and 2,000 Hz, so we could “zoom” into that frequency range and get an even clearer difference between the tones.
Conclusion
So, I just wanted to conclude by pointing out a very interesting fact about the Spectrogram View: the more precise you want to be about the frequencies you observe, the less precise you can be about the time at which those frequencies were observed:
You cannot be precise about the frequency of a sound wave and the time of “measurement” at the same time
I find a kind of poetry in the fact that this trade-off stems from the same underlying mathematics as the famous Heisenberg uncertainty principle at the heart of quantum mechanics: fundamental particles are wave-like objects, so this limit is part of their nature. If you are interested in this topic, I can only recommend this video from 3Blue1Brown, who explains with far more clarity than I ever could how a Fourier analysis of wave signals demonstrates the uncertainty principle almost intuitively:
In the meantime, happy hacking! | https://medium.com/poka-techblog/back-to-basics-decoding-audio-modems-with-audacity-c94faa8362a0 | ['Maxime Leblanc'] | 2020-06-09 17:17:28.624000+00:00 | ['Technology', 'Hacking', 'Sdr', 'Modem', 'Mathematics'] |
2,920 | Building an Inexpensive 3-D Printed ROS Robot | Building an Inexpensive 3-D Printed ROS Robot
Built around a Pi4, low cost gear motors, and an RPLidar A1
Modified Weddell 2 from Thingiverse
I recently began playing with ROS in simulation, and am really enjoying it. I want a platform that I can use for experiments, particularly learning to configure a physical robot for SLAM and navigation. If you are not familiar with ROS, or have been intimidated by the steep learning curve, you might want to give my introductory article a look first.
I stumbled on the Weddell 2 ROS Robot by user pokpong on Thingiverse, and was very impressed. It was very close to what I was after. I couldn’t source the motors the original device used, so I made a remix of that design that was set up for inexpensive gear motors with encoders. I hope this write-up is useful if you want to do something similar. The modified files are here.
The chassis is printed in PETG on an Ender 3 Pro. The parts require rafts and supports to print without warping, and the supports need to be gently cut away with an hobby razor knife. Holes tend to print a little small because of the way the slicer works, so it’s best to drill them to size.
The original design used some German gear motors that I had trouble sourcing (and frankly, they look expensive). They are probably overkill for my application, so I opted to modify the baseplate to mount these inexpensive gearmotors with integrated encoders. I wanted a robot that was slow with plenty of torque, so I used a robot wheel speed calculator to figure out what RPM I needed to get the speed range I wanted.
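The math behind such a calculator is simple: top speed is the wheel circumference times revolutions per second. A quick sketch with illustrative numbers (the actual gearmotor RPM is not stated here, so 100 RPM is an assumption):

import math

def ground_speed_m_s(rpm, wheel_diameter_m):
    # Top speed = wheel circumference * revolutions per second.
    return math.pi * wheel_diameter_m * rpm / 60.0

print(round(ground_speed_m_s(100, 0.085), 3))  # ~0.445 m/s with 85 mm wheels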
I used OpenSCAD to modify the base STLs, by plugging the holes I didn’t need and punching new ones for the motor mounts. The same plate is printed twice to make mirrored sides.
The sides and decks are assembled with M3 socket head screws and M3 locknuts.
Base with motors installed with brass hex adapters for RC wheels
I used a 1" caster I had from a previous robot chassis. With these motors and this caster, I needed larger wheels to get the ground clearance I wanted, so I opted for this set of 85 mm wheels with tires. The brass wheel adapter included with the motor set will fit standard R/C car hex wheels, so you can mount a large variety of them.
Underside of base showing caster
The design of the masks by pokpong makes clever use of pockets that trap an M3 nut — once you remove the support material, they lock right in. The vertical printed structures are called “masks” in his design, and he includes several with different kinds of cutouts for sonar, battery sockets, and a Raspberry Pi camera. I only used the camera mask, and the plain mask for the rest of the supports.
I modified the masks to make them a bit easier to print — the originals came to a very sharp edge on the bottom, resulting in little contact area with the print bed. I had several of them fail in printing with the original, so I used OpenSCAD to clip off the sharp edge.
Detail of mask showing M3 captive nut
I selected these 50 mm aluminum standoffs in black for the vertical risers. The battery pack sits behind the motors.
Masks and wheels mounted
To join the standoffs between levels, you need M3 threaded rod. I didn’t have any, so I selected my longest M3 screws, cut the caps off with a rotary tool, and cleaned up the cut end with an M3 tap.
M3 riser connectors
My electronics deck is currently pretty minimal — it includes a Raspberry Pi 4 running Ubuntu and ROS, a motor driver, and an Arduino UNO running rosserial to set motor speeds and publish battery voltage. A mostly empty shield board has a voltage divider on the battery input to bring the battery voltage into the range that the ADC on the Arduino can read. The UNO is seriously limited on RAM — I’m working on a replacement with a Teensy ARM processor that will be more capable. This board is OK for testing, but would not be able to run other peripherals like an IMU and encoders without running out of RAM. The rosserial library uses a considerable amount for each publisher and subscriber.
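For the curious, the divider math looks like this; the resistor values below are assumptions for illustration, not the ones actually on this shield:

def divider_out(v_in, r_top, r_bottom):
    # Vout = Vin * Rb / (Rt + Rb)
    return v_in * r_bottom / (r_top + r_bottom)

# A full 3S LiPo sits near 12.6 V; the UNO's ADC reads 0-5 V.
print(divider_out(12.6, 20_000, 10_000))  # 4.2 V, safely inside the ADC range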
Power for the Pi and Arduino is provided by a 5 volt 3 amp battery eliminator circuit normally used on R/C aircraft. Power is soldered to the test pads on the Pi. PCB standoffs are 3D printed.
Electronics deck
A Raspberry Pi camera is mounted to the forward mask on the electronics deck behind the glare shield. The standard camera’s field of view is not restricted by the glare shield. The camera uses little tiny M2 nylon nuts and bolts to secure it to the mask.
Electronics deck with camera mounted
Wires are secured with zip ties. Power is currently a 2200 mAh 3S LiPo R/C drone pack with XT60 connectors, though I may upgrade to a larger pack once I am farther along.
Next steps are to get the Teensy board working with a SparkFun IMU and counting encoder pulses from the motor, and publishing the appropriate topics. If that works, it should be able to keep rough track of where it is by fusing the odometry and IMU data. That will be the subject of a future article! I expect that to take some work.
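For reference, the dead-reckoning math behind that rough position tracking looks roughly like the sketch below, in plain Python with made-up geometry constants. A real ROS node would wrap this in a nav_msgs/Odometry publisher and fuse in the IMU heading:

import math

WHEEL_RADIUS = 0.0425   # 85 mm wheels (assumed)
WHEEL_BASE = 0.18       # distance between the wheels, metres (assumed)
TICKS_PER_REV = 1320    # encoder counts per wheel revolution (assumed)

x = y = theta = 0.0     # pose estimate, updated as encoder ticks arrive

def update_odometry(d_ticks_left, d_ticks_right):
    # Dead-reckon the pose from encoder tick deltas (no IMU fusion yet).
    global x, y, theta
    per_tick = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = d_ticks_left * per_tick
    d_right = d_ticks_right * per_tick
    theta += (d_right - d_left) / WHEEL_BASE
    x += (d_left + d_right) / 2 * math.cos(theta)
    y += (d_left + d_right) / 2 * math.sin(theta)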
LIDAR deck
The LIDAR unit mounts easily on the top deck — I’m very impressed with how easy it is to get it up and running in ROS. I’m excited to get some SLAM going!
Initial testing has gone well under manual teleop control — it feels precise and has plenty of power. The PETG parts seem quite durable — we’ll see how they hold up under lots of use.
See room for improvement? I would love to get your input and ideas!
Author’s note: The links in this article are not affiliate links. | https://medium.com/swlh/building-an-inexpensive-3-d-printed-ros-robot-625ac7766f4a | ['Jason Bowling'] | 2020-12-27 07:57:26.200000+00:00 | ['Robotics', 'Raspberry Pi', '3D Printing', 'Technology', 'DIY'] |
2,921 | A New Perspective On Virtual Reality | Reap The Rewards For Challenging Your Intelligence and Leave Your Iconic Mark On The World
Virtual Reality is more than just a video game or visual simulation. The potential of VR will have far-reaching ramifications on the evolution of our species because it represents the expression of our multidimensional quantum computing imagination coming into “form.”
It’s where technology and the mystical/metaphysical unify.
The “machinery” of human psychology must be upgraded in order to process the higher/faster frequencies of consciousness, in which the definition of “real” and the meaning of “reality” open up dramatically, and most aren’t ready to face this shift.
Everything is possible and there must be space to handle paradoxes that the human mind can’t comprehend from lower states.
The more resistance one holds against the malleability necessary to allow infinite possibilities, the more conflict will reflect itself in one’s environment and on the planet.
We are not meant to “vanquish” lower non-preferred frequencies out of existence because they are “wrong”, but to realize that cosmic consciousness is like a cable TV guide with numerous channels.
The more an individual finds the power of their conscious attention and masters the mechanics of their own signature channel instead of unconsciously consuming the old mass collective one, the more rapidly the shift into The Passion Age will unfold. | https://medium.com/@jeremyalasman/a-new-perspective-on-virtual-reality-4fc525dc637a | ['Jeremy Lasman'] | 2020-12-02 18:46:36.977000+00:00 | ['Quantum Mechanics', 'Virtual Reality', 'Imagination', 'Technology', 'Legacy'] |
2,922 | Dark Patterns in Your Everyday Apps | Do not confuse it with Dark Mode
The hype around “The Social Dilemma” made many viewers aware of the power of technology and its influence on all of us. For UX designers, the use of dishonest tricks in digital platforms is not a new topic. We call them dark patterns.
Evil design patterns, unfortunately, are very common. To demonstrate, I created a compilation of dark patterns we can find every day.
Youtube Disguised Ads
Author/Copyright holder: Youtube. Copyright terms and license: Fair Use.
The very first thing Youtube displays when the app is open is not a video, but an ad that really looks like a video. When a user scrolls down the app, he comes across many of these ads disguised as videos, which the user can easily click by mistake.
Spotify Roach Motel
Remember when you created your Spotify account? Probably not. Maybe you only used OAuth and immediately got logged in with your Facebook account. If not, you simply filled a small survey with your registration data and you were in. What about deleting your Spotify account? If you ever tried to do it, you probably remember how painful it was.
Spotify’s webpage makes it easy for the user to find where to Log in or to Sign up. There are clear options in the navbar, as well as a highlighted button in the center of the screen for it.
Author/Copyright holder: Spotify. Copyright terms and license: Fair Use.
Author/Copyright holder: Spotify. Copyright terms and license: Fair Use.
If you click on “Login”, you’ll find out you don’t even need to create a new account to use Spotify. You can automatically login with your Facebook, Apple, or Google account. How easy.
Deleting a Spotify Account, on the other hand, can be a painful experience. You “only” have to complete the following instructions:
1. Navigate to support.spotify.com/us/contact-spotify-support/.
2. Click on “Login” in the upper right-hand corner and enter your credentials.
3. Work through a series of on-screen questions.
4. Click on “Subscription”.
5. Choose “I want to close my Spotify account”.
6. Click on “Contact to Close”.
Author/Copyright holder: Spotify. Copyright terms and license: Fair Use.
When you click on the “Contact to Close” button, you are taken to a support form. This means Spotify’s UI completely prevents a user from closing his account without going through a support procedure he has no control over.
Reddit Bait and Switch
Author/Copyright holder: Reddit. Copyright terms and license: Fair Use.
When scrolling Reddit’s feed, the user can expand the images displayed by clicking on them. However, Reddit’s feed has plenty of “promoted” posts, which are actually ads. The user is tricked into clicking on the ad’s image, but instead of the default expanding behavior, he is automatically redirected to some ad website.
Instagram Roach Motel
Instagram uses the Roach Motel pattern in a different form from the one used by Spotify. In this case, besides being a mobile application, Instagram accounts are impossible to delete within the app. The user needs to access a browser, which makes the process of account deletion unnecessarily harder for him.
Excerpt of Instagram Help Documentation
Skillshare Forced Continuity
Forced Continuity: When your free trial with a service comes to an end and your credit card silently starts getting charged without any warning. You are then not given an easy way to cancel the automatic renewal.
Author/Copyright holder: Skillshare. Copyright terms and license: Fair Use.
Skillshare uses one of the most common dark patterns in subscription services. The user is asked to provide credit card data to access a free trial, which leads to automatic charges as soon as the trial period ends. The user can cancel the subscription at any time, even before the paid period begins. However, many companies do not properly notify users that they are about to be charged until it’s too late.
Wish Confirmshaming
Confirmshaming: The act of guilting the user into opting into something. The option to decline is worded in such a way as to shame the user into compliance.
Author/Copyright holder: Wish. Copyright terms and license: Fair Use.
When someone unsubscribes from the Wish newsletter, the confirmation dialog is worded to make the user feel guilty about leaving. Not only does the title state “We’re sad to see you go”, but the user is also required to choose the option “I Don’t Like Discounts” instead of a neutral “Unsubscribe”.
AliExpress Price Comparison Prevention
Price Comparison Prevention: The retailer makes it hard for you to compare the price of an item with another item, so you cannot make an informed decision.
AliExpress is an online retail service with a big product offering. In the gif above, we see the search results for a makeup brush set.
The search results screen is where the user can compare his different options, and there the results are displayed with individual prices. However, when the user clicks on a product’s details, the price changes to a price interval: the user must choose between different options of quantity, color, and shipping source to obtain the final price of his product. This way of displaying prices makes it harder for the user to compare different products and make an informed decision.
Broadway.com Hidden Costs
Hidden Costs: You get to the last step of the checkout process, only to discover some unexpected charges have appeared, e.g. delivery charges, tax, etc.
The capture above shows that when the user selects seats, the displayed price is $59.50 each. However, in the checkout step, $14.88 for Service & Handling is charged for each ticket, leading to an unexpected final price. This trick is often used on ticket-selling platforms, making it hard for users to plan how much they are willing to spend.
Mariana Vargas is a full-time UX Engineer and part-time singer based in Lisbon, Portugal. You can connect with her on LinkedIn. | https://uxplanet.org/dark-design-patterns-in-your-everyday-apps-3627e439a8a1 | ['Mariana Vargas'] | 2020-11-19 10:46:15.659000+00:00 | ['Technology', 'Design', 'Creativity', 'Visual Design', 'UX'] |
2,923 | What I learned as a Product Manager at Google | tl;dr — While no longer as nimble as a startup, Google’s scale, strong culture and awesome people make it the ideal place to learn the nuts and bolts of product management and offers incredible opportunities to create products for millions/billions of users across the globe.
If you find my career development blogs interesting, you might also check out:
I was a Product Manager (PM) at Google Health for the last ~2 years, launching health features on Search and Maps (like these features that help users find telehealth options on Search). I’m leaving Google to pursue an opportunity at a startup (more to come here). It’s a bittersweet moment since I loved my time at Google, so I decided to reflect on what I’ve learned as a Googler.
1) The “Googley” ethos is real and awesome
Everyone that interviews at Google is assessed for “Googleyness”. This includes being ambitious, humble, and doing the right thing. In addition to being whip smart, the large majority of Googlers I’ve met are so incredibly nice and helpful (every employee having the ability to give cash “peer bonuses” multiple times per quarter also helps :P). It makes working here a joy. Google corporate does everything it can to make our work environment as safe and comfortable as possible (e.g., the food, money to buy wfh accessories, lots of working hour flexibility, “face time” isn’t really a thing). An especially great part of the culture, is the “zero-blame” aspect. This transforms the company as it allows people to feel safe taking risks and when something does go wrong, teams can have transparent retrospectives and implement useful processes to prevent the mistake from happening in the future. Side note: imposter syndrome is real here — I constantly felt very lucky but also hopelessly unqualified to be surrounded by smarter/better people that I could learn tons from. Example, my manager was CEO of a Series B startup before coming to Google — so many ex-CXO examples like this!
2) Google is a large bureaucracy, launching something takes a village
Google is a $180B+ revenue company. The downside of doing something that harms its golden goose (ads, search, maps, etc.) is extremely high. Thus, there are very extensive processes in place to rigorously check/limit any potential user harm, production defects, PR risks. PMs need to be patient as this process can take months. As a result, things take a long time at Google. This is not unlike other large companies and I’d imagine Google is likely more agile than other companies of its size. These processes are important for the user experience, whether it’s making sure the search experience stays whip-fast or that user privacy is meticulously preserved according to various state and country-level regulations.
3) Core PM skills | https://medium.com/@lucyyin6/what-i-learned-as-a-product-manager-at-google-cef00f34b7ae | ['Lucy Yin'] | 2021-04-29 22:34:58.842000+00:00 | ['Careers', 'Healthcare', 'Product Management', 'Technology', 'Career Development'] |
2,924 | Industrial Strength Evolution, Genetics, and AI | Commercializing Genetic Algorithms | Towards AI
Bringing Scale and Commerce to Genetic Algorithms
Using genetic algorithms in commercial AI applications has taken a back seat to deep learning neural net technologies. Machine learning has captured all the headlines and the experiments of a few years ago are now real-world applications. This has not happened with genetic algorithms. They are certainly a lot of fun to play with. On a quick YouTube tour you will find examples that play Flapping Birds, Mario Brothers, drive cars and get both simulated soft and hard-bodied creatures to walk. How do we make the same transition from science experiments to industrial practice? Any commercialization is going to require this level of rigor.
We theorize about applying genetic algorithm techniques to AI. Can we evolve meaningful solutions that compare to the results being achieved by more commercial machine learning technologies? You can evolve very simple and small scale experiments. It is great fun to evolve a solution to playing a video game, however, where do those experiments lead?
You run into a wall when you start to think about genetic algorithms at a commercial scale. The idea that you can take a few hundred thousand simulated neurons, design a great fitness function, let it percolate overnight and get something as smart as an ant simply doesn’t work. It clearly does not work on a more complex brain (no disrespect to the ant brain). We can’t even simulate the few hundred neurons in a worm.
HERE is a great list of animal species and their neuron counts. The aforementioned ant brain has about 250,000 neurons. A modern Nvidia GPU has enough horsepower to simulate a network of that size. I have a poorly implemented neural simulator, running on a 5-year-old laptop that clocks in at processing one million fake neurons per second.
How do we take genetic algorithm technology and ideas and push it to a level of scale we see in nature?
It’s About Process
Genetic algorithms are attempting to be inspired by nature. We create phenotypes, genotypes, and chromosomes. We simulate genetic breeding (crossovers) and invent ways to determine ‘organism’ viability and breeding likelihood. It doesn’t function exactly like nature, but this inspiration does yield results.
There is a part of the natural inspiration that is missing. The history is missing. A typical GA experiment involves creating the mathematical model for the problem, creating a chromosome representation and a ‘fitness function’ that decides who gets to breed. You fire it up and see what emerges.
Human beings are specialized fish. Much of what we carry around inside trace back to our watery ancestors. Our brains are the same. We have layers and parts that have been passed down to us. The reason we run from danger and want to mate is driven by our amygdala. We’ve had that since we were all lizards.
Our model for GA development is not taking this into account. We run our experiments, get fun and sometimes useful results, then start over. That is a fundamental problem that limits scale.
An Evolution Platform
I’m going to take off my evolutionary biologist hat (I am a naive amateur) and put on a software engineering hat (it fits better). This cries out for a consistent platform solution. We are seeing that emerge in machine learning (TensorFlow, Google AI, Facebook’s FAIR). Matlab has GA tools available, and there are a few other toolkits. However, there are no large-scale platforms focused on neural simulation, genetic algorithms, and this historical component.
Key Features:
1. A set of standard interfaces and methodologies. Let’s define the basics of how a neural simulation works. We can create a definition of the chromosomes, including how to vary them. You can imagine a standard mechanism for defining fitness functions.
2. The ability to base work on past work. Nervous systems and parts of nervous systems can be used and reused. Artificial brain source control should show lineages and the kind of branching you see in nature. Each experiment does not begin from scratch.
3. An environment for multiple and disparate contributions. The software world already knows how to do this. We need to create a repository of code (fitness functions, simulated environments, connections to other systems, GA rules, etc.) and evolved data sets that can be used by the community.
Prototype Environment
To test some of these ideas, I have built an experimental platform. There are links to demonstration videos at the end as well as the source code. This project is not meant to form the basis of any system but to test out ideas for how to build such a platform. I apologize in advance to anyone looking through the code. I have been out of the coding scene for 15+ years, so my skills are a bit rusty and my tool choice might seem quaint and very old school. Everything has been done in C++, MS Visual Studio 2017 and targeted as a Windows desktop app. A real platform would take a more modern approach. Consider this an invitation for others to join.
In this experiment, I have evolved creatures ranging from simple filter feeders to ones that swim around and can avoid objects. The prototype features a general-purpose GA engine, a simple neural simulation model, and data management tools.
Step One — A GA Platform
I first developed basic frameworks for creating controls and displaying results. I then implemented a basic set of C++ objects that can execute genetic algorithms. To test this, I had it solve the Traveling Salesman Problem.
The goal was not to solve this problem but to create and test the scaffolding needed to build a larger platform. This object set allows for the overloading of things like the fitness function, crossover methods, breeding selection, and chromosome design. The platform retains the ability to run the TSP.
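To picture the scaffolding, here is an illustrative Python sketch of the same idea (not the C++ object set described above): a generic GA loop where the fitness function, crossover, and mutation are all pluggable callables supplied by the problem.

import random

def evolve(population, fitness, crossover, mutate, generations=100, elite=2):
    # Generic GA loop; the problem only supplies the callables.
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        next_gen = ranked[:elite]                  # keep the best as-is
        while len(next_gen) < len(population):
            mom, dad = random.choices(ranked[:max(2, len(ranked) // 2)], k=2)
            next_gen.append(mutate(crossover(mom, dad)))
        population = next_gen
    return max(population, key=fitness)

# For the TSP, a chromosome would be a permutation of city indices and
# fitness the negative tour length; crossover and mutate must keep the
# permutation valid (e.g. ordered crossover, swap mutation).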
Step Two — Design a Neural Simulation Environment
The video presentation goes into greater detail. A brain is made up of a set of neurons and connections that behave in a very simple fashion. Each neuron simply determines its state by summing its inputs. This sum is then passed on to the next neurons it is connected to.
A chromosome, for the purposes of the GA, is simply a list of pointers to simulated synapses. A synapse has a start neuron, an end neuron and a polarity (plus or minus). Evolution occurs in the recombination of this list of synapses, not neurons. This may need to be revisited. The resulting neural simulator is fast and small. As stated before, I get a performance of about one million neurons per second before I’ve optimized any of the code. I am currently adding the ability to have nodes on this network reference other networks.
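A toy Python sketch of the simulation model as described (illustrative only; the real simulator is the C++ one above): a chromosome is a list of synapses, and on each tick every neuron’s state becomes the signed sum of its inputs.

from collections import defaultdict

# A chromosome is just a list of synapses: (start_neuron, end_neuron, polarity).
chromosome = [(0, 2, +1), (1, 2, -1), (2, 3, +1)]

def step(state, synapses):
    # One tick: each neuron's next state is the signed sum of its inputs.
    incoming = defaultdict(float)
    for start, end, polarity in synapses:
        incoming[end] += polarity * state.get(start, 0.0)
    return dict(incoming)

state = {0: 1.0, 1: 0.5}   # sensor neurons, driven by the environment each tick
for _ in range(3):
    state = step(state, chromosome)
    print(state)

Evolution then recombines and mutates these synapse lists, exactly as described: the network topology, not the neurons, is what breeds.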
Step Three — Let Nature Take Its Course
I layered this neural simulation environment on to my GA platform and began to evolve little creatures. I created little filter feeders of varying sensory capability and creatures that can swim around and navigate obstacle courses. Again, check out the videos at the end to see these in action.
One of the key aspects of this experiment was to embody a historical approach. Each creature is not the result of a stand-alone experiment, but a collection of stepwise evolutionary successes.
This is a picture of all the filter feeders, called critters, that I evolved. They all share bits of neural hardware that were evolved earlier, each one capable of more sophisticated food sorting and based on a previous, successful organism.
My swimming creatures, called guppies, feature movement and eyes that can detect objects in a simulated world (Episode 2 — Video). My plan is to see if I can take this basic brain and add an associative learning capability, post-evolution.
Interested?
Building commercial applications and solving larger AI problems using evolution and genetic algorithms is going to take an industrial platform approach. My goal is to start that conversation.
You can follow the journey and sign up for updates etc at danlovy.com/critter
Facebook group:
Bio — A.I.
Episode One features a full description of the platform and first creatures: https://www.youtube.com/watch?v=BaAqFHr0nts
Episode Two features the swimming guppies: https://www.youtube.com/watch?v=0D6B1sU_Fiw
Episode Three features a more capable swimming creature: https://www.youtube.com/watch?v=Z6fmkZW9JCM
Some of the funniest YouTube videos featuring GA solutions: Mario, Soft Body Creatures, Driving Car, Flapping Birds. There are dozens more. | https://medium.com/towards-artificial-intelligence/industrial-strength-evolution-genetics-and-ai-db2d8c9b861 | ['Dan Lovy'] | 2019-05-20 15:25:10.189000+00:00 | ['Artificial Intelligence', 'Technology', 'Cellular Automata', 'Neural Networks', 'Genetic Algorithm'] |
2,925 | [Documentary] Texas 6 “2020” Episode 7 || (S1.E07) Full Episode | Streaming Texas 6 Season 1 :: Episode 7 S1E7 ► ((Episode 7 : Full Series)) Full Episodes ●Exclusively● On TVs, Online Free TV Shows & TV Texas 6 ➤ Let’s go to watch the latest episodes of your favourite Texas 6.
❖ P.L.A.Y ► https://tinyurl.com/y9wk24ab
Texas 6 1x7
Texas 6 S1E7
Texas 6 TVs
Texas 6 Cast
Texas 6 Online
Texas 6 Eps.1
Texas 6 Season 1
Texas 6 Episode 7
Texas 6 Premiere
Texas 6 New Season
Texas 6 Full Episodes
Texas 6 Watch Online
Texas 6 Season 1 Episode 7
Watch Texas 6 Season 1 Episode 7 Online
⭐A Target Package is short for Target Package of Information. It is a more specialized case of Intel Package of Information or Intel Package.
✌ THE STORY ✌
Jeremy Camp (K.J. Apa) is a young and aspiring musician who wants only to honor his God through the power of music. Leaving his Indiana home for the warmer climate of California and a university education, Jeremy soon comes across one Melissa Heing
(Britt Robertson), a fellow university student whom he notices in the audience at a local concert. Falling for cupid’s arrow immediately, he introduces himself to her and quickly discovers that she is drawn to him too. However, Melissa holds back from forming a budding relationship, as she fears it will create an awkward situation between Jeremy and their mutual friend Jean-Luc (Nathan Parsons), a fellow musician who also has feelings for Melissa. Still, Jeremy is relentless in his quest for her until they eventually end up in a loving dating relationship. However, their youthful courtship with one another comes to a halt when life-threatening news of Melissa having cancer takes center stage. The diagnosis does nothing to deter Jeremy’s love for her, and the couple eventually marries shortly thereafter. However, they soon find themselves walking a fine line between a life together and suffering through her illness, with Jeremy questioning his faith in music, himself, and in God himself.
✌ STREAMING MEDIA ✌
Streaming media is multimedia that is constantly received by and presented to an end-user while being delivered by a provider. The verb to stream refers to the procedure of delivering or obtaining media this way. Streaming identifies the delivery approach to the medium, rather than the medium itself. Distinguishing delivery method from the media distributed applies especially to telecommunications networks, as almost all of the delivery systems are either inherently streaming (e.g. radio, television, streaming apps) or inherently non-streaming (e.g. books, video cassettes, audio CDs). There are challenges with streaming content on the web. For instance, users whose Internet connection lacks sufficient bandwidth may experience stops, lags, or slow buffering of the content. And users lacking compatible hardware or software systems may be unable to stream certain content.
Streaming is an alternative to file downloading, an activity in which the end-user obtains the entire file for the content before watching or listening to it. Through streaming, an end-user may use their media player to get started on playing digital video or digital sound content before the complete file has been transmitted. The term “streaming media” can connect with media other than video and audio, such as for example live closed captioning, ticker tape, and real-time text, which are considered “streaming text”.
This brings me around to discussing I Still Believe, a film release of the Christian religious faith-based variety. As is almost customary, Hollywood usually generates two (maybe three) films of this variety within its yearly theatrical release lineup, with the releases usually landing around springtime and/or fall respectively. I didn’t hear much when this movie was initially announced (it probably got buried underneath all of the popular movie news on the newsfeed). My first actual glimpse of the movie was when the film’s trailer premiered, which looked somewhat interesting to me. Yes, it looked like the movie was gonna have the typical “faith-based” vibe, but it was going to be directed by the Erwin Brothers, who directed I Can Only Imagine (a film that I did like). Plus, the trailer for I Still Believe premiered for quite some time, so I kept seeing it almost every time I visited my local cinema. You can sort of say that it was a bit “engrained in my brain”. Thus, I was a little bit keen on seeing it. Fortunately, I was able to see it before the COVID-19 outbreak closed the movie theaters down (I saw it during its opening night), but, because of work scheduling, I haven’t had the time to do my review for it… until now. And what did I think of it? Well, it was pretty “meh”. While its heart is certainly in the proper place and quite sincere, the film is a little too preachy and unbalanced in its narrative execution and character development. The religious message is plainly there, but it takes way too many detours and fails to focus on certain aspects, which weighs down the feature’s presentation.
✌ TELEVISION SHOW AND HISTORY ✌
A television show (often simply TV show) is any content produced for broadcast via over-the-air, satellite, cable, or internet and typically viewed on a television set, excluding breaking news, advertisements, or trailers that are usually placed between shows. TV shows are most often scheduled well ahead of time and appear on electronic guides or other TV listings.
A television show may also be called a television program (British English: programme), especially if it lacks a narrative structure. A television series is usually released in episodes that follow a narrative, and is usually divided into seasons (US and Canada) or series (UK) — yearly or semiannual sets of new episodes. A show with a restricted number of episodes may be called a miniseries, serial, or limited series. A one-time show may be called a “special”. A television film (“made-for-TV movie” or “television movie”) is a film that is initially broadcast on television rather than released in theaters or direct-to-video.
Television shows may be viewed as they are broadcast in real time (live), be recorded on home video or a digital video recorder for later viewing, or be viewed on demand via a set-top box or streamed over the internet.
The first television shows were experimental, sporadic broadcasts viewable only within an extremely short range from the broadcast tower, starting in the 1930s. Televised events such as the 1936 Summer Olympics in Germany, the 1937 coronation of King George VI in the UK, and David Sarnoff’s famous introduction at the 1939 New York World’s Fair in the US spurred a rise in the medium, but World War II put a halt to development until after the war. The 1947 World Series inspired many Americans to buy their first television set, and in 1948 the popular radio show Texaco Star Theater made the move and became the first weekly televised variety show, earning host Milton Berle the name “Mr Television” and demonstrating that the medium was a stable, modern form of entertainment which could attract advertisers. The first national live television broadcast in the US took place on September 4, 1951, when President Harry Truman’s speech at the Japanese Peace Treaty Conference in San Francisco was transmitted over AT&T’s transcontinental cable and microwave radio relay system to broadcast stations in local markets.
✌ FINAL THOUGHTS ✌
The power of faith, love, and affinity for music take center stage in Jeremy Camp’s life story in the movie I Still Believe. Directors Andrew and Jon Erwin (the Erwin Brothers) examine the life and times of Jeremy Camp, pinpointing his early life along with his relationship with Melissa Heing as they battle hardships and their enduring love for one another through difficult times. While the movie’s intent and thematic message of a person’s faith through trouble is indeed palpable, as are the likeable musical performances, the film certainly struggles to find a cinematic footing in its execution, including a sluggish pace, fragmented pieces, predictable plot beats, too many preachy / cheesy dialogue moments, overutilized religious overtones, and mismanagement of many of its secondary / supporting characters. To me, this movie was somewhere between okay and “meh”. It was definitely a Christian faith-based movie endeavor (from start to finish) and definitely had its moments, nonetheless it failed to resonate with me, struggling to locate a proper balance in its undertaking. Personally, regardless of the story, it could’ve been better. My recommendation for this movie is an “iffy choice” at best, as some will enjoy it (nothing wrong with that), while others will not and will dismiss it altogether. Whatever your stance on religious faith-based flicks, I Still Believe stands as more of a cautionary tale of sorts, demonstrating how a poignant and heartfelt story of real-life drama can be problematic when translated into a cinematic endeavor. For me personally, I believe in Jeremy Camp’s story / message, but not so much the feature.
FIND US:
✔️ https://www.ontvsflix.com/tv/112994-1-7/texas-6.html
✔️ Instagram: https://instagram.com
✔️ Twitter: https://twitter.com
✔️ Facebook: https://www.facebook.com | https://medium.com/@tvsf-re-ek/documentary-texas-6-2020-episode-7-s1-e07-full-episode-7db9815fca4c | ['Tvsf Re Ek'] | 2020-12-24 08:40:29.314000+00:00 | ['Politics', 'Sports', 'Documentary', 'Covid 19', 'Technology'] |
2,926 | Family Guy ; Season 19 - Episode 9 "The First No L" | ➕Official Partners “TVs” TV Shows & Movies
● Watch Family Guy Season 19 Episode 9 Eng Sub ●
Family Guy Season 19 Episode 9 : Full Series
ஜ ۩۞۩ ஜ▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭ஜ ۩۞۩ ஜ
ஜ ۩۞۩ ஜ▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭▭ஜ ۩۞۩ ஜ
Family Guy — Season 19, Episode 9 || FULL EPISODES : When the family fails to help Lois with the Christmas shopping, she walks out on the family and the Griffins must try to save Christmas on their own.
Family Guy 19x9 > Family Guy S19xE9 > Family Guy S19E9 > Family Guy TVs > Family Guy Cast > Family Guy Online > Family Guy Eps.19 > Family Guy Season 19 > Family Guy Episode 9 > Family Guy Premiere > Family Guy New Season > Family Guy Full Episodes > Family Guy Season 19 Episode 9 > Watch Family Guy Season 19 Episode 9 Online
Streaming Family Guy Season 19 :: Episode 9 S19E9 ► ((Episode 9 : Full Series)) Full Episodes ●Exclusively● On TVs, Online Free TV Shows & TV Family Guy ➤ Let’s go to watch the latest episodes of your favourite Family Guy.
Family Guy 19x9
Family Guy S19E9
Family Guy TVs
Family Guy Cast
Family Guy Online
Family Guy Eps.19
Family Guy Season 19
Family Guy Episode 9
Family Guy Premiere
Family Guy New Season
Family Guy Full Episodes
Family Guy Watch Online
Family Guy Season 19 Episode 9
Watch Family Guy Season 19 Episode 9 Online
⭐A Target Package is short for Target Package of Information. It is a more specialized case of Intel Package of Information or Intel Package.
✌ THE STORY ✌
Jeremy Camp (K.J. Apa) is a young and aspiring musician who wants only to honor his God through the power of music. Leaving his Indiana home for the warmer climate of California and a university education, Jeremy soon comes across one Melissa Heing
(Britt Robertson), a fellow university student whom he notices in the audience at a local concert. Falling for cupid’s arrow immediately, he introduces himself to her and quickly discovers that she is drawn to him too. However, Melissa holds back from forming a budding relationship, as she fears it will create an awkward situation between Jeremy and their mutual friend Jean-Luc (Nathan Parsons), a fellow musician who also has feelings for Melissa. Still, Jeremy is relentless in his quest for her until they eventually end up in a loving dating relationship. However, their youthful courtship with one another comes to a halt when life-threatening news of Melissa having cancer takes center stage. The diagnosis does nothing to deter Jeremy’s love for her, and the couple eventually marries shortly thereafter. However, they soon find themselves walking a fine line between a life together and suffering through her illness, with Jeremy questioning his faith in music, himself, and in God himself.
✌ STREAMING MEDIA ✌
Streaming media is multimedia that is constantly received by and presented to an end-user while being delivered by a provider. The verb to stream refers to the procedure of delivering or obtaining media this way. Streaming identifies the delivery approach to the medium, rather than the medium itself. Distinguishing delivery method from the media distributed applies especially to telecommunications networks, as almost all of the delivery systems are either inherently streaming (e.g. radio, television, streaming apps) or inherently non-streaming (e.g. books, video cassettes, audio CDs). There are challenges with streaming content on the web. For instance, users whose Internet connection lacks sufficient bandwidth may experience stops, lags, or slow buffering of the content. And users lacking compatible hardware or software systems may be unable to stream certain content.
Streaming is an alternative to file downloading, an activity in which the end-user obtains the entire file for the content before watching or listening to it. Through streaming, an end-user may use their media player to get started on playing digital video or digital sound content before the complete file has been transmitted. The term “streaming media” can connect with media other than video and audio, such as for example live closed captioning, ticker tape, and real-time text, which are considered “streaming text”.
This brings me around to discussing I Still Believe, a film release of the Christian religious faith-based variety. As is almost customary, Hollywood usually generates two (maybe three) films of this variety within their yearly theatrical release lineup, with the releases usually being around spring and/or fall respectively. I didn't hear much when this movie was initially announced (it probably got buried underneath all of the popular movie news on the newsfeed). My first actual glimpse of the movie was when the film's trailer premiered, which looked somewhat interesting to me. Yes, it looked like the movie was gonna have the typical "faith-based" vibe, but it was going to be directed by the Erwin Brothers, who directed I Can Only Imagine (a film that I did like). Plus, the trailer for I Still Believe premiered for quite some time, so I continued seeing it most of the time when I visited my local cinema. You could sort of say that it was a bit "engrained in my brain". Thus, I was a little bit keen on seeing it. Fortunately, I was able to see it before the COVID-19 outbreak closed the movie theaters down (I saw it during its opening night), but, because of work scheduling, I haven't had the time to do my review for it… as yet. And what did I think of it? Well, it was pretty "meh". While its heart is certainly in the proper place and quite sincere, the movie is a little too preachy and unbalanced within its narrative execution and character development. The religious message is plainly there, but it takes way too many detours and fails to focus on certain aspects that weigh down the feature's presentation.
✌ TELEVISION SHOW AND HISTORY ✌
A television show (often simply TV show) is any content produced for broadcast via over-the-air, satellite, cable, or internet and typically viewed on a television set, excluding breaking news, advertisements, or trailers that are usually placed between shows. TV shows are most often scheduled well ahead of time and appear on electronic guides or other TV listings.
A television show may also be called a television program (British English: programme), especially if it lacks a narrative structure. A television series is usually released in episodes that follow a narrative, and are usually divided into seasons (US and Canada) or series (UK) — yearly or semiannual sets of new episodes. A show with a limited number of episodes may be called a miniseries, serial, or limited series. A one-time show may be called a "special". A television film ("made-for-TV movie" or "television movie") is a film that is initially broadcast on television rather than released in theaters or direct-to-video.
Television shows can be viewed as they are broadcast in real time (live), recorded on home video or a digital video recorder for later viewing, or viewed on demand via a set-top box or streamed over the internet.
The first television shows were experimental, sporadic broadcasts viewable only within a very short range from the broadcast tower, starting in the 1930s. Televised events such as the 1936 Summer Olympics in Germany, the 1937 coronation of King George VI in the UK, and David Sarnoff's famous introduction at the 1939 New York World's Fair in the US spurred a growth in the medium, but World War II put a halt to development until after the war. The 1947 World Series inspired many Americans to buy their first television set, and in 1948 the popular radio show Texaco Star Theater made the move and became the first weekly televised variety show, earning host Milton Berle the name "Mr Television" and demonstrating that the medium was a stable, modern form of entertainment which could attract advertisers. The first national live television broadcast in the US took place on September 4, 1951, when President Harry Truman's speech at the Japanese Peace Treaty Conference in San Francisco was transmitted over AT&T's transcontinental cable and microwave radio relay system to broadcast stations in local markets.
✌ FINAL THOUGHTS ✌
The power of faith, love, and affinity take center stage in Jeremy Camp's life story in the movie I Still Believe. Directors Andrew and Jon Erwin (the Erwin Brothers) examine the life and times of Jeremy Camp, pin-pointing his early life along with his relationship with Melissa Heing as they battle hardships and sustain their enduring love for one another through difficulty. While the movie's intent and thematic message of a person's faith through trouble is indeed palpable, plus the likeable musical performances, the film certainly struggles to find a cinematic footing in its execution, including a sluggish pace, fragmented pieces, predictable plot beats, too preachy / cheesy dialogue moments, overused religious overtones, and mismanagement of many of its secondary / supporting characters. To me, this movie was somewhere between okay and "meh". It was definitely a Christian faith-based movie endeavor (from start to finish) and definitely had its moments, nonetheless it failed to resonate with me, struggling to find a proper balance in its undertaking. Personally, regardless of the story, it could've been better. My recommendation for this movie is an "iffy choice" at best, as some will like it (nothing wrong with that), while others will not and dismiss it altogether. Whatever your stance on religious faith-based flicks, I Still Believe stands as more of a cautionary tale of sorts, demonstrating how a poignant and heartfelt story of real-life drama can become problematic when translated into a cinematic endeavor. For me personally, I believe in Jeremy Camp's story / message, but not so much the feature.
FIND US:
✔️ https://www.ontvsflix.com/tv/1434-19-9/family-guy.html
✔️ Instagram: https://instagram.com
✔️ Twitter: https://twitter.com
✔️ Facebook: https://www.facebook.com | https://medium.com/family-guy-s19e9-stream/ep-9-family-guy-season19-episode-9-the-first-no-l-1c12d3b65584 | ['Angga Putra'] | 2020-12-13 10:46:31.034000+00:00 | ['Animation', 'Covid 19', 'Technology', 'Cartoon', 'Anime'] |
2,927 | H+ Weekly — Issue #66 | MORE THAN A HUMAN
Elon Musk has recently hinted that he may be working on a “neural lace,” a mesh of electronics that will allow AI and the brain to work together. This could help human brains keep up with future enhancements in AI.
The story of Nicholas Huchet, founder of Bionicohand and amputee, who designed an affordable 3D printed prosthetic hand.
From defeating death and aging to merging with machines to gene editing, the transhumanist movement is going to challenge our current worldview and ask some serious ethical questions.
Will gene editing allow the rich to be better, healthier, and smarter than everyone else? This is one of the concerns some people have when it comes to transhumanism and human enhancements.
ShoddyCast, a YouTube channel confronting games with science, takes Deus Ex and asks how close are we to technology depicted in the game.
Nootropics, or smart drugs, were once the domain of a select few biohackers testing the limits of brain enhancement. But Nootropics are becoming more mainstream, leaving many to wonder what effect smart drugs will have on esports — one of the most rapidly growing spectator sports in the world, where cognitive enhancement could become a competitive advantage.
ARTIFICIAL INTELLIGENCE
IBM Watson was challenged to create a trailer for a movie. The challenge has been accepted and here is the result. Watson analyzed hundreds of horror/thriller movie trailers. After learning what keeps audiences on the edge of their seats, the AI system suggested the top 10 best candidate moments for a trailer from the movie Morgan, which an IBM filmmaker then edited and arranged together.
Artificial intelligence will transform just about everything, but technologists should stop fretting that it’s going to destroy the world like Skynet.
Here’s an interview with Luis de Miranda, who with his colleagues at the University of Edinburgh created the Anthrobotics Cluster, where they try to develop a way of figuring humans, robots, and intelligent systems alongside one another.
ROBOTICS
Looks like the EU will follow the US example: drone owners in the EU may be required to register their drones.
Case IH recently showed off a prototype of their “Autonomous Concept Vehicle.” It’s a farm tractor that can plant, monitor crops, and harvest without a driver.
I love drone racing and every time I see video like this one I love it even more.
At Haneda Airport in Tokyo, Japan, you might encounter a robot ready to help you. Because it's Japan, and it's the 21st century.
It's quite impressive how smoothly this robot, named Jimmy, moves.
The Pentagon posted a $1 million contract award for a new drone for the US Army. The drones need to be quite autonomous, because they're required to be able to fly into unknown buildings and explore them without crashing.
The UK’s Ministry of Defence has launched a public competition to evaluate how swarms of tiny drones could be used in future warfare.
BIOTECHNOLOGY
A Swedish university has served the first-ever CRISPR-Cas9 modified vegetables. What is a meal of the future? Pasta — tagliatelle, to be exact. But it wasn’t the pasta that made this meal futuristic. It was the vegetable.
Meanwhile, in the US, corn engineered with CRISPR to be more resistant to drought may soon be coming to market, and more may follow.
A short, but interesting interview with Liz Parrish, CEO of BioViva, on how gene editing can cure aging. I found the point about injecting neurons “preprogrammed” with a foreign language to be a quite interesting vision.
If you ever asked yourself who are the big players in the biotech, here’s your answer. | https://medium.com/h-weekly/h-weekly-issue-66-d8e36e075487 | ['Conrad Gray'] | 2016-09-11 08:06:01.356000+00:00 | ['Technews', 'Transhumanism', 'Artificial Intelligence', 'Robotics', 'Technology'] |
2,928 | The Four Ps of Product Innovation | The Four Ps of Product Innovation
People, Partnership, Problems, and Persistence — these four Ps hold the keys to break down any resistance and to bring product innovation into existence.
Image Credit: Pinterest
“Once I got an idea Why not do something like Ikea But instead make things here in America I excitedly told my boss about my panacea He thought I had some hysteria Or side effect from my recent malaria He gave up his resistance when I heavily insisted But gave me so little time to get my idea tested”
95% of innovative startups end after 5 years. 75% of these failures are due to not having a good understanding of customer wants and needs.
Market-centered research has been the focal point of almost all the product innovations that saw the light of day. This research technique aims to gain a greater understanding of customer wants and needs. In simple terms, there are four key elements in a market-centered research methodology. The first key is to find the most appropriate people who may provide the best insights related to the chosen area of the research. After the right people are identified, the second key is to build an effective partnership with them. This is followed by the third key: gaining a deeper understanding of their problems. Finally, once we have a good handle on the problems, comes the fourth key: the power of persistence, which paves the way to building a sustainable solution to those problems.
Based on the research and my own experience, I believe that these four key elements (People, Partnership, Problems, and Persistence) are instrumental in the product innovation process. As such, I have given this framework a name that is also a helpful way to remember it: The Four Ps of Product Innovation. The rest of this article provides more context on the four Ps.
P#1: People
“There was hardly any time to sit on the perch So I quickly began to conduct my search With a criteria to find people who liked Ikea It wasn’t that hard when I bought some gift cards And stood in front of Ikea’s yard”
The key first step in the product innovation process is to identify the people. These people represent the future users who will buy and use the product once it's ready. As such, it's important to identify the right people. If not done right, this leads to making wrong assumptions about the product and hurts it after launch.
At times, we choose to work with the internal stakeholders from sales, marketing, or other subject matter experts within the company. This approach is called inside-out. Sometimes there is an organizational bias to do so.
At other times, we work with our future customers and explore their preferences. This is called the outside-in approach, and it has been proven time and again to be the better approach for a product to succeed.
Image Credit: Dilbert
However, it’s important to note that, internal stakeholders can act as a good proxy if they are also to use the product in future — this approach adopted by Apple during their early phases.
P#2: Partnership
“I kept looking at people coming out of store Occasionally checking my phone whenever I got bored When I gave some people card, they very much adored We chatted for a while And they gave me their number to call for more”
After we identify the right people, we need to find creative ways to partner/collaborate with them.
image Credit: Pinterest
Building a great partnership/collaboration can be accomplished with two things:
1. Art of listening: Listening is a powerful art, but also a hard one. If done right, it helps us empathize, makes our relationship tight, and takes our learnings to greater heights.
2. Clear and concise speaking: As humans, we usually try to convey lots of information in a short duration. This information-overload approach hurts in getting our points across and often makes our relationship worse. Simple and concise speaking makes a long-lasting impression on the listener and results in a fruitful collaboration.
P#3: Problems
“I called some guys who were not that shy And had agreed to talk more After a warm and cozy intro They began to pore and got to the core Delivery delays and quality were main concerns Some people also called on the issues with the returns I thought I got everything that I wanted to learn”
So far, we have identified the right people and built an effective partnership with them. Now the third 'P' is about understanding their problems. The key here is to spend as much time as possible discovering the problems.
“If I were given one hour to save the planet, I would spend 59 minutes defining the problem and one minute resolving it” — Albert Einstein
If the problems are not well thought through, we would end up building a wrong product. As such, this step is crucial. I practice the ‘power of why’ (link below) during this phase as it not only helps uncover the problems effectively but also assists in creating alignment within the organization.
P#4: Persistence
After we develop a solid understanding of the problems, the key final step is to lead the team to build the most optimal and sustainable solution (products and services). This is where our understanding of the problems is thoroughly tested. The goal is to not only guide everyone to understand the problems well enough but also make them feel passionate about solving them.
“Nothing in the world can take the place of persistence. Talent will not; nothing is more common than unsuccessful men with talent. Genius will not; unrewarded genius is almost a proverb. Education will not; the world is full of educated derelicts. Persistence and determination alone are omnipotent. The slogan Press On! has solved and always will solve the problems of the human race.” ― Calvin Coolidge
We can demonstrate a high level of creativity with people, partnership, and problems, but in the absence of persistence to solve the problem, our journey to innovate will get nowhere near success. The power of persistence makes things happen and takes the product innovation to the finish line.
“Now was the time to talk to my boss He kept staring at the problems I first thought he was at loss But soon he pointed me to some weak spots I wasn’t willing to stop And adamant to connect those missing dots So I asked for little more time and to my surprise He said — fine, your dream is now mine”
Summary
Product innovation is a four-step process — find people, build partnership, understand their problems, and demonstrate persistence to solve those problems. This process framework can be represented as “The four Ps of Product Innovation”. | https://medium.com/product-center-of-excellence/the-four-ps-of-product-innovation-a8b6ed7996bd | ['Harsh Vardhan'] | 2020-09-14 02:15:52.615000+00:00 | ['Technology', 'Innovation', 'Startup', 'Product Management', 'Leadership'] |
2,929 | Why You’re Still Not Winning at Life | My husband has always said that he wants to leave a legacy. I never understood what that meant until recently, when I started writing. I want to, in the most cliche way possible, add value to this world. I have found my voice through writing and discovered that by sharing my stories I can help others feel that they too can get through harder times. So, at the moment, this is what I’m doing, whenever I possibly can.
I have to plan meticulously in order to be able to work around our son’s sleeping routine. At the moment, it’s 5 AM and I know I have at least 1.5 hours to write before Andriel wakes up, so I’m doing alright for time.
I plan a lot for what I will do while my son is asleep because when I’m with him, much of my time is his. And since I don’t have all day like most people without kids, the time I spend working on my goals is extremely crammed into bouts of productivity.
I imagine Flash, my now new alter ego, taking my place as my son goes to bed. He types as if the world is ending. He tackles the rest of the house like the world depends on it. He comes to a halt at the sound of the baby monitor before doing one last round to finish the fight with the villain — the washing machine. A bra wire breached the security of the premises and Flash sweats as he fights for his life. Frantically, he wins the battle and quickly gets changed into me again when it’s time to go into the baby’s bedroom — and deal with a soiled diaper, for that’s what life boils down to in the end. | https://medium.com/illumination/why-youre-still-not-winning-at-life-60d6ddced4fa | ['Sylvia Emokpae'] | 2020-12-27 11:00:37.262000+00:00 | ['Parenting', 'Motherhood', 'Social Media', 'Energy', 'Technology'] |
2,930 | The Brilliant Scientist Who Stopped the Roman Army | The story of Archimedes
Archimedes was a resident of the city-state of Syracuse on the east coast of Sicily, founded in 734 BCE. During his time the state was a powerhouse of art, science, and commerce, even rivaling Athens.
Archimedes shared a close relationship with the king and was often called to suggest solutions to civic problems within the city. From inventing a water pump to remove rainwater from ships to testing the amount of gold in the king’s crown (remember the Eureka moment when he ran naked?), Archimedes’s brilliance made him the most respected scientist of his time.
It was around this time that the Romans attacked the state. A huge Roman army under their famous general Marcellus laid siege outside the walls of Syracuse. Well versed in siege warfare, the Romans expected the conquering of the city-state to be a cakewalk as ships carrying ladders and grappling hooks sailed toward the city with the intention of scaling its walls.
But they had grossly underestimated the brilliance of Archimedes. Archimedes devised a series of devious engineering marvels that repulsed Marcellus and his army in every assault. What was expected to finish in two days went on for two years, with the Roman army waiting outside the walls, frustrated and terrorized by the 'local engineer', as they called Archimedes.
Some of his marvelous creations were simply too brilliant even for today’s times.
The Archimedes Claw
The Archimedes Claw was a notorious invention in which huge beams could be swung out over the walls; some of them dropped huge weights, punching holes through the ships and sinking them.
Others had a claw or grappling hook, which grabbed hold of the rigging or rails of a galley, raising it, shaking it, and capsizing it. The terrifying spectacle of a ship being lifted and thrown struck terror into the Romans.
The Archimedes Catapult Engine
The historian Plutarch describes the catapult engine as a series of “engines” designed to hurl arrows and rocks at attacking Roman troops and ships.
According to him, some of the rocks hurled from Archimedes’s catapults weighed as much as 10 talents — around 700 pounds. He also describes different types of catapult engines with varied ability to hurl or shoot projectiles at attackers both at great range and directly under the city’s walls.
The Archimedes Death Ray
This was the most lethal of Archimedes inventions. The invention involved a huge mirror that could focus sunlight onto the wooden Roman ships and cause them to burst into flames.
The device consisted of a large array of highly polished bronze or copper shields arranged in a parabola, concentrating sunlight into a single, intense beam. This single device wreaked havoc among Roman sailors, who would even mutiny rather than risk being burnt to death.
Marcellus could not afford any more direct attacks and he suffered heavy losses. What began as a short siege had become a stalemate that went on for two years. | https://medium.com/lessons-from-history/the-brilliant-scientist-who-stopped-the-roman-army-f85f295063c3 | ['Mythili The Dreamer'] | 2020-11-06 19:35:45.848000+00:00 | ['World', 'Culture', 'Technology', 'Science', 'History'] |
2,931 | 10 tecnologías exponenciales explicadas |
| https://medium.com/bbsc-blog/10-tecnolog%C3%ADas-exponenciales-explicadas-136cbc79a063 | ['Biscay Bay Startup Campus'] | 2020-12-23 11:57:09.416000+00:00 | ['Exponential Technology', 'VR', 'AI', 'Technology', 'Top 10'] |
2,932 | KIRA Testnet Program Phase 0 | We are excited to announce that on December 29th, 2020 we will be launching the Phase 0 of the KIRA Testnet program where a group of selected individuals will have the opportunity to participate and iteratively test the earliest releases of the KIRA Network.
TLDR
Testnet Program Phase 0 starts on 29.12.2020
Participants will have to complete milestones and submit reports
Fill out this form before 12pm UTC on 28.12.2020 to apply for the Testnet Program
Bounties ranging from $50 to $1k worth of KEX, Validator and Governance seats can be won for submitting detailed and valuable contributions
Overview
Phase 0 Testnet will involve a number of milestones (see scope below) that participants will have to complete in order to improve the launch experience of Phase 1. After these milestones are achieved, Phase 0 participants will take part in the launch of the Testnet Program Phase 1 that will involve further incentivisation programs such as War of Uptime and be geared towards coordinating governance members and network upgrades.
All selected Phase 0 participants will be provided with documentation enabling them to complete each milestone and appropriate forms to submit their findings. The selection of participants will be based on the factors such as their technical expertise and hardware specs.
Every accepted Phase 0 participant will be further eligible to receive bounties including seats in the governance and validator set for submitting valuable reports, suggestions and pull requests to the KIRA’s github repository.
How To Apply
To apply for access to the KIRA Testers group, fill out this form. If you have multiple personal computers with different specs, you can submit multiple forms. Please note that if your machine specs change during the testnet, you will have to resubmit the information provided in the form and change your machine nickname. You will have until 28 December 2020, 12 PM UTC to fill out and submit the form.
Scope of the Testnet Program Phase 0 will involve the following:
Collection of resource utilization statistics across different hardware specs (see the sketch after this list)
Deployment scripts compatibility with different hardware specs and CPU architectures (x64/ARM)
Command line (user) interface compatibility with different ssh terminals and consoles
Maximum continuous uptime and recovery from hardware faults such as hard reset, hibernation, pausing at random time intervals
Infrastructure deployment time and KIRA Manager ease of use
Basic communication with the blockchain application using console (token transfers, governance)
Basic communication with the blockchain application using the web interface (token transfers, claiming tokens from the faucet)
Connecting to the public network and Validator intranet
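For a rough sense of what the first milestone in this scope involves, here is a minimal sketch of how a participant might log resource utilization on their machine. It is purely illustrative, assumes the third-party psutil package, and is not part of KIRA's official tooling; the documentation provided to participants will define the actual reporting format.

```python
# Illustrative only: periodically sample CPU, memory, and disk usage
# and print one JSON record per sample. Assumes `pip install psutil`.
import json
import time

import psutil


def sample() -> dict:
    return {
        "ts": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=1),  # averaged over 1 second
        "mem_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    }


if __name__ == "__main__":
    for _ in range(5):  # a real run would log for the whole test window
        print(json.dumps(sample()))
```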
Every accepted Phase 0 participant will be further eligible to receive bounties ranging from $50 to $1000 worth of KEX tokens. Selected participants will also have the opportunity to acquire Governance and Validator seats for submitting valuable reports, suggestions and pull requests to the KIRA’s github repository.
We look forward to welcoming the first batch of Phase 0 participants. If you have not been selected or didn't submit your form before the 29th, you will still be able to join the KIRA Testers Telegram group and have an opportunity to join the Phase 1 Public Testnet in early Q1 2021.
We would like to wish everyone a Merry Christmas and a Happy New Year 2021, which will undoubtedly bring many new and exciting developments : )
…
Website: https://kira.network
Telegram: https://tg.kira.network
Twitter: https://twitter.kira.network
Medium: https://medium.kira.network | https://medium.com/kira-core/kira-testnet-program-phase-0-f4a121eb340d | ['Milana Valmont'] | 2020-12-25 12:03:06.682000+00:00 | ['Cryptocurrency News', 'Blockchain Technology', 'Testnet', 'Kiranetwork', 'Decentralization'] |
2,933 | Introduction to Kubernetes | Before we learn about the use cases of Kubernetes, let us first understand what Kubernetes is, and where and why it is used.
I would begin by talking about containers. I am sure many of you might have discovered this word before. So, let us review a few concepts.
When developers write an application, they write it in their development environment, which has all the dependencies and libraries the application needs to run correctly. It is often the case that the development and production environments differ. Thus, developers felt the need to make the production and deployment of these applications in varying environments easier. It was then that 'containers' came into the picture.
In simple terms, a container indicates a box that carries some load. Similarly, a container in the technological world can be thought of as a box that encapsulates an application — often a single executable service or micro service — along with its libraries, frameworks, and other components. In the production of an application, that container can be run on any computer that has a containerization platform.
After understanding containers, let’s try to analyze where Kubernetes fits in our discussion.
In practice, a containerized application consists of several containers, each running the same or different micro services. The question now is, how do we coordinate and schedule these containers? How can we upgrade an application without any interruption of service? How do we monitor the health of an application? How can we scale the application, or how do we run a different container when one fails to meet requirements?
The answer to all of these questions is ‘Kubernetes’!
Kubernetes is a tool that can handle a large volume of containers and track interactions between containers and users. It is easier to automate and scale container-based workloads for live production environments with Kubernetes.
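To make this concrete, here is a toy sketch of the reconciliation loop at the heart of Kubernetes: the system continuously compares the desired state (how many container replicas you asked for) with the actual state and corrects any drift. All names here are illustrative; real Kubernetes implements this with controllers acting on API objects rather than a Python list.

```python
# A toy "control loop": keep the actual number of running containers
# equal to the desired replica count, replacing any that crash.
import random
import time

DESIRED_REPLICAS = 3   # desired state, as a user would declare it
running = []           # observed actual state


def reconcile() -> None:
    # Start replacements until actual matches desired...
    while len(running) < DESIRED_REPLICAS:
        name = f"web-{random.randint(0, 9999)}"
        print(f"starting {name}")
        running.append(name)
    # ...and scale down if there are too many.
    while len(running) > DESIRED_REPLICAS:
        print(f"stopping {running.pop()}")


if __name__ == "__main__":
    for _ in range(3):                          # a real controller loops forever
        reconcile()
        if running and random.random() < 0.5:
            print(f"{running.pop(0)} crashed")  # simulate a container failure
        time.sleep(0.1)
```

Because the loop only ever compares desired state with actual state, the same mechanism handles crashes, scaling up, and scaling down without any special cases.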
Here are some amazing success stories of Kubernetes.
Pinterest, which has more than 250 million monthly active users and can serve over 10 billion recommendations each day, owes much of this to Kubernetes. Pinterest had its workload running on EC2 instances in the cloud earlier. But with time, its users and services grew in number, which ultimately brought it into the realm of containerization technology backed by the powerful container orchestration technology, Kubernetes.
Initially, they moved their workload from EC2 instances to Docker containers and then took the next step towards Kubernetes, to effectively manage and scale their containerized application. Now they can take ideas from ideation to production in a matter of minutes, whereas earlier they used to take hours or even days. They have cut down so much overhead cost by utilizing Kubernetes and have removed a lot of manual work without making engineers worry about the underlying infrastructure.
Pinterest is one among many businesses, like Spotify, Tinder, and Airbnb, to have benefited from Kubernetes. Out of all the uncertainties in the world today, one thing is certain: the adoption of Kubernetes will only increase in the coming years.
Thank you for reading till here. I hope you found this article informative and interesting. | https://medium.com/@inshiya-nalawala211/introduction-to-kubernetes-c9a697b98979 | ['Inshiya Nalawala'] | 2020-12-27 09:39:35.599000+00:00 | ['Containers', 'Kubernetes', 'Containerization', 'Technology', 'Container Orchestration'] |
2,934 | Hello World! | First blog.
Hello World!
This is the first blog in the series of many to come.
Photo by Lukas Blazek on Unsplash
Who are we?
We are a group of Software Engineers just like yourself. We have learnt things the hard way. Sometimes finding the solution in the 19th answer to that question on stackoverflow.com. Sometimes going on for days without a solution, with that itch in the brain.
We try to work on some of the most interesting problems. We are open for a free consultation. If you have an itch, hit us up and we can have a call and discuss what you are trying to build and how should you go about it.
What do we write about?
Through this blog, we plan to share some of the things which reside in the deep dark part of our brain. We also plan to share some interesting ideas or at least small blocks which will make up the idea.
To begin, we want to write about Software Architecture, Design Patterns & Design Principles, a topic very close to our heart. We know everyone talks about it, but no one can explain it to you the way you want. In the months to come, we are also going to write on topics that are more relevant in the technology space. Another favorite topic of ours is Mobile Apps (who doesn't like mobile phones?). And Toys. Sure, we also want to write about those.
Photo by Tyler Casey on Unsplash
Keep watching our blog. We sure are going to have fun around here. | https://medium.com/@stellarsoftwarecompany/hello-world-e33f4d572d69 | ['Stellar Software Company'] | 2020-12-25 08:50:21.783000+00:00 | ['Technology And Design', 'Code', 'First Post', 'Technology', 'Stellar'] |
2,935 | Courage is what sets entrepreneurs apart from the rest of us. |
| https://medium.com/technology-hits/courage-is-what-sets-entrepreneurs-apart-from-the-rest-of-us-111e0bcbfa07 | ['Tree Langdon'] | 2020-12-23 05:31:33.587000+00:00 | ['Self Improvement', 'Entrepreneurs', 'Technology', 'Business', 'Science'] |
2,936 | Mind you, exploitative contract provisions are illegal | When courts can refuse to enforce contracts
In Indian contract law, there are five grounds on which courts may refuse to enforce contracts (see Section 23 of the Indian Contract Act, 1872). One of these grounds is that a court may refuse to enforce a contract if it deems the contract to be opposed to public policy.
Public policy is essentially synonymous with “policy of the State in question”.
There is no exhaustive list of the broad principles of public policy, although we do have a non-exhaustive list, thanks to decided cases which have expounded them. The term "public policy" is incapable of precise definition.
In the most general sense:
Public policy may be understood to be the values which form the groundwork of the law of a country.
Since we have a written Constitution which is the suprema lex of our country, it is a good starting point to look for the values which underline our legal system. It’s likely a contractual provision which places a monetary cost to disincentivize the exercise of a Fundamental Right will be opposed to public policy. It’s also likely a contractual provision opposed to the Directive Principles of State Policy (principles, but not rights, which guide all State conduct) will be opposed to public policy.
Exploitative provisions are unenforceable
Here’s the low-down for you— in Central Inland Water Transport Corporation v. Brojo Nath Ganguly (SC, 1986), the Supreme Court of India has stated the law unequivocally:
If a contract contains an exploitative provision the court finds unconscionable, that provision will render the contract opposed to public policy.
That people must not be exploited is a principle of our public policy. It follows from the Directive Principles of State Policy enshrined in our Constitution, which contain a number of guarantees against exploitation.
Since such a contract with an exploitative provision would thus be opposed to public policy, the courts would refuse to enforce the whole contract. In contract law terminology, the court will red-pencil the contract— metaphorically, it will take a red pencil and run it through the entire contract, trashing it entirely.
In some cases, courts do blue-pencil the contract— metaphorically, it takes a blue pencil, strikes out the offending provision but allows the remainder of the contract to stand. However, Indian law is very clear on this point— if a contract is opposed to public policy, you don’t get any concessions; the court will have to red-pencil it.
In other words, let me state this unequivocally: | https://medium.com/@sarkaroncontracts/mind-you-exploitative-contract-provisions-are-illegal-e7fed9828841 | ['Sagnik Sarkar'] | 2020-04-22 07:23:48.567000+00:00 | ['Technology', 'Law', 'Startup', 'Contracts', 'Company'] |
2,937 | Block Republic Founder Tom Howard joins Peer Mountain as Community Advisor | Peer Mountain is delighted to welcome prominent blockchain investor and founder of Block Republic, Tom Howard to its advisory board. His expertise and network brings significant value to Peer Mountain and its community.
In addition to helping Peer Mountain build new meaningful relationships with the crypto community, Tom has already secured initial allocations in the Priority List for the Peer Mountain token sale.
A crypto investor since 2013, Tom founded Block Republic to provide the community with the education, advice, and connections required to make smart investments and help the blockchain ecosystem grow. To date, the Block Republic community has participated in over 30 ICO investments.
“Peer Mountain has developed a truly scalable B2B2C solution with the compliance and speed that will make blockchain commerce a reality this year,” said Tom, “I’m looking forward to developing and strengthening ties between this groundbreaking service and the crypto community.”
Currently founding partner of BlockSauce Angel Group, Tom is also serving on the advisory boards of several notable blockchain tech companies that have run successful token sales. Previously, he was founder of Asian venture financing platform VentureMark and CEO of software consultancy Digital Engine. | https://medium.com/peermountain/block-republic-founder-tom-howard-joins-peer-mountain-as-community-advisor-8df631ae3640 | ['Peer Mountain'] | 2018-06-26 07:52:32.152000+00:00 | ['Bitcoin', 'Ethereum', 'Cryptocurrency', 'Blockchain', 'Technology'] |
2,938 | The Social Media “Suck” | Twitter. Facebook. Instagram. TikTok. Text messages. Pinterest. Tumblr. Snapchat.
The list could go on and on. And we keep getting on and on. Each and every day, it's reported, we spend more time on social media than we do eating, grooming, or socializing in person — with about two hours on social media platforms and a total of 11 hours every day in front of a screen of some sort.
What does that look like over a lifetime?
Image courtesy of socialmediatoday.com
Screen time
You’d think all that time behind the screen would make us get cabin fever but it’s worse. According to Psychology Today, too much time behind screens can literally change our brains! That’s right, overindulging in screen time is restructuring of the matter that makes up your brain. This can cause overall poorer cognitive performance as well.
Not enough reason to skimp on screen time? Consider that using technology excessively behind a screen may inhibit the emotional development of all of us, young and old. Because let's face it: you don't need to be empathetic, caring, or even a good listener with your computer or phone, and it will start to show.
Are you pushing through the pain?
Strained eyesight, headaches, sleep disturbances, body pains, increased risk of heart disease, and of course the most potent of all in social media suckage: dopamine, through addiction and reward seeking. The "feel-good hormone" is part of the brain's pleasure and reward circuits. Getting 'Likes' or 'Loves' on your content turns on similar brain regions as those linked to cravings for drugs and gambling — every time we see a new post or get a reaction to ours, it's like a hit of high-fructose corn syrup for the brain.
What would happen if you decided to spend even a tiny amount of that time doing something productive? You'd be surprised how far that would go.
Too connected
However now, there’s even a proposed mental health condition ‘connected’ to being connected. It’s called Social Media Disorder and is similar to addiction and social (anxiety) disorders. More social media seems to mean more problems, many of which you may have exhibited or seen.
Think about these 10 questions…
Do you sometimes neglect other duties to use social media?
Are you using social media to get a feeling of escape from the troubles of your life at times?
Have you found yourself feeling anxious about your social media accounts when you don’t check them often?
Does looking at others social media sometimes cause you to compare your life to theirs or feel jealousy?
When you look closely at your friends on social media are many of them people you do not actually see in person often?
Do you interrupt conversations to check social media?
Currently is your social media usage more than you originally planned?
Is your average time on social media platforms more than three hours per day?
Have friends and family made comments about your social media habits?
Would you say that your social media persona is the image you want others to view you as?
If you answered ‘Yes’ to more than five of the questions above, you could be stuck in a social media ‘suck’.
How do you get out of a social media suck? | https://medium.com/@nerdynatali/the-social-media-suck-c147840b668b | [] | 2020-12-30 20:27:00.446000+00:00 | ['Relationships', 'Human Behavior', 'Psychology', 'Social Media', 'Technology'] |
2,939 | My Shortforms Experience | My Shortforms Experience
Probably I Just Don’t Get It
This is a shortform post without pictures. I didn't use a title and subtitle. According to the latest update Medium published, the author's home page should display a 30-word excerpt.
So far. So good. | https://medium.com/@michaelknackfuss/my-shortforms-experience-a18c3c1015a4 | ['Michael Knackfuss'] | 2020-12-24 10:45:35.240000+00:00 | ['UX Design', 'Visual Design', 'Technology', 'Software Engineering', 'Culture'] |
2,940 | Preliminary | 2020 was a difficult year, and also a turning point that accelerated world change toward new technologies. Many things have changed, as has been said in these articles:
So 2021–2022 are years of adjustment, in which human life adapts to the accelerated change the world is undergoing with technology. One of the most tangible shifts is in the patterns of technology use in daily life: financial technology, investment, shopping, payments, and online meetings.
This was still somewhat difficult and uncomfortable in mid-2020, especially as practices once considered improper became normal and even a basic requirement for everyone. It pushes people to race ever faster to keep up and to adapt to a changing world with the technology that exists.
One of the issues for countries and individuals today is how to create technology that can help everyone, that can be used comprehensively across all sectors of life, and the energy that makes it work. Looking at what exists, several technologies are already capable of this.
One of the phrases we hear most often is "Green Energy": new and renewable energy with extraordinary, practically unlimited potential. Several approaches using existing technology can deliver it, such as solar cells, water treatment, plant ecology, gas pipelines, and so on.
But these efforts are often plagued by problems because there is no automatic supervision and control over them. So, amid today's technological acceleration, the "Techno Home" must be created by utilizing the "Green Energy" that exists today.
Green Energy
"Green Energy" is one of the new ecosystems of the modern world order, in which all energy is created from renewable sources that will not run out, potentially ever.
Energy sources that approach unlimited energy are indeed being developed around the world. One use already underway is biofuel from CPO / palm oil, which is widely grown in developing countries; existing green-energy equipment includes the "solar cell", used for solar power generation, and "kinetic energy", used for wind and hydro power plants.
But this still cannot displace the central role of coal and fossil oil, which are very popular and very easy to use because they have been relied on for centuries. On the other hand, coal and oil commodities play an important role in stabilizing the existing economy. This makes them very difficult to phase out.
The role of "Green Energy" today is to embrace all the energy that has been available to date and to store it in materials or devices that can drive electric or kinetic machines. This conversion is what we often call Hybrid: combining two ways of feeding energy (green energy and fossil energy) into a machine or vehicle so that it can move and run according to the function and purpose of the tool.
If we look at the sectors growing today, housing should be the first pilot project for green energy. Why? Because in housing you can feel the function and role of this green energy in daily needs. And housing is the most extensive "land" for green energy, one that can coexist with humans.
This is proven on a small scale, where upper-class households in particular use solar cells for hot water and for daily electricity. If it works on a small scale, it should be possible on a large scale as well: housing built around the concept of green energy.
But the next problem is the maintenance of these devices. One of the existing problems is human negligence in the use of the electronic goods that green energy depends on. This can be minimized with a "Techno Home" that periodically guides, reminds, maintains, and upgrades the functions of green-energy equipment with the help of artificial intelligence.
Techno Home
Techno Home is a term some people use to describe the house of the future: a house with many smart electronic circuits that can help humans carry out all their activities.
Usually, almost every room in a Techno Home is based on a cloud system, because life inside it is combined with smart features that connect to various devices based on artificial intelligence / smart electronics using an IoT system.
This is well proven by how all maintenance in the house can be carried out automatically and, when manual control is needed, handled from the smartphone in the owner's hand. This also becomes the basis for how a Techno Home can develop and collaborate with green energy.
This may be a bit hard to imagine, so consider an example. In an ordinary house, a stationary bicycle is for exercise only. In a Techno Home, the stationary bicycle has several functions: an initial check of the body's condition, an electricity generator, a source of physical and sports activity data, and so on. Planned together with green energy, the stationary bike can turn into a small generator producing green energy.
The same goes for airflow and water features in the house. A small waterfall in a fish pond, normally just a refresher, can become a small power plant that helps create green energy while serving as a place of air circulation for plants and the ecosystem as a whole.
Can we imagine if the stationary bicycle remained only a manual exercise bike: wouldn't that waste its considerable potential? If the small waterfall in the pond were only for air circulation, would it have a big impact?
So today the IoT system plays a role that humans very much need. Without such a system, everything would have to be done manually, which would be a loss. By the measure of current investment and technological progress, that wastes time, effort, space, and cost without any energy produced.
IoT System
The IoT system is very helpful for collecting information, carrying out maintenance, sending notifications when there is damage, and saving power, like a personal assistant. The IoT system is designed to help all human activities, making them easier and allowing work to be done from a considerable distance.
A Techno Home with a slightly more complex mechanical system will produce very sustainable green energy, with the potential to create unlimited energy. That unlimited energy will come from human behavior that is always in motion, captured and converted into energy, especially electrical energy.
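As a rough illustration of what such a virtual assistant does under the hood, here is a minimal monitoring loop in plain Python. The device names, readings, and thresholds are simulated placeholders; a real Techno Home would read actual sensors and push alerts to the owner's phone instead of printing.

```python
# Illustrative Techno Home monitoring loop with simulated sensors:
# poll each device, log its reading, and raise an alert when a value
# drifts out of its healthy range.
import random
import time

# device -> (simulated reader, (min_ok, max_ok), unit) -- all made up
DEVICES = {
    "solar_inverter_temp": (lambda: random.uniform(30, 90), (0, 75), "C"),
    "pond_pump_flow": (lambda: random.uniform(5, 25), (10, 30), "L/min"),
    "battery_charge": (lambda: random.uniform(10, 100), (20, 100), "%"),
}


def notify(device: str, value: float, unit: str) -> None:
    # A real assistant would push this alert to the owner's smartphone.
    print(f"ALERT: {device} = {value:.1f} {unit} is out of range")


def poll_once() -> None:
    for device, (read, (lo, hi), unit) in DEVICES.items():
        value = read()
        print(f"{device}: {value:.1f} {unit}")
        if not lo <= value <= hi:
            notify(device, value, unit)


if __name__ == "__main__":
    for _ in range(3):  # a real assistant would loop forever
        poll_once()
        time.sleep(1)
```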
Advantage
If we talk about the advantages of using green energy, we are talking about energy that will not run out, and that is the main force behind activities that are already 80% dependent on electrical energy. With green energy and the Techno Home, electricity becomes cheaper and its polluting impact on the environment smaller.
This affects not just one sector; all of life will change. Combined with a game-psychology program in the IoT, it will bring about a new ecosystem that is good for the environment and enables a new, more modern way of life.
With the Techno Home, all human routines will be read and assisted by technology to find the movement statistics that drive the development of the energy source itself. And humans will help the technology realize the intended devices so that energy can be obtained, for example solar cells, wind chargers, kinetic electricity, and the like.
Weakness
Where there are advantages, there must also be drawbacks. The Green Energy and Techno Home sectors still face obstacles in the procurement of goods and the initial investment. Independent calculations of equipment procurement show that even a simple house would cost an amount only upper-class citizens can afford. This is driven by the high price of solar panels, the fairly expensive price of kinetic assemblies, and much more.
At the prices stated, it is very difficult for green energy and the Techno Home to materialize. It therefore needs the role and support of all parties, plus extraordinary development in materials and in the application of existing systems.
Solution
From the things described, we can see the tremendous potential of this project; the obstacle to date is indeed the investment cost of initial construction and installation. This has caused the project to fail many times, and hardly anyone even glances at this solution. Only a few people use these devices, and for very small purposes. Even when someone builds a solar garden, the response is not good enough.
So on this occasion, we are eager to build green energy and Techno Homes on a cluster basis, where 85% of the houses use green-energy materials based on kinetics and solar cells. This indeed requires a lot of support from several companies, as well as investors who are able and willing to try this basic thing.
Theory and calculation show that although the initial investment is very large, the capabilities of the Techno Home mean everything will pay off and become even more profitable. This is because 65% of the electricity used comes from routine human work, and all the electricity income earned reduces the electricity load purchased.
It has even occurred to us to create two Techno Home start-up rooms per cluster; a bookable fitness lounge connected to several ID-based IoT devices could help residents enthusiastically generate extraordinary green energy for the region and for themselves.
Financing
Where will the funding for this project come from, and if it is built as a cluster, who will buy it? In our opinion, to finance this project we will collaborate with several large vendors and developers who care about green energy for a better world. The vendors needed include developers, network companies, technology equipment companies, health companies, and many more.
Based on our calculations, building Green Energy and Techno Home requires many tools and a large investment, including:
Solar Panel
With a land area of 1 hectare, it is possible to build 60–65 houses of type 40 m². Each house needs 12 panels to reach a capacity of 3 kWp. Each panel is 1.7 square meters, so you need a roof of 12 x 1.7 = 20.4 square meters that is not blocked by shadows of trees or buildings throughout the day, from sunrise to sunset. The hectare as a whole would produce approximately 4,000 watts per day.
The cost to be incurred is 780 panels (an estimate of 65 houses, with one house requiring 12 panels), with the price of one panel estimated at 9–13 million rupiah. So the initial investment is about 11 billion rupiah, excluding the houses themselves and other expenses.
And at the PLN price of Rp 1,467 per kWh (Kompas data, 2018), the 11 billion could break even on its own in the range of 6 years, excluding other electricity use. But with solar cells lasting over 10 years, the vendor's investment will indirectly be profitable.
Wind Catcher
Given the area of land and buildings above, the next step is to install wind catchers, which function as modified kinetic drives that capture even very small wind movements outside the house or entering it. The estimated type chosen delivers 100 A x 24 V = 2,400 W.
The cost is in the range of 5 million rupiah per unit, and with an estimated 3 units per house, one house costs about 15 million rupiah, so the initial investment is around 1 billion rupiah, apart from other costs.
And at the PLN price of Rp 1,467 per kWh (Kompas data, 2018), the 1 billion could break even on its own in the range of 1 year, excluding other electricity use. With wind catchers lasting 5–6 years, the vendor's investment will indirectly benefit.
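To make these estimates easier to check, here is a minimal payback calculator. The tariff is the PLN figure cited above, and the investment totals match the rough totals in the text, while the daily kWh yields are placeholder assumptions to vary, since the break-even period swings widely with the assumed generation.

```python
# Minimal payback-period calculator for the solar and wind estimates
# above. Investment totals follow the text; daily yields are assumptions.
TARIFF_IDR_PER_KWH = 1_467  # PLN tariff cited above (Kompas, 2018)


def payback_years(investment_idr: float, kwh_per_day: float) -> float:
    """Years until the electricity produced repays the investment."""
    yearly_value_idr = kwh_per_day * 365 * TARIFF_IDR_PER_KWH
    return investment_idr / yearly_value_idr


if __name__ == "__main__":
    # Solar: ~11 billion IDR for the 65-house cluster.
    for kwh in (780, 1_560, 3_120):  # assumed cluster-wide kWh per day
        print(f"solar @ {kwh} kWh/day: {payback_years(11e9, kwh):.1f} years")
    # Wind: ~1 billion IDR for the cluster (3 catchers per house).
    for kwh in (250, 500, 1_000):
        print(f"wind @ {kwh} kWh/day: {payback_years(1e9, kwh):.1f} years")
```

The spread in the results shows how sensitive break-even is to the assumed daily yield, which is why measured generation data should anchor any real proposal.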
GYM Kinetic
When we talk about green energy, we should not forget health, which brings its own benefits. The sports equipment humans use can in fact produce kinetic energy, generating electrical energy that humans can use again.
People often do not realize this, or do not enjoy it, but adding a little game / loyalty psychology can make people live healthily while generating electrical energy from what they do. For this program, however, several experts need to be consulted, and each person's condition must be considered in using the supporting tools.
After discussing the main technology of Green Energy, we must also pay attention to the Techno Home, which is based on the IoT system we often call a virtual assistant. The virtual assistant's important role is to keep every process carried out by the main technology properly recorded, to perform simple maintenance, and to notify engineers or other interested people if a problem occurs in the main technological process.
With the IoT system, people's lifestyles will indirectly shift toward a more secure, virtual-assistant-based security system, in which all smart-system devices are connected to a smartphone from which their condition can be known and controlled remotely.
Even the cost, for now, is very relative, depending on the usability and functionality of the virtual assistant. But the development of this virtual assistant is very necessary and important for the project's continuity, so its financing will continue to grow and adjust to everything.
Conclusion
Everything we have discussed above is a small part of a series of business plans that could give a country extraordinary, practically unlimited power based on green energy. And if this is applied at a larger scale, with more multinational companies joining, including from abroad, the program becomes more streamlined and its benefits multiply.
Even if the investment looks very large from the numbers, cooperation between several companies, collaborating with one another, will make things much easier. This can also be realized through more thorough and well-developed detail in the improvements.
The calculation is still very rough, because it leaves out land prices, location, and other factors that greatly affect the cost of a place. But if this proposal is to be implemented, we can start with an in-depth discussion of it. The calculation must, of course, include data on utilization value and on the assembly and maintenance of the existing tools, to get maximum results in implementation and a higher value than the general function provides.
Hope
We hope we can help realize this in a company / institution under one of the vendors, so that it can be developed even further and run as a whole, producing maximum results for use by the wider community.
We hope we can work together to do this big thing, because with cheap and reliable electricity, the growing demand for electricity will be directly proportional to the energy income from green energy and the Techno Home.
Paper submitted as a business program proposal for the PLN ICE (Innovation & Competition in Electric), by Samuel Liputra
CC from https://newsevengenerationsiklan1.blogspot.com/2021/06/idea-4u-development-of-green-energy.html | https://medium.com/@samuelliputra/preliminary-fdcc59d885af | ['Samuel Liputra'] | 2021-06-08 03:07:18.086000+00:00 | ['Elections', 'Future', 'Technology', 'Green Energy', 'Samuel Liputra'] |
2,941 | Did You Know That Only 26% of Computing-Related Jobs Are Held by Women? | Did You Know That Only 26% of Computing-Related Jobs Are Held by Women?
It’s high time that companies start investing in making technology departments more gender diverse
According to research conducted by Accenture in the U.K., nearly half (48%) of girls and women believe that STEM subjects align more with male careers. This is the biggest reason boys are more likely than girls to choose STEM subjects.
The research lists the main reasons young girls and women don't want to pursue STEM subjects. Two reasons from that list that I find especially surprising are:
“Perception that these subjects are more suited to boys”
“Worried I would be the only girl/one of only a few girls”
As a woman born and brought up in India, I can say that STEM subjects are actually among the most preferred choices for women there, not just by women themselves but also by their parents, teachers, and counselors. As per a study done by 451 Research, India is now almost at gender parity in graduate-level study, in contrast to the decline in the U.S. and the stagnation in the U.K.
“Women now make up 34% of the IT workforce in India, with the majority of these workers under the age of 30. Indeed, the youth of the Indian IT labor force has significantly powered its rapid growth, and the country is now almost at 50:50 gender parity rate in STEM graduates.” — Katy Ring for 451 Research
From a workplace perspective, we still have a long way to go. As per a report in The Muse, women hold only 25% of IT jobs. To attract more women to IT jobs and bring about some much-needed diversity, we really need to start from the bottom.
There are many programs — like Women in IT, Girls Who Code, etc. — where companies are trying to attract young female talent. However, I personally feel that’s not enough. Young women are always looking for female role models (I know I was back when I was growing up), and we need to do a better job of giving them these role models. Promoting women to C-suite and managerial roles can be the first step.
Today's generation has so much screen time with smartphones and easy access to the internet, and they gravitate towards "cooler" things, such as TikTok and YouTube, because those seem easy and like a quick money-making option. No disrespect to any of these platforms, but I think IT companies need to come up with creative ways to show how these "cooler" things are built and how the younger generation can help shape the next TikTok or YouTube. | https://medium.com/better-programming/did-you-know-that-only-26-of-computing-related-jobs-are-held-by-women-ace9aca97d21 | ['Asha Rani'] | 2020-12-23 17:03:30.559000+00:00 | ['Programming', 'Women In Tech', 'Technology', 'Diversity In Tech', 'Startup'] |
2,942 | Why we decided to take the lead on hiring new developers | Last year when the pandemic broke out, everyone assumed companies would downsize or close, that there would be an abundance of unemployed developers, and that recruiting would be easy.
Boy, were we wrong.
With an all-time high in funding rounds and new Israeli unicorns, the competition for top talent has never been greater. How much greater? Some estimate developers in Israel get up to eight job offers a day(!), which means you gotta move fast. Really fast.
So, how do companies quickly evaluate someone’s ability to code before they decide to hire them?
They use technical interviews.
Technical interviews come in many different formats. Some popular ones are code reviews, algorithmic questions, at-home programming challenges, and even "guesstimations" like "How many weddings are held each day in Melbourne?"
The problem with many of these tests is that interviewers and interviewees alike turn to Google or popular resources like Gayle Laakmann McDowell's "Cracking the Coding Interview" for both the questions and the answers, turning technical interviews into nothing more than a pre-rehearsed formality.
(Illustration by Itai Raveh)
Where do great developers come from?
Talented developers come from many educational backgrounds. Most have computer science degrees, some are incredibly curious autodidacts, and others went to coding bootcamps.
Coding bootcamps promise to turn students into developers in a very short time, and one thing they teach their students very well is how to build a strong online presence, the kind that leads recruiters to view them as more experienced than they actually are.
Since great developers really can come from anywhere, you don’t want to rule someone out just for being a bootcamp graduate. This leaves it to the technical interview to discern between the developers who look great and the ones who are great.
What makes a truly great developer?
Being a great developer is about so much more than writing code.
We write code to serve a specific need or solve a problem. Developers tend to find solutions that match their skillset and force the problem at hand to fit the framework or language they're familiar with. However, great developers know that no technology is perfect and that there are always tradeoffs. Any technology or programming language has its advantages and disadvantages, which is why excellent developers must first and foremost understand the need they're trying to answer and the logic behind different choices. The more you know, the more options you have for dealing with a problem.
While developers must be able to break down a problem and understand how systems work to see the bigger picture, they also have to be exceptionally good at communicating these things to other people. A lone coder might be an excellent problem-solver, but they can’t work within a team without good communication skills.
(Illustration by Itai Raveh)
What developers are we looking for?
We work in diverse, multidisciplinary teams that approach problems from a broader perspective. In our eyes, it doesn't make sense to take talented individuals and box them into job descriptions. Gifted developers can have any engineering background, like real-time, frontend, backend, or data. Being great is not about writing in a particular programming language. If you've got a theoretical understanding and strong analytical and problem-solving skills, picking up a new language will not pose a hurdle. Besides, tech stacks change and evolve all the time, so continuous learning is crucial for developers anyway. We're looking for talented people regardless of their specific technological history. We'll always find them incredible things to do.
Our technical interviews
Because we use such a different organizational model for the eko dev team, our technical interviews also tend to take a different course.
In one format, we ask developers to walk us through a project they’ve already completed at work, school, or on their own, and before we even get to the code — we want to know what goals they were trying to accomplish and for what purpose.
We ask our candidates why they made technical choices and how they handled problems, not because we’re trying to judge their decisions but rather to understand their way of thought. We also ask follow-up questions like “what would you change if X would be different?” or “how would you approach this project differently if you had to do it over again today?” Since the conversation is about something candidates have already accomplished — they feel comfortable and free to go as deep as they want into a topic they know.
In another type of technical interview, we ask candidates to solve a software problem outside their area of expertise. Then, we discuss it together to explore how they defined the problem, if they understood the need behind it, why they chose the technologies they did, how they planned their architecture, design, and APIs, and finally — we examine their coding skills.
These kinds of open discussions provide a pretty accurate simulation of what it’s like to work on a feature at eko while helping us identify great developers.
What we actually mean by “great developers”
Great developers understand the many layers of abstraction that make their software run, and the deeper layers of other software (and hardware!) their code relies on to function. For example, a good frontend developer might be proficient at crafting CSS animations, but a great one will also understand the browser rendering process, how and why the GPU comes into play, and even how it all fits within the context of the OS and device the browser runs on.
The approach a developer takes to the practice of programming also says a great deal about their proficiency. Other than addressing the task at hand, do they handle the less optimistic paths (errors and failures)? How do they test, with which testing methodology, and why? What do they log, and how do they separate the wheat from the chaff? After all, it's not all about writing code "that just works". It's about looking at the entire picture of programming in an organization: making that code solid and trustable, debuggable, maintainable, and readable to other humans (and to themselves, in the future).
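To ground that in something concrete, here is a small, generic Python illustration (not from eko's codebase) of the difference between code "that just works" and code that also handles the unhappy paths, logs what matters, and is covered by a test.

```python
import logging

logger = logging.getLogger("orders")

def parse_quantity(raw: str) -> int:
    """Parse a user-supplied quantity, handling the less optimistic paths."""
    try:
        qty = int(raw)
    except ValueError:
        logger.warning("rejecting non-numeric quantity: %r", raw)  # log the chaff
        raise
    if qty <= 0:
        logger.warning("rejecting non-positive quantity: %d", qty)
        raise ValueError("quantity must be positive")
    return qty

def test_parse_quantity() -> None:
    """Cover the failure modes, not just the happy path."""
    assert parse_quantity("3") == 3
    for bad in ("abc", "0", "-2"):
        try:
            parse_quantity(bad)
        except ValueError:
            pass  # expected
        else:
            raise AssertionError(f"{bad!r} should have been rejected")

test_parse_quantity()
```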
(Illustration by Itai Raveh)
How does all of this help us move fast?
Well, since we are, in fact, developers ourselves, we can figure out who’ll make an excellent professional match to our team a lot quicker than “someone from HR”.
Plus, because we’re the ones doing the interviews, candidates get to meet the people they’ll be working with right from the very start. And thanks to our flat organizational structure, we don’t need approvals from five layers of management when we meet someone we like.
…
Want to see for yourself what an eko technical interview looks like? Send us your resume here! | https://medium.com/ekoengineering/why-we-decided-to-take-the-lead-on-hiring-new-developers-b6be7494c888 | ['The Eko Devs'] | 2021-09-09 13:48:02.269000+00:00 | ['Software Development', 'Job Interview', 'Technology', 'Startup', 'Developer'] |
2,943 | Stealthmode and the Rising Entrepreneurship Tide | Because Stealthmode was born in 1999, in the height of the dot com boom, Ed and I, and the other five people who also wanted to start an accelerator naively thought it would be possible to start one and get paid for our efforts by funded startups. It didn’t take us very long to learn how little capital Arizona had for startup companies.
But we found a young entrepreneur with a good idea, and actually we were able to raise $8 million for him in Silicon Valley. Ironically, months after we raised the money for him he threw us out of the company and spent a year ignoring everyone’s advice. Ed and I were too new at the accelerator business to put our feet down the way we should have — all our previous businesses had been bootstrapped.
At the end of the year, we had already been through the dot-com bust, and the company was out of money with no product. Limelight Networks bought the infrastructure assets it had built for pennies on the dollar. There was no chance we could have raised any more for this founder.
In the midst of our enthusiasm, we conveniently "forgot" that we were in Phoenix rather than in the Bay Area. Stealthmode did have a few private clients, but we quickly realized that most startups could not afford us and that we would have to do something else. Our other friends went back to lawyering and accounting, but I reached out to the Kauffman Foundation, which was offering a program at that time called FastTrac.
I decided we would try to become their southwest affiliate, and I went to Kansas City to train in their methodology. While I was doing that, my friend Joan drew my attention to a City of Phoenix grant for technical assistance to entrepreneurs. Joan, who is always full of good ideas, suggested Stealthmode apply, and on a lark I did. Just put it out there, I thought.
We got it. No one was more stunned than I was.
That began about 15 years of facilitating workshops and coaching for would-be entrepreneurs all over the Valley. Every city wanted entrepreneurship programs after we started the one for the City of Phoenix, and we kept on saying yes, although at some point Ed decided he was tired of doing them and dropped out, and I took on another partner, Phillip Blackerby, who was ready to teach entrepreneurs forever.
This work was not super remunerative, but it had the incredible byproduct of helping me feel I was doing something much more useful than running a marketing business. It put me back on firm ground: mothering.
By pivoting the way it randomly did, Stealthmode survived not only the dot-com crash but also the great 2006 real estate recession, which caused an awful lot of people in Phoenix to be laid off and made our classes swell.
By now you must realize that I don’t take the time to plan anything — I just try things. Ed tells me we work well together because when it’s necessary to do something, I just jump, always landing in a field of clouds, while he is still packing his parachute. But this is why I love Ed. He looks at things completely differently than I do.
During those years, our friend Rob Dunaway suggested we start a not-for-profit to extend our capabilities. Again, something I knew nothing about but was willing to experiment with. In 2005, Joan, Ed, Rob and I founded the Opportunity Through Entrepreneurship Foundation.
It was important to me because my foster kids had bombed out in the education arena, and I realized that the only hope for them to achieve upward mobility was to become entrepreneurs. I thought that if I could make them into entrepreneurs they would have at least a fighting chance of keeping themselves in the middle class. OTEF has a mission of providing entrepreneurial services to the disadvantaged, including released felons, because my foster son had gone to prison for three years during this period and found himself the victim of everything we as a society fail to do for released felons.
Unfortunately I've never had much luck making felons into entrepreneurs, because they don't have the luxury of rounding out their skills once they get out of prison. All they want to do is get a job, although that's the worst thing for them given the current climate for employing released felons. Most of them can sell, and many of them understand rudimentary cash flow and accounting, but they don't really understand marketing, hiring, or how any of those skills fit together to grow a business. Fifteen years later, things are somewhat different, especially since Shaka Senghor became a celebrity ex-felon by writing his amazing book, "Writing My Wrongs." I wish I'd known him in the early days of OTEF.
But while we failed with the felons, we succeeded with domestic violence victims, autistic children, autistic adults, and other disadvantaged populations. We work by partnering with other organizations that already serve those populations. The foundation is now 15 years old and just received a grant from Craig Newmark to continue this work.
Casting about for ideas on the best way to fund OTEF, Joan and Ed and Rob Dunaway and I settled on having an entrepreneurship conference for which people who could afford it would pay, the speakers would donate their time, and the proceeds would go to OTEF.
Thus began eight years of the Arizona Entrepreneurship Conferences, bringing out-of-town venture capitalists and founders from Silicon Valley to Arizona to "expose" them to the state. Among the amazing people who came to Phoenix at their own expense to help out were Gary Vee, Chris Brogan, Matt Mullenweg, Bill Reichert (Garage Technology Ventures), Dave McClure (500 Startups), Mark Suster (Upfront Ventures) and Robert Scoble. Thanks, guys. And yes, we had many women speakers, but no keynote with that star power. It's not as if we didn't try, especially with a woman running the conference. There just weren't that many at the time.
As conference organizers we had two goals: 1) fund the OTEF programs, and 2) swell the rising tide that would lift all boats for Arizona entrepreneurs, something Ed and I believed in back in 1999 and still believe in two decades later. And believe me, the Arizona tide has risen. To mix a metaphor, it was a heavy lift.
We forgot, however, to make the conferences profitable, and after 8 years of fighting the headwinds, I got discouraged and gave up — probably right before they could have become successful. But they were work, and I was done.
Because of all these activities, Stealthmode found itself in an informal economic development relationship with Phoenix and all the cities around it, although that wasn’t really something I intended. What I really intended was to create jobs to make my children come back home and live in the same city I did.
After each of them went to college, she stayed away: Chelsea in Chicago, where she got a job, and Sam in LA, where she went to law school. Asking either of them why they wouldn't live in Phoenix produced this response: there are no jobs here for us. I set out to use Stealthmode and the Internet to create jobs.
Did it work? That depends on who you ask. For me it didn’t because both of my children still live elsewhere. For the community it may have, because after a long hard slog of about 15 years the tech community in Arizona took off, not so much because of me as because of a combined effort to diversify the economy that was undertaken by almost the whole community after 2006 caused havoc in Phoenix, and because of the Software as a Service (SaaS) movement, which loosened geographical boundaries and lowered the startup costs for entrepreneurs.
I even went so far as to follow my daughters out of town, buying a home in the Bay Area when both of them briefly ended up there in 2005. My older daughter was married and pregnant, and I dreamed that I could become part of the Bay Area startup community and move my business interests up there.
And did that work? That also depends who you ask. I made great friends, had lots of fun, developed a great network, brought Social Media Club to Phoenix, and broke myself paying for two houses. I ended up short-selling my house in El Granada in 2011, before I became totally destitute, and heading back to Phoenix having learned the lesson of trying to be someone I wasn't (a rich person).
Somewhere in that time frame, I began to come to grips with my advancing age. A year after Gerry died, his prediction about my back came true. He had x-rayed my back one day and told me I had the worst degenerative disk disease he had ever seen in a person my age. Since I was a runner and had completed half a dozen marathons and countless 10ks without pain, I pooh-poohed him.
And then one morning I couldn’t get out of bed. I had to hold on to things to straighten up.
I went straight to Barrow Neurological Center, where Volker Sonntag, a famous surgeon, agreed to see me because I was an important community member. Sonntag kept me waiting an hour, waltzed in with a stream of epigones in tow, and told me after five minutes that if I didn't have back surgery I'd end up incontinent and paralyzed. Admittedly I was lying on the floor of one of his consulting rooms at the time, but I still thought he could have talked through some more options.
When I contemplated the six-month recovery from the complex surgery he suggested, I decided I would look for another alternative. I found it in yoga.
The first time I walked into a yoga class, probably around 1998, the teacher put us in a forward fold from which I couldn’t rise up. I was as inflexible and tight as a drum. But the alternative was extensive and scary surgery, so I became a yogi. Eventually my pain vanished, I coached a team of rising young yoga teachers from LA on how to run a business, and I became a lifetime committed student of yoga. I even took teacher training, although I’ve never taught a class.
It wasn't all moonlight and roses between yoga and me. After all, I had been a tournament tennis player and a marathon runner, as well as an entrepreneur. I was very competitive, and I was in classes with people far younger than I was. As a result, I ruined my left hip trying to balance on one leg with poor alignment, and in 2006, after about seven years of pretending I was a kid, I had to have a hip replacement. That same year I also had a retinal tear while swimming and had to have cataract surgery. All these procedures took place in the first year I was eligible for Medicare. It's like I limped across the health insurance finish line to Medicare, which welcomed me with open arms.
My Love Affair with India for the Past 20 Years
Of course when you become interested in yoga, you soon get interested in India, where it originated. India has been a destination for me four times, and I still don’t think I know it.
But it always teaches me something. In 2009, it saved my mental health.
Very fortunately, as a result of living in the Bay Area and networking, I met an American man who worked for an Indian startup, Jiva Ayurveda. I was drawn to him and to the Indian family with whom he was in business. India was pretty backward at the time, but not the Chauhan family. They owned a school, a pharmacy, an ashram, and an Ayurvedic healthcare consultancy that they ran out of a Faridabad call center. They had already gotten a grant to work with MIT on what I now realize was an early form of remote patient monitoring.
Remember, this was before Health 2.0 became a "thing." Jiva's Ayurvedic doctors staffed telephones outside New Delhi, and patients in remote areas of India were able to call the doctors on cell phones. A woman in each village was given a phone and charged people for calls to the Ayurveda doctors, who made their living on both the consultations and the prescribed herbs and pharmaceuticals. It was a real win-win: for rural India, for the individual women who were put into business with their phones, and for the company, Jiva Ayurveda. I thought it was brilliant.
I had been to India twice before by that time, both times with an Indian friend of mine from Intel who has since passed. The first time I saw the Dalai Lama; the second I bought a cow for an ashram run by one of Gandhi's last living disciples.
But there can never be enough time spent in India. Each of my previous visits had been completely different from the others. This one, my third, was the first one for "work." When my Jiva friend called me and said "we finally have the money to invite you down to India to consult with us," I was thrilled, because I had no work at home anyway. The end of 2008 had seen all of my private clients disappear into the recession, and I was once again afflicted by my fear of starvation. It didn't help that I owned two houses with two mortgages.
Although Jiva Ayurveda could not pay me they could pay my expenses and then they offered to barter Ayurvedic treatments with me. I accepted. A few days after I arrived at the ashram-like “med spa” where I was staying, I had my bartered appointment with the Ayurvedic physician Dr. Partap Chauhan. It proved to be life changing.
If you have never been to an Ayurvedic physician you probably don’t know that they can diagnose you by staring at your hand, looking at your eyes, and having you stick out your tongue. After what looked to me like very rudimentary diagnostic procedures, Dr. Chauhan said to me “you are very stressed.”
Really? I thought. I let him have it with both barrels, all 5’2” of me screaming “of course I’m very stressed.” In parentheses I was thinking “you buffoon!”
I told him everyone in America was very stressed. "All the banks have collapsed, I just lost $50,000 in a new bank investment, many people are homeless, and I have lost everything. I have no clients except you."
He’s staring at me as though I were nuts.
He said to me, "Are YOU homeless?" And I had to answer "no," because at the time I owned two houses. And then he said, "Do you have children?" and I said yes. He said, "Will they let you starve or live in the street?" and I said "no." He added that no one in America should ever starve because there's enough land there for everyone to grow their own food. He blew apart all my exaggerations.
“So what have you lost?” he went on. “You have lost nothing but your expectations. Everything you lost was on paper. Everything that is not on paper you did not lose.”
In a split second, my life changed. I would never again be able to blow things out of proportion like that. He was more than a physician, he was a Godsend.
That trip to India re-set my life. After that I went after life with much more courage again. I began to travel even more. I had already traveled to China, India, Europe, Costa Rica, Africa, New Zealand and Mexico on yoga retreats or with friends who were going and wanted companionship. My friend Lucie's daughter moved to Shanghai for a few years, and I went there twice with her, which revealed to me how quickly China was changing. I remembered the vacant island of Pudong from one trip, and the completely operational financial center in Pudong from the next.
My friend Fred’s daughter trained for the Lake Taupo triathlon and I went with Fred as part of her crew. Another of my friends was organizing trips to Uganda and Rwanda to meet with NGOs who were helping with the AIDS epidemic in Africa and I went there.
But my big love affair is still with India, perhaps because its spirituality meshes with my own.
My first trip to India, well before the Great Recession, was with a former colleague from Intel, who was going on a “roots trip.” Sri also had a friend who was one of the last living followers of Gandhi, and he was also hoping to meet up with that friend.
I was sitting in a deli having lunch with him and my daughter Chelsea, when Sri told us about his upcoming trip. I said off-handedly, “I’ve always wanted to go to India.” Sri said, “come with me.” Chelsea and I exchanged glances, and I said yes. I would be going to India with a native.
I almost missed that trip because it wasn’t until the day before that I learned you need a visa to go to India. Phoenix didn’t have an Indian consulate, so I had to quickly fly from Phoenix to San Francisco and stand in line at the Consul’s office to get a same-day visa. Sri assumed I would know this, and I hadn’t bothered to find out.
I made the plane to India, however, and after a short visit to the Delhi neighborhood where he was born and a quick tour of Delhi itself, Sri informed me that there would be a change in plans. We had a chance to meet Dwarko-ji (his friend) on the way to Dharamshala and perhaps meet the Dalai Lama himself. Sri had been part of the great Indian diaspora of the 80s, when all the smart kids became doctors and engineers in the states. Delhi was polluted and dirty, and Sri’s neighborhood was uninspiring. I could see why he had left it to come work at Intel in the U.S.
And of course I wanted a chance to meet the Dalai Lama, if possible. To do that, we had to take an overnight train to a small town called Pathankot and then ride in a rickety truck over narrow, one-way roads into the Himalayas to get to Dharamshala. (Now the route is on TripAdvisor, and one is advised to take a taxi between those two destinations. Back then, you had to know where you were going.)
The Delhi train station was unbelievably crowded, as almost all of India was. We bought our tickets, and I slept in a fold-down berth nose to nose with a perfect stranger, a Sikh who slept in his turban. At the beginning I was scared to death. Knowing nothing about Sikhs, I entertained fantasies about being knifed in my sleep, but I managed to do it and survive the trip. I don't think I know anybody personally who has gone to India under such primitive conditions, because when we got to Pathankot we stayed overnight at an ashram that had no running water, no bathrooms, and no electricity. The beds were just stone slabs with one-inch mattresses. Sri explained to me that it was an ashram for people who ran other ashrams and needed a rest. They were used to the Spartan accommodations. Now I could do that, too.
Supposedly, we were meeting Dwarko-ji there, but that never happened, because Dwarko-ji had beaten us to Pathankot and was already beyond it on the way to Dharamshala. We were effectively chasing him through the Himalayas. So we got into what looked like a cross between a bus and a truck and rode the rest of the way. The bus had no windows and the dust was inescapable. But I realized I would never get to do this again, and I couldn't exactly back out in the middle of India anyway, so I looked out and enjoyed the cows in the road and the monkeys running alongside.
When we got to Dharamshala, where pilgrims go to meet the Dalai Lama and to study Buddhist meditation, we stopped at the Vipassana Meditation Center. Vipassana is silent meditation, and the people who were there had come from all over the world to do it for weeks. I wasn't ready for weeks of silence, but I did sit down and meditate so I could say I had done it, and I listened to the amazing sound of the Buddhist monks chanting in the Dalai Lama's private quarters. They sounded almost extra-human, producing a hum more like vibrations than people repeating words.
I did not meet the Dalai Lama that time, but I did meet Dwarko-ji, who was in his 90s and still very active. He made several trips a year to the US to raise money and get volunteers for his work. My friend Sri was one of his biggest supporters, and was helping him with a succession plan to assure the continuation of the ashram after he died.
Dwarko-ji ran the ashram as a farm in Bihar, one of the poorest states of India. Parents who could not afford to feed their kids sent them to Dwarko-ji's farm to learn how to be farmers and also to make antibiotics from cow urine. I was so impressed that I volunteered to go back to India with Sri the next year to visit Bihar during Dwarko-ji's annual "eye camp," an event where physicians from all over the world came to perform cataract surgeries on poor Indians who had been blinded by cataracts.
The eye camp trip was completely different from the Delhi trip. Delhi had been a city, however dirty, crowded, and primitive. It had some beautiful sections that had been built by the British and were maintained with cheap labor. But Bihar, where Dwarko-ji lived, was India's least developed state, a gang-infested jungle through which it was not safe to travel. Although tourists came there, as to Dharamshala, because the temple of Bodh Gaya, where Buddha supposedly received enlightenment under the Bodhi tree, was there, it was generally considered hazardous.
Sri told me we could travel there because Dwarko-ji was known and respected and had made peace with the gangs. And yet, outside the temple at Bodh Gaya, as we were admiring its beauty and I was staring at the Bodhi tree having a spiritual experience, a pickpocket stole Sri’s wallet.
The disappearance of Sri’s wallet broke the spell I had been under from realizing I was at the spot where Buddha received enlightenment. All of a sudden I realized we were in a place far from safe, a place that was very beautiful and equally dangerous.
When we arrived at Dwarko-ji's farm, Sri and Dwarko-ji had to make arrangements to replace the contents of Sri's wallet while I watched the children plowing furrows. Skinny cows pulled the plows. Here we were in the 21st century, and these kids, some very young, were learning ancient techniques of farming. I asked Dwarko-ji why he didn't get one of his supporters to buy the ashram a modern plow, and he said it would teach the kids a skill that would be useless to them when they returned to their families. They had to learn how to use what they had.
That's when I could see the importance of the cow to Indians. They drank the milk, plowed the fields, and used the urine for antibiotic production. Nothing could be more useful.
When I asked Dwarko-ji what I could do to help, he said he needed another cow. When I said I’d be happy to donate a cow, he took me to a cow auction, held in a dusty open field, where everyone in the area who had a cow to sell had brought it. Dwarko-ji was a real expert at appraising cows, and he explained to me what constituted the best investment. First of all, the cow had to be a real Indian cow, not some import from elsewhere. That was because of the quality of the urine Indian cows produced. My head spun as he enumerated all the other qualities he thought essential in the perfect cow. We ended up not buying a cow at that auction, as none met his standards, and I left him with $250 cash for the next opportunity. Been to a cow auction? Check.
The next day it was time for the eye camp itself, which turned out to be in yet another dusty field not far away from the cow auction. Most of Bihar at the time was a bunch of dusty fields, except for Bodh Gaya, which was a tourist and pilgrim destination.
When we arrived at the camp early in the morning, as far as I could see there were already squatting men, women, and children in patient queues waiting to have their cataracts removed. Nutritional deficiencies mean that in India people get cataracts far younger than they do in the United States, and at this eye camp, run by Dwarko-ji and his friends, American and other foreign physicians volunteered their vacation time to remove cataracts from poor people in Bihar who could not otherwise afford surgery. The sheer number of waiting patients was overwhelming, and the surgical conditions extraordinary, to say the least.
In this dry, dusty field, people waited squatting in line for days until it was their turn. They were then called to a table in an open tent, where the operations were performed in the open air. Astoundingly, these doctors had done thousands of surgeries in the dusty conditions without a complication. The day after their surgeries, the patients, who had travelled from all over this part of India and were still in the dust, had their bandages removed and headed "home."
These doctors and their patients made it look no more complicated than taking out a splinter.
Bihar was somewhat scary, but incredibly worthwhile. I was fortunate to know Sri, who unfortunately died of malaria a year later after returning from a trip to Africa, where he was planning to organize another eye camp.
He had come home and fallen ill. He went to a hospital in Chandler, Arizona, where no one understood malaria, and they sent him home from the ER telling him he had the flu. In vain he and his daughter tried to explain the concepts of foreign travel and malaria to the American doctors, and by the time they diagnosed him correctly and began to treat him, the disease had progressed too far.
I was devastated.
The Stealthmode Vision
By the time we hit the bottom of the Great Recession, Stealthmode had been in business helping entrepreneurs for almost ten years. I had become a yoga practitioner a year after Gerry died, and had been on yoga retreats in Mexico, Hawaii, Malaysia, Thailand and Costa Rica. I had been to Uganda and Rwanda visiting micro-entrepreneurs.
I had made two very successful angel investments (50x and 10x) in Richard Lang, one of my former community college students, and one moderately successful investment in New Times, whose founders were smart enough to buy back all the outstanding stock before the company really took off, aggregating alternative media all over the country and becoming Village Voice Media and later, controversially, Backpage.com.
I made the investment in New Times the way everyone should make an angel investment, out of faith in the founder and with the realization that I was very likely throwing my money away. But I had been the first film reviewer for New Times, receiving no money but all the free movie tickets I could use. John Hardaway and I saw every movie of consequence in American film during the 70s, including Butch Cassidy, Five Easy Pieces, and Easy Rider. We had two babies at the time, and the premieres of these films took place at midnight, but we were there. I just don’t remember how we did it, when now it is a struggle for me to stay up past 9 PM.
So it wasn't such a stretch for me, once I lived in El Granada at the beginning of the 21st century, to become friends with Dave McClure, a member of the PayPal Mafia who started 500 Startups. I think we met at one of Mike Arrington's parties on the grounds of August Capital. Those were legendary for both their size and their "guest list." Arrington was the founder of TechCrunch, and for a while he dominated the Silicon Valley scene with his technology journalism. He was opinionated, probably still is, and ultimately burned out on the California scene. He moved away and became an investor.
McClure was also colorful, and I still think he is a genius. He developed a theory of angel investing that involved writing a check for $25,000 to just about anyone with a promising idea. More conservative investors referred to McClure’s philosophy as “spray and pray,” but 500 Startups has had many big exits, and McClure’s idea formed the basis for current seed funds.
McClure started something else that heavily influenced Stealthmode’s vision: Geeks on a Plane.
Geeks on a Plane was a travel and entrepreneurship experience that flew a planeload of founders and investors to various places outside the US to explore opportunities. With Dave and his crew, Ed and I went to Tokyo, Seoul, Singapore, and China on one trip. On the next trip we went to the Middle East — Dubai, Turkey, Israel and Jordan — and on a third to Argentina, Brazil and Chile.
There’s no way we could replicate the opportunities we got from those trips. We coached entrepreneurs, heard ideas, made friends, learned how cultural differences demanded special products, and discovered that there is also an underlying “gestalt” at any given time that affected everyone. We learned where the global entrepreneurship opportunities were, and how to invest in them, although I never did. Everywhere we went, we judged pitch contests and were treated like visiting royalty because we were from Silicon Valley.
Although I was already at everyone else’s idea of retirement age, during the first decade of the 21st century I was probably more active than at any other time in my business life. Besides all the trips I took, I commuted back and forth between Phoenix and El Granada and invested heavily in getting to know how Silicon Valley worked. Of course I was spending money wildly on travel during those years, even as I was trying to negotiate myself through the Great Recession at the end of the decade.
Somehow the Recession that crippled Phoenix and forced people to walk away from their homes didn't really exist in the Bay Area. Instead, up there it was the start of what was first called Web 2.0 and later the social media era. The most important things in my life were making sure the Social Media Club got to Phoenix and deepening my understanding of Twitter and Facebook, as well as a myriad of social platforms that didn't survive the later consolidation: FriendFeed, Foursquare, Plurk, and Path.
I became friends with Robert Scoble, who also lived in Half Moon Bay, and allowed him to guide me through things none of my Phoenix friends knew about, like Twitter lists and Facebook advertising. One night as Scoble’s +1 at an event a VC firm held in a Phoenix resort, I met Mark Zuckerberg. He was standing by the pool, awkward and shy, and I asked him if I could get on Facebook, which was until then limited to people from colleges. He told me I was fortunate because Facebook had just released the limitation and opened itself to the general public, and I ran home and signed up. I gave no fucks about privacy back then — it was all about connection. For a few days, I might even have been the oldest person on the platform.
I felt the same way about Twitter. I had been blogging since Ev Williams started Blogger, and when he sold it and started Twitter, I signed up for that as well. I immediately acquired 10,000 followers, because that was easy in the beginning. Being early on those platforms has given me a lifelong social media edge.
People like Ev Williams and Mark Zuckerberg were my heroes. They were entrepreneurs, and their businesses grew so fast that I knew I had a lot to learn from them. Moreover, they represented a new stage of technology that idealistically thought it was connecting the world. I was enamored of their youth and vigor even as I was getting my hip replaced and my cataracts removed. They connected me to the future. Ever the optimist, I found it difficult to find their flaws.
It was like when I ran the Arizona Future Society at Rio Salado Community College, only on a bigger scale. Combined with the trips, I now realized I was playing on a global stage. It was so cool to go to Arrington’s parties, to coach people young enough to be my kids, and to be respected for what I knew about entrepreneurship and technology. I was blessed, although I never could figure out why.
In retrospect, I realize it was because I took risks very few people my age would have. My friends in the real estate business in Phoenix were winding down, while I went to SXSW in Austin every year for six years, feeling that I was winding UP. For a while, I was an attendee at a variety of tech and social media conferences. Another way to spend my retirement money.
However, in 2011, I decided I shouldn't stay in the Bay Area anymore. As I observed the Bay Area's culture changing from experienced to youthful, I became aware that I might be aging. And the entrepreneurs were changing from "change the world" to "become a unicorn." It was all about money, and it was too big a stage for me to continue to play on.
By that time, too, one of my daughters had moved to London, and she was the one who had given birth to my only blood grandson (my extended family of step and foster kids had given me 14 other grandchildren for whom I also felt responsible). I knew I was going to want to spend more time in London, and I really didn't have the riches to maintain two households and leave them empty while I was there. It made more sense to develop business in London, which I did, although not much and not profitably. Just enough to defray my trips.
So I sold the house in Half Moon Bay, and moved back to Phoenix full time. Of course my network by this time was all over the world, and the internet was fairly well advanced, so it really didn’t matter where I lived. I actually lived on Facebook and Twitter.
Phoenix still needed a fair amount of development to be the tech town I wanted it to be, so I went back to trying to make it one, while writing and consulting to support my community service habit, which had developed into a major addiction. I had been a partner in supporting the Arizona Software Association (now the Arizona Technology Council), the Social Media Club, the Arizona Technology and Information Council, and on and on.
I looked for the holes. It was obvious: diversity. So I started the Women Entrepreneurs Happy Hour, a coaching business for women, and joined the founding team of Golden Seeds, a women’s investment group. My foundation, the Opportunity Through Entrepreneurship Foundation, which started in 2005 but was largely ignored after the Arizona Entrepreneurship Conference years, came alive with a series of projects that involved partnering with other organizations, and through it I also worked with women.
I also spent a couple of years being depressed. I did not want to be aged out of things, considered old, or actually indisposed — not to mention dead. What could I do?
I became fanatical about following the anti-aging movement and doing everything it said to do: in 2011 I became WFPB (whole-food, plant-based), which largely meant giving up processed food and anything with a mother or a face. I never lost the weight everyone else did, but I had an unbelievable amount of energy and not enough places to put it. Among other things, I also started a Facebook group called Aging Revealed to call attention to some of the cliches associated with aging.
I spent most of my time taking yoga and going to the gym, walking the dogs, and dining with friends. Since Bergie (Gerry Kaplan) died in 1997, I have had one steady date, his former medical practice partner Fred Salamon, who has generously accompanied me to dinner every Saturday night so that I never feel unpopular. On other nights, I can sometimes have dinner with Ed Nusbaum, with whom I’m celebrating 20 years in business even though we hardly do any actual business together, other than just serving as a sounding board for each other.
On most Sundays I brunch with my old high school buddy Dan Pochoda, who never married. In fact, you'd be surprised how many boy friends, in the most literal sense of the word, are available to me, since I seem to know many single men. Fortunately, no one has asked me to marry them since Gerry died, nor would I. I have learned all I need to know about marriage.
For a while, I was living alone and not enjoying it much. I was a regular at the bar at Hillstone in Phoenix, where the bartenders are awfully close to my age although the clientele is not. Then one day about ten years ago, I met a gorgeous Gordon setter named Tucker in the park near my home. Tucker’s owner, Max, became a friend. Max was renting a condo from a woman who allowed herself to be foreclosed upon in the middle of the Great Recession. I’m sure that was a good decision for her, because she was undoubtedly very upside down in the house, but it threw Max and Tucker into the street just as Max had been laid off.
Max and I had a conversation in which he confided to me that he was going to have to get rid of Tucker because he was going to live in his car for a while and he didn’t want to do that to the dog. Remember me? Foster Mom? I invited Max and Tucker to stay with me for a while as Max regrouped.
That arrangement lasted a few years, until Max got a job as a tech support admin at a local community college, which gave him a paycheck. He immediately bought a house out of a bankruptcy and moved out to his own place. As Tucker was a member of my pack, Max often came by even when he didn't live with me.
That lasted until the real estate turnaround, when a group of investors buying up single-family homes offered Max three times what he'd paid for the house. One day he called me and said, "How would you like to have a roommate for a few days?" By this time Max came equipped with a two-year-old German shepherd, as well as Tucker. I had three dogs myself, but I said yes. That might have been 2016. It's now 2020, and the five dogs have moved house with us to a home that backs up against the Grand Canal in Phoenix, where I can walk various combinations of dogs without much fear of retribution from my neighbors, who have multiple dogs themselves. Dog friendly and diverse, my neighborhood is. I couldn't be happier with my little house.
I bought it a couple of years ago when a dear friend offered to give me a private loan. That was probably the only way I could have bought another house, since I had sold my last home before the recession (good decision), sold my Half Moon Bay house during the recession, and was a self-employed person in an era where mortgages required, above all, pay stubs. But as the karmic universe smiled on me, I became a homeowner once again.
Looking back on my life, I’ve been up, and I’ve been down. I’ve been very close to bankrupt more than once. But I’ve always managed to catch myself. When my children were growing up, there was a toy called Weebles. I still remember the commercials: “Weebles wobble, but they don’t fall down.” When I am most in danger of sharing the fear, frustration, and anger of my neighbors, I remember I’m a Weeble. To be a Weeble is to be self-sustaining. | https://medium.com/stealthmode-blog/stealthmode-and-the-rising-entrepreneurship-tide-b91de005f7ed | ['Francine Hardaway'] | 2020-07-06 16:36:35.138000+00:00 | ['Women In Business', 'Women In Tech', 'Silicon Valley', 'Autobiography', 'Technology'] |
2,944 | The Dilemma of Cybersecurity | by Jackie Swift
Shortly before the 1990–1991 Persian Gulf War, also known as Operation Desert Storm, two teenagers from the Netherlands hacked into the United States Department of Defense’s (DOD) new logistics system and gained control over it, according to Rebecca M. Slayton, Science and Technology Studies at Cornell University. “They might have stopped or diverted shipments of weapons or other critical supplies, and that might have had a devastating effect on military operation,” she says.
United States–led coalition forces achieved their military objectives in the Gulf War in a matter of weeks. But Slayton points out that things could have been quite different if the two hackers had had their way. The teens had offered to manipulate the DOD’s system for Iraqi President Saddam Hussein in exchange for one million dollars. “The Iraqi government declined,” Slayton says. “If it had not, we might think of the war in a completely different way today.”
Cybersecurity at the Department of Defense
Slayton is currently working on a paper about the history of cybersecurity expertise in the DOD. The story about the hackers and the Gulf War is an example of events that brought the DOD to the growing realization that the information technology it increasingly relied on for military strength was also a vulnerability. “There’s no way to make these systems invulnerable to hacking, and that can have really big military consequences,” Slayton says.
Even to this day, though, Slayton points out, the DOD is not nearly as good at defense of its own cyber systems as it is at attacking other systems. “U.S. Cyber Command (part of the DOD) is the most capable cyber attacker in the world,” she says. “So why are they so good at offense and not at defense? In general, defense is harder because the goals are more complex. You’re trying to keep a very complex information network running properly, without any malicious activity. If you’re an attacker, on the other hand, you might have a relatively simple goal, such as compromising one computer.”
In her current paper, Slayton takes things a step further, arguing that the DOD’s problem is more than just the complexity of defense. “The DOD’s information technology procurement and management has historically been very decentralized, which makes the job of cyber defense very difficult,” she says. “Additionally, war fighting is the military’s top priority, and cybersecurity often seems more like tedious technology management than combat. This leads to cultural problems, where some parts of the military relax cyber security to achieve goals that seem more urgent. Good security practice is not always convenient.”
How Do You Prove Cybersecurity Expertise?
Slayton’s research into the history of the DOD’s cyber defense is part of a larger book project, Shadowing Cybersecurity, in which she looks at the rise of cybersecurity expertise through time and across different organizations. “Expertise is really about trust,” she says. “It’s not enough to have knowledge or skills; you have to convince others you have them in order to be effective. So how does that work in the context of cybersecurity? Everyone who works in the field will acknowledge that they can’t give you perfect security, that if someone really wants to break into your system, then given enough time and resources, they will.”
“Everyone who works in the field [of cybersecurity] will acknowledge that they can’t give you perfect security, that if someone really wants to break into your system…they will.”
Since they can’t guarantee system security, cybersecurity experts often seek to demonstrate that they’re good at their job by hacking the system. “They break into it to prove they know it well enough to defend it,” Slayton says. “That’s an unusual way to demonstrate expertise. Doctors don’t break your arm to show that they’re good doctors.”
These problems make identifying expertise difficult for organizations looking to hire a security expert, and while there are some professional certifications, they only go so far. “Just having a credential doesn’t necessarily make you competent to do the particular job that an organization needs you to do,” Slayton says.
Securing Industrial-Control Systems
The needs of cybersecurity can run up against the needs of the system as well, especially when it comes to industrial-control system computers that run infrastructures 24/7. “You can’t shut down the electrical power grid for an hour to update security,” Slayton says. “And yet security often needs updating. Also, technology used in the electrical grid and other industrial-control systems has traditionally been purchased with the expectation that it will last 20 or 30 years, but computers have a very different timescale in terms of how quickly they need to be updated or replaced.”
These competing tensions make it difficult to decide how a system should be secured, Slayton says. As an example, she points to issues with protecting the United States power grid. The Federal Energy Regulatory Commission authorized an industry group, the North American Electric Reliability Corporation, to make and enforce Critical Infrastructure Protection (CIP) standards for the generation and transmission of electricity. “But what effect have these standards had on the grid?” Slayton asks. “Do they actually improve security?”
Slayton worked with Aaron Clark-Ginsburg, at the time a postdoctoral researcher and now at the RAND Corporation, to investigate these questions. The researchers found that the CIP regulations actually had a leveling effect, causing some companies to improve their security and others to lower theirs. For example, one of the requirements is that an energy supplier must both have a security policy and also enforce it. This results in some companies with high security standards — say, requiring computer updates every month — being penalized when they miss their own update deadline due to an unforeseen problem such as an electrical outage.
“The supplier may have actually enforced the minimum federal standard, but they got dinged because they didn’t enforce their own, higher standard,” Slayton explains. “That ends up causing the supplier to lower their standards to a new minimum because they don’t want to get in trouble for not enforcing a better policy. The regulations set up a sort of perverse incentive.”
Retooling a Career
Slayton conducted her doctoral research in physical chemistry but found herself drawn to history and the social sciences. She retooled her career to focus on the history of science and technology in an effort to better understand the authority of science. “I have mixed feelings about the authority of science and technology,” she says. “Science and technology can be powerful forces for good, but they have also been developed and used in ways that are oppressive. Studying the history of science gives us insight on these processes. It shows that what we accept as true is always influenced to some extent by culture and by society, and it changes over time.” | https://medium.com/@CornellResearch/the-dilemma-of-cybersecurity-cea305939e55 | ['Cornell Research'] | 2020-12-21 22:03:05.987000+00:00 | ['Science', 'Cybersecurity', 'Technology', 'Hacking', 'Cornell University'] |
2,945 | >>>>HOCKEY⪻StReAmS⪼Canada vs Germany World Juniors 2021: (LiveStream), Ice Hockey Championship>>>>2020 | Canada vs. Germany Ice Hockey Live: World Juniors 2021 Live Stream Online. How To Watch World Junior Championship 2021: Time, TV channel, livestream, where, when, schedule
The 2020 IIHF World Junior Championship in Ostrava and Trinec, Czech Republic, is taking place from Dec. 26, 2019, to Jan. 5, 2020.
The United States and Canada are in Group B and will play their round-robin games at Ostravar Arena in Ostrava, along with Russia, Czech Republic and Germany.
Finland, the 2019 tournament winner, will be in Group A, along with Switzerland, Sweden, Slovakia and Kazakhstan. It will play its round-robin games at Werk Arena in Trinec.
The top four teams in each group will play in the quarterfinals Jan. 2. The semifinals are Jan. 4, and the championship and third-place games are Jan. 5. The semifinals and finals will be played at Ostravar Arena.
NHL Network will broadcast 20 games live, including the United States playing Canada on Dec. 26 (1 p.m. ET), the first day of the tournament, as well as the semifinals, consolation game and final.
The other three U.S. games in the round-robin portion of the tournament will be carried live on NHL Network, including games against Germany on Dec. 27 (1 p.m. ET), Russia on Dec. 29 (1 p.m. ET) and Czech Republic on Dec. 30 (1 p.m. ET).
Among the players to watch for the United States is goalie Spencer Knight, who was the third goalie on the 2019 WJC team. He was chosen by the Florida Panthers with the №13 pick of the 2019 NHL Draft.
The United States finished second at the 2019 WJC, losing 3–2 against Finland in the championship game at Rogers Arena in Vancouver. The United States is looking for a fifth straight top-three finish at the event after winning it in 2017, finishing second in 2019 and third in 2016 and 2018. Prior to its current streak, the United States had consecutive top-three finishes once, in 2010 (first) and 2011 (third).
NHL Network also will provide live coverage of Canada’s round-robin games. After opening against the U.S., Canada plays Russia on Dec. 28 (1 p.m. ET), Germany on Dec. 30 (9 a.m. ET) and Czech Republic on Dec. 31 (1 p.m. ET).
Canada finished sixth at the 2019 WJC but could have the top two prospects for the 2020 NHL Draft on its roster: left wing Alexis Lafrenière of Rimouski of the Quebec Major Junior Hockey League, and center Quinton Byfield of Sudbury of the Ontario Hockey League. Lafrenière had one goal in five games for Canada at the 2019 tournament.
WORLD JUNIOR CHAMPIONSHIP SCHEDULE
Thursday, Dec. 26
Russia vs. Czech Republic, 9 a.m. ET, NHLN
Switzerland vs. Kazakhstan, 9 a.m. ET
U.S. vs. Canada, 1 p.m. ET, NHLN
Sweden vs. Finland, 1 p.m. ET
Friday, Dec. 27
Kazakhstan vs. Slovakia, 9 a.m. ET, NHLN
U.S. vs. Germany, 1 p.m. ET, NHLN
Saturday, Dec. 28
Slovakia vs. Finland, 9 a.m. ET, NHLN
Czech Republic vs. Germany, 9 a.m. ET
Canada vs. Russia, 1 p.m. ET, NHLN
Switzerland vs. Sweden, 1 p.m. ET
Sunday, Dec. 29
Finland vs. Kazakhstan, 9 a.m. ET, NHLN
Russia vs. U.S., 1 p.m. ET, NHLN
Monday, Dec. 30
Canada vs. Germany, 9 a.m. ET, NHLN
Kazakhstan vs. Sweden, 9 a.m. ET
Czech Republic vs. U.S., 1 p.m. ET, NHLN
Slovakia vs. Switzerland, 1 p.m. ET
Tuesday, Dec. 31
Slovakia vs. Sweden, 9 a.m. ET, NHLN
Russia vs. Germany, 9 a.m. ET
Czech Republic vs. Canada, 1 p.m. ET, NHLN
Finland vs. Switzerland, 1 p.m. ET
Thursday, Jan. 2
Relegation Game 1, 4 a.m. ET
Quarterfinal 1, 6:30 a.m. ET, NHLN
Quarterfinal 2, 9 a.m. ET, NHLN
Quarterfinal 3, 11:30 a.m. ET, NHLN
Quarterfinal 4, 2 p.m. ET, NHLN
Saturday, Jan. 4
Relegation, Game 2, 5 a.m. ET
Semifinal 1, 9 a.m. ET, NHLN
Semifinal 2, 1 p.m. ET, NHLN
Sunday, Jan. 5
Relegation, Game 3 (if needed), 5 a.m. ET
Third-place game, 9 a.m. ET, NHLN
Championship game, 1 p.m. ET, NHLN | https://medium.com/@wjclivetv/hockey-streams-canada-vs-germany-world-juniors-2021-livestream-ice-hockey-fe58dda0c17e | ['Hockey Tv'] | 2020-12-26 13:11:01.626000+00:00 | ['Hockey', 'Live Streaming', 'Technology', 'Canada', 'Startup'] |
2,946 | Embracing technology during Covid-19 | Webinar | Embracing technology during Covid-19
With remote work becoming the new normal, businesses are looking to level up their cyber-security and process automation.
Update: Link to the recording of the full webinar at the end of this story!
Covid-19, and the mandatory stay-at-home situation, has led to a massive shift in the way we work. For many, remote work will likely become a norm. In this new normal, businesses are looking more closely at cyber-security and robotic process automation.
Join us on Sunday, 31 May 2020 at 11:00 AM as we find out more about these technologies shaping business.
Webinar Details
Date: 31 May 2020, 11:00 AM — 12:00 PM
Participants are welcome to post their questions during the session. A curated selection will be discussed after the keynotes.
About the Speakers
Venkat Iyer
Director, Cyber Security at PwC
Venkat Iyer is a Director with the consulting team at PwC. Deeply passionate about technology, he has over 17 years of experience in the cyber security, digital, social media, analytics and mobility space, having led teams across Asia, North America, Europe and Africa.
Prior to working with PwC, he served as Marketing and Strategy Head for Telecom & Digital technologies at Tech Mahindra and co-founded multiple start-ups in the digital space focused on cloud, security and social media.
His recently co-authored publication, titled “Cyber Security India Market 2019-22 — What lies beneath?”, released in Dec ’19 in partnership with the Data Security Council of India, gives detailed insights into how the domestic demand for cyber security in India is going to evolve in the next few years.
Venkat holds a Master’s in Management Studies (Finance) from Jamnalal Bajaj, Mumbai, and a Bachelor’s degree in Engineering from Mumbai University.
R. Karthik
Senior Automation Associate, S&P Global
Karthik is a Senior Automation Associate at S&P Global with over 14 years of experience in the data business. He has varied experience working with technology teams and is currently working to bring automation solutions to the business using various tools, including RPA solutions. He is Blue Prism Level 1 certified. A self-professed technology geek, he has diverse interests including computer hardware, data analysis, and emerging technologies. He has been assembling his own computers since he was a teenager and is a keen photographer. You can find his work at rkarthik.info | https://medium.com/basicolans/webinar-embracing-technology-during-covid-19-webinar-9e13dc68d502 | [] | 2020-05-31 16:36:39.361000+00:00 | ['Webinar', 'Technology', 'Events'] |
2,947 | UK pilot studies in the use of a novel social media platform across Post-16 Education | This article publishes the results of UK research on the use of a Social Media Application (Loopd.Life) as a tool to support the learning environment of students in Post-16 Education. Since the conclusion of this research Loopd is now in use across selected K-12 schools in the UK as well as Post-16 institutions.
Abstract
Previous studies of the use of social media in higher education indicate that there is a degree of ambivalence about their use. They are effective environments to build and encourage social and collaborative learning, and prove to be efficient at disseminating information. However, there is a reticence to use commercially owned software in an institutional setting amongst staff, and a resistance to mix personal and institutional identities amongst students. The development of a modern Social Learning Environment such as Loopd.life may overcome these previously identified barriers.
Literature Review
Use of social media platforms in higher education
The term ‘social media’ describes a wide range of internet-based communication platforms, from microblogging sites such as Twitter to websites that focus on connections and sharing content in a professional capacity, such as LinkedIn and ResearchGate. The highest profile of the various social media platforms is Facebook, created in 2004; studies showed that by 2008, 94% of undergraduates were using the site. By 2011, this had dropped to 90% (Tess, 2013; A61), and figures have fluctuated in the low nineties since. Surveys suggest that a high percentage of academics also use Facebook, with one indicating that 73% of faculty members had Facebook accounts (Tess, 2013; A63).
The everyday use of social media is common due to a range of technical and psychological factors; in addition to mobile access, social media tools have simplicity in their function and fluidity around task performance and social interaction that is not replicated effectively in enterprise software. Theories such as Social Influence Theory and Social Presence Theory have been considered to understand the psychological factors that increase the usage of social media tools.
Social Influence Theory describes behaviour that drives participation in social media as due to three factors:
1. Compliance: the participant believes that if they participate they will be rewarded, and if not they will be punished, for example to be left out of social arrangements.
2. Internalisation: the participant adopts shared goals with others, for example sharing social news, or discussing news events.
3. Identification: the participant wants to maintain a “satisfying self-defining relationship with another group or person”. (Cheung et al, 2010; 1338)
Social Presence Theory explains the extent to which online social interactions can perform the same function that face-to-face contact can have in establishing relationships with others. Social presence is the ability to project oneself socially and emotionally in an online community (Caspi and Blau, 2008; 324) which requires the formation of a digital identity. For these forms of social interactions “evidence that the other is attending” is a critical factor (Rourke, et al, 1999, 56) as is the style of communication, and the use of rich media (Cheung et al, 2010; 1338).
Social media has been promoted as fulfilling three primary functions for educational provision:
1. Providing connections between students both before and during their studies. These connections provide a basis for collective working and creativity.
2. Facilitating social learning and collective knowledge construction. This draws on the principle of social constructivism, i.e. that learning can arise from dialogue and collaboration.
3. Enabling learner-driven approaches. Learning is less orientated around teacher-centred assimilative forms of teaching methods, and more based on learners making their own choices and creating meaning for themselves (Tess, 2013; A62).
Although there may be a theoretical justification for using social media, integrating them into a course of study requires time and effort to maintain their appropriateness for use as a learning resource. Since social media platforms were not developed for formal education, they may not necessarily be suitable to foster debate; and reservations around the ownership of, and boundaries within, sharing content via social media continue to challenge academics (Tess, 2013; A62).
Where higher education institutions have attempted to use social media in their teaching, the issues around ownership of information have led them to either use their in-house platforms, or tools such as Ning which have some facilities for social interaction, but in which access can be managed by the institution. In 2013, Tess conducted a review of the literature on the use of social media in higher education; the summaries of the findings from these studies are that:
Institutional platforms lack the intuitive format of commercial applications.
Teachers need specific guidance on how to best use the platforms.
Students resist using existing social platforms due to a desire to keep their university lives separate from their lives outside university, yet also resist adopting new social media because of the added time and cognitive overload of using additional technology. They also have concerns about identifying the original sources of content and thereby inadvertently copying others.
If Facebook itself was used, usefulness was limited by the inability to share content in the format of most learning materials, such as PowerPoint or Excel, but communication between students was high.
To be effective, tutors have to be consistent in their usage of the platform.
However, students reported positively that the use of social media platforms was often more effective than face-to-face classes in enabling reflection and collaboration by commenting on others’ work and forming peer relationships. Analysis also indicates that learner engagement with the course content, projects and homework increases. (Tess, 2013; A62–63).
Neier and Zayer confirm this observation, noting “technology-mediated discussion is preferred versus face-to-face discussion in a conventional lecture setting” (2015; 141). Their findings were that:
“the most value of social media in the classroom is as a facilitator of conversation” and therefore “when used to enhance lectures, pedagogy rich in opportunities for discussion permits students to actively participate in their own learning process” (Neier and Zayer, 2015; 141)
“certain tools in particular … are perceived as valuable in enhancing learning in the classroom by enabling the sharing and discovering of new content”, for example including links to articles or media on websites where the conversations often originate. The ease and simplicity of incorporating these additional pieces of content encourage more engaging conversations.
“students expressed mostly positive views on educators and universities alike if social media was used in the educational environment, perceiving instructors as current and the university as displaying an exciting brand personality.”
Huppe surveyed the use of social media tools by university administrators and concluded that the adoption and regular use of these tools can help the educational community to feel more connected to their university as their communication is then via “an online environment that they have integrated into their daily lives” (2011, 14). Her findings were that:
Some students want their academic and social worlds to remain separate, particularly that their Facebook profiles do not become inundated with academically orientated information, reducing the sentiment of personal relevance (Huppe, 2011; 52), summarised by the statement “Some students see Facebook as their outlet away from academics” (Huppe, 2011; 55).
Students were also concerned that if administrative information was sent to their Facebook timelines, personal information could be made public (Huppe 2011; 53).
Preferences regarding the use of social media by institutions differed between the types of institution (Huppe, 2011; 54).
Student’s usage of Facebook and of the institutional platforms varied widely (Huppe, 2011; 54).
The relevance of these findings helped to shape the development of Loopd.life.
Why a bounded, institutionally owned system?
Social Influence Theory suggests that engagement within a platform is subject to an “underlying subjective norm (that) reflects the influence of expectations from significant others”. Users of highly interactive social media “are more exposed to other people’s influences as they interact in the social network” (Cheung et al, 2010; 1338). These underlying norms in the context of a socially relaxed peer group, such as often exists on Facebook, may not coincide with the behaviours and desired outcomes of an academic peer or project group. Social Presence Theory leads to a similar conflict, in that the digital identities created by participants in the wider social situation of Facebook may not be appropriate for their representation in a university setting.
A bounded, institutionally focused environment has the potential to enable the more specific subjective norms of a smaller peer group (one’s university friends as opposed to all friends) to be more freely expressed, and for a digital identity specific to the university context to be created. When communication is purely with those university peers, their social influence is also strengthened. In other words, sharing exam stress, collaborating on projects or just general academic discussion can be done more freely when the rest of the world isn’t watching.
The literature also indicates that both students and staff are reticent to blur the boundaries of social and institutional spaces. Tied to this is the concern over ownership of material once it is posted to a commercial platform, in that the intellectual property of content is not protected, and confusion over privacy settings within the platform means that private correspondences can be exposed. Students also want their academic lives and social lives to be distinct. This holds true for institutions in that discussions about staff members, courses or academic content are preferably conducted within a closed environment, in which reputation can be managed more easily, and where intellectual property control (if that is still a concern of authors about course content) can be maintained. Also, the use of materials not authored by teachers is permissible within a closed teaching environment, but if placed within a publicly accessible platform it requires certain permissions. All these arguments support the idea of a bounded and institutionally owned system. On the other hand, users find the commercially available platforms more intuitive to adopt, and both staff and students resist the idea of allocating time and effort to learning new systems.
The solution to these conflicting demands would appear to be to create a social platform that integrates social and academic aspects of university life, but also enables them to be separated. Creating an institutionally owned and private space would address the privacy concerns of students as well as enabling them to create distinct, separate digital identities for their university lives and their outside lives. A social media platform that uses an intuitive interface (with rich media content) may also reduce the additional time and effort of acquiring familiarity with an additional platform, particularly if it could replace email and VLE/LMS usage, which students do not integrate into their daily practice.
Furthermore, making such an environment available to students before their arrival at university provides an opportunity for students both to socialise with their peers before studies begin (helping with induction and maintaining well-being during their initial months) and to anticipate and plan for what their studies will entail. This early interaction with students is likely to support higher retention prior to their arrival and would be especially beneficial to international recruitment.
These considerations and learning outcomes led to the development of Loopd.life Social Learning Environment, the details of which are described in the next section.
Background to Loopd.life
Summary
Loopd.life facilitates the communication and organisation of all social events, extracurricular activities and academic opportunities that take place at University every day; this simplifies the process of engagement and empowers students to take greater responsibility over their education. Loopd.life uses modern social media theories and advanced mobile communication to build self-confidence, strengthen relationships and form a more supportive and connected educational community.
Many platforms exist to support academic progress (such as VLEs and other education technologies), and students with clear academic purposes may gravitate towards these technologies. Loopd.life targets the wider student community and focuses on all social, extracurricular and academic pursuits; this provides a highly engaging platform that supports a broader student experience and broader student lifestyle (Figure 1).
Figure 1: Venn diagram to show the three core areas that make up the modern student experience. Loopd.life particularly supports the natural communication overlap between each of the three core areas, engaging a broader student audience and working to increase the cross pollination of interests, skills and breadth of experience.
Educational concepts: Levels 1 and 2
Loopd.life allows students to quickly and spontaneously flick between their social, extracurricular and academic lives (as in Figure 2) and this engages students in either a social-seeking or work-seeking path of information discovery (represented at Level 1, in Figure 3). This comfortable and flexible approach (compared with other education technologies) offers greater individual relevance, giving students a direct sense of community, belonging and reward for their time and effort invested in the platform. A free or spontaneous choice that results in a fun and rewarding experience increases the frequency and duration of interactions.
When a student decides (voluntarily) to enter an educational environment, motivation is stronger and learning becomes more productive; this leads to faster and more efficient progress (Level 2, in Figure 3).
Figure 2: A screenshot of Loopd.life on a desktop platform to demonstrate the engaging way for students to quickly and spontaneously flick between social, extracurricular and academic opportunities in both the newsfeed (to the right) and in the messaging channel (to the left). The homepage allows multi-tasking between the newsfeed and messaging channels.
Educational concepts: The Enterprise Triangle
Loopd.life becomes a self-documenting portfolio of community activities and achievements that provides a very natural way to stimulate and inspire greater participation in others. Loopd.life therefore cross-pollinates skills and interests, which empowers students to form new relationships and take a more enterprising approach to their education (see the Enterprise Triangle, in Figure 3).
“Universities are highly collaborative communities with so much to offer from social interactions and extracurricular pursuits to academic teaching and career development. The most difficult challenge that students face is to understand what is available to them and how to effectively act on this. Universities believe they have to keep growing (more facilities, more technologies etc.) to stay competitive, but in reality it comes down to better communication of what is already available; this simple narrative is often overlooked or misunderstood.
Loopd.life simplifies the communication and organisation of everything that already exists. This raises awareness, creates curiosity and then provides a structure to take advantage of this.” Jonny Driscoll, CEO Loopd.life.
Educational concepts: Level 3 and outputs
As a result of new relationships and more effective collaboration, Loopd.life offers a greater visibility of shared experiences and personal achievement. This creates a more intimate and stronger sense of belonging, value and worth that ultimately yields a more tangible perceived return on investment for the student (Level 3, in Figure 3). These feelings of confidence and inclusion are intended to drive personal and group motivation and to cultivate a more empowered and proactive student community with higher ambition, retention and success.
Figure 3: Flow diagram to map the product user journey (by way of cognitive decision-making) and psychological concepts of Loopd.life. Initially students are channelled down either a social seeking or academic seeking path and engage in areas they feel most comfortable [Level 1]. These students then progress and develop within their favourite areas and begin to collect and share experiences [Level 2]. Improved exposure and simplified collaboration of these experiences act to inspire other students by cross-pollination; this in turn empowers students to broaden their educational interests, developing new skills, relationships and curiosities [Enterprise Triangle]. Ultimately more effective collaboration among staff and students improves trust, and creates an intimate sense of belonging that reinforces personal value, self-worth and confidence. These outcomes provide the maximum return on investment to the student through satisfaction and results, and to the university through retention and success [Level 3].
Achievements
Loopd.life began as a student enterprise, winning a number of competitions and awards; some of these include Technological Enterprise and Innovation (University of Birmingham, 2013), Santander Universities Student Enterprise (Santander Universities, 2014) and Bseen Midlands Enterprise and Innovation (European Regional Development, 2014). Since incorporating in 2013, Loopd.life has raised private investment and worked with dozens of institutions on product development. In 2016, Loopd.life replaced existing communication technologies in 19 of its pilot institutions.
Pre-pilot studies
Northampton University
In 2014, Loopd.life (then operating under the previous name of Unipin) was demonstrated at Northampton University as staff and students were invited to interact on the platform. Loopd.life was established as a good way to engage with students visually (and therefore emotionally) rather than via traditional text-based methods. A summary of opportunities was drawn up by the following disciplines:
Head of Learning and Development, Director of Institute of Learning and Teaching, Director of Student and Academic Services, Head of Learning Technology, Business Intelligence Manager, Executive Dean Schools of Education, Deputy Dean School of Arts, Director of Enterprise Development and Social Impact, Head of Academic Practice, Head of Employability and Enterprise, Strategic Bidding and Other Executive Officers.
Unipin offers something for everyone in the University
Unipin is suited for mobile technologies
Aesthetically, Unipin is a clean and fluid-looking software
A visual tool, such as Unipin, may engage memory, cognition and emotion in new ways (versus text-based methods)
The appearance of Unipin is a good way to communicate with students about various activities across the University
Unipin appears to be a speedy tool for student engagement and could give students a sense of belonging in their student experience very quickly
Student Union staff members were asked specifically to provide qualitative feedback to the question:
“What do you think is the most powerful impact of implementing Unipin at Northampton University?”
Responses to this question fell into one of two categories, those of organisation and engagement.
1. Organisation - The support for students to organise their various social, extracurricular and academic commitments in one place was seen as a useful function.
i. “Communication of events and having a joint calendar so students can plan and engage.”-Vice President of Engagement and Participation
ii. “Students making the most of opportunities and tailoring an experience that best suits them.”-Memberships Service Manager
iii. “The ability to see what you can do, and what you can join, that fits in with your existing commitments at Uni.”-Marketing and Communications Manager
2. Engagement - The potential to discover new information and co-ordinate a range of activities helped with community engagement, and the functionality and design of the platform addressed the difficulties staff believed students had with their existing VLE.
i. “Engagement with students and better, more realistic, communication. Solving a problem they struggle with from ….. NILE, Blackboard and too many social media outlets. Unipin is really engaging!”-Student Voice Coordinator
ii. “Students having access to events and opportunities easily in one place. Building a sense of community”-Student Groups Coordinator
3. Indirect effects - There are also a number of indirect effects, such as obtaining data analytics from the different demographics of learners to improve both student and academic services, and the indirect benefits of student engagement in improving retention, progression and attainment.
i. “Unipin will allow the evaluation of relevant data around student engagement.”-Vice President of Engagement and Participation
ii. “Improving retention, progression and good degree attainment”-Membership Service Manager
Acknowledging the importance of community, and that organisation and engagement are common challenges that students face, could have a resultant effect on retention and achievement. “At the heart of student retention and success is a strong sense of belonging in higher education for all students. This is most effectively nurtured through mainstream activities that all students participate in.” (Thomas, 2013). This statement is particularly important since the four most influential reasons why students leave university (Table 1) relate to student engagement and the perception that the university does not care about them, their organisation or progress. These factors contribute to 84% of all student dropouts, and since (in most cases) support and opportunities for engagement are available, this perception of students must be due to a lack of communication.
Table 1. Most common reasons for student dropouts (Thomas, 2013)
A common misconception is that by focusing on motives such as teaching quality or personal and financial support, students will engage more and retention rates increase. Looking at Table 1, these factors are not key motivators for students, which helps to explain why new learning technologies often receive very low student engagement; this approach may also be counterproductive as it gives a detrimental impression that the university fails to recognise the true needs of students, and most likely encourages the common responses of poor service, treatment and the feeling that the university does not care. Table 1 indicates that learning technologies should focus on providing relationship intimacy, care and a stronger, perhaps more emotional, sense of reward in order to be effective and engaging.
At Northampton, it was calculated that increasing retention by just 2% would represent an annual income of £1.7m. Clearly, to access this significant financial benefit, universities would need to shift their focus from teaching and learning to engagement and participation; improving the internal communication and perception of existing activities and support may bring about a higher financial return than implementing new strategies.
Chesterfield College
In 2015, an experiment with a class of business studies students at Chesterfield College showed how much faster students were able to retrieve homework using Loopd.life (<15 seconds) than Moodle (>45 seconds). In a school of 1,000 students, over the duration of one year, an additional 30 seconds to locate each piece of homework (assuming that every student locates two pieces of homework per school day) equates to half a year of lost time (see the rough check below). This frustrated the students and lowered their motivation to do work; it was also reported that during this extra 30 seconds, faster, more engaging platforms (like Facebook, YouTube and Snapchat) distracted them from their initial good intentions. During the experiments, responses from staff at Chesterfield College were:
“Unipin is the perfect tool to engage, inspire and grow, and keep in contact with students”-Martin Cope, Business Enterprise Manager, Chesterfield College
“Unipin is definitely the future of Further Education”-Ben Owen, Head of Student Services, Chesterfield College
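As a rough check on the ‘half a year of lost time’ claim above — the article does not state its assumptions, so the figure of roughly 250 operating days per year used here is our own illustrative assumption:

\[ 1{,}000 \text{ students} \times 2 \text{ lookups/day} \times 30\,\text{s} = 60{,}000\,\text{s/day} \approx 16.7\,\text{h/day} \]

\[ 16.7\,\text{h/day} \times 250\,\text{days} \approx 4{,}170\,\text{h} \approx 174\,\text{days} \]

which is indeed on the order of half a year of continuously lost time across the school.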
Since this initial study in 2015, information retrieval on Loopd.life has become faster, with most data now accessible within three screen taps (or 4–6 seconds) via the iOS and Android mobile apps; and in addition to faster access, files can be stored offline so that the Internet is not required. This approach to facilitating learning focuses on reducing common (and often habitual) motivational barriers that students create when deciding whether to do work, especially outside of the classroom.
Quantitative study at University of Birmingham
During 2016, a survey of 361 students at the University of Birmingham was undertaken to identify their current software engagement using existing communication tools, and to assess the extent to which Loopd.life could support their needs. Students were asked to give feedback on Facebook, Canvas and Loopd.life. (Canvas is the institution’s VLE/LMS.) The responses are listed in Tables 2, 3 and 4.
Table 2. Survey of students’ usage of Facebook
Table 3. Survey of students’ usage of Canvas
Table 4. Survey of students’ feedback of Loopd.life
*26% of these students said they’d check Loopd.life more than Facebook
The responses from the students indicate a clear preference towards social platforms and a mismatch between the institution’s expectation of its students’ interactions with their VLE/LMS and the actual preferences of those students. Only 34% of students said that they log into Canvas weekly, compared with 98% on Facebook and a projected 94% on Loopd.life. This is not specifically due to limited accessibility, time constraints, or failure to integrate the VLE/LMS into their working patterns, as 30% of students don’t value it and 9% said they have never logged in. Facebook usage is consistently higher with 83% of students using it on a daily basis and 31% of those accessing the platform >10x per day. Facebook use has been embedded into their daily habits.
The responses of the users who tested Loopd.life indicate that a platform that also fulfils the function of a VLE could come to be accessed as habitually as social media and would increase awareness of institutional communications. For approximately one in six students, this combination of social interactions and university activities would be used more than Facebook.
Qualitative study at University of Birmingham
In parallel with the quantitative survey, small ad hoc focus groups were invited to use the Loopd.life app and provide feedback. Responses were transcribed and categorised for consistency. Due to the small scale of the data collected, the analysis is not robust; however, as a pre-pilot experiment it does indicate some of the factors that students noted about the platform.
Responses fell into the following categories:
1. Integrates pedagogies of a VLE and Facebook - The students identified the strengths of the platform as having similarities with Facebook in its ease of use and its visual nature, but with the added value of educational content: “Its basically just Canvas and Facebook put together”. Loopd.life therefore supports many of the facilities of social media as reported in the literature, but also combines these with elements of pedagogical support that VLEs/LMSes provide. Comments included:
i. “It would be good especially for the stuff we’ve been doing like immunology stuff. We get given a lot of pre-lecture-content and it’s kind of hard to track down on Canvas so that would actually be really good.”
ii. “It would be good for group work.”
2. Replaces technologies that students struggle with - Students described having difficulty locating information, organising their studies and scheduling activities, partly due to information needing to be sourced from different platforms. Most students referred to ‘giving up’ looking for information and instead asking their friends for answers. They particularly welcomed the calendar feature that drew together all activities and event notifications with class timetabling so that everything was accessible from one place.
i. “If I knew more about what was going on, then I’d be able to go to a lot more, rather than relying on people making Facebook events and stuff, and you find out too late. If you could click ‘Add to Calendar’ and move straight on to something else, I think that’s pretty good. I really like it.”
ii. “Yeah its good, you don’t have to do the research yourself. I find that hard currently with so many small things going on and so many different places to look, if it was all there for you in one place that would be really good.”
Another aspect that students reported having difficulty with is accessing email, to a point that it precludes them contacting their tutors. Since Loopd.life has a private messaging platform (using familiar social media technology) that integrates all social, extracurricular and academic conversations into one channel, it makes collaboration much easier.
iii. “Yeah I have to email them (lecturers), through an email address …. I don’t because its too much effort, but if it was on Loopd, I’d just drop them a message and it would go straight to them.”
Although a frequent barrier to the uptake of additional technologies is the extra time and effort in learning to use and implement them, replacing a pre-existing technology students currently dislike (such as email) with a familiar technology (such as mobile messaging) should reduce this barrier.
iv. “I think it would probably make things a lot easier, because everyone is always on the phone and if it was just on the app then it fits together. I think it’s all easier to find, it’s cool.”
v. “If it’s just on an app, everyone’s comfortable with social media as it is, so it wouldn’t take much adapting to, I don’t think. I think it’s really good.”
3. Provides user-controlled separation of identities and interactions - Finally, the main issue with using social media reported in the literature is the ability to maintain a clear distinction between personal lifestyle content and university work. Loopd.life allows users to take control over their newsfeed, and so what is usually a major barrier to engagement was not an issue for learners (Figure 4). Loopd.life supports the ‘everything in one place’ requirement to make student life easier, while maintaining personal and social preferences. Students also appreciated that the app was only for their institution, keeping their student lifestyle distinct from their family lives and the rest of the Internet.
i. Student One: “And is that like only academic stuff, (tap) only social stuff (tap).”
Student Two: “Ha you’ve got that.”
Student One: “Well its kind of obvious, its cool, I really like it.”
ii. “I think it’s a good idea, I think it’s a really good idea, having it just for University of Birmingham type thing.”
Figure 4: The public newsfeed is a core feature of Loopd.life that supports both targeted information and serendipitous discoveries. It allows the user flexibility to filter between social, extracurricular and academic content.
Pilot study
Newcastle College
In September 2015, Newcastle College launched a twelve-month pilot with over 100 staff and 3000 students; the intention was to resolve a previous issue with students not reading emails. Table 5 shows the level of engagement (from both staff and students) over the past twelve months. Feedback from staff at Newcastle College emphasises the positive and transformational impact Loopd.life has had on communication.
“Loopd has transformed how we communicate with students and how they communicate with each other. I cannot recommend it more highly. Move over boring VLE, this is the future of modern education.”Deni Chambers, Director of School, Newcastle College
Table 5. Pilot activity summary at Newcastle College
Six months into the Loopd.life pilot, a survey was conducted with 60 staff and 60 students to determine which type of content (documents, web links, media, text or other) users valued most (Figure 5). Both staff and students preferred a multi-media environment, focusing on the inclusion of user-generated video and the ability to link to external media and documents. The perceived value of text content was noticeably lower than that of other forms of information sharing, although the results may be influenced by novelty or excitement over Loopd.life, since education technologies have traditionally been particularly limited in sharing multi-media content. Nevertheless, for the highest level of user engagement, a visually intensive and media-rich platform is ideal.
Figure 5. Survey of 60 staff and 60 students at Newcastle College to determine the highest valued content by users. Each participant was limited to one answer.
A sample of 1000 user-generated posts and events was analysed and categorised to determine the educational benefit of the platform (Figure 6). The largest individual benefit relates to community enrichment (20%), indicating that the platform is most valuable for improving social collaboration. This category is closely followed by academic enrichment (15%). There are, however, a number of smaller benefits such as employability opportunities, cross-pollination of skills and support, information dissemination, achievement recognition and enterprise skill development, all of which individually range between 6% and 9%, but which, when combined in a more generalised categorisation (Table 7), demonstrate that almost half of the user-generated content is enriching with academic benefit.
Figure 6. An approximate categorisation of user-generated content uploaded to Loopd.life at Newcastle College during a twelve-month period involving 100 members of staff and 4000 students.
Table 7. Category analysis of posts and events
Conclusions
Although limited in their scope, responses from the pre-pilots (Chesterfield College, Northampton University and University of Birmingham) and pilot at Newcastle College, suggest that it is possible to use commercially owned software that integrates the pedagogical value of a VLE with the fast and efficient performance of a social media platform. This modern education technology is described by Loopd.life as a Social Learning Environment and integrates into students’ habitual behaviours, whilst effectively disseminating educational content. The pilot at Newcastle College has proven to be appropriately controlled to ensure the productivity and safety of the institution, yet sufficiently empowering to be flexible and engaging for the students.
“I really believe that Loopd.life is an amazing platform and I think could not only transform this college but be upheld as National/International good practice and an educational revolution.” — Tom Bradley, Head of Academic Curriculum, Newcastle College
User feedback during the development of Loopd.life and supporting literature suggest that the main issues with internal university communications are that:
Students are not accessing emails, and do not value their institutional VLE/LMS or use it regularly.
Students can access information more quickly (and therefore with less effort) in social media platforms than they can via a VLE/LMS.
Staff and students value an aesthetically pleasing and visually dynamic platform that supports multi-media content to enhance both social and academic engagement.
To address concerns regarding privacy and intimacy, a platform needs to be closed and accessible only by students of their university.
The speed, accessibility and more intuitive navigability of social media platforms allow communication to flow continuously and hence engagement can become habitual. Academic respondents viewed this as having a beneficial impact on retention and progression.
Multiple communication platforms fragment the dissemination of information and cause organisation difficulties for students.
Students feel that Loopd.life empowers them to choose freely between academic and personal content, creating an engaging medium that sufficiently drives academic benefit, importantly without intruding upon their social life. Due to the spontaneous nature of the pre-pilot studies and the partly subjective categorisation of data from the pilot study, further work is needed to determine the exact benefits of using the platform and to identify more precisely what factors encourage or discourage students from adopting a social learning environment. In addition, a longer-term experiment will help indicate if there is a measurable impact on achievement, retention and progression.
Further Work
Another interesting use to explore is the idea of admitting new students onto Loopd.life before their official start date. This allows an earlier, smoother and more personal induction process, perhaps leading to higher retention rates in the initial stages of university. These experiments have already begun at Newcastle College although it is too early to draw conclusions. A similar concept would be to give alumni access to Loopd.life after graduation and see whether this has a beneficial academic or career-related impact on current students when they leave higher education. Exploring the different ways in which Loopd.life meets the needs of students, staff and institutions will be the focus of additional research over the coming years.
References
1. Caspi, A, and Blau, I (2008) “Social presence in online discussion groups: testing three conceptions and their relations to perceived learning”, Social Psychology of Education, 11:323-346
2. Cheung, C.M.K., Chiu, P-.Y. and Lee, M.K.O. (2010) “Online social networks: Why do students use facebook?” Computers in Human Behavior 27 (2011) 1337-1343
3. Huppe, A. (2011) An exploratory study of students’ use of Facebook and other communication modalities in order to receive student affairs information, MSc thesis, University of North Texas, May 2011
4. Neier, S. and Zayer, L.T. (2015) “Students’ Perceptions and Experiences of Social Media in Higher Education Journal of Marketing Education 2015, Vol. 37(3) 133-143
5. Rourke, L, Anderson, T, Garrison, D R & Archer, W. (1999) “Assessing social presence in asynchronous text-based computer conferencing”. Journal of Distance Education 14 (2) pp. 50-71
6. Tess, P.A. (2013) “The role of social media in higher education classes (real and virtual) — A literature review”, Computers in Human Behaviour 29 (2013) A60-A68
7. Thomas, L. (2013) “Building student engagement and belonging in Higher Education at a time of change.”
Authors | https://medium.com/tablet-academy/uk-pilot-studies-in-the-use-of-a-novel-social-media-platform-across-post-16-education-14229763702 | ['Professor Steve Molyneux'] | 2017-09-04 12:46:24.406000+00:00 | ['Higher Education', 'Social Media', 'Education Technology', 'Education'] |
2,948 | Let’s Add Products in Android for E-Commerce App | Dependencies
To build our application successfully, we need the following dependencies:
Recycler View
Card View
Retrofit
Gson converter
Glide
Copy and paste the dependencies given in the blocks below into the app-level Gradle file and click Sync Now in the top right corner of the IDE to get all the dependencies.
Recycler View makes it easy to efficiently display large sets of data. We supply the data and define how each item looks, and the Recycler View library dynamically creates the elements when they’re needed. As the name implies, Recycler View recycles those individual elements.
implementation 'androidx.recyclerview:recyclerview:1.1.0'
Card View is going to act as the basic building block element for our Recycler View. The Card View contents will be decided by us, and the same block of elements will repeat in our Recycler View.
implementation 'androidx.cardview:cardview:1.0.0'
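To make the relationship between these two dependencies concrete, here is a minimal adapter sketch in Java. Note that the Product model, the item_product layout (a Card View holding the product views) and the view IDs are illustrative assumptions for this sketch, not files from this tutorial.

import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.TextView;
import androidx.annotation.NonNull;
import androidx.recyclerview.widget.RecyclerView;
import java.util.List;

// Minimal adapter: the Recycler View asks it for item views only when they
// are needed and reuses the ViewHolders as the user scrolls.
public class ProductAdapter extends RecyclerView.Adapter<ProductAdapter.ProductHolder> {

    private final List<Product> products; // Product is an assumed model class

    public ProductAdapter(List<Product> products) {
        this.products = products;
    }

    @NonNull
    @Override
    public ProductHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
        // R.layout.item_product is an assumed Card View-based row layout.
        View view = LayoutInflater.from(parent.getContext())
                .inflate(R.layout.item_product, parent, false);
        return new ProductHolder(view);
    }

    @Override
    public void onBindViewHolder(@NonNull ProductHolder holder, int position) {
        // Bind one product's data to the recycled card.
        holder.name.setText(products.get(position).getName());
    }

    @Override
    public int getItemCount() {
        return products.size();
    }

    static class ProductHolder extends RecyclerView.ViewHolder {
        final TextView name;

        ProductHolder(View itemView) {
            super(itemView);
            name = itemView.findViewById(R.id.product_name); // assumed view ID
        }
    }
}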
Retrofit is a REST client that can send requests to and receive responses from our RESTful service. The Gson converter is used to send and receive data as JSON (JavaScript Object Notation).
implementation 'com.squareup.retrofit2:retrofit:2.4.0'
implementation 'com.squareup.retrofit2:converter-gson:2.4.0'
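As a quick illustration of how Retrofit and the Gson converter fit together once synced, here is a hedged sketch; the base URL, the Product fields and the products endpoint are assumptions made for the example, not part of this tutorial's actual backend.

import java.util.List;
import retrofit2.Call;
import retrofit2.Retrofit;
import retrofit2.converter.gson.GsonConverterFactory;
import retrofit2.http.GET;

public class ApiSketch {

    // Hypothetical model; Gson maps JSON fields onto these by name.
    public static class Product {
        public long id;
        public String name;
        public String imageUrl;
    }

    // Hypothetical endpoint definition; the "products" path is an assumption.
    public interface ProductService {
        @GET("products")
        Call<List<Product>> getProducts();
    }

    public static ProductService createService() {
        // The Gson converter turns JSON response bodies into Product objects.
        Retrofit retrofit = new Retrofit.Builder()
                .baseUrl("https://example.com/api/") // assumed base URL
                .addConverterFactory(GsonConverterFactory.create())
                .build();
        return retrofit.create(ProductService.class);
    }
}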
With Glide, you can load and display media from many different sources, such as remote servers or the local file system. This dependency is used to display a product’s image from its URL.
implementation 'com.github.bumptech.glide:glide:4.11.0'
annotationProcessor 'com.github.bumptech.glide:compiler:4.11.0'
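And a small sketch of the typical Glide call once the dependency is in place — the image URL parameter and the placeholder drawable are assumed to exist in our project rather than taken from this tutorial.

import android.content.Context;
import android.widget.ImageView;
import com.bumptech.glide.Glide;

class ProductImageLoader {
    // Load a product image from its URL into the card's ImageView; the
    // placeholder keeps the card from looking empty while the download runs.
    static void load(Context context, String imageUrl, ImageView target) {
        Glide.with(context)
             .load(imageUrl)
             .placeholder(R.drawable.placeholder) // assumed drawable resource
             .into(target);
    }
}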
Now we have added all the required dependencies for the product. | https://medium.com/webtutsplus/lets-add-products-in-android-for-e-commerce-app-b8468e055001 | ['Nil Madhab'] | 2021-01-03 15:02:27.202000+00:00 | ['Ecommerce Web Development', 'Android', 'Technology Hits', 'AndroidDev', 'Mobile App Development'] |
2,949 | The Evolution of ARK’s Deployer: Form and Function Coming Together | In a previous post, we took an in-depth look at the new Deployer. However, to truly understand how important this update is, we need to take a step back and look at the changes Deployer has gone through in recent years. From the earliest days of Deployer’s development, the goal was to have Deployer be the embodiment of ARK’s slogan — Point, Click, Blockchain. Let us show you why this revolutionary product is more than just a fresh coat of paint.
Want to be the first to test the all-new Deployer once it comes out? Sign up for our mailing list at https://Deployer.io
The First Steps Towards Blockchain Deployment
The first version of the Deployer was a lightweight deployment script that allowed developers to create their own ARK-based blockchains. Ideally, this script was for developers, hackers and enthusiasts to set up and explore a blockchain-based on ARK technology. Whether a developer wanted to use the V1 Deployer to learn more about blockchain, build a product or launch their own blockchain, we felt that our technology provided the flexibility necessary to meet those expectations. Additionally, this made the Deployer an ideal tool for hackathons where participants could set up their own custom blockchain and get to work. While practical, this was still a long way away from our vision of Point, Click, Blockchain.
This version of Deployer still required a lot of heavy lifting on the user’s side. In terms of where V1 was in the ARK timeline of products, this was prior to ARK Core V2, so a lot of the improvements brought about by ARK Core V2 were not present in Deployer V1. Additionally, Deployer V1 could only deploy one blockchain network at a time — for example, a mainnet or a testnet.
We knew that in order to make Deployer into a robust solution within the industry, we needed to simplify a lot of the processes.
Point, Click, Almost Blockchain
Deployer V2 was a major step forward for simple blockchain deployment. This new version of the Deployer, which launched in early 2019, had a graphical user interface (“GUI”), a documentation hub and a host of new features capitalizing on ARK Core V2.
The GUI laid out a three-step process for blockchain deployment: Prepare, Customize and Deploy. The prepare step had users read through documentation in order to understand network requirements and blockchain parameters. This step also involved initializing the necessary servers, connecting to the servers, preparing them and lastly creating a GitHub repository. After this was completed, users would move on to the customize step where they would name their blockchain, choose a ticker symbol and configure blockchain parameters such as the number of forgers and the block time.
Having a GUI and detailed documentation really helped evolve the Deployer product into a more useful tool for developers and average users. Yet, the process wasn’t optimized. When it came to the deploy step, there were a lot of sub-steps required in the process. Whether it was adding network peers or importing genesis addresses and adding forgers, the process was prone to user error. We knew that we needed to focus more on what happens after you customize your blockchain. Therefore, we went back to the drawing board.
More Than Just a Fresh Coat of Paint
With the newest version of Deployer, users who recall the previous versions will understand just how far we have come in terms of blockchain deployment. This isn’t just a sleek new user interface and design; the entire process has been overhauled and simplified.
A substantial part of the team’s energy and focus went into what happens with Deployer once you customize your blockchain. The only part of the process that requires you to work outside of Deployer involves you creating an SSH key for your blockchain. Aside from that, the entire process is streamlined. Here are just some of the things that users no longer need to do within the new Deployer:
No need to manually create servers via a server provider
No need to manually configure servers that were created via server provider
No need to manually run install scripts
No need to create your own GitHub repository
After customizing your blockchain, Deployer will provide you with server providers and source providers. You can choose from the available list and Deployer does the rest. Users no longer need to manually run installation scripts and can just focus on deploying a blockchain and getting to work. From setting the name of the blockchain to getting the genesis nodes up and running, Deployer requires very little input from the user.
Want to be the first to know when the Deployer Beta goes live? Sign up at: https://Deployer.io
In essence, Deployer provides more than just a better user experience. We’ve created the simplest way to deploy a blockchain. If you want to learn more about the new Deployer, we had a large roundtable discussion with ARK Team members below:
What’s Next?
If the previous versions of Deployer were focused on customizing your blockchain and the new version is about deploying your own, we recognize that there is still a gap in regard to what happens once your chain is live.
Luckily, we have already been hard at work on creating solutions and updates to Deployer that will make managing your live network post-launch easier than ever. Whether it’s detailed documentation, running a blockchain, distributing tokens or managing nodes — we have you covered. We will be covering these aspects in our next article in the Deployer series. Make sure to sign up for beta access as we continue making Deployer the simplest way to deploy a blockchain. | https://medium.com/ark-io/the-evolution-of-arks-deployer-form-and-function-coming-together-1203e2717af6 | [] | 2020-09-08 18:13:48.658000+00:00 | ['Cryptocurrency', 'Blockchain Development', 'Blockchain', 'Blockchain Technology', 'Crypto'] |
2,950 | Six observations on the future of K12 edtech | It’s hard to see what’s coming for K12 schools and districts around the bend, but it doesn’t mean we don’t try.
As a longtime education entrepreneur and now a K12 impact investor, I’ve been around long enough to see many trends in education come and go. As political winds change, initiatives are born and then die. Just like everyone else, I don’t have a crystal ball, but I do have a point of view about what we’re likely to see more of in the future. I like to say that forecasts are usually worth what you pay for them, which is little to nothing. So instead of positioning this as a forecast, it’s a set of observations about what’s on the rise in K12 education and what I believe will become a bigger and bigger part of the education technology landscape. With that, here are six observations about what lies ahead for K12 edtech entrepreneurs.
1. More successful ventures will sell through and not to K12 schools and districts.
It’s no secret that many schools and districts are strapped for funds. Funding formulas like the one in my home state of Colorado keep schools and districts from benefiting from the growth in our economy in proportion to how others benefit. As a result, schools have to do more with less, which opens doors for technologies that can help teachers and administrators do their jobs more efficiently and effectively.
One way to do this is to offer schools and districts technologies that cost them little to nothing and to tap into sustainable revenues from other sources, such as parents. In this B2B2C model, entrepreneurs are selling through and not to schools by selling to parents. I’m excited to see this model in use by companies like Prodigy and Epic Learning, which are scaling rapidly. These startups offer a freemium version of the product which has no cost to teachers and students for school use, but then monetize by selling home subscriptions to parents and/or students who want to unlock premium features. My kids were so excited about using Prodigy that they even paid for their own premium accounts, so they’re doing something right.
2. Consumer-grade technologies will become a greater part of the edtech ecosystem.
Closely related to the first observation is the idea that more and more consumer-grade technologies will make their way into schools. For one, users and buyers within schools are becoming more sophisticated and expect better product experiences from the education technology they acquire.
For another, these same users and buyers are inundated with technologies and don’t have much time to spend learning a new technology. They have less and less patience for tools that require a user manual or long professional development sessions to get up and running.
Successful technologies will need to be so intuitive that one can start using them without much, if any, training. This is a critical aspect of successful consumer technologies today and is becoming a more important part of any edtech offering to a school. So, it’s with good reason that we’ll see more consumer-grade tools in edtech in the future. Tools like Pear Deck, Quizlet, and Kahoot! are examples of what we’ll see more of in the future.
3. Evidence of product efficacy will be in greater demand by educators and administrators.
When I was a VP of Sales serving schools and districts, the school leaders and teachers I spoke with typically asked for social proof before making decisions. Naturally, they wanted to know who else we worked with who was similar to them and whether or not those clients liked our software. This social proof still matters today, but the ability to show product efficacy is becoming increasingly important. For entrepreneurs, this means that it’s time to present authentic data about how your product works, who it works for, and the recommended dosage for teachers and students, especially if it’s a classroom tool.
The evidence-based requirements of the Every Student Succeeds Act (ESSA) will accelerate the movement towards demonstrating product efficacy. This process can become very expensive, with randomized control trials costing $250K or more. Fortunately, there are rapid cycle evaluation tools like LearnPlatform, which make it much easier and more cost-effective to run meaningful product trials. LearnPlatform is a wealth of evidence-based information about thousands of edtech products. If you haven’t made strides towards proving the efficacy of your products, now is the time.
4. Entrepreneurs focused on equity will get more and more traction.
As our nation deals with the consequences of rising income inequality, entrepreneurs focused on equity will experience success. The growing gulf in income is simply unsustainable as more and more people struggle to get by. I’ve long believed education is the greatest tool we have to raise people up and out of poverty and that access to equitable education is only possible with a generation of entrepreneurs focused on creating solutions that raise all learners up. Socially responsible investing is hot right now, so that should mean more access to capital over time for education entrepreneurs focused on equity. Genius Plaza is an example of company that’s combining a focus on equity in K12 with rapid scale.
5. Innovations like AR/VR, artificial intelligence, and big data will find their rightful places in education.
K12 education is not traditionally a sector that adopts new technology quickly, and this pattern also holds true for new innovations such as AR/VR, artificial intelligence, and big data. While these technologies have some level of adoption, it’s too soon to buy into statements that these technologies will “transform” education “very soon” as some have claimed. We’ve heard similar claims before about earlier generations of technology that didn’t pan out. That being said, I’m bullish on these particular technologies finding their rightful place in K12. To me, some of the most promising technologies are those that enable learning experiences that are simply unattainable with existing methods because of cost or time.
Labster is a great example of this. They enable learning experiences for students in virtual labs that can cost $1 million or more to create in an analog version. Too few K12 students have access to expensive labs and equipment to conduct hands-on science experiments, but Labster makes a simulated lab accessible and affordable with the click of a mouse.
6. Tools that facilitate collaboration and teamwork instead of independent use will be preferred.
As broadband has become nearly universal in schools and devices have dropped dramatically in price, more schools are implementing 1:1 computer or tablet environments where each student has a device. The first generation of technologies for these devices were designed for independent use. Students worked with their device and interacted with the technology by themselves. For example, there are numerous literacy and math platforms that enable students to work through content at their own pace and level independently. These tools aren’t going to go away, but their widespread use has educators, and parents concerned that students are spending too much time staring at screens and working alone. With this context in mind, I see growing demand from school leaders for tools that enable collaboration and teamwork with the aide of technology.
ThinkCERCA is an excellent example of a collaborative learning platform in which students work together to learn vital argumentative writing and critical thinking skills. I’m confident we’ll see more and more of these collaborative platforms as educators demand them.
Where do you see the future of K12 edtech going? I’d love to hear your thoughts. | https://medium.com/age-of-awareness/six-observations-on-the-future-of-k12-edtech-e1e41e64d190 | ['Graham Forman'] | 2019-05-13 20:05:27.874000+00:00 | ['K12', 'Startup', 'Edtech', 'Education Technology', 'Education'] |
2,951 | Is Your Computer Running Slow? | PC Optimizers USA is your best help when your computer stops working and a quick computer repair is required.
You do not want to take your computer to a shop to wait 3–4 weeks for a repair?
Do you need your computer daily and need a quick fix?
Call us and we will immediately (within 24 hours and on request even on Sundays and holidays) analyze and will fix the problem remotely, regardless which operating system It does not matter which operating system your computer works with.
✓All Operating Systems
✓Affordable, Reliable and Fast
✓Competent Computer
Technicians
✓Computer Diagnostics
✓Computer Repairs
✓Data Backups
✓Data RecoveryPC Optimizers USA
Online Remote Computer Repair
Phone +1(571) 554–6008
Web www.pcoptimizersus.com
Email info@pcoptimizersus.com
Location: 21776 Harroun Ter, Ashburn VA 20147 | https://medium.com/@pcoptimizer437/is-your-computer-running-slow-ad63d9cdea64 | ['Pc Optimizers Usa'] | 2020-12-09 15:16:24.024000+00:00 | ['Computer Repair', 'Computers', 'Information Technology', 'Slowcomputer', 'Slow Computer Support'] |
2,952 | We Need To Talk About This M1 Mac Mini: My First Impressions | The new Macs with M1 processors are making headlines in the technology press, and with good reason: Apple has surprised locals and strangers with the bet materialized in its new M1 chips, and among them, the most often talked about is MacBook Air and MacBook Pro. Energy efficiency, mobility performance, battery … there are many more points to consider in a laptop and that is why they have been placed at the forefront of many media.
We have already seen the transparency and simplicity of adapting all the applications in our first contact with laptops, so what difference is there with this Mac mini? We are facing the first desktop with an Apple Silicon chip, to which we already have to connect a monitor, speakers, and other accessories separately.
The Mac mini’s box and its unboxing leave no room for doubt: as with laptops, Apple doesn’t label its switch to proprietary chips at all. In fact, we don’t even have the memory and SSD storage labels, if we want to read the details of the machine we have to look for the fine print. It details that we have a Mac mini “with 8 CPUs, 8 GPUs, 256 GB of storage, and 16 GB of RAM.” Nothing else.
Connecting all the accessories has not been a problem for me. The initial setup has been done surprisingly fast, taking less than five minutes from when I first turned on the Mac mini until the macOS Big Sur desktop appeared. The only possible bump that we can find with this Mac mini is that we will need a wired keyboard to be able to do the initial configuration, something that I have been able to solve easily with my USB mechanical keyboard.
By default, macOS applies the retina effect at 4K resolution, turning it into a 1080p monitor. Personally, I have preferred to scale that resolution somewhere between that 1080p (too big for 27 inches) and the native 4K resolution (too small): I have kept the 2560x1440p resolution with which I already worked on the 27 inches of my iMac, and Thanks to the 4K resolution I get anti-aliasing that improves (and quite a lot) the general quality of the image.
With the general use of the system, I have noticed, and I say this without hesitation, a noticeable increase in the overall system. Intel applications run without us even realizing that they are emulated under the Rosetta layer, and applications already compiled for the M1 chip launch instantly, with the snap of the fingers.
It does not matter what application we are talking about, whether it is Twitter or Pixelmator Pro: both start so fast that it is absurd to time it. I am not one of those who is going to always demand maximum power from this chip, but it is clear to me that I have made a leap in performance as I have rarely experienced. I’ll break down the GeekBench results.
GeekBench Results Mac Mini M1 Chip 2020. Source: GeekBench
In Geekbench we have slightly better results than the MacBook Air and MacBook Pro, probably thanks to the ventilation that the device has. Although I have to say that I have not heard absolutely any noise from that fan during the tests, the Mac mini has endured them without messing up. The only effect I have noticed has been that the computer has warmed slightly in its rear area, very little. During the rest of the activity, such as while writing this article, the computer has been cold.
In the absence of working more time with it and while we wait for those new iMac, I do not hesitate for a second to say that this Mac mini is the almost-perfect desktop for any general user who works at a table many hours a day. It has envelope power even for those who dare to edit photo and video, so we could even recommend it for the small professional.
The only question I have left is: if this Mac mini is an entry model, what does the future hold? What will Macs be like with chips that prioritize performance over efficiency? The transition to Apple Silicon is just the beginning and the M1 chip is just a glimpse into the future.
Read more Medium Stories. | https://medium.com/macoclock/we-need-to-talk-about-this-m1-mac-mini-my-first-impressions-a2eb05780ca6 | [] | 2020-11-27 06:13:31.001000+00:00 | ['Mac', 'SEO', 'Technology', 'Future', 'Apple'] |
2,953 | Brigade Leaders Share Free Tools for Projects | In order of presentation:
A quick overview of resources that Code for America offers
(starts at 16:55)
On using Mapbox:
On using CARTO:
Kathryn Killebrew (Code for Philly) shows how they used CARTO to help the Delaware Valley Regional Planning Commission better plan bicycling infrastructure with CyclePhilly. (46:24)
(Code for Philly) shows how they used CARTO to help the Delaware Valley Regional Planning Commission better plan bicycling infrastructure with CyclePhilly. (46:24) Will Skora (Open Cleveland) shows how to use the CARTO dashboard to quickly visualize parking lot geometry and slice and dice your data with ease. (53:50)
(Open Cleveland) shows how to use the CARTO dashboard to quickly visualize parking lot geometry and slice and dice your data with ease. (53:50) Here’s how to use CARTO in your Brigade project.
On using Heroku:
Luigi Ray-Montanez (Code for Atlanta) on what Heroku is, if it’s right for you, and how to quickly get set up. (1:07:10) (View Presentation)
(Code for Atlanta) on what Heroku is, if it’s right for you, and how to quickly get set up. (1:07:10) (View Presentation) Jim Van Fleet (Code for Charlotte) talks about how CityGram has sent over 10 million notifications to inform residents about the nearby happenings in their city with Heroku. (1:17:00) (View Presentation)
(Code for Charlotte) talks about how CityGram has sent over 10 million notifications to inform residents about the nearby happenings in their city with Heroku. (1:17:00) (View Presentation) Here’s how to use Heroku in your Brigade project.
Code for America is happy to be able to offer these products for Brigades to use for free though the generosity of the sponsoring companies. So, thanks!
If you know of other products that would be helpful to add to the collection of available tools, please reach out to inkinds@codeforamerica.org!
The next workshop is scheduled for Monday, June 11 at 4:00pm PT / 7:00pm ET. The topic will be announced around the beginning of June. Hope to see you there!
A huge thanks to all the presenters for taking time to share their expertise with the community, and Veronica Young for helping plan and organize the workshop. | https://medium.com/code-for-america/brigade-leaders-share-free-tools-for-projects-ac19f6f0b91d | ['Tom Dooner'] | 2018-05-17 20:48:16.505000+00:00 | ['Code For America', 'Civictech', 'Civic Hacking', 'Civic Technology'] |
2,954 | These outdoor smart home security cameras are a must for your home | Your home should be a safe haven. But if you sometimes feel fearful alone indoors or skeptical about leaving your property unattended, a smart home security camera can offer repose. Discover our top picks in today’s Daily Digest.
That feeling of safety is priceless, and we have just the gadgets that can combat feelings of fear at home: outdoor smart home security cameras. Designed to monitor happenings surrounding your property, smart cameras detect suspicious movement and alert you of the activity. It’s like having a personal security guard keep tabs on your land 24/7, and we guarantee that you’ll sleep easier at night with one installed.
Related: The best Alexa gadgets to buy in 2021
But these products’ purpose isn’t solely to combat fear; they’re also convenient for communicating with guests at the door without in-person interaction. Some luxury models include Alexa Greetings to leave a scheduled message for a delivery person when you’re not home. Feel more settled in your home with our selection of smart home security cameras to act as your eyes and ears.
1. The Nooie Smart Cam Doorbell features intelligent human detection and an antitheft locking mechanism. It’s one of our favorite outdoor smart home security cameras for peace of mind indoors.
Upgrade your home’s security with the Nooie Smart Cam Doorbell. Use it for both convenience and peace of mind at home. It provides two-way audio to communicate with guests and delivery people. It also includes Live View, allowing you to see who’s at your door in real time. Moreover, it uses smart technology to detect humans from objects, minimizing unwanted notifications to your phone.
Order this smart doorbell for $149.99.
2. Weatherproof and wire-free, the Swann Xtreem is a smart home camera with a 6-month battery life and a 110-degree FOV.
Allow the Swann Xtreem to be another pair of eyes watching over your property when you’re on vacation. It boasts a generous 1080p resolution camera to produce high-quality footage. In fact, you can even see footage clearly in the dark up to 26 feet with the powerful infrared night vision.
Purchase this smart security camera for $179.99.
3. Designed with a 180-degree FOV, the Arlo Essential gives full coverage. It even works with Amazon Alexa and Google Assistant.
See more at your front door with the Arlo Essential. Featuring Object Detection, it recognizes the difference between a person and an object to reduce nuisance notifications. Additionally, with Intelligent Alerts, you can quickly contact emergency services via the app if you encounter suspicious or dangerous activity.
Purchase this smart camera for $179.99.
4. The WUUK Smart Antitheft Doorbell features a voice gender modifier to make children and women feel more secure when home alone.
Don’t fear communicating with someone at the door when you have the WUUK Smart Antitheft Doorbell. Its unique voice modifier changes the tone of your voice to hide your identity. Moreover, its adjustable motion sensors alert you when it detects motion, helping to protect your home right from your smartphone.
Order this gadget from Amazon for $89.99.
5. Featuring ultrabright LEDs, the Ring Floodlight Cam Wired Pro emits 2,000 lumens of brightness, so you can see anyone outside.
Additionally, the Ring Floodlight Cam Wired Pro features 1,080p HDR video resolution and color night vision for superb quality day and night. It also includes 3D Motion Detection and Bird’s Eye View features for maximum protection and convenience. And, with dual-band Wi-Fi connectivity and a weather-resistant design, it gives you years of use.
Order this smart security camera for $249.99.
6. Warn off intruders with the Wyze Cam v3. It features a siren and an IP65 waterproof rating for security and durability.
Capture moments outdoors and indoors with the Wyze Cam v3. This security camera features 1,080p resolution to provide clear footage during the day and night. In fact, it delivers colored night vision. Additionally, the Starlight ISP reduces noise from low-light conditions for clear visibility at all times.
Purchase this outdoor camera for $35.98.
7. Keep tabs on all your doors, windows, and hallways with the Kangaroo Motion + Entry Sensor. It’s a security device that provides flexible and customizable home monitoring for peace of mind.
Choose to monitor just motion or opening and closing of doors and windows with the Kangaroo Motion + Entry Sensor. It’s also designed to prevent false alarms when you’re at home. Moreover, this device detects motion up to 20 feet away and offers 110-degree FOV motion sensing. It even includes a built-in pet rejection feature for animals.
Purchase this home security device for $29.99.
8. The Ring Video Doorbell Pro 2 provides Head-to-Toe HD+ Video, 3D Motion Detection, and built-in Alexa Greetings.
Keep all the convenience and safety features you need with the Ring Video Doorbell Pro 2. This smart home gadget ensures you never miss an important moment with Head-to-Toe HD+ Video in 1,536p. Also, you can receive an aerial view of your yard to observe motion from a new perspective.
Order this smart doorbell for $249.99.
9. Answer the door from anywhere with the Ring Video Doorbell Wired. It includes the brand’s HD Video and Two-Way Talk features.
Take some of the stress out of your day with the Ring Video Doorbell Wired. This smart doorbell provides advanced motion detection and sends real-time alerts to your smartphone, Alexa device, or Ring Charm. Moreover, you can adjust motion settings to filter out motion on a busy street.
Purchase this Ring doorbell for $59.99.
10. Experience high-quality 4K video with the Arlo Ultra 2 Spotlight Security Camera. It lets you zoom in on objects with immense clarity.
Finally, the Arlo Ultra 2 Spotlight Security Camera delivers an ultra-wide viewing angle, six-month battery life, and noise-canceling two-way audio. You can even see clearly in the dark with color night vision. Furthermore, this smart camera works with Apple HomeKit, Amazon Alexa, and Google Assistant for convenient operation.
Order this Arlo camera for a reduced price of $249.99.
Make a house feel like a home with these outdoor smart home security cameras. They’re invaluable to you and your family’s safety and mental wellbeing. What are your go-to security gadgets? Let us know in the comments.
Want more tech news, reviews, and guides from Gadget Flow? Follow us on Apple News, Google News, Feedly, and Flipboard. If you use Flipboard, you should definitely check out our Curated Stories. We publish three new stories every day, so make sure to follow us to stay updated!
The Gadget Flow Daily Digest highlights and explores the latest in tech trends to keep you informed. Want it straight to your inbox? Subscribe ➜ | https://medium.com/the-gadget-flow/these-outdoor-smart-home-security-cameras-are-a-must-for-your-home-1f3242c41111 | ['Gadget Flow'] | 2021-07-14 18:27:43.244000+00:00 | ['Tech', 'Technology', 'Security Camera', 'Gadgets', 'Smart Home'] |
2,955 | Top Tech Skills to Master in 2022: Coursera Report, Pt. 1 | Top Tech Skills to Master in 2022: Coursera Report, Pt. 1
Global digitalization is here: the tech industry grows so fast that new solutions, which make every sphere of our life much easier, are released weekly, and nearly all jobs have digital elements now. No job is exempt from digitalization and the COVID-19 pandemic accelerated this transformation in nearly every industry. SimbirSoft Nov 28, 2021·4 min read
Coursera, an online course provider, published the Global Skills Index 2021, which analyzed the proficiency of its learners across 10 industries in a set of 26 skills. We would like to highlight the impact on the technology domain and how outsourcing can help companies find the people who possess the skills needed for a project.
According to Coursera’s research, each industry reported an acceleration in the need for technology skills to account for the lack of physical interaction induced by the pandemic. These are changes that appear unlikely to recede as the pandemic does; rather, companies will continue to move forward with digitalization at an accelerated pace.
To maintain the transformation velocity, specific technology and data skills are needed across industries — namely cloud computing, cybersecurity, data analysis, and software development. Currently, Coursera’s analysis of trending skills in the Technology and Data Science domains shows that people are highly interested in learning software engineering and data analysis skills — look at the picture below.
Source: coursera.org
Technology Industry Not Immune From COVID-19
This year, the overall skills proficiency ranking of the technology industry fell from number one in 2020 to number six. Last year, Coursera predicted that tech companies would need to remain deeply committed to addressing the continuously shrinking half-life of skills within their talent pool. That idea, along with the stress of a global pandemic and increased learning efforts in other industries, affected the skills ranking of workers in technology companies. In a pandemic that saw significant unemployment growth, skills shortages persist. It’s increasingly difficult for employers to find qualified candidates even though there is a growing number of applicants for many of the most-needed positions.
Technology industry learners do excel in technology skills. Though the number-one ranking from last year slipped to third this year, skills proficiencies remain cutting-edge overall with cloud computing (100 percentile), software engineering (89 percentile), and computer networking (78 percentile) topping the list. Technology skills that over-index for the technology industry are distributed computing architecture, software testing, network architecture, software architecture, computer architecture, software engineering, and operating systems. These are the same skills that software developers, quality assurance analysts, and testers will need. According to the U.S. Bureau of Labor Statistics, employment for those jobs will grow 22% by 2029.
Source: coursera.org
Outsource Everything
The answer is on the surface: outsourcing is a good option that can help businesses find the people who possess the skills needed for a project. Outsourcing the project or bringing the outsourced engineers to work on-site allows in-house employees to learn from the contractors while ensuring that the job is done with the necessary level of skill.
In addition to giving access to the top talent, outsourcing can benefit business in many other ways which explain why companies of all sizes look outside their company for their staffing needs. Here are some of them:
Rapid growth. Having more staff on hand may seem promising, but it results in expenses that go into hiring more full-time employees. In the end, it may limit the potential for growth.
Having more staff on hand may seem promising, but it results in expenses that go into hiring more full-time employees. In the end, it may limit the potential for growth. Flexibility. Outsourcing can help the business make it through a busy season without hiring in-house employees: after a big project is done, you can easily switch back to an entirely in-house team, and a good contractor can help you with scaling your outsourcing needs.
Outsourcing can help the business make it through a busy season without hiring in-house employees: after a big project is done, you can easily switch back to an entirely in-house team, and a good contractor can help you with scaling your outsourcing needs. Maintaining focus. Outsourcing secondary tasks allow your in-house team members to focus on internal tasks, helping your business run more efficiently.
There are several formats of outsourcing that can help businesses get access to the best specialists in the market. A company can strengthen its development team by inviting experienced outsourced specialists to take part in the project. The other option — a dedicated team, formed by a contractor to work under the client’s management. Thirdly — end-to-end software development: the outsourcer organizes a full cycle of work, including development, management, testing, and implementation.
In Conclusion
To sum it up, Coursera’s report reviews the state of digital transformation in each industry and how the pandemic has affected it. This information can help companies determine how they can best take advantage of opportunities to reskill and upskill workers to increase innovation and gain a competitive advantage. And if you are looking for a reliable IT outsourcing contractor or want to strengthen your in-house development team — contact us!
The full research was originally published on coursera.org | https://medium.com/simbirsoft/top-tech-skills-to-master-in-2022-coursera-report-f578bbbfbc26 | [] | 2021-12-10 16:00:11.955000+00:00 | ['Technology', 'Software Development', 'Coursera', 'Outsourcing', 'Tech'] |
2,956 | The Red Queen’s race | The Red Queen’s race
In Lewis Carroll’s Through the Looking-Glass, The Red Queen’s race is an event where The Red Queen and Alice are frantically running but don’t move. This is the exchange they have while out of breath afterwards.
“Well, in our country,” said Alice, still panting a little, “you’d generally get to somewhere else — if you run very fast for a long time, as we’ve been doing.” “A slow sort of country!” said the Queen. “Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!”
In evolutionary biology, the red queen’s race is used to illustrate that sexual reproduction and the resulting genetic recombination may be just enough to allow individuals of a certain species to adapt to changes in their environment. More importantly however, it is also used in environmental sociology by Allan Schnaiberg’s idea of the Treadmill of Production. In this example producers are continually driven to accumulate capital and expand the market in an effort to maintain relative economic and social position. Much like an evolutionary game-theoretic biological system.
These evolutionary forces dictating the direction of evolution are well understood. Utilising statistics game-theoretic models are able to predict the spread of a genome, they conclude that,
Targeted experiments based on our computational results could further help assess the BQ [Black Queen] Hypothesis.
The Black Queen
The Black Queen hypothesis is based on the card game Hearts. The significant rule in the game for this analogy is that the queen of spades, which must end up in a player’s deck, carries a very large penalty. The biological theory is that it can be beneficial for a cell to lose vital functions (i.e., losing the ability to process Sulpha) if the surrounding cells are able to fulfil that function for them. The main driver for this would be that the individual cells become more metabolically efficient as they no longer have to provide the function for themselves, allowing cells to specialize. There are again parallels here with business: if the KPIs (Key Performance Indicators) in a business model are correctly identified, it would stand to reason that these metrics can be used within similar biological models described above. The forces acting on both nature and businesses are measurable, thus can be modelled and tested.
Equal and opposite
Using the kind of thinking from above we maybe able to understand the secret to Bitcoin’s success. Why is Bitcoin able to resist attacks and what are the mechanisms behind that resistance? To answer this question we must look in detail at Bitcoin’s genome or the technical details of the way Bitcoin has been constructed.
The Bitcoin network, made up of miners, takes a snapshot of time roughly every 10 minutes via the probabilistic Proof of Work (POW) mechanism. Companies that decide to take on this process wherein they reward themselves every time they produce a valid snapshot are known as miners. This reward includes a subsidy and transaction fees. In order to validate other miners are worthy of their reward, each miner checks their competitors’ attempt at solving the proof of work by observing the blocks they produce. Other miners will only accept their work if it contains no double spends and follows a given rule set. It is this incentive mechanism that guards the network against would-be attackers that wish to double spend. Miners are encouraged to propagate their work as fast as possible to ensure their effort, money and time is not wasted on invalid blocks due to a competitor’s block orphaning theirs.
The fact that miners must follow the longest valid proof-of-work chain creates a balanced incentive mechanism. Not only to propagate the block being worked on top of as quickly as possible but also to validate incoming blocks as quickly as possible. The transition from one block to the next block is an important one, it must be done both quickly and with managed risk. As we know, if a miner begins working on top of an invalid block, the work is wasted. Yet to ensure that they are not left behind, work must be commenced on a valid block as soon as possible. This may mean that work begins before all aspects of an incoming block are validated.
The image illustrates this as a free body force diagram where miner ‘A’ has found a block and is positively incentivised (F1) to propagate their block as soon as possible. Miner ‘B’ who is receiving the block is positively incentivised (F2) to receive and validate the block as soon as possible. The weakening of either incentive would be a detriment to the system as a whole. The dictum to assume that everyone is always trying to attack is one that miners should not take lightly. One mistake could cost them everything that their business has been built upon. This financial force could be seen as a Key Performance Indicator of the Bitcoin network.
A seat at the table
Imagine a party of miners sitting at a table. A miner that is not sitting directly at the table, and thus not well connected, collects less revenue through mining fees due to not receiving transactions in a timely manner from other miners. They also experience more orphans due to not being able to communicate the block fast enough. All of which results in a loss of revenue for that individual miner and increased revenue for all other miners.
This is an evolutionary type mechanism, described by both the red and black queen hypothesis, just the same as in biology or in economic capitalism. This is the economic effect that determines which miners will be able to compete to find blocks; if they produce too many blocks that are orphaned they will not be able to make a profit and will find that they lose their seat at the table. Once we understand what mechanisms cause miners to be tightly connected we can understand the core technical and economic properties of the system.
Propagation
Both transactions and blocks are broadcast on a best effort basis. The exact mechanism used to propagate the information is not relevant to any of the core design on the Bitcoin protocol. However, it can have an effect on the core protocol if adjustments are made to Bitcoins economic protocol that might allow propagation to become more efficient, however the mechanisms proposed to increase block or transaction propagation time may not even achieve that. If propagation times are given as a reason to alter the Bitcoin protocol we must be very aware of how exactly this is achieved if anyone is asking for consensus changes primarily because the networking protocol should never need consensus altering changes. If a miner elects to use a different networking protocol and others cannot connect to them, this miner isolates themselves from the other miners.
Current and proposed methods of propagation are beyond the scope of this series as they are not protected by a consensus mechanism. It is much more like a language spoken between miners but has no effect on the way they process information they are given. Each miner can agree on different languages between themselves yet can work on the same protocol. There is no risk of one miner creating a block that is seen as invalid by some and valid by others so long as each miner has all the same information to complete the block.
Validation
Transactions
As we are going over the validation process for blocks it is worth mentioning the validation process for transactions. As soon as a new transaction is seen on the network a number of conditions need to be met for it to be accepted and thus enter the blockchain. These conditions involve a verification that the signatures are valid, the input is greater than the output, unless it is a coinbase transaction in which case there is no input, and there is a valid parent transaction. If any of these conditions are not met the transaction is rejected by being flagged, kept in memory as an incident and an alert is propagated throughout the network. Keeping the transaction in memory and propagating an alert is not vital however, it may help miners spot a pattern in malicious behaviour. There is also a loop that can be used to control a miners risk level to attacks, if there is a very large transaction chain that a miner does not have and is given the final transaction first a miner may find themselves continuously requesting data from their peers. The ‘j’ parameter will allow a miner to stop this loop and black list a peer that is behaving maliciously.
Blocks
The miner that is more connected and can produce blocks that are easiest to validate will find themselves at an advantage due to lower orphan rates, which again, cost the miners in revenue and profit. The opposite side of this is that the miner that is able to validate blocks the fastest will also have an advantage. Above is a simplified process which miners may choose to follow to ensure that they are on the longest valid chain for as long as possible.
A rule of this process is that the first valid block received is the one worked on. That means the block that validates first is the one that is to be considered part of the valid chain. Therefore if a block takes too long to download or validate the miner either reverts back to the previously saved block, or transitions to a competing block that is found to be valid first. One may notice that only one loop exists within this flow chart. It is this loop that miners can use to manage their risk level by setting the value of ‘k’. This is similar to the ‘j’ parameter used to protect against a DDoS attack for transactions. However, it is much more important. If this value is set incorrectly it is not just a transaction fee that is missed, but an entire block reward. If other miners accept and build upon the block just rejected by the miner under discussion, then it becomes the longest chain and work has been wasted.
It is also worth mentioning that a miner cannot produce a block larger than one that they are not able to validate as part of the building process is the act of validation. Only a more efficient or more connected miner may be able to produce a block that others cannot produce in the given time between blocks. These more efficient and resilient miners can faster adapt changing market demand for block space and are thus more competitive. They therefore may build blocks in accordance with a competitive economic strategy, in which a miner produces a large block that 51% of the network can validate whilst the other 49% find difficult to and thus fall behind.
Where are we going?
In this section we have outlined the validation process and identified two potential Key Performance Indicators (KPI) of the mining business. This KPI is a risk mitigation factor which will directly affect their profits and therefore chances of survival.
While this knowledge is not new or revolutionary, it is the underpinnings of how Bitcoin functions. Without understanding this it is difficult to have constructive conversations about the direction of the protocol. We are laying the groundwork for further more complex discussions. One may also notice that the processes above, although they are linear, allow transaction validation and tree construction to be parallelised. Chained transactions can be validated by a single thread, and the merkle tree can be built in sections. This will also be covered in more detail in the next block. | https://medium.com/two-hop-ventures/the-red-queen-8d0844aa5a20 | ['Alex Fauvel'] | 2018-10-25 14:09:23.395000+00:00 | ['Blockchain', 'Cryptocurrency', 'Fintech', 'Technology', 'Bitcoin'] |
2,957 | The Show Must Go On | Entertainment Industry Techs Continue With New Purpose
Their job is to be invisible, wear all black, and work in the dark, so nobody ever really thinks how many people were left.
They build entire towns in three days time for one night of amusement, the lights go on because of the work done in the dark. And while everyone is entertained, the invisible crew sleeps.
This is the life of an IT roadie, ruthless jokesters that jab at each other like family. They spend exorbitant amounts of time together, swarming fields and stadiums, March through September. The crew has a running joke that they are more of a disaster for towns than entertainment. But similar to stories heard far too often this last year, the tech family had to fight to find work and stay together.
The entertainment industry is much larger than the talent on stage, it can be easy to forget about the machine when you never see it run. That’s just how fluid the work is when your crew is tight.
To put on a single concert takes hundreds of people, the security, gate keepers, ticket collectors, concessions, techs, sound and light crews.
The average concert takes over 500 event professionals to go live. A music festival typically utilizes as many as 1,500 people behind the scenes.
Everything and everyone hidden behind the stage was, well, left behind when the industry curtain called last March, explains TOURtech founder and CEO Allen Cook.
With 245 events in 2019 and a staff of 25 and another 30 contractors that work during the season, 2020 was set to be the biggest season the crew had seen. No one saw the real disaster looming.
Cook runs the crew of 25, and had to make the “heartbreaking” decision to shut down in March, laying off the staff, including his own wife.
TOURtech lost a 6 figure job and then another customer backed out of a major event, which Cook was preparing a site survey for. Over the span of three days the entertainment tech group lost $800,000 in planned revenue.
But TOURtech is not alone, the industry as a whole suffers in unison.
According to statistics gathered from the Department of Labor and industry surveys, 12 million jobs were supported by live events, a 1.4 trillion economic impact of live events in the US alone. Altogether, 96% of businesses in events cut their staff, 97% of 1099 gig workers lost their work, and 77% of event pros lost 100% of their income.
The show must go on.
Tech For Good
Chris Patmon had been traveling with TOURtech since July of 2019 when he was laid off. Patmon was a veteran techie for the industry, logging 10 years running cable, programming gear, and hanging Access Points for shows.
Similar to his fellow industry workers, the entertainment shut down left him with financial uncertainty.
“That was my main source of income, and I was looking for options after TOURtech shut down,” he explains.
Patmon was trying to stay positive through the shock of losing his job, and facing payments with no income, in what he described as a “difficult year of change.”
When his phone rang.
I was cooking lunch when Allen called me and said “Hey, are you working?”
I said “No.”
Allen shot back “You want to?”
“He told me about a project [ITDRC] was starting up, and it sounded awesome,” Patmon recalls.
IT roadies were not hard to find, everyone in the industry was on the hunt for work.
Dennis Kenyon, a fellow roadie from the TOURtech family also got a phone call. “Allen reached out first about [projectConnect] and asked me if I wanted to do it…I said absolutely, the next week I was in Dallas,”
Kenyon had also logged a year with the group, setting up connectivity and driving trucks. During the off season Kenyon worked construction and as he admits, “I would rather be pulling cable instead of carrying lumber.”
The idea to put some in the entertainment industry back to work was one that would serve IT roadies while aiding the growing minds in underserved communities, affectionately called: projectConnect.
ProjectConnect was born out of the pandemic, and is a nationwide initiative by ITDRC to provide free community WiFi installations to connect students and families to the internet — especially those living in rural and underserved communities.
Once education shifted to an online environment, many students were at greater risk of falling behind without proper access.
Local WiFi in schools, public spaces, libraries, and coffee shops was limited during lockdowns in many cities, leaving struggling students with few options.
“That was a pretty great phone call, it changed my life,” Patmon recalls.
Layoffs expanded well past concerts and festivals, and into Broadway productions.
Asher Robinson, who has been working the entertainment circuit since his early years in community theatre works in New York’s Broadway tech circle.
The entertainment industry work is 100% of Robinson’s income and similar to the statistics above, his last real gig was March 12.
At first the party line was “we are just taking the week off,” but now Robinson says he has no idea when they will be going back, and most of the new productions he was working on have closed permanently.
With more free time than he would like, Robinson found ITDRC. It was tech he already knew, and for a cause he felt was worthwhile. After completing the required training, he was deployed to Louisiana after Hurricane Laura.
“After spending time in NY thinking I’ve got problems and then going to Lake Charles” Robinson explains “It was eye opening, the virus wasn’t gone but they had bigger problems to worry about because they didn’t even have a place to sleep.”
Free time is what the working world dreams about, but when you get it, it can make you realize what’s more meaningful in life. Robinson says he found that new perspective in a disaster zone, during a pandemic.
On The Road Again
When a year of trouble comes to your door uninvited, takes the house, and windmills the direction of your life, getting back up with purpose is a great challenge.
Before projectConnect, IT roadies were bunking up together and doing what they could with the family they had built. The majority of techs worked their last job in March and a few lost their homes or were considering selling.
“The entertainment industry shutting down affects so many people, that work so hard to do what they do” says Patmon, “Their work is what they love and for some of them that’s their life.”
Crews had no idea what lay ahead of them for the next year, but knew they wanted to stay together and survive the year. The roadies describe it as the easiest decision they made all year — and one that turned into something quite formative.
The techs gathered in Dallas, and set off across the country with a new cause after facing ruin. Patmon and Kenyon paired off in an ITDRC truck and headed to Washington state with 2 other crews.
“Joe was always telling me to go on break so I wouldn’t be burnt out but I could never be burnt out doing this job…I could do this job 165 days a year, we are making people happy, that’s the job, we are getting them connected,” Kenyon jokes.
Their longest stop was six weeks in Pennsylvania, but the job sites that stuck with the crew took them to Michigan. Where the team connected houses turned into educational hubs for nonprofit Brilliant Detroit.
Brilliant Detroit keeps underserved families at the heart of their work, focusing on creating success stories out of difficult circumstances. For these communities, the COVID-19 pandemic revealed the tears in the government programs they relied on and stretched their families even further to meet basic needs.
Brilliant Detroit grappled with children falling behind due to lack of connectivity and a small amount of devices. They had mothers attempting to work from home with multiple children on different learning levels and no connectivity.
The tight knit community has always relied on each other for support and has continually come together during hard times.
The hotspots brought the neighborhood together again during the hardest year they’ve faced together. While also allowing Brilliant Detroit to use the connectivity to focus on education, health, and family support.
Kenyon a former serviceman for the Navy, and his road partner felt their sense of duty grow for projectConnect after their time in Motor City.
“It helped bring a smile to people during a time when things aren’t the best… to give them free internet so they can talk to people on the phone and skype people so they can see faces… It feels good, it makes your heart warm to connect people.”
It was a chance to give back, in the middle of unprecedented times, an opportunity that kept the crew energized. To Kenyon, it was the chance to bring joy.
“I’ve seen a few of my friends go through some really hard times this year, COVID has hurt so many people. Being in that truck and going 2700 miles across this country, I’ve seen horrible situations that we are working to change,” Kenyon explains “That’s made it a wonderful year, just giving back has been a wonderful experience.”
They watched pure excitement rush over the kids who immediately downloaded Minecraft and danced with their friends to later upload on TikTok.
Patmon recalled families expressing gratitude, the connectivity made it possible for them to FaceTime with loved ones, who they had been estranged from since lockdown.
What began as an opportunity to get themselves out from between a rock and hard place turned into more than just another gig.
“For it all to come together at the end has been a lot to take in, it’s funny how it all worked out,” says Patmon.
Robinson believes art is meaningful and entertainment is something the world needs, but through the opportunities at ITDRC he has found another outlet for meaningful tech.
“It was a different perspective being able to give back with ITDRC, it was rewarding” said Robinson, who is currently installing WiFi hotspots in community centers around his hometown in NYC with Patmon.
To date, projectConnect has established WiFi homework hotspots in nearly 700 communities across the Continental US and Puerto Rico, with the project continuing in 2021.
“This experience means a ton to me, this is the first time since my military career that I finally feel like I’m actually giving back to the community again, after my journey with projectConnect I will always be with ITDRC,” Kenyon explains, as ITDRC’s new South Carolina state coordinator. | https://medium.com/@ITDRC/the-show-must-go-on-9d873182823f | [] | 2021-01-08 16:29:44.928000+00:00 | ['Technology', 'Giving', 'Covid 19', 'Wifi', 'Events'] |
2,958 | Prague - Europe’s Flourishing Tech Hub | My brother and I decided to establish the headquarters of our tech company in Prague. It was the best decision we’ve ever made.
People who have visited Prague before can confirm that it’s an astonishing city with impressively clean streets, stunning architecture, diverse cuisine, and a vibrant nightlife. But most people are only now starting to realize that Prague is also a hidden gem for tech companies.
Recent History & Economy
To understand why Prague is just starting to flourish, you have to look back more than 100 years and understand a bit of Czech history.
Throughout its history, the Czech Republic has always been a prominent economic player in Europe. In the early 20th century, it was actually an industrial powerhouse and held 10th place in the world industrial production. It was only after World War 2 when the Czech Republic became occupied by the Soviet Union and then ruled by communism, that its economy started to halter.
However, after the 1989 Velvet Revolution, which peacefully ended communism, the Czech Republic reestablished democracy and a market economy. Since then (over the last 30 years) the Czech economy has been growing at an impressive pace and it now proudly produces the most GDP per capita in comparison to all other post-communist countries.
Prague also has the lowest unemployment rate in the European Union, at just 1.3%.
Looking back at the last 30 years and seeing how much progress the Czech Republic has made leaves me and everybody living in Prague excited about the future.
Talent
The Czech Republic boasts a great education system.
I myself spent a few years in the Czech Republic during high school and can confirm it from first-hand experiences. When I was transferring over from my school in the US, I had to spend the whole summer catching up on algebra, physics, and chemistry just so that I could join the same age students and didn’t have to stay a year back.
Getting a university education in the Czech Republic is free, which gives the opportunity for everyone to enroll and lifts the education level across the whole population. The free education system also attracts many foreign students who wouldn’t have been able to afford an education at home.
Czech universities also have very high-quality computer science and mathematics programs, which serves as a solid foundation for future software engineers. The mathematics programs are especially key for those students who later on pursue careers in machine learning and artificial intelligence.
HackerRank, a California-based service that runs tests for software engineers, ranked Czech engineers 2nd in the world in mathematical challenges which reflect their skill in functional programming.
The Czech engineers ranked 9th in the world overall. The US ranked 13th.
Infrastructure & Strategic Location
If you’re reading this article and you’re sitting in a large US metropolitan area, you’re probably used to long commute times and have just accepted traffic as part of your life, as I did when I was living in the Bay Area. Prague doesn’t have this issue.
Its public transport system is ranked 5th in the world and is used by over two thirds of the city’s population. It has an affordable, yet sophisticated train, metro, tram and bus system, allowing you to comfortably get around the city within minutes.
Prague is located in the heart of Europe and has direct flight connections to all other major European cities such as Paris, London, or Amsterdam. Flight times are all under 2h and usually cost between $50 — $250 round trip.
Affordability
Prague is gradually becoming a global metropolis and is catching up to cities like Paris and Amsterdam, however, it is still incomparable when it comes to prices and is, therefore, a great place to work and live. You can afford to live a high standard of life with an average Prague salary, especially if you’re working in tech.
I ran a price comparison of San Francisco vs. Prague (my two hometowns) and this is where we’ve landed (1):
European Lifestyle
It’s also tough to get bored in Prague. It’s an energetic city with endless cool events to attend. If you’re into contemporary art exhibits, prosecco festivals, local food markets, underground techno raves, or just into chilling in the park enjoying the castle view, you’ve got it.
Over recent years, Prague has attracted a lot of foreigners and now has a thriving expat community, which makes up more than 25% of Prague’s workforce. Plenty of Americans and Brits have decided to move here as well and surprisingly in some districts you now hear English on the streets and at cafes more often than Czech.
Recent Startups
We’re witnessing the Prague startup scene flourish.
Just over the past few months, we’ve seen Prague startup productboard raise a $45,000,000 Series B round led by Sequoia Capital, Twisto, an emerging fintech startup raise $15,000,000 and Rossum, a startup extracting data from invoices and documents using machine learning, raise $4,500,000.
If you’re just launching your tech startup and are having trouble with hiring the right technical talent, or are shocked by the high prices and burn rate you’ll have in an oversaturated tech hub like Silicon Valley, you might want to consider making Prague your new home.
Feel free to shoot me a message on LinkedIn. I’ll be happy to give you tips and tricks for Prague. I’ve been here for almost 4 years and moving here was the best decision I’ve made. | https://medium.com/the-cache/prague-europes-flourishing-tech-hub-30704ce05ca8 | ['Theo Dluhy-Smith'] | 2020-03-26 21:59:35.757000+00:00 | ['Europe', 'Founders', 'Technology News', 'Startup', 'Prague'] |
2,959 | Material Design + Figma Styles = 🔥 | Material Type—I created text styles for the all of the styles specified in the Material Design spec. By default these are set to Roboto, but can be easily redefined to use your brand’s typeface. Changing a style to a different font will instantly propagate the changes across the whole system. Bear in mind, you may need to make some adjustments to type size and spacing in order to fine tune the sizing across the system. As the kit evolves, I plan to include some predefined size recommendations that work for some of the most popular Google fonts.
Some of the styles defined in the kit.
Material Elevation—I created a library of preset drop shadow styles for all elevations. Changing the elevation of an element is as easy as choosing the appropriate elevation—this greatly simplifies the process of applying Material shadows which are sometimes complex combinations of up to three different drop shadows.
Material Grid—Material is based on a 4dp baseline grid so I created a style for that too. You can apply this to any frame or component, and of course, you can add your own grids for desktop, mobile and tablet and apply them as needed.
Combining Styles with Components
Material Design encourages designers to control the shapes of UI components to better reflect the personality of the brand. They give users the option to customize surface corners, ranging from sharp rectangular corners, to 45 degree cuts, to varying levels of roundedness.
Using nested components to make globalized changes to card and button shapes.
To achieve this, I created some basic shape components for buttons, floating action buttons, and cards. This gives users the ability to adjust the size of the cuts or the degree of corner radius for any chosen shape. Then I nested those components within another master component which would become the base for corresponding components in the system. If you decided you wanted to have cut-off corners, it is as simple as toggling the visibility of the correct nested component.
That change is then reflected across the entire system (note: in some cases this will take a few seconds for the change to propagate through hundreds of components). Because you can access the nested layers within every component instance, if there are specific places where you want change the style, you can make that change as an override in any instance.
Material Design also give you the option to change between 5 different icon styles. To keep the weight of the kit down, we created 5 separate sticker sheet documents for each icon style, complete with components for each icon. You can copy the desired icons from those files, or better yet, publish them as a shared library to your team and pull them into your projects.
Interested in trying Figma Styles?
If you would like to try the Figma Styles beta, sign up here (EDIT: Figma Styles is now available for anyone to use. Simply sign up for Figma). Beta users will get access to the Material UI kit and accompanying icon libraries.
If you still need to support the previous iteration of Material Design, we’ve got you covered with another fully flushed out UI kit that is built to leverage the power of styles. Beta users will also have access to this kit.
Haven’t tried Figma before? Check it out. | https://medium.com/figma-design/material-design-figma-styles-98a7f0e2735e | ['Thomas Lowry'] | 2018-06-21 15:15:48.614000+00:00 | ['UI', 'Product', 'Design', 'UX', 'Technology'] |
2,960 | 100 Words On….. Existing | Photo by Austin Chan on Unsplash
Are we living “Digitally Vicarious”? It seems that relevance is achieved through multiple online profiles with many of our friends, family, and colleagues only knowing us through our electronic selves. We spend too much time managing our virtual existence rather than our physical one, and nearly always at the cost of what makes us human. Living online should only be complimentary rather than seen as a replacement. It becomes all too easy to hide behind avatars and projected perfection when the true beauty of life is the self with the electronic noise filtered. Experience life in analogue, not in digital. | https://medium.com/the-100-words-project/100-words-on-existing-62082f9e206a | ['Digitally Vicarious'] | 2020-12-17 04:46:36.853000+00:00 | ['Information Technology', 'Existence', '100 Words Project', 'Humanity', 'Mental Health'] |
2,961 | How to Finetune mT5 to Create a Question Generator 🤔(for 100+ Languages) | It's been a month since Google released the massive multilingual model mT5. I was really excited to perform some crazy experiments using mT5. The special quirk of mT5 is its ability to perform any seq-2-seq task in more than 100 languages. I experimented with mT5 on two main tasks: language translation and question generation. I found the second use case much more interesting. So I created this blog as a tutorial on how to finetune mT5 to build a question generator.🤩
What exactly is a question generator?
Question generators generate questions. They can be used by teachers and students for generating a variety of questions from a given text. They can be particularly helpful for comprehension-type questions.
Now that we know what a question generator is, let's build one. I will build a question generator for Hindi, and you can replicate it for any of the 101 languages of your choice on which mT5 is trained (provided you have the dataset). You can check out the supported languages here.
Task
Let's define the task. For generating questions we need to input a text context to the mT5 model and expect a question in return.
Here is an example in English:
Context: Donald John Trump (born June 14, 1946) is the 45th and current president of the United States. Before entering politics, he was a businessman and television personality.
Output: Who is Donald Trump?
Since we are mainly focusing on the Hindi language, we will have a Hindi context as input and a Hindi question as output.
Dataset Collection
Another major issue with languages other than English is the lack of quality datasets for training and fine-tuning transformer models. Luckily for us, I found two similar datasets which just fulfil our dataset requirements for the Hindi language.
The first is DeepMind's XQuAD dataset (Cross-lingual Question Answering Dataset) and the second is Facebook's MLQA (MultiLingual Question Answering).
XQuAD (Cross-lingual Question Answering Dataset) is a benchmark dataset for evaluating cross-lingual question answering performance. The dataset consists of a subset of 240 paragraphs and 1190 question-answer pairs from the development set of SQuAD v1.1 (Rajpurkar et al., 2016) together with their professional translations into ten languages: Spanish, German, Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, and Hindi.
MLQA (MultiLingual Question Answering) is a benchmark dataset for evaluating cross-lingual question answering performance. MLQA consists of over 5K extractive QA instances (12K in English) in SQuAD format in seven languages — English, Arabic, German, Spanish, Hindi, Vietnamese and Simplified Chinese. MLQA is highly parallel, with QA instances parallel between 4 different languages on average.
I collected 1,190 context-question pairs from the DeepMind XQuAD dataset. From Facebook MLQA, I concatenated the test and dev datasets, which consist of a total of 5,425 context-question pairs.
Since both datasets were originally in JSON format, for ease of use I converted them to CSV format. I divided the overall collected dataset into two sets: a training set with 6,500 context-question pairs and a testing set with 150 context-question pairs. The reason for keeping the testing set so small is that, unlike in classification problems, a text generation problem doesn't benefit much from a large testing dataset (since we do not have a metric to evaluate generated questions, or text generation in general).
I created a script to perform these operations. (Colab notebooks are less convenient than Kaggle, as Kaggle provides a P100 GPU while Colab has a Tesla T4, so much less time is spent while finetuning on Kaggle.)
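For illustration, a minimal sketch of that conversion might look like the following. This is a reconstruction, not my exact script; it assumes the standard SQuAD v1.1 JSON schema that both XQuAD and MLQA use, and the file paths are placeholders:
import json
import pandas as pd
def squad_json_to_csv(json_path, csv_path):
    # Flatten SQuAD-style JSON into (context, question) rows
    with open(json_path, encoding="utf-8") as f:
        squad = json.load(f)
    rows = []
    for article in squad["data"]:
        for paragraph in article["paragraphs"]:
            context = paragraph["context"]
            for qa in paragraph["qas"]:
                rows.append({"context": context, "question": qa["question"]})
    pd.DataFrame(rows).to_csv(csv_path, index=False)
# Placeholder paths -- point these at the downloaded XQuAD/MLQA files
squad_json_to_csv("xquad.hi.json", "train.csv")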
Finetuning mT5 using pytorch-lightning
We’ll be using the PyTorch Lightning library for finetuning. Most of the code below is adapted from here. The trainer is generic and can be used for any text-2-text task; you’ll just need to change the dataset. The rest of the code will stay unchanged for all the tasks.
This is the most interesting and powerful thing about the text-2-text format. You can fine-tune the model on a variety of NLP tasks by just formulating the problem in the text-2-text setting. No need to change hyperparameters, learning rate, optimizer or loss function.
I trained the ‘google/mt5-base’ for 10 epochs with a batch size of 8. Here is the link to the fine-tuning script.
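To give a flavour of what that script contains, here is a rough sketch of the core Lightning module. This is my own illustrative reconstruction, not the exact notebook code; the hyperparameters and batch keys are assumptions, with batches shaped like the QuestionDataset sketch shown further below:
import torch
import pytorch_lightning as pl
from transformers import MT5ForConditionalGeneration
class MT5FineTuner(pl.LightningModule):
    def __init__(self, model_name="google/mt5-base", lr=3e-4):
        super().__init__()
        self.model = MT5ForConditionalGeneration.from_pretrained(model_name)
        self.lr = lr
    def training_step(self, batch, batch_idx):
        # T5-style models compute the seq2seq loss internally when `labels`
        # is supplied; padding positions are masked out with -100
        labels = batch["target_ids"].clone()
        labels[labels == self.model.config.pad_token_id] = -100
        outputs = self.model(input_ids=batch["source_ids"],
                             attention_mask=batch["source_mask"],
                             labels=labels)
        return outputs.loss
    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.lr)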
If you want to finetune mT5 on any of your tasks you need to take care of two things.
Create your own dataset in CSV format and change the dataset path in the fine-tuning Kaggle notebook. (The dataset must have two files: train.csv and valid.csv.)
Secondly, change the QuestionDataset class in the above Kaggle notebook according to your use case. The input format I used while finetuning is
Input = "Hindi Context: %s" % (input_text)
# input_text is the given input to the model
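For reference, a minimal sketch of what such a dataset class might look like. The column names, sequence lengths and other details here are illustrative assumptions, not copied from the Kaggle notebook:
import pandas as pd
import torch
from torch.utils.data import Dataset
class QuestionDataset(Dataset):
    def __init__(self, tokenizer, csv_path, max_len_in=396, max_len_out=32):
        self.data = pd.read_csv(csv_path)
        self.tokenizer = tokenizer
        self.max_len_in = max_len_in
        self.max_len_out = max_len_out
    def __len__(self):
        return len(self.data)
    def __getitem__(self, idx):
        row = self.data.iloc[idx]
        # Build the "Hindi Context: ..." input and tokenize both sides
        source = self.tokenizer("Hindi Context: %s" % row["context"],
                                max_length=self.max_len_in, padding="max_length",
                                truncation=True, return_tensors="pt")
        target = self.tokenizer(row["question"],
                                max_length=self.max_len_out, padding="max_length",
                                truncation=True, return_tensors="pt")
        return {
            "source_ids": source["input_ids"].squeeze(),
            "source_mask": source["attention_mask"].squeeze(),
            "target_ids": target["input_ids"].squeeze(),
            "target_mask": target["attention_mask"].squeeze(),
        }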
Inference for Question Generation
Running inference on the model is the easiest part, as we have already done most of the heavy lifting. Let's check out the results.
import torch
from transformers import MT5ForConditionalGeneration, AutoTokenizer
# Pick a device so the tensors and model live in the same place
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = MT5ForConditionalGeneration.from_pretrained("../input/mt5-hindi-question-generator").to(device)
tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
Input text (Any paragraph in the Hindi language)
article = '''Hindi context:पूरे विश्व भर में भारत एक प्रसिद्ध देश है। भौगोलिक रुप से, हमारा देश एशिया महाद्वीप के दक्षिण में स्थित है।
भारत एक अत्यधिक जनसंख्या वाला देश है साथ ही प्राकृतिक रुप से सभी दिशाओं से सुरक्षित है। पूरे विश्व भर में अपनी महान संस्कृति और पारंपरिक मूल्यों के लिये ये एक प्रसिद्ध देश है।
इसके पास हिमालय नाम का एक पर्वत है जो विश्व में सबसे ऊँचा है। ये तीन तरफ से तीन महासागरों से घिरा हुआ है जैसे दक्षिण में भारतीय महासागर, पूरब में बंगाल की खाड़ी और पश्चिम में अरेबिक सागर से।
भारत एक लोकतांत्रिक देश है जो जनसंख्या के लिहाज से दूसरे स्थान पर है। भारत में मुख्य रूप से हिंदी भाषा बोली जाती है परंतु यहां लगभग 22 भाषाओं को राष्ट्रीय रुप से मान्यता दी गयी है।
'''
For decoding, we can either use greedy decoding to generate a single question, or top-k/top-p ("topkp") sampling to generate multiple questions.
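The greedy_decoding and topkp_decoding helpers used below live in my notebook; a minimal sketch of what they might look like is shown here (the generation arguments are illustrative assumptions, not the exact values I used):
def greedy_decoding(input_ids, attention_masks, max_length=32):
    # Deterministic decoding: one question per context
    beam_output = model.generate(input_ids=input_ids,
                                 attention_mask=attention_masks,
                                 max_length=max_length)
    return tokenizer.decode(beam_output[0], skip_special_tokens=True)
def topkp_decoding(input_ids, attention_masks, max_length=32, num_questions=10):
    # Top-k / top-p (nucleus) sampling: several diverse questions per context
    sampled_outputs = model.generate(input_ids=input_ids,
                                     attention_mask=attention_masks,
                                     max_length=max_length,
                                     do_sample=True,
                                     top_k=40,
                                     top_p=0.80,
                                     num_return_sequences=num_questions)
    return [tokenizer.decode(out, skip_special_tokens=True)
            for out in sampled_outputs]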
Here are the results obtained
import time
start = time.time()
encoding = tokenizer.encode_plus(article, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to(device), encoding["attention_mask"].to(device)
print(article)
output = greedy_decoding(input_ids, attention_masks)
print("Greedy decoding::\n", output)
end = time.time()
print("\nTime elapsed ", end - start)
print("\n")
Output :
Hindi context:पूरे विश्व भर में भारत एक प्रसिद्ध देश है। भौगोलिक रुप से, हमारा देश एशिया महाद्वीप के दक्षिण में स्थित है।
भारत एक अत्यधिक जनसंख्या वाला देश है साथ ही प्राकृतिक रुप से सभी दिशाओं से सुरक्षित है। पूरे विश्व भर में अपनी महान संस्कृति और पारंपरिक मूल्यों के लिये ये एक प्रसिद्ध देश है।
इसके पास हिमालय नाम का एक पर्वत है जो विश्व में सबसे ऊँचा है। ये तीन तरफ से तीन महासागरों से घिरा हुआ है जैसे दक्षिण में भारतीय महासागर, पूरब में बंगाल की खाड़ी और पश्चिम में अरेबिक सागर से।
भारत एक लोकतांत्रिक देश है जो जनसंख्या के लिहाज से दूसरे स्थान पर है। भारत में मुख्य रूप से हिंदी भाषा बोली जाती है परंतु यहां लगभग 22 भाषाओं को राष्ट्रीय रुप से मान्यता दी गयी है।
Greedy decoding::
भारत में कितनी भाषाएं बोली जाती हैं?
Time elapsed 1.0903606414794922
So the question generated seems pretty legit😇😇.
Here is the top_kp output for multiple questions
Hindi context:पूरे विश्व भर में भारत एक प्रसिद्ध देश है। भौगोलिक रुप से, हमारा देश एशिया महाद्वीप के दक्षिण में स्थित है।
भारत एक अत्यधिक जनसंख्या वाला देश है साथ ही प्राकृतिक रुप से सभी दिशाओं से सुरक्षित है। पूरे विश्व भर में अपनी महान संस्कृति और पारंपरिक मूल्यों के लिये ये एक प्रसिद्ध देश है।
इसके पास हिमालय नाम का एक पर्वत है जो विश्व में सबसे ऊँचा है। ये तीन तरफ से तीन महासागरों से घिरा हुआ है जैसे दक्षिण में भारतीय महासागर, पूरब में बंगाल की खाड़ी और पश्चिम में अरेबिक सागर से।
भारत एक लोकतांत्रिक देश है जो जनसंख्या के लिहाज से दूसरे स्थान पर है। भारत में मुख्य रूप से हिंदी भाषा बोली जाती है परंतु यहां लगभग 22 भाषाओं को राष्ट्रीय रुप से मान्यता दी गयी है।
Topkp decoding::
['भारत कहाँ स्थित है?', 'भारत के पास कितनी भाषाएँ हैं?', 'भारत की मुख्य भाषा क्या है?', 'भारत में मुख्य रूप से कौन सी भाषाएं बोली जाती हैं?', 'भारत में कितनी भाषाएँ बोली जाती हैं?', 'भारत की दूसरी लोकतांत्रिक सरकार क्या है?', 'भारत का हिस्सा किस महाद्वीप के दक्षिण में है?', 'भारत में कौन सी भाषा बोली जाती है?', 'हिंदी भाषा की कौन सी संस्कृति सबसे अधिक लोकप्रिय है?', 'भारत की जनसंख्या क्या है?']
Time elapsed 1.3107168674468994
So, as you can see, a few questions are grammatically incorrect, but if we check the overall quality, it's good for basic uses.
If you want to experiment more with my Hindi question generator check out my notebook on Kaggle.
So I conclude my blog on the question generator using mT5. You can now finetune mT5 on various tasks in various languages. All the Kaggle notebooks and datasets used are set to public (anyone can use them). I have given references for many notebooks, which might be confusing, so if you have any questions, you may ask in the comments. So, are you ready to experiment with mT5?🤔🤔🤔
References:
Original Suraj Patil Colab on Finetuning T5 | https://medium.com/swlh/how-to-finetune-mt5-to-create-a-question-generator-for-100-languages-4a3878e63118 | ['Parth Chokhra'] | 2020-12-04 12:51:59.441000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Google', 'Technology', 'Programming'] |
2,962 | We designers serve others | However, the path very often turns out to be rough. There are a lot of obstacles to deal with just to make something good. Between the 18th and 19th centuries, when the Industrial Revolution took place and technological progress made it possible to manufacture faster and in high volume, the Three Pillars of Design were defined:
Voice of Business. How do you as a designer respond to a company’s business model to make a product relevant?
Voice of Technology. How do you design to create something within a budget and yet well made?
Voice of Customer. How do you answer people’s needs? How do you solve people’s problems? How effectively do you solve those problems? How do you design to embrace individualities? How do you design for people to enrich their lives?
That looks like a challenge. And it is. You see, I think we as designers need a tremendous amount of self-discipline to be good at what we do. Answering those questions above, facing them in the most successful way, requires what I find to be the most precious quality of us designers, a thing that we constantly have to learn: humility.
Humility — ענוותנות, Hebrew “anavah” — a sign of strength and purpose, not weakness.
We often mistake humility for weakness, whereas it's a strength. We can be highly skilled specialists, using the most advanced tools, but the result of our work comes from a process, from our relationship with those we collaborate with. Way before a product meets customers, we designers meet business stakeholders and engineers, people responsible for revenue and for the fact that our ideas can come to life. That's the place for us designers to serve others with humility, which is represented by:
the ability to listen
not stealing somebody’s air
being aware that sometimes we may be wrong
There's a beauty in the service of being responsible for giving form to business intentions and implementing solutions. As designers, we tend to communicate brilliantly, above all by asking questions. So much can go unheard when we're not attentively listening to barely formed, fragile thoughts that have a chance to become the foundation for a fabulous solution. During ideation and execution, any contribution is a new point of view that can pivot a project in a way more beneficial to the final result; therefore, we designers are expected to create a comfortable environment for others to collaborate. And finally, constant learning means being ready to fail and to seek an outcome from it, to bring value to our teams, to a project, and therefore to customers.
Sadly, customers can very often be victims of design instead of being beneficiaries of it. Unfortunately, we get stuck looking for a balance between look and function. Obviously, a product that looks good but has very poorly solved functionality can be ugly. At the same time, we often try to believe that the thing that predominantly matters is functionality. But we very rarely ask the question:
“What do we want people to feel?”
You see, designing is three-dimensional. I believe that we designers do what we do in the service of aesthetics, function, and the feelings that someone, somewhere had using a product we designed. Do you remember the smell of a just-unboxed product that you had waited a while for? Do you remember the joy of the unboxing? Do you remember the texture of that shiny new product? Do you remember the sound of peeling off the foil? We forget what a company did and said to convince us to buy their product, but we never forget how they made us feel. Those are outcomes of unreasonable intentions that go far beyond being just aesthetically and functionally correct. These are outcomes of the amount of care that was invested in every meticulously considered detail: the care invested in expertise and research to find a solution, a creative direction, a material, a manufacturing process that makes it possible to translate reckless ideas into something remarkable, something that will definitely be discerned but barely articulated.
Voice of Environment.
There's one other thing that wasn't mentioned in the Three Pillars and is ever so important. We designers also serve the environment. We, more than anyone, have a civic and moral responsibility in creating solutions, choosing materials, tools, and processes that will serve not only business, technology, and customers but also our common and only home: the planet. Our decisions are made within minutes but often have an impact for years to come. Ignorance can turn into evil. Yet proactivity can avoid disasters and resolve problems. With deliberate solutions, we can address the quality of life of future generations.
Being a designer requires maturity and self-awareness. We've got the power to build, to enrich, to embrace. We carry the responsibility of connecting creative voices. We can make a trustworthy room for ideas to incubate. We can shape the future.
To serve others is delightful, isn’t it? | https://medium.com/the-supersymmetry/we-designers-serve-others-49cdbfcccf0e | ['Radek Szczygiel'] | 2020-11-21 13:37:32.737000+00:00 | ['Environment', 'Business', 'Customers', 'Technology', 'Design'] |
2,963 | With the release of this iPhone, it was the talk of the iPhone fans all over the world. | With the release of this iPhone, it was the talk of iPhone fans all over the world. This was because the box the iPhone came in did not contain the previously included charger and earphones.
Apple says the goal is to reduce carbon emissions into the environment.
Due to this, the packaging box of the Apple iPhone is small in size. This package contains only the phone, a charging cable and paperwork.
Full article here: https://tecnews24hours.blogspot.com/2020/11/apple-iphone-12.html | https://medium.com/@tealth1234/with-the-release-of-this-iphone-it-was-the-talk-of-the-iphone-fans-all-over-the-world-85564be7516c | [] | 2020-12-15 07:58:44.818000+00:00 | ['iPhone', 'Iphone 12 Pro Max', 'Blog', 'Apple', 'Technology'] |
2,964 | Promising Blockchain Companies Compete at CoinAgenda Global 2020: BitAngels Pitch Day | We had a great time at CoinAgenda Global 2020: BitAngels Pitch Day last week. It’s a different world because of COVID-19, but BitAngels has pushed forward and shifted to virtual events this year so we can keep the community connected and play a part in maintaining the momentum of the blockchain industry. As much as we wish we could meet in person and shake hands these days, the virtual networking room after the event was a great substitute.
We had 23 companies present some great pitches. That’s a whole lot of ideas shared in one competition, and it was truly inspiring to see entrepreneurs working hard and remaining upbeat in the current climate. Surely, the blockchain industry is not slowing down, and compared to last year’s BitAngels Pitch Day, I’d say the competition has gotten even fiercer.
As inspiring as all the presentations were, in the end, we had to choose winners, and these three stood above the rest:
First place: Opolis
Second place: Splinterlands
Third place: Icecap.
Opolis is a Denver-based startup offering portable health benefits, financial automation, crypto payroll, and other shared services for freelancers, independent contractors, solopreneurs, creatives, consultants, and gig workers. The company also recently announced an upcoming rewards token launch slated for early 2021.
Splinterlands, a blockchain-based trading card game platform, lets players trade, sell, and lease their card assets. Their cards are compatible with third-party marketplaces, including OpenSea, PeakMonsters, and Monster Market. Splinterlands will begin selling the first set of digital land in the game starting Nov 7th, and players can craft new NFTs by harvesting, refining, storing, and minting new NFTs.
Icecap, a blockchain-based marketplace for investment-grade diamonds, uses Ethereum’s ERC721 non-fungible token standard to give diamonds their own digital token, which represents rights to the diamond and enables it to function as a tradable asset.
Whether or not we can meet in person, BitAngels will be here. Join our membership for free access to past recordings, and as always, you can submit your company to pitch for free at one of our upcoming events. The future of the space is bright, and we can assure you the next breed of entrepreneurs and ideas in the space will cement blockchain’s power and make it more impactful and user friendly. | https://medium.com/@bitangels/promising-blockchain-companies-compete-at-coinagenda-global-2020-bitangels-pitch-day-1760ae9a5888 | [] | 2020-11-03 22:11:44.371000+00:00 | ['Blockchain', 'Technology News', 'Blockchain Startup', 'Entrepreneurship', 'Business'] |
2,965 | The future of Uber | It is easy to think of Uber primarily as a ride-sharing company, similar to how we thought of Amazon as just an e-commerce company 10 years ago.
But as Roy Amara said, we overestimate the impact of technology in the short-term and underestimate the effect in the long run.
Ten years from now, Uber’s going to look completely different. The company is silently going through a massive expansion, across verticals, that will change the DNA of its business. Over the next decade, Uber will create unanticipated revenue channels:
1) Outdoor Ads
Image Source: Uber OOH
Billboard ads will be re-imagined as physical and digital worlds merge. We'll see ads on Uber cartops, which will open up a whole new platform for local businesses to run real-time ads.
This may be the first time that physical ads, at scale, can be made dynamic, digital, and location-data-driven. Read more about Uber’s OOH cartop advertising.
Knowing the financial success and dominance of Facebook Ads, we can tell how lucrative selling advertising space would be as a business model for Uber, especially when it can offer advertisers dynamic rates based on location, time, and content.
The competitive advantage for Uber would be that it knows about your physical movement, your favorite eating & drinking spots, your time preferences, and your willingness to spend money.
2) Local Commerce
Image Source: Photo by Rowan Freeman on Unsplash
What could be better than a two-day delivery guarantee? You guessed it right — a two-hour delivery guarantee.
Amazon is an undisputed leader in supply chain and has built a network of 800 warehouses in the US that allows it to capitalize on supply chain optimization. However, Uber, with its ubiquitous presence and partnerships with local small businesses as well as Target, Walmart, and other big retailers, could compete with Amazon with a much larger network of micro-warehouses, i.e. the stores themselves. These micro-warehouses not only make deliveries faster but also make returns easier & less wasteful. Just for context, there are ~2000 Target and ~4000 Walmart stores in the US. And, the penetration of small businesses and local stores in Tier 3 cities and small towns is going to be even greater.
Uber can compete with Amazon on fulfillment the same way Shopify competes with Amazon on creating a digital presence for sellers. This could help with the much-needed resurgence of small businesses and give them back the power to compete!
Uber Eats is just a gateway drug to a much bigger market — Uber Direct. From fresh flowers & toothpaste to iPhones & laundry, all our household items could eventually be delivered by Uber. Could you potentially, someday, order a plumber or home cleaner from Uber? I wouldn't be surprised if Uber competes with Handy and Thumbtack, and dominates that space, because of its reach.
3) B2B Offering
While Amazon started its journey as an online retailer, its technology and engineering teams soon realized the complexity of managing and scaling the IT infrastructure needed to deliver their digital experience with performance, uptime, and quality. Compute and storage wasn’t easy. But, they innovated because they had to in order to operate at Amazon’s scale. Soon, Amazon realized that this challenge wasn’t unique to their business but they were uniquely positioned to solve this problem better than anyone else. As a result, AWS was born by abstracting the technology they built for themselves and offering that as a platform to others. This then allowed other big and small organizations to outsource the complexity of IT infrastructure to Amazon per a “pay-as-you-go” model.
The timing was perfect because more and more businesses went digital and generated exponentially more data than before.
The timing is perfect now for businesses to outsource the complexity of logistics to Uber because more and more businesses are adapting to the needs of the “work from home” and “shop from home” culture.
Logistics-as-a-service can play the same role for Uber as AWS did for Amazon. Uber’s ex-CTO, Thuan Pham said “Uber’s plan is to become the AWS of logistics. The fraud-detection capabilities and the mapping capabilities and routing capabilities — all those things, we don’t see why we can’t ultimately offer that to other people to build on top of it”.
Just like Netflix uses Amazon Web Services for its computing, we shouldn’t be surprised if Uber’s competitors use their logistics service at some point. Uber Freight is going to shape this story over the next decade.
Uber Freight is just the beginning of this trend (check out their YouTube channel). There are infinite possibilities for Uber to monetize logistics AND the technology assisting the logistics as B2B offerings.
If you like this, please give it some love 👏. Do you think I missed something? Did you find any holes in my arguments? Did you think these were pretty obvious? Let me know in the comments.
If you’re interested in reading more about the possibilities that lie ahead, subscribe to this publication: Predict. Also, check out reading my other article: The future of audiobooks. | https://medium.com/predict/the-future-of-uber-a96b79056014 | ['Akash Mukherjee'] | 2021-09-17 21:04:24.436000+00:00 | ['Future', 'Future Technology', 'B2B', 'Logistics', 'Uber'] |
2,966 | Review of the Google Nest Hub Max | Review of the Google Nest Hub Max
By Akin Akin
The Google Nest Hub Max is an excellent smart speaker and display. It is a larger version of the original Google Home Hub. It features a responsive touch screen, smart speaker and far-field microphones. Above all, it pairs well with Google Assistant. Compared with the original version, the Nest Hub Max is better for video, music, podcasts and just about everything else.
Design
The Nest Hub Max is 7.2 by 9.9 by 4.0 inches. It comes with an all-white plastic and light gray fabric design dominated by a tablet-like 10-inch touch screen facing forward and titled slightly backwards. The appearance is like that of a curvy Android Tablet bolted onto a fabric-lined base. The Nest Hub Max is easy to put on a kitchen work surface, which is one of the best places for this smart speaker.
The screen is framed by an 0.6-inch flush white bezel with two microphone holes and a camera built into the top edge. The oval base holds the speaker, two 0.7-inch tweeters and one 3-inch woofer.
Display and Video Quality
The Nest Hub Max has a 1280 by 800 pixel LCD that gets pleasantly bright and colorful. An Ambient EQ mode senses the intensity of the light in the room and adjusts the brightness of the screen. You can, however, disable this mode and adjust the screen by yourself.
The Nest Hub Max displays a crisp picture — you will not notice any softness in the image. If you use Google Photos, the Nest Hub Max is the best digital photo frame you can buy by far. It helps you display cherished memories from holidays and outings around your house. YouTube is integrated, and the video quality is superb.
Set-Up and App
Setting up the Nest Hub Max is a simple process that uses the Google Home App.
When you plug the smart display in, it will prompt you to install the app on your phone and follow the instructions to connect it to your home network. It will also guide you into linking to your Google account and setting up any connected music streaming accounts like Pandora or Spotify. You also need to install the Nest App if you want to use the camera optimally. It will make the camera act like a smart home camera.
Biggest New Addition: A Camera
The Nest Hub Max’s biggest new addition is a camera, which is used for several things. The camera can be used for video calling. You can also use it to recognize faces and hand gestures. The face match system works on-device and it can be very useful.
When it recognizes you or a family member, it proactively shows information relevant to whoever is standing in front of the hub — ranging from calendar appointments for the day to your reminders.
Audio Performance
The Nest Hub Max has two microphones with good speech recognition sensitivity. The device can recognize your voice when you talk at a normal speaking volume even when there is another device playing music at the same speaking volume within the room. The sound quality is good too. The bass depth is great for a smart display. It will deliver your favorite tracks with enough punch and vitality.
Functions
The Nest Hub Max can help you launch your favorite channels on YouTube with a simple voice command. You can also watch all the other entertainment channels you subscribe to, like CBS All Access and Starz, with a voice command.
You can group the Nest Hub Max with other smart displays, speakers and Chromecast devices to fill your entire home with music. It helps you catch up on the news of the day from a variety of your favorite sources. Simply say “Hey Google, play the news” to get started.
With the Nest Hub Max, you can manage all your compatible devices in one dashboard with home view. You can turn on the compatible lights across your home with a simple voice command. You can brush up your vocabulary skills while lying down on your bed by just saying something like ‘Ok Google, what does gregarious mean?’ Finally, you can enjoy peace of mind when you see your security camera video stream right on your Nest Hub Max.
Price And Availability
The Google Nest Hub Max retails at $229 as of this writing, and is available directly from the Google Store. | https://medium.com/do-it-yourself-home-automation/review-of-the-google-nest-hub-max-2e528657d829 | ['Diy Home Automation'] | 2020-12-04 18:58:53.019000+00:00 | ['Gadgets', 'Smart Home', 'Review', 'Google Nest Hub Max', 'Technology'] |
2,967 | Tips for Writing Medium Posts that Other Developers will Actually Read | This weekend we launched our open source community’s Medium publication. More than 200 developers immediately signed up as writers. With a bit of elbow grease, they’ll be sharing their coding insights with thousands of their peers, and you can too.
We’ll publish as many of your high quality articles as we can.
Here are some tips for writing content that will resonate with other developers:
Read through articles that we’ve already published. Try to write about things that no one has covered yet.
Seven minutes seems to be the optimal Medium post length. But don't water down your post to get there.
Good writing takes time. Keep rereading and reworking your post until you think it’s perfect. Always read it one last time before you publish.
Autobiographical posts are only interesting to other people if they offer useful, non-obvious takeaways. Try to teach your readers something.
When you write a technical article, your goal shouldn’t just be to look smart — it should be to inform and to be understood.
Avoid intimidating readers with a “wall of text”. Keep your paragraphs between 1–4 sentences, and break them up with headings and images.
Imagine that Lisa — the progressive, humanist, forever adolescent Simpsons character — will read everything you publish. Don’t publish anything that would make Lisa disappointed in you.
Share your Coding Insights
You most definitely should contribute to Free Code Camp’s Medium publication. Here’s how to do so, in 3 easy steps (and one harder one):
1. Create a Medium account. Add a headshot and bio.
2. Send an email to team@freecodecamp.com with your username, requesting to become a writer for our publication.
3. Write awesome posts.
4. Submit them to us. We'll review them and potentially publish them.
We get a lot of submissions. If we don’t publish yours immediately, message me in Gitter. I’ll give you the status of your submission and quick feedback.
Happy writing about coding!
If you liked this, click the 💚 below. Follow me and Free Code Camp for more articles on technology. | https://medium.com/free-code-camp/this-weekend-we-launched-our-open-source-community-s-medium-publication-52954c08adea | ['Quincy Larson'] | 2016-06-29 02:23:40.765000+00:00 | ['Technology', 'Social Media', 'Marketing', 'Writing', 'Design'] |
2,968 | THE MODERN STEALTH FIGHTER TECHNOLOGY | ABSTRACT
There have been countless innovations in the field of military aerospace as nations battle in the air, and stealth technology has undoubtedly been the most compelling of them all, making fighter jets "invisible" and hence invincible. This article presents an overview of the science behind it, including the history, the basics of radar technology, and how the materials work and what they are made from. Unorthodox methods for detecting stealth jets are also explored, to investigate the other end of the spectrum and how stealth can be overcome.
INTRODUCTION
Ever since the inception of war, soldiers and civilians alike have wanted to hide from their enemies to stay out of danger. An excellent way of doing so is camouflage, evading detection altogether. Warplanes, though fast and agile by the 1940s, were still detected by the enemy through simple radar technology, making their moves predictable and therefore ineffective. With the advancement of stealth technology, fighters became more agile and untraceable to even the latest radar systems, posing a huge threat to the camp on the other side.
The history of stealth technology can be traced back to 1958, during the Cold War. The United States researched stealth technology for its U-2 spy plane flying over the Soviet Union; the effort was unsuccessful, but it was a good first step in the right direction.
Various kinds of shapes, sizes and materials are utilized to operate in stealth mode and give no notification to adversaries [1]. To become invincible, there is not one but multiple technologies that come together, allowing commanders to move with impunity. The exponential rise in stealth innovation could be thought to have culminated in the manufacture of the modern F-35 and other similar aircraft, making it a huge threat in its time.
Figure 1: Stealth jet, Lockheed Martin F-35 Lightning II
RADAR TECHNOLOGY
To understand how stealth fighters are masked from radar systems, it is essential to know the nuts and bolts of radar itself. Radar simply uses electromagnetic sensors which recognize objects at substantial distances. As it is an active sensing apparatus, it sends out its own radiation when pinpointing desired objects: it emits EM waves and analyzes the echoes of the reflected energy, imaging them on a display. Incidentally, it was during the World Wars that radar technology picked up speed, and many countries experimented with it to learn where their enemies were in the air and even at sea.
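The basic ranging principle can be written in a single line (a standard textbook relation, quoted here purely for illustration): if an echo returns after a delay $\Delta t$, the range to the target is

R = \frac{c \, \Delta t}{2}

where $c$ is the speed of light and the factor of 2 accounts for the pulse's round trip.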
As time went by, many countries gained enough knowledge to operate radar, and the enemy's moves and projectiles became well-known and could easily be dodged using straightforward predictions. This drove the need for stealth technology that could make a jet not completely invisible, but hard for an enemy to track.
RADAR CROSS-SECTION
The physics of radar technology became more concrete with the concept of the Radar Cross-Section (RCS), which is essentially the EM signature of the traced object on the radar. This made stealth more tractable, as it gave a quantifiable measure of how detectable an object is. The goal for stealth aircraft is to keep the RCS as low as possible, giving the operators impunity to attack others. There are numerous factors that affect the RCS of an object, some of which are the material, absolute size, directivity, and the incident angle of the transmitted beam [2]. It is noteworthy that while stealth aircraft aim for lower RCS values, commercial aircraft need a higher value, as they need to be on the radar at all times.
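To make the role of the RCS quantitative, the classical radar range equation (again a standard textbook form, stated here for illustration) relates the power received back at the radar to the target's RCS, denoted $\sigma$:

P_r = \frac{P_t \, G_t \, G_r \, \lambda^2 \, \sigma}{(4\pi)^3 \, R^4}

where $P_t$ is the transmitted power, $G_t$ and $G_r$ are the transmit and receive antenna gains, $\lambda$ is the wavelength and $R$ is the range. Received power scales linearly with $\sigma$ but falls off as $R^4$, so the maximum detection range only scales as $\sigma^{1/4}$: cutting the RCS by a factor of 10,000 cuts the detection range by a factor of just 10.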
Figure 2: Typical Radar Cross-Section
In terms of shape, the aircraft should be fabricated in a way that reduces its RCS. Take, for instance, Lockheed's F-117 Nighthawk, which has a smooth surface that is very angled and flat. The purpose of the angling is that when an EM wave hits the surface of the aircraft, the sharply angled surface bounces the wave into a forward-scattered direction, not letting the signal be reflected back [3]. The bottom line is that the incident radar beam hits the skin of the aircraft and reflects away from the source rather than back along the normal. In contrast, commercial airplanes have curvy surfaces, ensuring that at least some of the signal will be returned in the normal direction.
MATERIALS SCIENCE
No matter the use of a vehicle, the material it is made of has significant implications for its application and operation, and stealth technology is no exception. Apart from deflecting the EM signal from radar sources, Radiation-Absorbent Material (RAM), as the name suggests, absorbs the wave energy, preventing reflection of it in any way. It is an innovative material that soaks up radio-frequency radiation as effectively as possible, lowering the power of the reflected radiation and hence lowering the RCS. The performance of the material is subject to its composition, as the level of absorbency at different frequencies varies between materials [4]. It is significant that coating materials on the surface of the aircraft will not make it completely invisible at every frequency, but it does lessen the RCS.
One of the most common RAMs in the stealth technology sphere is iron ball paint. It is well-known that energy cannot be created or destroyed, and that is exactly the principle behind iron ball paint. The iron ball coating will convert the radar energy received from various sources into heat energy, which is then dissipated from the aircraft. This is possible because the paint contains carbonyl iron: the radar waves bring about molecular oscillations because of the alternating magnetic field they induce in the iron paint. There have been multiple variations of this type, although the core principles remain the same.
For instance, the infamous F-117 Nighthawk employs a similar technique. It is important, to say the least, to understand the electrochemical fabrication process that enables this radar evasion. The carbonyl iron balls are individually suspended in commonly used epoxy paint [5]. Each ball is coated with a layer of silicon dioxide, whose purpose is to act as an insulator and negate any electrical reactions. While the paint is still liquid, a magnetic field is applied such that it penetrates the suspended balls, creating a magnetic field pattern inside them. As the paint hardens and solidifies, the magnetic field is held in place, locking the balls into the magnetic pattern. Once the paint is made, it goes without saying that its application to the aircraft is just as crucial. Remarkably, the paint job is done by industrial robots, which layer the paint on the F-117 with a known, precise thickness and controlled liquid properties.
Figure 3: Iron ball paint coating on the F-35
The iron ball paint method as a RAM has been used by several air forces, such as Taiwan and mainland China. Noticeably, the Taiwanese RAM stealth aircraft are a response to their opponent China, who were the first to employ stealth fighters [6]. This shows how keenly countries are striving to become military heavyweights by using state-of-the-art technology. There are various other materials embedded in fighter aircraft, such as foam absorbers, split-ring absorbers, etc. Most of these use the physical and chemical properties of the materials, with nuances that differentiate how efficiently they evade radar.
DETECTING STEALTH AIRCRAFTS
The realm of stealth technology is certainly vast and rapidly improving with advancing science, but on the other end of the spectrum, what can be done to detect these "invisible" aircraft, which could be literally fatal for some states?
There are a number of tricks for spotting stealth aircraft, but one of the most promising technologies is the quantum radar [7]. Though there are sizable organizations and regions working in the field, Canada's University of Waterloo has invested millions in its design of the quantum radar. The radar is expected to pinpoint aircraft with higher accuracy than conventional ones and, moreover, to perceive sly stealth aircraft. It is well known that the quantum world has given a large boost to existing technology in many fields, such as computing and telecommunications, and it is certainly the case for radar applications.
Quantum illumination works on the principle of quantum entanglement, which is a physical feature of quantum particles. Entanglement entails that when photon particles are connected, no matter the distance between them, the state of each one is known to the other photon and continues to affect it. This correlation between paired photons is the basis of detecting stealth aircraft.
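As a minimal illustration (standard notation, not specific to any one design), an entangled photon pair can be written as the Bell state

|\psi\rangle = \frac{1}{\sqrt{2}} \left( |0\rangle_A |1\rangle_B + |1\rangle_A |0\rangle_B \right)

where a measurement on photon A immediately fixes the outcome of the corresponding measurement on photon B, however far apart they are. In quantum illumination, one photon of each pair is retained at the receiver while its partner is transmitted; returning photons are then matched against their retained partners, which helps pick the genuine echo out of background noise.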
Under traditional circumstances, the photons emitted by the radar will get bounced off the surface of the stealth aircraft due to its sharp angles, as mentioned before, making it difficult to see on radar. However, using the quantum radar would yield a different result when in contact with the object [8]. Because the photons are entangled, as they bounce off the surface of the aircraft, the detecting source can trace back where the photons were fired at and keep shooting more of them until a picture of the aircraft is built up, making it known to the radar.
Figure 4: Diagram depiction of photon entanglement
CHALLENGES IN STEALTH TECHNOLOGY
Although, as seen, there are rapid innovations in the domain of stealth, there are several setbacks that have to be dealt with in order to bring about fully-fledged stealth. In terms of the Radiation-Absorbent Materials mentioned, it is critical to note that not all of the materials used work at all radio frequencies; they are designed for specific radar frequencies. With regards to application on the aircraft, the coatings are not easy to apply to moving or intricate surfaces, which are common on stealth aircraft.
Moreover, the quantum radars used to detect these aircraft are very much in their preliminary stages and would need substantial design work to implement. Primarily, the engineering challenge for the quantum radar is the miniaturization of the electrical components in the system. It goes without saying that this will require more toil to integrate these complex photon detectors into smaller assemblies, which can make the detection more effective.
CONCLUSION
The advent of special materials and quantum technology has fostered the development of stealth technology. Even though there might be several drawbacks on either side of the spectrum, the benefits most definitely outweigh them, and with reassuring research and development, it is only going to get better with time.
All in all, in this brief article, we have looked at stealth innovation and how it operates, from contemporary radar technology and the radar cross-section to the fascinating stealth materials used, not to mention the revitalized modern quantum radar. The huge potential of stealth has provoked many countries to take part in it, and it is bringing about a mini arms race within military forces, making its future prospects truly riveting.
REFERENCES | https://medium.com/@rahmanfahd/the-modern-stealth-fighter-technology-and-the-technology-for-detecting-stealth-fighters-61f0a517a924 | ['Wavoo Fahd'] | 2021-09-12 02:23:10.837000+00:00 | ['Travel', 'Technology', 'Aviation', 'Stealth', 'Innovation'] |
2,969 | REVIEW: Unicomp’s “New Model M” mechanical buckling spring keyboard (2021) | As I tried to jot down the main things I like and dislike about this board, I realised that for almost every positive thing I had to say, I had at least one related negative point as well… so that’s precisely how I’m going to structure this. For the majority of this review, I’ll hit you with a positive, then follow it up with something correlated that I wasn’t so keen on.
Key feel
First things first… is this actually any good to type on?
GOOD: The keys feel quite excellent. This is my primary concern so when it comes to getting me on-side, it’s a big win for the Unicomp. The New Model M is extremely satisfying to type on, particularly given that the mechanical board it replaced was a Corsair “gaming” keyboard with Cherry MX Blue slider switches of the sort of board that the mech keyboard community would quite justifiably turn its collective nose up at, and which I only bought in the first place because it was in a clearance sale to get rid of obsolete stock. Next to that board and more or less every other mechanical board I’d tried previously, this thing is a revelation. The amount of force required to actuate a switch is alarming compared to the MX Blues, which wasn’t particularly surprising given how the mechanisms differ. As I mentioned before, I don’t have a long history with buckling spring keyboards, so it’s entirely possible that the Unicomp one is crap compared to a mid-80s IBM, but from what research I did before buying one, I don’t get the impression that that’s the case. There are those who will tell you that they don’t hold a candle to the original boards from IBM or even Lexmark, but I get the feeling that part of the problem is that the older boards are unavoidably affected by their age and as such have a different key feel due to wear and tear. I also think that part of it is probably that the manufacturing tolerances and minor details in the method of manufacture had been gradually changing over a period of many years, long before the Model M manufacturing duties ended up in the hands of Unicomp; I suspect that you’d find a Unicomp Model M to be quite similar to a later Lexmark Model M, but that both would be more noticeably different from a 1985 IBM one. I can’t say from first-hand experience, though I have noticed that others who do have first-hand experience of a lot of Model Ms seem to be saying something to that effect (notably Thomas Ran, a.k.a. Chyros on Deskthority or Chyrosran22 on YouTube).
BAD: Sod all! I love typing on this keyboard. I know I just said that I had a negative for most positive points I had to discuss, but this one is the significant exception: I really have nothing bad to say about the key feel on this board. Again, I might have if I’d used Model M boards for the past 20 years, but I haven’t, so I can’t claim to know if they nailed it or not. If I were really reaching, I might point out that the force required to press a key properly might come as an unpleasant surprise to some, but I don’t think that’s a fair criticism because I suspect it’s about right for a Model M based on everything I know about the history of buckling spring boards. Just something to be aware of, but not really a criticism at all, in my view. Personally, I’m a big fan; it just took maybe a couple of days of adjustment. Now that I’m used to it, I just switched to testing out a different board (which I’ll probably write about separately another day) and found myself absolutely hammering the poor thing to death because it uses completely different mechanical slider switches that are much easier to press.
Quality control
One of the things that Unicomp has taken some heat for over the years is the quality of their keycaps, which (as I briefly mentioned before) was put down to the age of their manufacturing equipment, which is the exact same kit that was being used in the mid-to-late 90s by Lexmark (in Greenock, Scotland, I believe). The story goes that IBM sold the vast majority of their keyboard manufacturing business to Lexington-based company Lexmark in the early 90s, with only a very limited amount of first-party IBM manufacturing continuing beyond that point in Greenock of all places (which is only a few scant miles from me!) and Lexmark taking on the bulk of it moving forward. Lexmark was making boards for IBM in more than one location, one of which was Greenock as well, although I believe they were separate facilities (one IBM, one Lexmark, just both in the same Scottish town). When Lexmark eventually stopped making the Model M toward the mid-to-late 90s, some key Lexmark employees bought up a bunch of the manufacturing equipment and went on to form Unicomp, where they used the same old kit to continue making the Model M for years thereafter, although they moved all the manufacturing exclusively to Lexington in Kentucky, USA. (IBM actually continued making Model Ms in Greenock as late as 1998, I think, but when they shut that down they didn’t move it elsewhere, that was the end of it.)
The trouble with the equipment Unicomp inherited from Lexmark is that it was already starting to show its age by the time it changed hands, but they weren’t really making enough of a profit out of the manufacturing and sales of the boards to justify the pretty considerable expenditure of upgrading the old gear. Owing perhaps to the recent resurgence in the level of interest people seem to have in mechanical keyboards, Unicomp has finally decided that they can afford to spend the cash on some long overdue upgrades, which is how we’re getting better keycaps and two new keyboard models in the “New Model M” and the “Mini M” (which is more or less a reborn Space Saving Keyboard, or SSK) in the first place. There’s a caveat, I hesitate to add: not all of their keyboards fully benefit from the new/upgraded tooling, since they’re still using the old moulds for all other models, as far as I am aware. If you buy something like an “Ultra Classic” (the previous iteration of the main PC version of the Model M) or “PC-122” (their take on IBM’s giant 122-key scancode set 3 terminal keyboards), it is my understanding that you are still getting a chassis made from the old and somewhat worn out moulds. Their website actually goes so far as to heavily discourage ordering anything other than the New Model M or Mini M, which seems to support this. On most of the product pages, they now have this “product announcement” in gigantic red text to try to divert your attention to the New Model M / Mini M instead of their older boards (this one is how it appears on the Ultra Classic page at https://www.pckeyboard.com/page/category/UltraClassic):
Seems Unicomp *really* doesn’t want to sell you their older products…
So… were the upgrades worth the expense?
GOOD: The dye sublimation is impeccable and the moulding of the vast majority of keys is almost flawless.
The positive impact of the upgraded tooling really shows and I’m glad they’re finally addressing one of the main criticisms that has been directed at their output for years. There is no fuzzy lettering or weak, faded looking dye work going on here. The moulds for the keys aren’t absolutely perfect, but most of the few remaining rough patches are just at the pour-point of the mould, which is at the far edge of each keycap so you can’t see it when using the board (and this was always the case on IBM keyboards anyway).
But hang on, didn’t I just say that I wasn’t that experienced with Model Ms? Isn’t this my first Unicomp? How would I know if it’s any better or worse than their previous products, or any other Model M? Well… unfortunately, this is where things start going downhill.
BAD: The excellent dye sublimation is ruined by rather poor misalignment of the legends on several keys, and there are still some noticeable moulding issues present on both the keycaps and the case even with their manufacturing upgrades.
The whole F row is off, including the very first key on the top of the board (Esc), both legends on the 5/% key are much further to the right than the others in its row, and the nav cluster is all over the place. It’s so distracting to me that I’m going to have to switch them all for either classic keycaps off an old board or blanks that have no legends at all, lest it drive me insane.
The nav cluster alignment… isn’t great. The function row is similar.
There’s also one standout instance of a key having its legend drastically misaligned by design, as well, rather than by mistake: the Return arrow legend on the tall ISO Return/Enter key is in completely the wrong location on the keycap. I don’t think this one is just because of bad aim, though: I think it’s because there’s a version of it that would have text, and they’ve simply removed the text from the process of printing that version of the key and kept the arrow legend in the same position (because UK Model M boards generally don’t have text on special keys that have icon legends). The arrow legend should be near the top of the keycap as it is on pretty much all classic ISO Model Ms that aren’t terminal boards (terminal ones tend to make this key “Field Exit” instead) but on all the Unicomp ISO Enter keys I’ve seen, it’s more like 1/3 of the way down the keycap, which is way too low and looks wrong.
Due to the closeness of this shot, there’s a certain amount of lens distortion caused by the camera, but you can nonetheless see that many of the legends on the keys are noticeably misaligned. The whole Function row looks wrong (is it supposed to be vertically centred or not?), Esc is aligned differently from the rest of the row, the 5 key is further to the right than the other numbers, and you can see what I was saying about Return/Enter.
Many of the additional keycaps I ordered to customise the board with were quite shockingly bad compared to the stock ones installed on the board, so I ended up using almost none of them: I ordered a staggering number of custom keys so I’d have plenty of options, but as of right now, the only ones I actually have on the board are a novelty one with the Linux mascot “Tux” on it, and legacy Windows ones for the left Windows key and Menu key. There are major moulding defects on most of them, the legends are REALLY badly misaligned, and the sublimation is less impressive than the preinstalled keys as well. This — partially — is why I feel I can say with some confidence that the upgrades to Unicomp’s manufacturing equipment has made a noticeable positive difference on their current keycaps. It looks like many of the custom keycap sets that I ordered are older, manufactured before they upgraded the tooling. The difference is night and day.
Side view of the Caps Lock key from one of the two Linux keysets I ordered. Very rough indeed.
There are also some keys on the board itself with moulding issues, most notably the left Ctrl key, which essentially has a dent in it, except that it seems like it’s just moulded that way rather than being damaged after the fact. The board was extremely well packaged, so it clearly didn’t happen in transit.
Moulding issue on left Ctrl key (stock key, not a swapped-in alternate)
Finally, the case itself is not without flaws: mine has a very distracting dark vertical line running down the black case right in front of the space bar:
Moulding defect in my New Model M chassis
When I sent the first pic to Unicomp they claimed they couldn’t see the problem, so I took this horrible one with flash on to make it more obvious
Had this been on the underside or otherwise a little less obtrusive, I wouldn’t care as much… but it’s right there, man. Every time I look at the board, it’s the first thing I see. (Yes, I touch type, but I still want the thing to look good at this kind of price point.)
Function
The New Model M isn’t quite as revolutionary as the Mini M, but there are still some alterations to the full size design compared to its previous iteration in the form of the Ultra Classic.
GOOD: The New Model M arranges the keys on the bottom row more like the original IBM versions.
It still has the extra keys, but they’re located in places that mean any muscle memory you might have built up using a classic Model M (or one of the dozens upon dozens of keyboards that stole its layout) won’t be broken when you switch over to this keyboard. I’ve actually already talked about this a bit over on Reddit, where I shared this extremely professional MS Paint diagram to show what I was on about:
Diagrammatic representation of the bottom two rows on the New Model M compared to the original IBM design. If you remove the Windows keys and the Menu key, then extend the space bar to fill the space freed up by the right Windows key, you get the original Model M key arrangement.
This “new” layout may seem daft and counterintuitive at first glance, but it in fact more accurately reflects the key placement on a classic Model M. The extra keys have been added into space that was either completely unused on the old Model M or taken up by the right-hand end of the space bar, which they’ve shortened to make room. This has the downside of meaning that the key added to the right of the space bar is a weirdly shaped Windows key that’s 25% wider than the Windows key on the left side, but because everything else is in the right place, you can at least remove that extra key and swap the shorter space bar for a longer one instead, which very nearly (neeeeearly) turns the “New Model M” bottom row into a classic Model M bottom row except with the two gaps at the sides filled in (between both sets of Ctrl and Alt keys). As it happens, this is one of the first things that I did: I ordered a spare short space bar and one full-length space bar with my New Model M, and I installed the long one over the top of the right Windows key spot very quickly. The asymmetrical bottom row doesn’t sit right with me and I want to be able to switch between this and other Model M-style boards without my right thumb ever hitting the right Windows key by accident when I want a space instead. (I don’t think I’ve ever used the right Windows key in my entire life, although the left one gets some heavy abuse from my left pinky.)
My New Model M with some of the many additional keycaps I ordered so I’d have lots of customisation options. For the pic here, I’ve stuck blank black keycaps on the left Windows key and Menu key to show where there would have been gaps in the classic Model M layout, and installed the long space bar, which runs over the top of the space that had a right Windows key in it by default
BAD: You can’t opt for the most true-to-the-original bottom row layout unless you pick US ANSI configuration, you’re stuck with Windows & Menu keys regardless, and the space bar on the New Model M appears to be slightly misaligned on the chassis itself (so I can’t fix it without replacing the entire case).
That last one is the real killer, but I’ll explain the other elements first because they contextualise it a bit.
One of the things that the Model M was infamous for was stubbornly refusing to include Windows and Menu keys, ever (which were introduced with the frankly hideous Microsoft Natural Keyboard in 1994 for use specifically with Windows 95). This changed at some point when Unicomp took the reigns, although I’m not certain if they added these keys right away or if that only happened fairly recently. Either way, I would have much preferred to have the option to exclude them. This might seem like a nitpick, but it goes back to the same reasoning I gave for not wanting the short space bar: muscle memory. I want to be able to pick up a classic Model M or some other board that uses an equivalent layout, and lacks Windows or Menu keys. If I keep reaching for the left Windows key — which I know many people never use, but I probably use approximately seven thousand times in a typical day — and then I switch to a board that doesn’t have one, it’s going to be frustrating for quite a while until I adjust. What I do to combat this is map the Caps Lock key as Windows key, then map Shift+Win to toggle Caps Lock. Frankly, I almost never use Caps Lock anyway; more likely, I’ll just hold Shift, EVEN IF I WANT TO TYPE IN ALL CAPS. It’s not that hard. I don’t really see the point in a Caps Lock key, but I can keep the functionality there without needing a key entirely dedicated to that sole purpose. But in any case, I don’t have this option: Unicomp doesn’t offer it, and I can’t simply remove the keys because that would just leave the underlying keyswitch barrel exposed, which would not only be unsightly but would be somewhat detrimental to the health of the board because it would leave the barrels (and the springs inside them, unless I pulled those out) open to dust and dirt. Furthermore, to diverge briefly into aesthetics for a sec (which I’ll get into more thoroughly in the next section), the legends that Unicomp has chosen for the stock Windows and Menu keys are ostentatious and repulsive to my eyes, although I’m sure that their appearance is partly due to limitations imposed by Microsoft on how they licence these keys. Still, seeing Windows 8 logos on a keyboard that was fundamentally designed in the mid-80s nonetheless drags the overall look of the board down; I couldn’t keep those on any board I buy from Unicomp, personally.
These do not belong on any Model M, if you ask me (you didn’t, but I’ll tell you anyway)
As I mentioned previously though, I was at least able to swap out the shortened space bar for a longer one, which is obviously a good thing. What’s less good about it is that Unicomp offer this out of the box, but only if you order a US ANSI layout board. If you want any other layout whatsoever, they force you to pay a $15 uplift and you need to take the layout with the shorter space bar. Why this is, I don’t know, because there should be zero difference between the bottom row of the ANSI and ISO layouts. So instead of just choosing the “long space bar” option as I could have if I’d wanted a US layout keyboard, I had to pay an extra $15 to get a UK ISO layout with a short space bar, then another $4 for a separate long space bar, then pull off the stock space bar and right Windows key and install the long space bar over the top of the unused barrel and switch. There’s a redundant switch flipper and spring still sitting underneath the right end of my space bar now, which doesn’t appear to make any discernable difference whatsoever to the function of the board, but it feels a bit pointless and I would’ve rather paid slightly less to have it arrive like that out of the box. The stabiliser on the stock space bar was (barely and unevenly) factory lubricated, but I had to add a bit of lithium grease to the alternate longer one to avoid it rattling. (I tried damping the loose spring floating in the unused barrel under the space bar just in case it made any difference to the sound, but it didn’t, so I’ve just left the spring dangling in there so I don’t lose it or whatever if I take it out.)
My big problem here, though, is that the space bar is misaligned. No, this is not because I swapped it out for the longer one: it was off before I switched them, and it remains identical after switching them. It appears that the chassis itself must have the barrel for the space bar moulded just ever so slightly too far to the left, by about 1 mm or so, which results in the space bar riding so close to the left Alt key that it almost rubs against it when you press either of the keys. Conversely, on the opposite side, there is a very noticeable gap between space bar and right Alt. Considering what I was just saying about me switching out the shorter space bar and removing a perfectly functional key just to achieve symmetry, you’ll hopefully understand why I find this infuriating. It’s very difficult to photograph, but I gave it a try: | https://medium.com/@an_achronism/review-unicomps-new-model-m-mechanical-buckling-spring-keyboard-2021-8ba6903393ee | [] | 2021-04-29 02:37:25.497000+00:00 | ['IBM', 'Keyboard', 'Technology', 'Mechanical Keyboards', 'Review'] |
2,970 | Razer Kaira Pro Wireless Xbox Gaming Headset Review | Razer Kaira Pro Wireless Xbox Gaming Headset Review
Photo taken by the author.
NOTE: Razer graciously sent me a final retail unit of this headset to review alongside marketing assets and technical information. They also took the time to chat with me in a short video call about the design of the product. No money changed hands and I had full editorial control over this article.
As per my reviews policy, this article will never be monetized, but other additional content about this gaming headset, such as comparison articles I write in the future, might be. My posts contain zero affiliate links as I don’t personally believe in the practice.
Once again, Razer has surpassed my expectations with a surprising new headset. The Kaira Pro is an exciting new design that packs in all the features I’d expect from a premium gaming headset, and it’s also a far cry from the market trend of recycling an old model with Xbox connectivity shoved in.
Xbox Wireless support is still not a common feature in the gaming headset world. Microsoft based the wireless system in both the older Xbox One and newer Series X|S consoles on Wi-Fi Direct. Their proprietary protocol means that companies need to pay a licensing fee and go through a certification process in order to release a wireless headset for Xbox consoles. There’s two typical design routes that tech companies can choose from. They can either license a secondary USB dongle as a virtual Xbox controller (as controllers have audio support built-in), or go the tougher route and integrate the Xbox Wireless hardware directly into their headset.
In order to help mitigate these extra licensing costs, companies will often recycle an existing headset design for their Xbox version. They also sometimes pass the licensing costs on to the consumer, which inflates the price of Xbox headsets compared to PlayStation or PC models.
Official marketing image provided by Razer.
Razer did things differently with the Kaira Pro, their new Xbox headset that sells for $149.99 (official site here). This is a brand-new design, based in part on the excellent foundation of the BlackShark V2 series and the Razer Opus. It’s a closed-back design with both Xbox Wireless and Bluetooth 5.0 connectivity, Chroma RGB lighting that’s customizable with a new Xbox app, a detachable boom microphone, and a second built-in microphone for things like taking calls.
The Kaira Pro charges over USB-C, and Razer says it’ll take about four hours to charge a completely dead battery. The USB-C port is a bit recessed into the ear cup, and the divot is more square-shaped than most of the USB-C cables I own, so you may have to use the included cable to charge it. It’s a nice braided cable, similar in quality to the one included with the Xbox Elite controller. Battery life is rated at 15 hours with lights on and 20 hours with them off…and I consistently beat those numbers by a few hours in my testing, so that’s great.
Official marketing image provided by Razer.
Sound is handled by Razer’s “TriForce” drivers, which use a triple chamber design and are coated in titanium. These same drivers are inside the BlackShark V2 and BlackShark V2 Pro, and a non-titanium-coated version was used in the BlackShark V2 X (one of the year’s best budget headsets). The design allows Razer’s engineers to fine-tune the sound of the drivers more precisely, and the results are remarkable.
Bass is energetic and thumpy without any hint of bleed into the rest of the audio. Mids and highs are both clean and detailed, with just a hint of extra energy up top that should help with positional accuracy. Soundstage is wider than the average closed headset, too. This is all with the Kaira Pro set to its “default” EQ, which is one of four available settings. There’s also a bass preset and an FPS preset, which you can toggle to by double-tapping the Xbox pairing button. The final preset slot is fully customizable through the Razer Headset Setup app.
Screenshot taken by the author.
That app is wonderful, allowing the type of tweaking usually reserved for PC headsets. You can set up your custom EQ, adjust the lighting effects, adjust the EQ of the microphone (which is awesome), and also activate mic monitoring. The app is available in both the Xbox and Windows 10 stores, and will sync your settings instantly.
The Kaira Pro is first and foremost designed for Xbox gameplay, and it uses a dongle-less design that syncs directly to your console just like a controller. This means that once you’re synced to a console, you can also use the headset to turn the machine on. I used mine extensively with my Xbox Series S, playing hours of games I’m familiar with like Control, Borderlands 3, and Watch Dogs Legion. The headset handled the complex soundscapes of these games with ease, whether I listened in standard stereo mode or with Windows Sonic spatial audio turned on.
Screenshot taken by the author.
I also checked them out on PC, since I have an Xbox Wireless adapter. I used them for voice chat and Borderlands 3 multiplayer gameplay with a friend as part of a regular weekly online gaming session, and he said the mic sounded wonderful and was nigh-indistinguishable from the wired mic on the Razer BlackShark V2 X. Like that model, Razer has employed their “Hyperclear” microphone here, and it sounds incredible thanks to the enhanced bandwidth provided by the Xbox wireless connection. It uses a huge-for-a-headset 9.9mm cardioid mic capsule with great background noise reduction. Here’s a quick mic test I recorded with my PC.
The Kaira Pro also provides Bluetooth 5.0 support through a separate pairing button. It doesn’t support any enhanced codecs like AptX, but it still sounds clean and crisp in this mode. You can use both connections simultaneously, if you want to bring in music and notifications from your phone, or something like that. You can also use the Bluetooth mode while on-the-go or away from your Xbox. The headset will seek an Xbox connection upon startup, but after a few minutes it’ll time out and fall back to Bluetooth mode only. It’s worth noting that if you’re in range of your synced Xbox, it will power the console on, and if the Xbox goes to sleep the headset will turn off.
Photo taken by the author.
That makes it potentially challenging to use the headset in Bluetooth-only mode if you’re just hanging out around your house. Normally that wouldn’t be a problem, but right now I live in an area under virus restrictions and I’m not spending any time in coffee shops or wandering outside my apartment. When I was synced to my PC’s Xbox adapter, this wasn’t an issue, since the adapter can’t turn on the PC, and I was able to check out the Bluetooth functions for several straight hours without my Xbox shutting down the headset. It sounds just a tiny bit more compressed, but the multi-function button responds well for the many different functions it is burdened with, and the built-in ear cup mic was clear when I tried it in a call.
Aside from two pairing buttons on the right cup, there’s also a handy game/chat balance dial. This functions only on Xbox, and allows for quick adjustment of audio balance with a beep at each extreme. The left cup has a volume knob and a mic mute switch that are both easy to find with your thumb.
Photo taken by the author.
As far as I know, this is the first Xbox headset to have full proper RGB lighting. It has all the same features you might expect from Razer’s Chroma RGB, including different effects (breathing, spectrum cycling) and a wide gamut of selectable colors. As it’s controlled with its own unique app, it won’t sync to your other Synapse-based devices, but it’s awesome to see this level of lighting control on a console headset. When the lights are off, the Razer logos practically blend into the headset, which is great for those that care about subtlety. I enjoyed leaving them on.
Comfort and build and both just as exceptional as the sound performance, and essentially best-in-class. The ear cushions use Razer’s “FlowKnit” mesh sports material to reduce heat build-up, and are filled with a nice memory foam. The headband feels a little stiff at first touch, but it perfectly spreads the weight of the headset across my head and I had no discomfort even in multiple three hour sessions. The ear cups have plenty of swivel and strong clicky numbered adjustments. I have two extra clicks of room on my large head so it should fit most heads fine, and my ears don’t touch anywhere inside the cups thanks to angled drivers and ample space.
Official marketing image provided by Razer.
Most of the headset’s frame is plastic, though there’s some prominent metal reinforcement where it counts right near the ear cup swivels and through the adjustments into the headband. I’ve had no squeaks or creaks after several days of heavy use, and I don’t expect that to change over time. The design language is more subtle than Razer’s pre-2020 headsets and headphones, aside from the green color accents. The colors perfectly match the Xbox Series X. The plastic has a nice matte texture to it, and it’s a little bit thicker and more premium than I was expecting.
This is a wonderful headset overall, and I have only one small caveat to mention that’s due to both the underlying tech and the space I used them in. As Xbox Wireless relies on Wi-Fi direct, wireless interference can cause some small issues. My apartment building is a nightmare field of 2.4ghz interference, and a handful of times while using the headset, I heard some brief static noise as it changed channels to find a cleaner signal. This didn’t happen often enough to frustrate me, and it doesn’t affect the Bluetooth connection. But if you’re sensitive to that sort of thing, be aware it can happen. Also, make sure to install the firmware update the packaging material encourages you to install!
If you don’t need the RGB and Bluetooth functions, and don’t mind a permanently attached microphone and slightly shorter battery life, Razer also sells a cheaper standard version of the Kaira for $99. It’s awesome to see them hit that low price point with this level of performance for Xbox users.
Photo taken by the author.
I’ve tried a lot of Xbox Wireless headsets over the past few years…from the bad (Stealth 700 Gen 1) to the good (Rig 800, CloudX Flight, Astro A20). This is my personal favorite so far. The Kaira Pro excels in every category. It combines excellent sound performance for gaming and music with an awesome microphone, all-day comfort, decent battery life, seamless Xbox support, a cool new app, and a solid backup Bluetooth connection. If you’re an Xbox gamer who also owns a Bluetooth device you want to listen to, it’s an easy recommendation. It’s also a good PC headset, though you will need a separate adapter, and Razer sells plenty of great alternatives for that platform that tie more directly into their ecosystem.
Between their acquisition of THX, the release of two truly great budget headsets in the BlackShark V2 X and the Kraken X, the excellent performance of the Opus, and now the premium Xbox and Bluetooth experience of the Kaira Pro, Razer has done a lot to excel in the audio space over the last couple of years. Their hard work is paying off.
I think it was really smart to release a new headset alongside the new Xbox consoles, and it’s an easy choice to go for whether you’ve upgraded to a new machine, or you have one of the older consoles and you’re looking for a better audio experience. This is the new gold standard by which other Xbox Wireless headsets will be judged going forward. | https://medium.com/@xander51/razer-kaira-pro-wireless-xbox-gaming-headset-review-7f49537a1493 | ['Alex Rowe'] | 2021-02-16 23:23:21.664000+00:00 | ['Gadgets', 'Tech', 'Gaming', 'Music', 'Technology'] |
2,971 | CRYPTOCURRENCY TRADING PLATFORM ATANI RAISES $6.25M IN SEED FUNDING | Crypto start-up organization Atani announced on Tuesday that the company raised $6.25M in seed funding. Leading European VC funds also participated in the seed round, successfully led by investor JME Ventures. Other companies who participated in the round were Encomenda Smart Capital, Lanai Partners, and Conexo Ventures, aside from individual investors, such as crypto experts, serial entrepreneurs, legal affairs, blockchain technology, crypto taxation, and finance experts.
The start-up plans to use the funds to expand its business globally, in addition to introducing premium features, such as API trading, for professional developers and traders.
Based in Madrid, Atani is essentially a non-custodial desktop app offering portfolio monitoring and trading on 22 exchanges, such as Binance, Bitstamp, and Coinbase Pro. The crypto platform also has a taxation reporting tool that provides an auto report valid in more than 30 nations.
Sharing more information about the platform, Paul Barroso, Co-founder & CEO of Atani, stated, “We started investing in Bitcoin back in 2013 and have experienced first-hand the growing sophistication and fragmentation of the crypto market. The pains of interacting with different exchanges, managing multiple trading tools, or dealing with taxes drove us to build Atani.”
The seed funding round was carried out after a pre-seed funding round of $750,000 held in May 2019. With this, the Atani group’s total funding stands at $7 million.
An innovative and all-inclusive non-custodial trading platform, Atani caters to both investors and traders. The platform has been designed to carry out trade execution, charting, and portfolio monitoring across more than 20 cryptocurrency exchanges via a single trading interface.
The platform can be accessed for free of cost on Mac OS, Linux, and Windows. Atani also gives importance to security and thus does not access user’s API keys or funds. Platform users connect to their crypto exchanges with the help of the API keys, which, in turn, are secured by AES-256 encryption.
Read more | https://medium.com/@platinumcryptoacademy/cryptocurrency-trading-platform-atani-raises-6-25m-in-seed-funding-673936483019 | ['Platinum Crypto Academy'] | 2021-04-09 10:47:22.467000+00:00 | ['Binance', 'Coinbase Pro', 'Blockchain', 'Blockchain Technology', 'Cryptocurrency'] |
2,972 | What are the main characteristics of Blockchain? | What are the main characteristics of Blockchain?
Some of the main characteristics of Blockchain are as follows:
Global: In Blockchain, everyone can access and read entries in the database. It is globally accessible.
Transaction: If you want to change any value in Blockchain, then you have to create a transaction. This transaction has to be acceptable by other participants in Blockchain. Let say we want to transfer electronic currency from one account to another account in Blockchain, the transaction will be complete only when money is subtracted from sender and added to recipient. Crypto-signed: In Blockchain, a transaction has to be cryptographically signed by the sender. Due to this, only the person creating the transaction can transfer money.
Double spend: In case there are two transactions that want to subtract money from the same account. This may cause double-spending. In Blockchain this issue is overcome by maintaining the order of transactions. These transactions are bundled in a Block and these are executed one after another. Due to this, the second transaction may not be successful after the execution of the first transaction. | https://medium.com/@blockchaintrainer/what-are-the-main-characteristics-of-blockchain-13f5a2fa14c0 | ['Chintan Dave'] | 2020-11-13 16:37:55.343000+00:00 | ['Blockchain', 'Blockchain Technology', 'Blockchain Development', 'Blockchain Startup'] |
2,973 | OnePlus CEO confirms the first smartwatch to be released early next year | OnePlus CEO confirms the first smartwatch to be released early next year
OnePlus’s CEO, Pete Lau, has tweeted about a smartwatch that the company is currently developing. Lau has said that the watch is set to release ‘ early next year.’
Many of you said you wanted a watch, and as you might have heard over the weekend-we’re making one, to be released early next year. Wishes do come true.🎁 https://t.co/H1Fqv9srXj - Pete Lau (@PeteLau) December 22, 2020
Back in 2016, OnePlus’s co-founder shared designs of a circular smartwatch. At that time, OnePlus had already scrapped the idea of manufacturing a smartwatch; the company felt that it needed to focus on their smartphones before they jumped into wearables and other devices.
The company has since released many non-smartphone products; OnePlus Buds, the company’s truly wireless earbuds came out at the end of July earlier this year and their bullet wireless earphones have been in circulation for a while now.
No design specifics were revealed in the tweet by the CEO. OnePlus could keep the original circular dial design from the old sketches, or they could enter the square/rectangular-shaped smartwatches market alongside Apple and Huawei.
The Operating Software that the watch will house has also not been revealed yet. On the one hand, OnePlus could stick to Google’s Wear OS like the Amazfit and some watches by Fossil. On the other hand, OnePlus could enhance the Android experience by putting a skin over Google’s design.
The OnePlus 9 and the 9 Pro are set to come out in the March of 2021; the company’s first-ever smartwatch could also possibly be released alongside the phones. | https://medium.com/@fahadahmd-bs/oneplus-ceo-confirms-the-first-smartwatch-to-be-released-early-next-year-2650e5d3cdc | ['Fahad Ahmad'] | 2020-12-24 05:53:27.787000+00:00 | ['Samsung', 'Smartwatch', 'CEO', 'Oneplus', 'Technology'] |
2,974 | This Arduino-Controlled Dispenser Turns Bitcoin into Something Tangible: Candy! | Bitcoin has been making headlines for many years now, and a large percentage of the articles below those headlines focus on the inherent intangibility of the cryptocurrency. How can something that doesn’t exist in any physical form have value? If you have Bitcoin, it’s only as valuable as what you can exchange it for. To prove that those exchanges are practical, David Knezić has created a candy dispenser that accepts Bitcoin.
There’s now a candy dispenser that lets you pay in Bitcoin. (📷: David Knezić)
Aside from the obvious fact that everyone loves candy, Knezić’s build is important because it demonstrates the viability of very small Bitcoin transactions. The dispenser itself is built around an Arduino Micro, which is controlled by a computer through a USB serial connection. The computer monitors Bitcoin transactions through blockchain.info, which is a public website where you can see the entire Bitcoin blockchain — the log of all Bitcoin transfers. When the computer sees Bitcoin being transferred to the dispenser’s account, it tells the Arduino to dispense an appropriate amount of candy.
It works, but has two major shortcomings: time and fees. The Bitcoin blockchain isn’t updated instantly, and so it can take several minutes for the transaction to be verified and posted. Each transaction also carries a relatively high fee, which means that you may be paying more in fees then for the candy itself. Both of those might been solved soon by the Bitcoin Lightning Network, which is designed specifically for small transactions that need to be verified quickly. It’s yet to be seen if the Lightning Network will be successful, but the potential for small cryptocurrency transactions — like buying candy — is obvious. | https://medium.com/hacksters-blog/this-arduino-controlled-dispenser-turns-bitcoin-into-something-tangible-candy-b9bf34cee026 | ['Cameron Coward'] | 2018-05-29 21:12:37.153000+00:00 | ['Arduino', 'Blockchain', 'Bitcoin', 'Raspberry Pi', 'Technology'] |
2,975 | The Physics of the Graphene Revolution | The Graphene lattice. Image from: https://singularityhub.com/2018/08/05/beyond-graphene-the-promise-of-2d-materials/
It is a poorly kept secret that graphene is a material with the potential to revolutionise the world of electronics. Alongside its optical transparency, its flexibility and its unparalleled physical strength, it is among the most efficient conductors of electricity known to science.
In some ways, it is remarkable that graphene conducts electricity at all. It is, after all, a lattice of carbon, which occupies the “non-metal” bin of high-school chemistry, so we might expect that it would be a poor conductor of electricity like, say, crystalline sulfur or phosphorus. And indeed diamond, a crystal of carbon, is an extremely good insulator. So why is graphene any different?
The electrons around any atom occupy distinct regions of space known as orbitals. These can come in a wide range of different shapes and sizes, but for carbon there are only two types of orbital present: two spherical s-orbitals and one dumbbell shaped p-orbital. Each of these contain two of the carbon atom’s six electrons.
Examples of s and p atomic orbitals. The p orbital shown is in the z direction, which is the orbital left unhybridised in the carbon atoms of the graphene lattice. Image from the Encylcopaedia Brittanica https://www.britannica.com/science/orbital/images-videos
More exciting is what happens when atoms bond. To do this, orbitals combine in a process known as hybridisation, and this can happen in a few different ways. In diamond, the outer s-orbital and the p-orbital hybridise to form 4 sp³ hybrid orbitals, each containing one electron which can then be shared with a neighbouring carbon atom to form an atomic bond. This process utilises all of carbon’s 4 valence electrons, meaning that there are no electrons free to move to neighbouring carbon atoms and carry a current through the material.
In graphene, and its layered counterpart graphite, the hybridisation is different. Instead of forming 4 sp³ orbitals, the orbitals hybridise into 3 sp² hybrid orbitals, leaving one electron alone in an unhybridised p-orbital. It is the interactions between these orbitals of atoms in adjacent layers which hold crystals of graphene together. More importantly for our purposes, these orbitals are not involved in atomic bonding, so electrons can freely move between the unhybridised orbitals of neighbouring atoms, allowing a current to be carried through the crystal.
Ok, so graphene can conduct electricity. But why is it so good at it? The best measure of how conductive a material is, unsurprisingly, its conductivity. This is the ratio between the current generated by a voltage applied to a material, and the magnitude of that voltage, with some factors related to the size of the sample to allow comparison between materials. So, if a small voltage is applied to a material, and it generates a large current, then the material has a large conductivity and is a good conductor of electricity.
In the simplest terms, a voltage corresponds to extra energy given to electrons within a material. If that extra energy isn’t large enough to break electrons free of their atoms, then no current can flow at all, and we have an insulator. Conversely, if it is easy to break an electron free of its atom, most of the extra energy goes to its kinetic energy — the energy associated with its movement through the material — and the electron can move quickly. Current is the rate of flow of charge, so electrons moving quickly through a material corresponds to a large current.
The electrical conductivity of some common metals. Notice that group 11 elements occupy the top 3 most conductive metals. Image from http://wiki.robotz.com/index.php?title=File:Conductivitymetalchart0.jpg
For example, in the most conductive metals, group 11 elements such as gold, silver and copper, the outermost valence electron is on its own in a higher energy subshell than the rest of the atom’s electrons, meaning it takes comparatively little energy for it to break free from its atom. This means that when that electron receives energy from a voltage, most goes to kinetic energy meaning that it can move quickly through the metal. Combined with these outermost electrons of neighbouring sites all moving similarly fast, the current generated by the voltage is large, and these metals have extremely high conductivities.
So what about graphene? Graphene is a honeycomb lattice: its atoms form hexagons such as those one might find in the local beehive. We find that the dispersion relation, which describes how the energy of an electron varies with its momentum, has two bands: an upper band known as the conduction band, and a lower band known as the valence band. The gradient of the dispersion relation gives the electron group velocity, which can be broadly thought of as the velocity of the electrons as they move through the lattice.
The dispersion relation for a honeycomb lattice such as graphene. Here, momentum is normalised against the magnitude of the momentum at the K and K’ points, so the two bands meet at k=1 and k=-1.
Crucially, the bands touch at two momenta, which we refer to as the K and K’ points. Near these points, the dispersion relation is linear, which means that the momentum of the electrons increases at a constant rate as energy of those electrons increases. This is the behaviour of a massless particle!
When you increase the energy of a photon, for example, its momentum increases at a constant rate, namely that of the speed of light c. Just like photons can’t travel at any speed other than the speed of light, a conduction electron in graphene cannot travel at a speed other than a velocity of around 1 million m/s. So, even for extremely small energies, the electrons move at very high speeds. This means that these small voltages generate a large current, which mean graphene has a large conductivity.
The dispersion relation of a honeycomb lattice close to the K point, which has been defined as q=0. Colour has been used to differentiate the conduction band (red) and the valence band (blue). Notice that the dispersion relation is linear, such as would be expected for a massless particle.
However, for fast electrons to be generating a large current, they must, for the most part, be moving in the same direction through the material. Ideally, the voltage ensures this by setting up an electric field in the material, attracting the negatively charged electrons to the positive terminal, but the universe is not all sunshine and rainbows. Electrons can be deflected, or scattered, by numerous impediments. Indeed, a wire is an ever-changing assault course of impurities, vibrating atoms and other electrons, all of which can disrupt the progress of any one electron moving under the influence of the voltage.
We thus have another factor which affects conductivity: how often electrons are scattered as they move through a material. If the electrons are scattered often, then most will be prevented from moving in the direction of the electric field associated with the applied voltage, and the current associated with that voltage will therefore be small. This leads to low, poor conductivity.
A diagram of the generic honeycomb lattice, indicating the inequivalent A and B sites, as well as the primitive lattice vectors and nearest-neighbour vectors. Image from Gilles Montambaux. Artificial graphenes: Dirac matter beyond condensed matter. Comptes Rendus Physique, Elsevier Masson, 2018, 19 (5), pp.285–305. ff10.1016/j.crhy.2018.10.010f
Does this crush our dreams for graphene? Thankfully not. On a honeycomb lattice such as graphene, there are two “types” of lattice sites, which we call the A and B sublattice. Any electron state can be thought of a superposition of being located “on” the A sites and the B sites, and we can treat this behaviour, known as a pseudospin, in the same way that we do the spin of the electron. For example, an electron on an A site can be thought of as being spin up and an electron on a B site can be thought of as being spin down.
Importantly, for the electrons which carry the current in graphene, this pseudospin depends on the direction of the group velocity. Near the K point, an electron with a positive group velocity is spin up, while an electron with a negative group velocity is spin down. This coupling of spin and velocity is known as chirality, and we can think of it as electrons moving in a positive direction on the A sites and in a negative direction on the B sites.
The dispersion relation near the K point of a honeycomb lattice with the pseudospin in each band, here represented as left and right as opposed to up and down, superimposed. Note that any backscattering, represented by a horizontal translation on the graph, requires a flip of pseudospin direction.
So, what happens when a chiral electron encounters a scatterer? For it to scatter backwards, it would require a flip of this pseudospin since the group velocity changes sign. In other words, to backscatter would require a move from the A sublattice to the B sublattice, which is not possible in most cases. We therefore see complete suppression of backscattering, and an overall suppression of all scattering.
As such, electrons in graphene can travel for up to micrometre distances without scattering at all. This behaviour is known as ballistic transport, and this reduced scattering ensures that the full power of the speed of the conduction electrons in graphene can be brought to bear.
The final factor which affects the conductivity of a material is simply the density of conduction electrons within a material, known as charge carrier density. Electrons are what carry the charge and so the more of them that move, the greater the associated current. In graphene, we can control this quantity by doping. This is where atoms such as phosphorus or nitrogen, which can donate two electrons for conduction compared to carbon’s one, replace some carbon atoms in the lattice to increase the number of available electrons for carrying a current.
Even while the carrier density of graphene could never be as high as in most metals, its conductivity is still remarkable. The theoretical limit of its conductivity is up to a million times better than that of silver, the best known metallic conductor. Indeed, the conductivity of graphene samples is actually often limited by interactions with the material on which they are mounted, rather than the properties of the graphene itself.
Even so, for a material as light as graphene to exhibit such impressive conductivity is extremely exciting. For example, one of the major hurdles in the world of electric transport is the sheer weight of lithium-ion batteries which dramatically limits the range of such vehicles. Current graphene-based cells are an order of magnitude more efficient, an energy-to-mass ratio of 1000 Wh/kg, compared to 180 Wh/kg for Li-ion batteries.
Energy efficiency is as important a goal as any as our global society hurtles headlong into an increasingly inevitable climate crisis. More efficient conductors such as graphene, and potentially even the long elusive room-temperature superconductors, have the power to reduce energy losses in just transporting energy around our power grids. This could provide one of the many necessary spears in the phalanx of efforts to combat the encroaching climate disaster. | https://medium.com/predict/the-physics-of-the-graphene-revolution-3feef2b090b5 | ['Jason Segall'] | 2020-12-23 21:27:49.084000+00:00 | ['Electronics', 'Technology', 'Physics', 'Graphene', 'Science'] |
2,976 | Voice Assistants And Smart Home Technologies Transform The Lives Of People With Disabilities And The Elderly. | Voice Assistants And Smart Home Technologies Transform The Lives Of People With Disabilities And The Elderly. Debra Ruh Jul 17, 2020·11 min read
We are in the golden age of innovations, an era in which digital technology is transforming the underpinnings of human existence, transforming lives within the community of disability and the elderly.
LaMondre Pough CSO of Ruh Global IMPACT using Alexa
This revolution is impacting every industry, including the one you may have never considered: accessible and adaptive technology. From Artificial Intelligence and Machine Learning to Voice-Enabled Technology, the way we interact with technology is evolving right before our eyes.
Now, with advancements in voice technology such as natural language understanding, people with disabilities and the elderly are empowered to do more to improve humanity’s quality of life.
Technology has the power to transform all our lives; from the television to the smartphone, and from a video camera to a 3-D printer. Technology can promote improvements in efficiency, reliability, and speed, and can reach across boundaries of class, gender, and geography to deliver the benefits to all.
And while much of this technology is accessible and valuable to people of all ages and ability, it is in the world of assisted technology where we can find global technology companies such as Alexa, Google at the forefront of developing new devices, equipment, and apps that will enhance the day-to-day lives of the elderly and people with disabilities in assisted or independent living during the COVID-19 pandemic and beyond. Read more about Amazon donations in devices for communities around the world.
Disability affects more than 1 billion people on Earth, and in the United States, according to the American Community Survey (ACS), an annual survey conducted by the US. Census Bureau in 2018, the overall percentage of people with disabilities 13.1% of the U.S. population.
The population age 65 and over numbered 49.2 million in 2016 (the most recent year for which data are available). They represented 15.2% of the population, about one in every seven Americans.
Improving accessibility is very important for all these people. It is also essential for people concerned with electronics or the elderly. Indeed, seniors can face constraints due to the decrease of some of their capacities but also difficulties in learning and mastering the use of new technologies. The field of action of voice recognition is therefore very broad and crucial!
Voice technologies enhance independent living
There are many smart devices that can help improve the quality of life for the elderly and people with disabilities. They provide more independence for them and life easier for all users.
Individuals in the community of disability, and in a senior living community can talk to smart speakers the way they would speak naturally. Voice assistants allow members to speak naturally as they communicate what they need or request the information they are seeking, immediately benefiting those with limited dexterity or sight.
From an accessibility perspective, voice-controlled, home-based intelligent personal assistants (IPAs) have the potential to greatly expand speech interaction beyond dictation and screen reader output.
When you think about smartphones and smart home technology, you probably picture Millennials making their living spaces as convenient as possible. That is not an inaccurate picture, but it is not a complete one either. Boomers and seniors are an enthusiastic and growing market for these features, which can make living independently easier, safer, healthier, and more enjoyable for the 55-plus crowd.
Voice assistants, or virtual assistants, are more than just the cool, often-female voices that respond to your verbal requests: They are the point of communication between you and all your connected devices. But whether your voice assistant is the hub of your smart home, or simply a smartphone-based helper that tells you if it is raining, the best assistants streamline your relationship with technology.
Voice-controlled intelligent personal assistants (IPAs), have introduced a new interaction paradigm into the mainstream. These devices provide a conversational interface in the home to allow users to ask for and save information (e.g., check the weather, ask for the time, add to a shopping list), control smart home appliances, control home lighting or door locks by voice, and perform a range of online actions (e.g., shopping, banking).
Although some accessibility challenges exist, users with a range of disabilities are using Amazon Echo and Google Home, including for unexpected cases such as speech therapy and support for caregivers. Richer voice-based applications and solutions to support discoverability would be particularly useful to users with visual impairments. These findings should inform future work on accessible voice-based IPAs.
People with disabilities are very adept to deal with Artificial Intelligence (AI) and voice technologies. In the next few years, the big tech companies will put voice technologies in every home, to enable people to empower themselves and Another key aspect will be peer-to-peer learning, connecting people who can share experiences and creating a community of AI users.
The voice-enabled technology is truly a breakthrough in elderly care because they can accomplish tasks by simply doing something, they have been doing all their life –and that is through talking! Older adults, service providers, and caregivers are quickly taking to voice-assisted home care for its sheer simplicity and usability. As life expectancies increase, the time elders spend at home will also increase. Voice technology can make staying at home more comfortable and connected. For example, see how to use Alexa and Amazon Devices to stay informed and connected during the COVID-19 pandemic?.
According to Pew Research Center. Adults above the age of 74 are using more tech these days, too: the same study shows that nearly 40 percent of the Silent Generation have smartphones, and 68 percent of Baby Boomers own smartphones, and 59 percent use social media. In other words, older adults are increasingly open to using technology. What makes voice technology so great for senior living, though, is that it has no learning curve.
“The use of Smart Speaker Technology is a huge advantage for Senior’s with Dementia or Alzheimer’s. As their brain disconnects with how to physically perform tasks, (like using a light switch), asking a smart device to perform the task still offers them the ability to retain some level of independence and quality of life.” Richard Streitz COO Ruh Global IMPACT
Smart home technology transforms the lives of vulnerability
A growing number of regular household items such as light bulbs and door locks are becoming smart, one of the major benefits being that they can be controlled with your computer or smartphone — even if you are not home.
While each item usually includes its own dedicated app, if you choose to incorporate several smart items, you can opt for a system that allows you to control them all from one central hub. You can add a smart speaker into the mix. As these also include a microphone for voice commands, all the above to be operated simply by speaking aloud.
People with Disabilities
In recent years, creating a truly accessible home has become easier thanks to smart home technology. Gone are the days of needing to purchase a disability-specific, specialized device just to perform one simple task. Today the Internet of Things and a smartphone give us access to tools that can transform our environments.
For people living with chronic diseases, be it multiple sclerosis, arthritis, or any number of other conditions, reduced physical mobility can cause a problem when it comes to everyday tasks around the home. However, the rise of ‘smart’ homes — in which household items become connected — can greatly improve accessibility and be life-changing for people living with disabilities. The key change for many people with disabilities is the introduction of alternatives to typing information into a device and transform towards a world where input via voice — to a smart speaker or a personal assistant on a wearable device or smartphone, this technology has the potential to make their homes accessible in order to foster self-reliance and have the opportunity to live life as they choose.
Thanks to integrated systems such as Amazon Alexa and Google Assistant people with disabilities can now expand their smart home capabilities bit by bit, starting with what they need most.
The Elderly
We all have to eventually deal with the challenges of getting older and gradually losing some of the abilities that once came naturally to us. For elderly individuals, those shift’s inability and mobility often cause resentment and depressed moods. Then, their caregivers — which may be professionals but are often family members — struggle with feelings of worry and inadequacy, especially if they cannot always stay with their loved ones due to personal obligations.
Although smart home technologies cannot typically replace the types of assistance those devoted people give, companies are coming up with innovations that could let them know when something happens in a senior relative’s home, such as a fall.
Older adults are also at high risk for feeling lonely and isolated from social problems that smart home technology could help ease. However, emerging technology from smart devices and robotics companies may very well help.
A smart speaker with a voice-activated virtual assistant can be very helpful to seniors living alone — both as a tool and a digital companion. The smart speakers are easy to use, and they can improve the elderly living everyday lives. Voice assistants can offer everyday administrative, clinical, and emotional support to them.
On the administrative side, voice assistants can solve many of the common pain points in community-wide communication. Overall, communities struggle to circulate information, and residents with memory loss and mobility limitations may easily forget or miss announcements.
On the clinical side, smart speakers can also help the elderly manage their individual healthcare needs, and for the emotional, voice technology can help prevent isolation and loneliness in older adults.
Intelligent Voice Assistants
Artificial Intelligence (AI) is an increasingly critical competitive advantage for companies, especially the big technology companies that build voice personal assistants.
There are billions of voice assistants in use today. Those billions speak to the ubiquity of voice assistants today. Smart speakers and smartphones may be the most common way to interact with them, but televisions, cars, office equipment, and even clothing can all offer access to the voice of a powerful AI. The ways people use voice assistants are expanding almost as quickly as the options for interacting with them.
Voice assistants are on track to become a staple of modern tech use. NPR and Edison Research’s 2019 Smart Audio Report found that more than 53 million Americans now own a voice assistant. That is nearly one in three households. Voice assistants are changing our lives every day.
Smarter Voice Assistants
Amazon Alexa and Google Assistant are the market leaders when it comes to voice operating systems for both systems.
The best smart speaker for you will depend on several factors. For many of us, it is between two brands: one of the new options from the Amazon Echo range or one of the smart Google Home / Nest speakers.
Every device in both ranges, as well as the flagship Amazon Echo and Google Home / Nest speakers, make fantastic smart home devices.
What is more, Amazon Alexa and Google Assistant (the smart Artificial Intelligence voice assistants you will find inside the smart speakers) are both becoming more useful and even smarter with every passing day too.
Applications of Amazon Alexa and Google Assistant
The software behind the Google Assistant app and Amazon Alexa app is constantly improving. Every platform has its own focus points. For example, Amazon Alexa can be linked to more smart products, and, Google Assistant has its own cloud service to store your favorite music. Both platforms are compatible with both Android and iOS devices. The integration of existing apps on your smart device goes smoothly in both the Google Assistant and Amazon Alexa app.
For the smart home integration, the gap between Alexa and Google Assistant has closed.
Both Alexa and Google Assistant let you combine your devices into rooms, so you can say things like “turn on the living room lights,” and both support Routines, which let you combine multiple actions into one command.
Voice Assistants and maintaining our collective privacy
Digital assistant’s technology undoubtedly presents exciting opportunities, including making it easier for people to access the online world or control other devices. But public concerns about smart speakers have been expressed. Many of these focus on the seemingly intrusive aspects of the devices and the use of the data captured. Others have raised questions about their longer-term disruptive impact on the consumption of information, user profiling, and people’s relationship with technology. Now, the future is here, and this future is embedded, augmented, and ubiquitous.
Smart speakers can be found everywhere. They have recently undergone a massive transformation and run on operating systems that are fueled by artificial intelligence (AI).
They observe and collect data in real-time and have the capability to pull information from different sources such as smart devices and cloud services and put the information into context using AI to make sense of the situation. Although we have come a long way in the design and execution of these AI technologies.
In the digitally divided society, someone who is privacy savvy would not invite such equipment into their lives, while others may accept or rationalize such behaviors.
Respecting others’ privacy is a social norm that we must work together to maintain. First, we need to educate ourselves on CyberSafety and the potential risks of digital technologies. We should also be proactive in keeping current with the latest news on technologies and act when required.
Privacy law in the United States is fairly patchy right now. However, the direction of travel is unarguably towards tighter regulation and greater consumer rights.
“Although Privacy Laws are important, it is critical to balance privacy with the nature of why voice data is captured and stored. This provides the benefit for increased data sets for AI speech algorithms when smart devices are used by those that may have progressive speech difficulties.” Richard Streitz COO Ruh Global IMPACT
The government’s role in this complex paradigm is critical. We need stronger privacy laws to address privacy issues associated with personal digital assistants.
California, which has long been a frontrunner in protecting its residents’ privacy, has recently passed some of the strictest data protection legislation that the US has ever seen with the California Consumer Privacy Act (CCPA).
Now, companies such as Amazon, Google, and others are making the rules for addressing privacy issues with personal digital assistants.
Conclusion
Voice assistant technology has advanced rapidly in recent years, and the question naturally arises as to how this technology can be used to help promote the independence of people with disabilities and the elderly.
Coming to fulfilling the needs of the elderly and people with disabilities, intelligent voice assistants or assistive robots score higher than human caregivers. Although technology can only underpin and not replace human service, a well-programmed robotic assistant could be more patient and efficient than a human caregiver.
These smart devices allow persons with disabilities and people over 65 that are aging into disabilities to remain more independent and to have a better quality of life. My daughter Sara Ruh born with Down syndrome now lives in a supported apartment and she uses many smart home devices to stay independent. My husband Ed, has acquired Early Onset Dementia because of a Traumatic Brain Injury (TBI) he sustained when being hit by a car a child. We utilize many voice technologies to help him remain as independent as possible. | https://medium.com/@debraruh/voice-assistants-and-smart-home-technologies-transform-the-lives-of-people-with-disabilities-and-e3607968176 | ['Debra Ruh'] | 2020-07-17 20:28:08.075000+00:00 | ['Technology', 'Disability Rights', 'Smart Home'] |
2,977 | By 2025 one can expect to see many millions of electric AVs on our roads. | By 2025 one can expect to see many millions of electric AVs on our roads. By then too, electric aerial autonomous taxis could be available for longer journeys because they would usually be more practical for this than AVs. | https://medium.com/@ericbjensen/by-2025-one-can-expect-to-see-many-millions-of-electric-avs-on-our-roads-7e484dca17e3 | ['Eric Jensen'] | 2020-11-10 03:02:30.114000+00:00 | ['Technology', 'Autonomous Vehicles', 'Transportation', 'Self Driving Cars'] |
2,978 | From “MD” to “MKT”: digital marketing as a kick-start for career change | “Sports medical doctor with nutrition specialization”. That’s how I present myself professionally. Since 18 years old, when I got into med school, the impression is that my name has an extension. “Joan.med”. I love what I do, but sometimes it’s more of the same. Maybe that’s why I have more than one specialty. And they are the reason I’m connected to the digital world today.
The quantity of misinformation about sports medicine and nutrition is wide. It’s so easy to find millions of opinions (so called “broscience”) but very hard to find a cool evidence-based post, in a way most people can understand, not only the professionals. Specially written in Portuguese (I’m from Brazil).
So, in 2013, the blog Nutroesporte was born to translate scientific information to the general public. My husband (an IT deserter) helped and made all the domain, hosting, server stuff while I was responsible for content. I was improving the blog (many “for Dummies” books were read) and, with my many limitations, I did quite well. Nutroesporte had 72 visitors in 2013 and they were more than 10 thousand in 2016.
Still, in my pisces parallel universe, where I thought I should be a reference at that time, people got in and got out the blog, they didn’t stick to it despite the amazing content. I tried another channels, I tried another template, but I kept hitting a wall. What was I doing wrong? | https://medium.com/personallydigital/from-md-to-mkt-digital-marketing-as-a-kick-start-for-career-change-b7f7449ae7d | ['Joan Amato'] | 2017-05-17 19:38:53.320000+00:00 | ['Career Change', 'Women In Tech', 'Digital Marketing', 'Content Creation', 'Technology'] |
2,979 | Computer Vision and CIFAR10 | Computer vision has been a target for many Deep Learning models that perform tasks such as Image Classification […] Continue reading
photo by: Juan Antonio Piñera García
Computer vision has been a target for many Deep Learning models that perform tasks such as Image Classification and Image Captioning, among others. CIFAR10 is another dataset with a long history in computer vision and machine learning. It consists of a set of 60,000 color images of size 32×32 pixels, each belonging to one of ten categories: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck.
State-of-the-art deep learning methods for this dataset are as good as humans at classifying these images. In this post, however, we will cover a simpler method that will obtain decent performance.
Loading the CIFAR10 Dataset
The first step is to obtain the training data. To do that, download the Python version of the dataset and extract the files into a local directory. You should now have the following files:
data_batch_1, data_batch_2, data_batch_3, data_batch_4, data_batch_5
test_batch
batches_meta
readme.html
The data_batch_X files are serialized data files containing the training data, and test_batch is a similar serialized file containing the test data. The batches_meta file contains the mapping from numeric to semantic labels. The .html file is a copy of the CIFAR-10 dataset's web page. The code below is based on the guide by Tom Hope in his book on building deep learning systems.
Since this is a relatively small dataset, we load it all into memory:
import os
import pickle

import numpy as np
import tensorflow as tf  # the code below uses the TF 1.x API

DATA_PATH = "cifar-10-batches-py"  # assumed path to the extracted dataset

class CifarLoader(object):
    def __init__(self, source_files):
        self._source = source_files
        self._i = 0
        self.images = None
        self.labels = None

    def load(self):
        data = [unpickle(f) for f in self._source]
        images = np.vstack([d["data"] for d in data])
        n = len(images)
        # Reshape from (n, 3072) to (n, 32, 32, 3) and scale pixels to [0, 1]
        self.images = images.reshape(n, 3, 32, 32).transpose(0, 2, 3, 1).astype(float) / 255
        self.labels = one_hot(np.hstack([d["labels"] for d in data]), 10)
        return self

    def next_batch(self, batch_size):
        x = self.images[self._i:self._i + batch_size]
        y = self.labels[self._i:self._i + batch_size]
        self._i = (self._i + batch_size) % len(self.images)
        return x, y
In the previous code, we need to make use of the following utility functions:
def unpickle(file):
    with open(os.path.join(DATA_PATH, file), 'rb') as fo:
        # encoding='latin1' lets Python 3 read the Python 2 pickles CIFAR-10 ships with
        return pickle.load(fo, encoding='latin1')
def one_hot(vec, vals=10):
n = len(vec)
out = np.zeros((n, vals))
out[range(n), vec] = 1
return out
The unpickle() function returns a dict with fields data and labels, containing the image data and the labels, respectively. one_hot() recodes the labels from integers (in the range 0 to 9) to vectors of length 10, containing all 0s except for a 1 at the position of the label.
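For instance, a quick usage check of the recoding (assuming numpy is imported as np, as above):

one_hot(np.array([2, 0]), vals=4)
# array([[0., 0., 1., 0.],
#        [1., 0., 0., 0.]])

Each row is all zeros except for a 1 at the index given by the corresponding label.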
As a final step for the data loading, let’s create a data manager that includes both the training and the test data:
class CifarDataManager(object):
    def __init__(self):
        self.train = CifarLoader(
            ["data_batch_{}".format(i) for i in range(1, 6)]).load()
        self.test = CifarLoader(["test_batch"]).load()
Convolutional Neural Network
We will use a rather simple convolutional neural network that takes as input the 32x32x3 images of the CIFAR10 dataset and adds two convolutional layers, each followed by a pooling layer. After that, we flatten the output and add a couple of fully connected layers used for classification into the CIFAR10 class labels.
The following code represents the model:
cifar = CifarDataManager()

x = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
keep_prob = tf.placeholder(tf.float32)

conv1 = conv_layer(x, shape=[5, 5, 3, 32])
conv1_pool = max_pool_2x2(conv1)
conv2 = conv_layer(conv1_pool, shape=[5, 5, 32, 64])
conv2_pool = max_pool_2x2(conv2)
conv2_flat = tf.reshape(conv2_pool, [-1, 8 * 8 * 64])

full_1 = tf.nn.relu(full_layer(conv2_flat, 1024))
full1_drop = tf.nn.dropout(full_1, keep_prob=keep_prob)
y_conv = full_layer(full1_drop, 10)

# Named arguments avoid the deprecated positional form of this op
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=y_conv, labels=y_))
train_step = tf.train.AdamOptimizer(1e-3).minimize(cross_entropy)

correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
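Note that the model above calls conv_layer, max_pool_2x2, and full_layer, which are not defined in this post. A minimal sketch of these helpers, in the same TF 1.x style as the guide this code follows (define them before building the model; the exact initializers here are an assumption):

def weight_variable(shape):
    # Small random weights break symmetry between units
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias_variable(shape):
    # A slightly positive bias works well with ReLU units
    return tf.Variable(tf.constant(0.1, shape=shape))

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

def conv_layer(input_, shape):
    W = weight_variable(shape)
    b = bias_variable([shape[3]])
    return tf.nn.relu(conv2d(input_, W) + b)

def full_layer(input_, size):
    in_size = int(input_.get_shape()[1])
    W = weight_variable([in_size, size])
    b = bias_variable([size])
    return tf.matmul(input_, W) + b

With 'SAME' padding and two 2x2 poolings, the 32x32 input is reduced to 8x8, which is why the flattening step above uses 8 * 8 * 64.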
Running the model with the CIFAR data
Now that we have defined the model, we must create a TensorFlow session that will instantiate such a model and then feed it the training data sequentially to slowly train it. Also, we could use a way of knowing just how good is our model doing with the test data, which means data the model did not see while training. The following code does precisely that:
STEPS = 500        # assumed values for illustration; raise STEPS for better accuracy
BATCH_SIZE = 100

def test(sess):
    X = cifar.test.images.reshape(10, 1000, 32, 32, 3)
    Y = cifar.test.labels.reshape(10, 1000, 10)
    acc = np.mean([sess.run(accuracy, feed_dict={x: X[i], y_: Y[i], keep_prob: 1.0})
                   for i in range(10)])
    print("Accuracy: {:.4}%".format(acc * 100))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(STEPS):
        batch = cifar.train.next_batch(BATCH_SIZE)
        sess.run(train_step, feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
    test(sess)
Once you have trained this model, it will achieve approximately 70% accuracy within a few minutes of training. As of now, the state-of-the-art deep learning methods achieve over 95% but using much larger models and usually many, many hours of training. If you are interested in seeing another example of TensorFlow being applied to classification problems, check out this other post.
https://technopremium.com/blog/ | https://medium.com/@technopremiumusa/computer-vision-and-cifar10-554a96ee51c3 | ['Techno Premium'] | 2019-09-02 13:53:49.500000+00:00 | ['Machine Learning', 'Technology', 'Benchmark', 'Computers', 'Deep Learning'] |
2,980 | ‘Flip-the-Switch Risk’ Is a Threat to All Internet Users | Encryption makes the internet work. It consists of a few elegant math equations that scramble data before being sent over the internet where prying eyes could otherwise intercept it, read it, and manipulate it. Without encryption our use of the internet would be limited to unimportant communication; anything valuable or interesting could and would be tampered with.
Encryption is the reason everything from financial transactions to state secrets get whipped around the internet nearly instantaneously— unlocking untold amounts of innovation, wealth, and prosperity as a result.
But not all encryption is created equal. Some forms of encryption expose the communications of internet users to private companies, and to anyone else those companies choose to share your data with.
A lot of technology companies claim to have products that are “end-to-end encrypted”. This is often misleading. For example, as recently as March Zoom claimed in their security white paper that hosts could “enable an end-to-end encrypted meeting” with the click of a button. After backlash, Zoom quietly changed the language in their white paper to avoid using the term “end-to-end encrypted”.
The backlash was due to a critical distinction that Zoom failed to acknowledge between standard web encryption sometimes called “client-to-server” (C2S) encryption and true “end-to-end” (E2E) encryption.
The difference between C2S and E2E encryption can't be overstated. Simply put, it is the difference between communicating privately and having everything you do monitored.
As the above graphics illustrate, C2S gives the company access to unencrypted data, since the data is held on servers that sit behind the point where encryption occurs. Encryption was meant to secure communication between the sender and the recipient. But C2S encryption has an Achilles' heel: a vulnerability that gives the company running the service, application, or device you use access to your communications.
E2E encryption covers the Achilles' heel of C2S and preserves totally private two-party communication. This is what E2E means. One "end" is the sender and the other "end" is the recipient; there is no pesky server in between allowing a third party to listen in.
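To make the distinction concrete, here is a minimal sketch of two-party E2E encryption in Python using the PyNaCl library (an illustrative choice; the article doesn't prescribe any particular implementation). The private keys never leave the two endpoints, so a relaying server only ever sees ciphertext:

from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; only public keys are exchanged
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"this never touches the server in plaintext")

# Bob decrypts with his private key and Alice's public key
receiving_box = Box(bob_key, alice_key.public_key)
plaintext = receiving_box.decrypt(ciphertext)

In a C2S design, by contrast, the server holds keys that let it decrypt the traffic in the middle, and that is exactly the hole flip-the-switch risk walks through.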
This Problem is Not Limited to Your Laptop
Most homes today have some sort of internet-connected device in them. Whether it’s your refrigerator pinging the manufacturer to let them know the temperature gauge isn’t working right, or it’s an Alexa device telling dad jokes on demand by pinging Amazon, your home is almost certainly connected to the internet.
This new reality means the decision of a company to use C2S or E2E encryption has implications for the safety of the devices in your home, pocket, and on your wrist. Given this, the security and privacy practices of private companies increasingly affect your physical safety. In other words, encryption is getting physical.
C2S Encryption Exposes Users to Flip-the-Switch Risk
I refer to the dangers of C2S encryption as flip-the-switch risk. What is flip-the-switch risk? Let’s say you buy a product from a company you absolutely love and trust unconditionally.
A company that comes to mind that fits this description for a lot of people is Apple. Apple makes incredible products. Imagine Apple rolls out a new iPhone where all the phone’s data is encrypted on Apple servers using a form of C2S (disclaimer: this is not how Apple encrypts iPhone data today).
You trust Apple. And this iPhone is so jam packed with upgrades like ear-ID, a camera that can zoom in far enough to see cells, and a processor that can calculate Pi’s final digit. You buy this iPhone. You buy it because you assume no one at the company will use your new phone’s data to blackmail you or to steal your credit card information to go on a spending spree. Or at least, you feel the low risk of something like this happening is worth the incredible new features.
But the Apple of today may not be the Apple of tomorrow. Let's say a wealthy, secretive group of investors buys up a majority stake in Apple. They then oust the board of directors and install a new CEO who decides to sell your data to the highest bidder. This phenomenon is known as flipping the switch. The fact that you trust the people at the reins of a company that holds your sensitive data today does not protect you from those people ultimately leaving, and having the switch flipped on you.
While it’s unlikely Apple will be subject to a hostile takeover that leads to selling user data en-masse, this example illustrates a larger risk that users of companies who use C2S encryption expose themselves to.
Flip-the-switch risk manifests in lots of smaller, real ways. For example, Fitbit was acquired by Google in late 2019. If you were one of the 28 million Fitbit users at the time of acquisition your sensitive health data was suddenly handed over to a new company who you may or may not trust. Amazon’s acquisition of PillPack in 2018 is another example of a tech behemoth acquiring their way to sensitive user data.
Flip-the-switch risk also applies to insiders. In fact, this is the most common way sensitive user data gets exposed. An engineer who is also a spurned divorcee spies on their ex. Or a network administrator who is also a crazed super fan stalks a celebrity. Amazon fired several Ring employees in January for spying on customer footage without consent. All of these are examples of flip-the-switch risk. Your data gets exposed the moment the wrong person gets access to it. C2S encryption opens a pandora’s box of exactly this kind of exposure.
As a user, you can never trust that the people with the keys to your data now won’t hand them over to someone else in the future. This is the essence of flip-the-switch risk. Demanding E2E encryption is one of the ways you can insulate yourself from flip-the-switch risk, removing the possibility of anyone else ever gaining access to your data. | https://medium.com/swlh/switch-the-flip-risk-is-a-threat-to-all-internet-users-3057de5c57d | ['Dean Patrick'] | 2020-05-06 16:58:30.244000+00:00 | ['Technology', 'Internet', 'Encryption', 'Privacy'] |
2,981 | Naked Zoom Users Targeted By New Cybercrime Campaign | An ongoing cybercrime campaign targets Zoom users who may have got naked or intimate on camera.
Within days of reports that a high-profile reporter and TV analyst had been caught exposing himself during a Zoom chat, cybercriminals started to exploit that news. The reporter told Motherboard that he thought he was off-camera, having muted the Zoom video, and that nobody on the call could see him. The cybercriminals behind the latest sextortion campaign, who have already targeted at least 250,000 individuals since October 20, leverage fears that someone might have been inadvertently filmed naked or while being intimate.
First spotted by researchers at the Bitdefender Antispam Lab, the extortion attempt uses all the usual emotional and psychological, not to mention technological cues to convince the target they have been caught out.
The sextortion campaign, while targeting Zoom users, is executed by email. The majority of the emails sent in this particular campaign have been to recipients across the U.S. according to Bitdefender. The first psychological cue is right there in the email subject of ‘Regarding Zoom Conference Call,’ which might just be enough to catch out plenty of people in these pandemic-altered times in which we live. The first line looks to reinforce the legitimacy of the content to come by stating that “you have used Zoom recently,” which would resonate with most of us.
The scam then goes on to use the emotional and technological angles to good effect: "I have very unfortunate news for you; I used a zero-day security vulnerability in the Zoom app; I got full access to your camera."
Even the most hardened of users might be forgiven for having to read this twice before deciding it’s a con. Especially if they might have been naked in front of a webcam even if they weren’t on a Zoom chat or any chat for that matter. If they’ve been intimate online with a partner, not an uncommon occurrence during pandemic lockdown periods, the fears of being exposed can quickly gather pace and overtake rational thought.
Following more attempts to get into the head of the potential victim, including trying to get them to feel sorry for the attacker who says they are only doing this as they got into debt after contracting COVID, the criminal conclusion is revealed. “Pay me $2,000 (£1,500) in bitcoin,” and the supposed video won’t be made public.
There are so many things wrong with this that it's hard to know where to start. But here's one: while it's certainly possible a hacker could access your webcam, computer or smartphone by way of a vulnerability or malware, it's also highly unlikely that they have. If that were the case, then to ensure the best chance of getting paid, a short clip of the footage would be attached to the email as proof. The scammer even instructs the recipient not to reply to the email, another big hint that there's no actual video involved. This is a typical sextortion scam: it relies on fear, on the way people react to that fear, and on diluting their usual thought processes. It gives a three-day limit, piling on the pressure to pay up quickly before the alleged video is published online.
Of course, even if the scammer did have footage, then paying the ransom is a bad idea, and there’s no guarantee they will delete it as promised. I wouldn’t trust a criminal to keep their word, would you?
If you’ve received an email that sounds similar or is worried that you may have been sexually active, or even just naked, within range of a laptop or smartphone camera, I’d say you can rest easy. The chances of any such communication being genuine are ridiculously remote.
In the meantime, to help assuage any fear here’s some good baseline advice:
If you do get naked, or get busy, near your computer or smartphone, always make sure the camera is facing the other way, the laptop lid is closed, or you have shuttered the webcam lens. Many standalone cams have shutters built in these days, and you can buy stick-on sliders for your laptop.
Source: Forbes | https://medium.com/@digitaltimes-2020/naked-zoom-users-targeted-by-new-cybercrime-campaign-f9750845d1a9 | ['Digital Times Africa'] | 2020-11-02 16:58:32.775000+00:00 | ['Zoom', 'Technology', 'Technology News', 'Cybercrime', 'Tech'] |
2,982 | My Neighbor Alice Land Sale Lottery: A Reference Guide | Frequently Asked Questions
Purchasing a new property can be daunting and confusing, with information coming from many different sources. The same goes for digital Land Sale purchases. We've compiled the most asked questions from our community to help you better understand the land purchasing and lottery process for the My Neighbor Alice Land Sale.
If your question isn’t here, feel free to reach out to us on telegram or email. We will be happy to chat about any questions you may have.
Q: When will I be able to make a deposit?
A: From the 26th of May.
Q: When does the staking begin?
A: From the 28th of May.
Q: For how long do I need to stake my tokens?
A: You need to stake minimum 1 day to get 1 ticket.
Q: Which tokens are eligible to stake?
A: ALICE and CHR.
Q: When is the last day to start staking?
A: 26th of May + 14 days = June 10th (before 10:00 UTC).
Q: Since you can get several lottery tickets, can one user buy more than one plot?
A: Theoretically, yes.
Q: How can you stake 50 ALICE when you are actually staking LP tokens?
A: The user has to send 20 ALICE tokens as plot collateral (which will be held if they win a plot, or can be withdrawn otherwise). We will use the sending address and check how many tokens of that address are in liquidity pools. We can add locking contracts for people that don't want to be liquidity providers, should we see the need.
Q: You mentioned that we need to send 20 ALICE to register for the sale. Will details on that be announced later?
A: You’ll be able to transfer the 20 ALICE from May 26th when the land sale begins.
Q: Can we get bonus tickets for staking longer than 14 days?
A: No, you can get tickets for 14 days only.
Follow up Q: Is it a concern that liquidity pools will empty after the 14 days? Why not allow users to continue accruing tickets for subsequent sales?
A: We will continue to encourage people to participate in the liquidity pools and design yield farming programs in the near future. We will keep the land sales event separate from yield farming, and for the follow-on land sales we will announce the rules before the actual sales.
Q: Is this game only for rich people, what about the people who really want to play this game. It’s very expensive to buy land, even after a discount.
A: You don’t have to own land to be able to play and enjoy My Neighbor ALICE. 50% of the land will be reserved for rentals and public facilities. So even if you won’t be able to own a piece of land, you could still have fun in the game.
Land can be shared among friends (at the moment up to 10, but we plan to increase this limit). This means a cost per capita of less than 20€.
Q: It seems that after a user transfers the 20 ALICE to qualify for registration, they can use their tickets on said plot and the more tickets they use, the more likely they are to win the registration. Is this correct?
A: The tickets are fungible. The lottery draws one winning ticket per plot, so the plot assignment is random. It is not possible to express a preference at the moment.
Q: When you say "For CHR, you could stake the tokens in the official staking contract", do you mean the staking on Chromia's official website?
A: Yes
Q: If I stake CHR and only hold CHR and win the lottery, will you convert my CHR to 20 ALICE to purchase the land or must I hold ALICE?
A: You need to deposit 20 ALICE in order to be able to participate in the lottery.
Follow up Q: Since you can only send the 20 ALICE on May 26, would just staking be enough to generate tickets?
A: Yes, staking will be enough to generate tickets.
Q: Where do I buy the land if I win the lottery?
A: On the My Neighbor Alice marketplace website which will be available from May 26th.
Follow up Q: Will winners be granted access to the marketplace website at the same time and just race to buy the land they want?
A: They win a specific, randomly assigned plot. In the future we plan other sales mechanisms.
Q: Why are you using this kind of model? Why can’t you offer land sales with a fixed amount for anyone to buy like other games have done.
A: We want to make the game accessible. If we had an auction only rich people would be able to buy the land. If we just have low prices there won’t be enough land available for everybody.
Q: When should they send the 20 ALICE to the address?
A: From 26th of May.
Q: What is the correct Uniswap link to the ALICE-ETH pair?
ALICE-USDT: https://info.uniswap.org/pair/0xde93684627d4e34d6ef96adf5bcabf10bbd8dd81
ALICE- ETH:
https://info.uniswap.org/pair/0x30bc873193466dc59fc08159460c49f26f609795
ALICE — BNB in Pancake Swap:
https://pancakeswap.info/pair/0xe022baa3E5E87658f789c9132B10d7425Fd3a389
Q: In what form would the ticket be?
A: Digital.
Q: What happens when my 50 ALICE in the liquidity pool fluctuates? Does that disqualify me for a ticket?
A: No, we calculate how many ALICE tokens you have at the time of the snapshot in the LP. So you can expect variation from one day to the other but your tokens will be counted.
Q: Do you have a fixed time in UTC on when you take the snapshots?
A: Done. It’s 10:00 AM UTC.
Every 24 hours we check if your tokens are still there for 14 days.
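As a rough sketch of how the daily snapshots could translate into tickets (purely illustrative; the per-ticket threshold and exact on-chain rules are assumptions based on the answers above):

SNAPSHOT_DAYS = 14       # tickets can be earned for 14 days only
MIN_STAKE = 50           # assumed ALICE-per-ticket staking threshold

def tickets_earned(daily_balances):
    # daily_balances: ALICE counted at each 10:00 UTC snapshot
    qualifying = [b for b in daily_balances[:SNAPSHOT_DAYS] if b >= MIN_STAKE]
    return len(qualifying)

So a wallet whose snapshot balance stays above the threshold for all 14 days would earn the maximum number of tickets, while dipping below it on some days simply earns fewer.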
Q: As I am staking CHR for the lottery tickets, and ALICE for the possible land purchase, how do you guys know, I am the same person staking CHR and ALICE? Does it need to be from the same wallet?
A: We will issue tickets for the staking participants, and the receiving address will be the same as the address that participates in the land sales.
Q: If my CHR has just been staked in the site since last year, do I need to restake it on the 26th or do I automatically qualify once I send the 20 ALICE over to signify my intent in joining?
A: You will need to claim the tickets once the land sales are started.
Q: If I win twice in the lottery, do I need to send the 20 ALICE right away? When?
A: You will need to send in the 20 ALICE first before being qualified for the lottery.
Q: What happens to all of the ALICE bought from the land sale? Is it burned?
A: A proportion of the sales revenue from land sales will be contributed to the DAO in the future, and token holders could decide either on a buy-back-and-burn design or to fund community development initiatives.
Q: Can I use LP tokens to farm? Will this affect their ability to get a lottery ticket? Since if LP tokens are in the farming pool, then their balance will not be displayed.
A: Yes
Thank you for your support and good luck in the land sale! | https://medium.com/@myneighboralice/my-neighbor-alice-land-sale-lottery-a-reference-guide-4a3b4f83a7a3 | ['My Neighbor Alice'] | 2021-06-09 14:27:23.327000+00:00 | ['Gaming', 'Nft Collectibles', 'Blockchain', 'Blockchain Technology', 'Nft'] |
2,983 | Turning Too Fast: The 2013 Santiago de Compostela Derailment | Hundreds of responders soon filled the scene, the Spanish national police alone provided 320 men. Chaplains and psychologists were involved in the response, supporting the witnesses, responders and survivors. Responders worked into the night to rescue the survivors and recover the passengers, handing the site over to the investigators the next morning. Celebrations planned for the next day, a regional holiday, were cancelled and three days of nation-wide mourning were declared by the Prime Minister. The local region of Galicia extended the mourning-period by another 4 days. King Juan Carlos and Queen Sofia visited survivors at the local hospitals in the following days, demonstrating their compassion for the dead or injured.
Responders tending to survivors next to a train car that had been thrown up onto the neighboring road. Note the roof-mounted suspension mentioned earlier.
Mister Amo was detained either immediately or a few days after the accident (reports are unclear), with police officers guarding him at the hospital. Investigators allegedly feared that he might try to escape prosecution. Journalists dug up a post he'd made on Facebook in March 2012, showing a photo of a train's speedometer at 200kph/124mph and writing about how much fun it is to go so fast and how he'd love to drive parallel to a road and trigger a police speed trap at that speed without getting fined.
Mister Amo’s photo on facebook, and his photo of a train’s speedometer at the scheduled top speed.
A few days after the accident, heavy-duty cranes (both road-based and rail-based) were brought in, pulling the wreckage apart to be taken away either on flatbed cars (once the track was repaired) or on flatbed trucks via the adjacent road.
A crane removing the remains of a train car that was torn to pieces in the derailment.
While investigators acknowledged that 200kph/124mph was perfectly normal on Spanish high speed lines, they did note that the post and some matching comments on it could be interpreted as a perhaps risky obsession with travelling at high speed. An unproven theory was thrown around that, maybe, Mister Amo had tried to show off to his coworker. The recovered data-logger supported the theory of excessive speed being the cause of the accident, but there was nothing to prove that Mister Amo had been speeding intentionally. Amo insisted that he had taken the photo while a coworker was driving the train, meaning it wasn't unauthorized use of a cell phone on duty.
The leading motor car at the beginning of the wreckage. Note that the forward generator car is still attached to the first passenger car by the shared axle as it rests on its side.
Investigators criticized the layout of the railway line ahead of Santiago station, where a rather sharp turn follows an 80km/50mi stretch of near-straight track on which trains are expected to travel at top speed. When the section of the new railway line was opened, engineers called the site of the accident "challenging", but it was up to code, and with the high speed line meant to use ETCS L1 there was no chance of excessive speed causing an accident, since the trains would be unable to leave the last tunnel too fast. The turn itself was never fitted with ETCS, which was meant to end at the main signal 200m/656ft ahead of the spot where the forward generator car derailed. With ETCS being disabled after the trial period, that safety net was gone, meaning it was down to the drivers to remember their position and/or spot the pre-signal telling them it was time to decelerate.

After the accident RENFE changed the signaling layout at the site. Instead of a single signal marking a switch from 200kph/124mph to 80kph/50mph, the speed is now reduced over a longer distance in steps of 160, 60 and finally 30kph (99/37/19mph). The speed limits are enforced with the aid of so-called balises, track-mounted programmable transponders which communicate with the passing train's control systems and can trigger an automatic stop if the speed limit is not obeyed. At the same time RENFE announced a thorough review of its entire network to eliminate any avoidable safety risk.

The European Union Agency for Railways (ERA) harshly criticized Spain because its investigative office for railway accidents (CIAF) was not able to operate independently of the political oversight of the ministry for public affairs, claiming a risk of biased investigations and results. ERA backed up this claim by pointing out how RENFE and ADIF (the company building and maintaining the railway infrastructure) were part of the investigative team, pulling the neutrality of the investigation into doubt. A report by the ERA concluded that this involvement of parties implicated in the accident was in violation of European guidelines. When a new government was elected in 2018 the risk of biased investigations ceased to exist. Regardless, in 2017 the European Commission had decided to run a parallel independent investigation. RENFE also decided to reintroduce and expand the ETCS system, but by early 2020 the site of the accident was still not covered by it.
The rear part of the wreckage during recovery, the field of debris was several hundred meters long.
On the 29th of July 2013 a memorial service was held at the Cathedral of Santiago de Compostela, attended among other guests by the Spanish prime minister and a large part of the royal family. The same day Mister Amo was charged with 79 counts of homicide by professional recklessness and an undetermined number of counts of causing injury by professional recklessness. While he was later sentenced to 4 years in jail and banned from driving trains for six years, several officials who were also investigated for running the railway network with lacking safety saw their charges dropped, further fueling suspicions of a biased investigation. Meanwhile survivors and relatives of victims sought over 40 million Euros/48.5 million USD in damages from RENFE, who in turn demanded damages from ADIF. The government at the time blocked a possible trial between the two state-owned groups, but after a new government started work in June 2018 the traffic ministry promised a new, thorough investigation and to finally sort out the mess and erase suspicion of some people being "immune from consequence" due to their job or friends. It's unknown if any money changed hands since, or if any more people faced charges or were sentenced.
The leading generator car being removed from the site, its bogie remained attached to the motor car.
RENFE faced further criticism even from private enthusiasts, who, in long-winded discussions in various message boards, figured out that the generator car design was recklessly "undercooked", with the car being overly heavy, having a high center of gravity in an unfortunate spot (relatively high and not centered along the length of the car) and being a bad combination with the suspension design of the neighboring passenger cars. The generator car weighs approximately 28 metric tons including 2000l/528gal of diesel, twice as much as the passenger cars, and is a mixture of the passenger cars and the motor cars (the two-axle bogie on the leading end is the same as the motor cars', just with the motors removed). Several of these discussions and investigations ended up suspecting that the creation of the series 730 had been done on a (too) small budget and in a time-crunch to try and get better high speed connections into parts of Spain that don't have stations on electrified high speed lines. In some places the suspicion that the development was urged along as a favor to politicians lingered, unable to be disproven. | https://medium.com/@mx-schroeder/turning-too-fast-the-2013-santiago-de-compostela-derailment-19e510b8bc2c | ['Max S'] | 2021-08-24 10:54:57.395000+00:00 | ['Accident', 'Trains', 'Railways', 'Technology', 'Spain'] |
2,984 | Bat Coronavirus Rc-o319 Found in Japan: New Relative of SARS-CoV-2 | Bat Coronavirus Rc-o319 Found in Japan: New Relative of SARS-CoV-2
This study tells us there are other undiscovered bat coronaviruses, even outside of China.
Background vector created by articular — www.freepik.com
The Centers for Disease Control and Prevention (CDC) has released a study from Japan, titled “Detection and Characterization of Bat Sarbecovirus Phylogenetically Related to SARS-CoV-2, Japan,” this month. In this study, a new bat coronavirus called Rc-o319 is discovered, which belongs to the same evolutionary clade as SARS-CoV-2 and RaTG13. This article will discuss the significance of this finding.
(SARS-CoV-2 is the novel coronavirus that causes Covid-19. RaTG13 is a bat coronavirus that is the closest known relative of SARS-CoV-2. SARS-CoV-2 and RaTG13 belong to the beta genus of coronaviruses, under the sarbecovirus clade — betacoronavirus, sarbecovirus. So, Rc-o319, RaTG13, and SARS-CoV-2 will be called sarbecoviruses from now on.)
The study’s rationale
Horseshoe bats of the Rhinolophus species are infamous for being reservoirs of betacoronaviruses. RaTG13 is one such bat sarbecovirus that is 96% identical to SARS-CoV-2 at the genetic level. Current evidence suggests that SARS-CoV-2 and RaTG13 evolved from a common ancestor.
RaTG13 is first sampled from a bat cave in the Yunnan Province of China. In fact, most of the bat coronavirus studies are from China. But Rhinolophus species and other bats are also found in other parts of Asia, Europe, and Africa, and nothing much is known about the coronaviruses they harbor.
Thus, Shin Murakami, associate professor at the Department of Veterinary Medical Sciences of the University of Tokyo, led a study to characterize the complete genome of a bat sarbecovirus called Rc-o319 in Rhinolophus cornutus, a bat species endemic to Japan.
What the study did and found
In 2013, the researchers captured four R. cornutus from a cave in the Iwate prefecture of Japan. They then extracted RNA genetic material from the bats' feces to screen for any presence of betacoronaviruses. Once candidates were identified, they proceeded to sequence the full genome in 2020.
Sequence analyses revealed that a new bat sarbecovirus called Rc-o319 is 81.47% genetically identical to SARS-CoV-2. While an 18.5% genetic difference is massive, the full genome and key genes (spike protein and ORF1ab) of Rc-o319 still qualify it for a place in the same clade as SARS-CoV-2 and RaTG13.
The study also showed that Rc-o319 could not infect human cells expressing the human ACE2 receptor. Another distinction of Rc-o319, the study found, is that it does not require TMPRSS2 to complete cell infection. Thus, the bat's ACE2 receptor alone is sufficient for Rc-o319, whereas human ACE2 and TMPRSS2 are required for SARS-CoV-1 and SARS-CoV-2.
Adapted from Murakami et al. (2020). Phylogenetic tree of full genomes of Rc-o319, SARS-CoV-2, RaTG13 (highlighted in yellow), and others. Phylogenetic trees of other genes (spike protein and ORF1ab) can be found in the main paper.
“Among R. cornutus bats in Japan, we detected sarbecovirus Rc-o319, which is phylogenetically positioned in the same clade as SARS-CoV-2. Sarbecoviruses belonging to this clade previously were detected from other Rhinolophus spp. bats and pangolins…in China and could have played a role in the emergence of SARS-CoV-2,” the authors concluded. “We provide a hypothesis that a bat sarbecovirus with zoonotic potential might exist even outside China, because Rhinolophus spp. bats inhabit Asia, Europe, and Africa.”
The study also admitted that Rc-o319 is unlikely to jump directly to humans, as it cannot bind to the human ACE2 receptor, unlike RaTG13, which can. However, as R. cornutus live in caves or tunnels with other bat species, and interact with other wild animals during the daytime, Rc-o319 may transmit to coinhabitant animals.
A closer look at Rc-o319
First, the study did not suggest that Rc-o319 is involved in the origin of SARS-CoV-2. Rather, the study tells us that other undiscovered sarbecoviruses could still change the current phylogenetic tree — just like the Japanese study added a new member, Rc-o319, into the sarbecovirus clade.
Rc-o319 is only 81.47% genetically identical to SARS-CoV-2, compared to RaTG13 with 96% identity. Scientists have predicted that the 4% genetic differences between RaTG13 and SARS-CoV-2 represent about 50 years of evolutionary time gap. Indeed, a published study in Nature suggests that the most recent common ancestor of RaTG13 and SARS-CoV-2 arose around 1950–1980.
As follows, the most recent common ancestor of Rc-o319 and SARS-CoV-2, as well as other sarbecoviruses in between, would be dated back even further. With the current phylogenetic tree, at least five ancestors stand in between Rc-o319 and SARS-CoV-2. So, while Rc-o319 is related to SARS-CoV-2, it's very distantly related. The different biological functions of Rc-o319 and SARS-CoV-2 further support this notion. To restate, compared to SARS-CoV-2, Rc-o319 uses a different form of the ACE2 receptor and does not need the TMPRSS2 co-factor to complete cell infection.
Is it possible that the Covid-19 pandemic started somewhere outside of China? Perhaps so, if a very close relative of SARS-CoV-2 is discovered outside of China, which Rc-o319 certainly is not. At this point, the Yunnan Province of China, where RaTG13 was sampled, is still the leading candidate region where Covid-19 started.
Adapted from Murakami et al. (2020). Cropped portion of the phylogenetic tree depicting the associated common ancestors.
Short abstract
Japanese researchers discovered a new bat coronavirus called Rc-o319 that belongs to the same evolutionary clade (betacoronavirus, sarbecovirus) as SARS-CoV-2 and its closest known relative, RaTG13. But Rc-o319 is only 81.47% genetically identical to SARS-CoV-2. By contrast, RaTG13 and SARS-CoV-2 are 96% identical, and even those 4% differences entail about 50 years of evolution. Thus, while Rc-o319 is related to SARS-CoV-2, it's very distantly related. Still, this study tells us that other uncharted coronaviruses — even outside of China — may possibly alter our current knowledge of the SARS-CoV-2 evolutionary tree. | https://medium.com/microbial-instincts/bat-coronavirus-rc-o319-found-in-japan-new-relative-of-sars-cov-2-d6221d90e8d2 | ['Shin Jie Yong'] | 2020-11-22 11:54:15.117000+00:00 | ['Innovation', 'Life', 'Technology', 'Coronavirus', 'Science'] |
2,985 | The Beauty of the Impossible Nintendo Switch Port | The Nintendo Switch proves that sometimes playing a game in bed is more beautiful than the shiniest graphics.
By Jordan Minor
I don’t think Sniper Elite 3 is a fantastic game. The parts that work are the ones that clearly received the most effort and attention: the countless opportunities for players to snipe Nazis from creative vantage points and watch the resulting slow-mo carnage. But that singular focus only serves to make the rest of the stealth game surrounding it seem that much more uninspired. It’s not something I would normally be interested in for very long.
However, because Sniper Elite 3 is on the Nintendo Switch, I’ve played far more of the game than I would have otherwise, taking potshots on the train during the morning commute (when I still had those) or in between weekend errands. The open-world North Africa levels may seem like a poor man’s Metal Gear Solid V, but that still looks nice and runs pretty smooth on the small screen. It’s just cooler and more impressive in this form.
With more publishers now quenching the Switch port thirst with modified versions of cutting-edge and technically demanding AAA console games, I’ve reached a conclusion: Just because a game may look worse on Nintendo Switch doesn’t mean it’s a worse version.
Wild Hunt
There’s a kind of hardcore gamer condescension to the idea that you should only play The Witcher III: Wild Hunt (or any AAA port) on Switch if you have no other choice. It fails to consider that by just having a portable option, this version may best fit into some folks’ lives. Obviously, people are free to have whatever gaming preferences they like, but these days I praise Nintendo not just for its games, but because it keeps doing things that don’t align with these increasingly rigid priorities about the One True Way To Play.
A video game culture that prioritizes graphics and raw quantitative technical specs above anything else is prone to ignoring other, more experiential benefits you may gain from giving up some of those specs. It’s the classic example of trading the strength of a gaming PC for the convenience of a console. The uniquely underpowered yet portable Switch makes this continuum even more extreme. Would you trade the dream of the most premium, yet isolated and time-consuming gaming experience possible for something more modest that fits into your busy life and lets you play more games on average? It’s the same reason why Labo VR on Switch won me over, despite weak specs; it was something real people could theoretically use.
Everyone has different values and limits for what they’re willing to trade off. I don’t have a problem with ambitious ports of Burnout Paradise, Doom, Journey to the Savage Planet, Mortal Kombat 11: Aftermath, Wolfenstein, or 2K’s Bioshock, Borderlands, and XCOM collections on Switch. However, I found the poor, pre-patch performance in, say, Bloodstained, Mutant Year Zero, SpongeBob SquarePants: Battle for Bikini Bottom, or The Outer Worlds on Switch too much to bear.
For another example, consider A Hat in Time (published by Humble, owned by the same company that owns PCMag). The crowdfunded 3D mascot platformer always seemed like a perfect fit for a system that literally has Mario on it. Because the game ran on Unreal Engine 3, as opposed to its more flexible successor Unreal Engine 4, the developers at Gears for Breakfast originally maintained that a Switch port was “impossible.” A similar problem plagued the Switch port of Rocket League at first.
It turns out that wasn’t the case. A Hat in Time is now on Switch and the game is better for it. Does it look quite as good? No. The game’s art style constantly threatens to tip over into generic if not for truly bizarre details like a planet of mobsters and a war between two bird movie studios. I experienced some hitches, as well. Still, compared to something like Yooka-Laylee, the scale and creativity of these sandboxes along with your hat-based (sorry, Cappy) platforming tools for playing in them are simply joyous. On Switch, in my case the Switch Lite, you can enjoy these big worlds in bite-sized pieces thanks to the formula Super Mario 64 already laid down almost 25 years ago.
Power Versus Flexibility
Someone else might feel differently about different examples. Some may also care more about the cost of physical cartridges keeping these ports of old games at higher prices than on competing platforms-the dreaded Switch tax. But for me, and surely others based on the Switch’s success, the added ability of easy and frictionless quick portable play is more often than not a bigger material benefit than some extra technical fidelity. I rarely notice or care about those flaws unless I’m watching a (however fascinating) Digital Foundry video on frame rate and resolution. I’m not saying I don’t also enjoy seamlessly switching to the TV and using a pro controller. I’m just saying being completely tethered to a TV is a compromise, too. Folks just aren’t as used to realizing that.
An objective difference in graphical quality doesn’t mean an objective difference in overall quality if those drawbacks allow for a version in a form players may honestly enjoy more. I’m not playing Dragon Quest XI on Switch because I’m some poor soul with no other options. I work in an office surrounded by some of the beefiest PC gaming rigs on the market. I have the current most powerful console in the world, the Xbox One X, which now makes a Taco Bell noise.
No, I genuinely prefer playing Dragon Quest XI on Switch because I just like the cool intimacy of holding a huge world anywhere in my hands. Handheld gaming has real, proven value. There’s an entire new nifty retro machine, the Analogue Pocket, dedicated to celebrating the history of handheld play. So I appreciate the Switch bringing those strengths to games you wouldn’t expect, even if it has to make some sacrifices along the way.
That’s also why I love playing The Witcher III: Wild Hunt on the handheld. Having heard the highest praise for this game for years, my expectations were pretty steep. When I first started playing I couldn’t stop hearing Geralt as Solid Snake and noticing all the gameplay systems these later Assassin’s Creed RPGs ripped off. I also couldn’t stop admiring the exquisite character writing, deep action role-playing mechanics, shockingly fleshed-out Gwent card game side mode, and lush open fantasy worlds to explore for hours and hours. Yes, lush worlds to explore, even on Switch where it’s an uglier yet clearly recognizable (and nicely stable) version of the same massive game.
You can see how The Witcher slots in nicely with similar expansive fantasy epics in the Switch’s portfolio like Skyrim, Xenoblade, and The Legend of Zelda: Breath of the Wild. The appeal of portable play that publishers and players alike saw in those games applies to The Witcher as well, even if Geralt’s luscious white hair is a little blurrier when it bounces as you watch him get out of the tub. It’s details like those that make the gigantic file size of this (and basically every game mentioned in this article) worth it. A later update even lets you transfer your save file between Switch and PC, so there’s truly nothing to lose.
Another prominent AAA Switch port, Overwatch, is a bit of a trickier proposition. And not just because buying the popular hero shooter means tacitly supporting Blizzard’s past controversial decisions surrounding China. Overwatch is a twitchy multiplayer shooter that requires a constant online connection. So, you can’t whip it out on a bus. While I haven’t yet experienced any technical issues while playing the game, other writers have, including entire colorful character models going missing in the middle of a match. Hopefully patches have helped. Plus, esports players more hardcore than me are more likely to care about wonkier voice chat and the bump down to 30 FPS, even if the added motion controls makes shooting more precise compared to a controller.
Despite those caveats, the Switch has made me more motivated to casually play Overwatch for the first time since launch three years ago, and not just because I wound up liking some of the new characters I missed out on, like the cowgirl and the hamster. As the Wii U proved, a handheld device has its positives even when you can’t exactly take the device anywhere. Off-TV play on the GamePad to me was the biggest selling point of the doomed system, because I often found it more comfortable to play in bed or lying down on a couch as opposed to sitting at attention in front of a TV. The Switch obviously has far more freedom than the Wii U. So, even with its internet requirements, Overwatch also gains a subjective but real qualitative gameplay benefit from being on Switch. I reject the idea that it’s inherently inferior across the board.
Any Port in a Storm
I understand why some people prefer that their Switch games have no differences compared to versions on other systems. That’s the ideal situation. Obviously, Nintendo first-party exclusives don’t have this issue. Those Switch ports without compromises do exist, typically in the smaller indie space. The brilliant, brain-contorting, lo-fi murder puzzle game Return of the Obra Dinn on Switch is the same acclaimed game that launched on PC last year, now on a handheld. Another smooth translation is Ruiner, a slick little top-down shooter oozing with dirty, futuristic cyberpunk production value in its moody city and junkyard battlegrounds.
It’s also totally valid that some people would just rather not play certain games on Switch when there are alternatives they consider superior, especially if they have zero interest in portable play. The upcoming PlayStation 5 and Xbox Series X will make the gulf in power even more difficult to deal with unless Nintendo releases an enhanced Switch of its own. Ports may dry up. Overwatch 2 is coming to Switch but Cyberpunk 2077, CD Projekt Red’s Witcher follow-up, probably won’t.
Still, even when parity isn’t possible, I just want to tell the developers (and contractors like Iron Galaxy, Panic Button, and Virtuous) attempting the impossible that their efforts are worthwhile. They aren’t just making a worse product to cash in on a trend. For players like me, sometimes shrinking down a game can have big potential new benefits. | https://medium.com/pcmag-access/the-beauty-of-the-impossible-nintendo-switch-port-a3aa46fe6faa | [] | 2020-07-02 16:01:01.607000+00:00 | ['Technology', 'Gaming', 'Entertainment', 'Nintendo', 'Videogames'] |
2,986 | Dharma Markets Report #2: Shorting in DeFi | Short selling is a crucial component of any well-functioning market — it gives market participants a way of expressing a negative view, leading to more liquidity, additional hedging avenues, and most importantly, more efficient price discovery. In this way, shorting plays a particularly important role in dampening irrationally exuberant asset bubbles.
Using Bitcoin — and the crypto capital markets more broadly — as an example, the effects of short selling are clear. While investors had the ability to short Bitcoin for a few years prior, it wasn’t until 2018 that exchanges made short selling much more accessible. This was likely a result of the competitive pressure created by the CBOE and CME futures launches.
Before more short-selling avenues were available, the primary crypto-asset investing strategy was ‘buy and hold.’ In this system, skeptical investors had no way to participate in the market, which contributed to asset prices losing sight of reality.
Short-selling also provides a monetary incentive for investors to unearth assets that are grossly overvalued, something that is generally beneficial to the market. For example, it was short sellers who were the first to realize that something fishy was happening with Enron.
Similarly, Tetras Capital was one of the first active funds to conduct deep diligence on ETH, persuasively arguing that its price had outstripped its current value given the technical risks presented in the roadmap and the excessive leverage created by the ICO bubble.
Given the the integral role that short selling plays in traditional markets, it’s important that this mechanism is enabled in the decentralized financial system being built on Ethereum. Fortunately, a number of trustless lending platforms already enable short selling — setting the stage for more liquidity and better price discovery within the Ethereum ecosystem.
What is a short sell?
To better understand short selling, let’s review exactly why an investor would want to short, and how such an investor would execute this strategy.
Short selling is most often used by investors as a way to capitalize on an asset that they think is going to decline in value. Instead of sitting on the sidelines, waiting for the price to drop, investors can take advantage of market mispricings by selling the underlying asset before they actually own it, and then buying it back at a lower price. In order to sell the asset before you own it, you must borrow it.
Remember that the investor doesn’t want to actually own the asset because it’s overvalued. Instead, they want to acquire it for the sole purpose of getting exposure to its price decline.
To acquire the asset, an investor must first borrow it from a willing lender. This can be done through a broker or in a peer-to-peer manner. As with any loan, the investor must repay the debt at a defined point in time, plus any agreed-upon interest. Once the investor acquires the asset, they sell it on the open market for the asset that their debt is denominated in, most often a fiat currency. The investor then waits for the price to drop to what they deem a fair value. At this point, the investor buys back the asset and repays the loan, keeping the profit from the price decline. The act of purchasing the asset back to repay the loan is commonly referred to as 'covering a short.'
This all works out well for the investor if they accurately predict a price decline, but remember, they have to take out a loan to initiate their position, so a move in the opposite direction can push them deep underwater. In the event that the asset rises in price, the investor is forced to buy it back at a higher price than when they sold it for, and therefore when they repay their debt, they will have actually lost money in the full transaction. Sometimes, a heavily shorted asset that rises in price forces a number of short sellers to simultaneously close their positions. All of the short-sellers simultaneously buying back the asset in order to repay their loans leads to a rapid price increase, commonly referred to as a ‘short squeeze.’
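To make the arithmetic concrete, here is a minimal sketch of a short position's profit and loss (assuming, for illustration, that interest is repaid in-kind in the borrowed asset; exact terms vary by lender and protocol):

def short_pnl(eth_borrowed, sell_price, buyback_price, interest_rate):
    # Sell the borrowed ETH immediately for, e.g., DAI
    proceeds = eth_borrowed * sell_price
    # Repay principal plus interest, both denominated in ETH
    eth_owed = eth_borrowed * (1 + interest_rate)
    cost_to_cover = eth_owed * buyback_price
    return proceeds - cost_to_cover

# Borrow 10 ETH, sell at $150, cover at $120, 2% interest:
# 10 * 150 - 10.2 * 120 = 276 (profit). If the price rises to $180
# instead: 10 * 150 - 10.2 * 180 = -336 (loss).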
Short selling in DeFi
Short selling is dependent on the ability to easily borrow and lend assets. As we’ve previously discussed, crypto debt markets happen to be one of the largest and fastest growing use cases today, making short selling in DeFi very accessible.
The most popular avenues for shorting Ethereum-based assets are Dharma Lever, dYdX, and Compound. While dYdX is a popular avenue for shorting ETH, an interesting aspect of their platform is that smart contracts custody the user's loan at all times, meaning an investor can't actually move borrowed assets to a different exchange. While this results in an easily tradable token, investors lose the freedom to choose the best exchange rate. It's also important to note that while Maker is the most popular platform for getting leverage, it doesn't currently accommodate short selling, as users can only borrow DAI, not ETH itself.
To illustrate how to execute a short sell through DeFi protocols, let’s run through a quick example of short selling ETH through Dharma Lever.
Select an asset you want to borrow (ETH)
2. Specify an amount you want to borrow
3. Deposit your collateral to the Lever smart contract
4. Receive the principal in minutes!
5. Send the principal to an exchange and trade it for USDC or DAI
6. Repay your loan after a price decline
7. Enjoy your profits!
ETH Locked in DeFi: 1/21/19–02/03/19
The amount of ETH locked in DeFi saw a mild uptick over the last two weeks. Uniswap continued its steady growth on the back of the Donut experiment, dYdX saw a mild drop off as volatility dampened, Compound experienced mild withdrawals from their capital pools, the infamous house elections market closed on Augur, and Maker surpassed the 2 million locked ETH mark, huge congrats to the Maker team!
Maker saw a modest increase in the amount of ETH locked in their smart contracts, but the big milestone was the 2 million ETH threshold. Maker now holds 92.79% of all WETH and 1.92% of the entire ETH supply, totalling roughly $215,000,000 USD worth of ETH backing $76,000,000 worth of DAI. | https://medium.com/dharma-blog/dharma-markets-report-2-shorting-in-defi-5216142d8f08 | ['Max Bronstein'] | 2019-03-05 02:59:58.407000+00:00 | ['Blockchain', 'Ethereum', 'Finance', 'Blockchain Technology', 'Decentralization'] |
2,987 | Blockchain technology's impact on the financial sector | Blockchain technology, which started as the underlying framework for Bitcoin, now finds a multitude of use cases across industrial segments. One of the biggest impacts has been on the financial sector, where companies like JP Morgan have openly embraced Blockchain technology. The financial sector suffers from issues of data security, slow transactions, and poor transparency, among other bottlenecks that hamper the growth of businesses relying on banks and NBFCs for monetary transactions. Blockchain is a probable solution here: with its intervention, banks and financial institutions can overcome the drawbacks holding back their seamless functioning.
One of the key transformations we have seen in the field of Blockchain is the development of platforms like Hyperledger Sawtooth, Hyperledger Fabric, and Corda. These permissioned Blockchains not only ensure that the system runs seamlessly but also guarantee that transactions take place at a faster pace. This eventually makes the banking system work even better and in a more efficient manner.
How is Blockchain impacting the financial sector:
1. Providing a secure platform- One of the biggest problems most banking and financial institutions face is the need for a secure platform. Since most transactions and related work have now been digitized, most banks and allied companies are looking for a secure platform free from errors or flaws. Moreover, the need for a platform that can effectively combat data breaches is rising sharply, and this is where Blockchain comes in. This DLT platform works by time-stamping every piece of information or data recorded on it, thus ensuring integrity (a minimal sketch of this idea follows the list below). And with the advent of permissioned Blockchain networks, the security guarantees are stronger still.
2. No third party- Time lag and paperwork are two drawbacks of the financial sector which tend to hold back processes and eventually impact business performance. With the help of Blockchain technology, we can overcome these issues and ensure faster transactions. Blockchain technology works on peer-to-peer transactions, which means there is no need to rely on a third party for validation and approval; this speeds up the transaction process.
3. Tracking and tracing- These features can be extremely beneficial for banking companies. Banks spend huge sums of money on validation and verification, yet despite all the effort, cases of fake identities and fraudulent claims are rising; with Blockchain, we can put an end to this. Since tracking and tracing of data become easier, and one can easily trace back its history, it becomes easier to rely on this platform compared to the conventional technology that banks are using.
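To illustrate the time-stamping idea from point 1 above, here is a minimal conceptual sketch of a hash chain in Python (purely illustrative, not any bank's or platform's actual implementation):

import hashlib
import json
import time

def make_block(record, prev_hash):
    # Each block commits to its payload, its timestamp, and the previous block's hash
    block = {"timestamp": time.time(), "record": record, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    # Any edit to an old record breaks its hash and every link after it
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

Because each block's hash depends on the previous one, tampering with any historical record is immediately detectable, which is what makes tracing back the history trustworthy.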
These are the three major advantages that the banking and financial sector can reap from Blockchain. Blockchain developers and experts are in great demand because of this, and in the times to come, we are going to witness that demand rise further.
Conclusion-
Blockchain Council is offering the best online certificate program in Blockchain. This comprehensive curriculum will give you a thorough grounding in Blockchain while teaching you its implementation as well. So what are you waiting for? Enrol for Blockchain certification today. | https://medium.com/@sophiacasey008/blockchain-technologys-impact-on-the-financial-sector-96c3066cb8d7 | ['Sophia Casey'] | 2020-12-03 03:38:06.492000+00:00 | ['Financial Services', 'Blockchain', 'Financial Planning', 'Blockchain Technology', 'Blockchain Development'] |
2,988 | Data-Driven Attribution and How it Differs across Google Products | Data-Driven Attribution and How it Differs across Google Products
Google offers several products with data-driven attribution, but what are the differences between them? How can you select the best service for your business? In this article, we review and compare the most popular products that offer Data-Driven Attribution.
According to Gartner, about 74% of CMOs expect to spend more on digital advertising in 2021 than they did in 2020. But how can you assess your channels to know exactly where to invest more? Which ads make potential customers move to the next step of the funnel?
The solution is hidden in attribution — how the value of a conversion is distributed across channels that move the user through the funnel. However, some attribution models show you only part of the picture. And these gaps in data might be critical. After all, according to the rule of seven touches, the actual purchase frequently happens only at a customer’s eighth interaction with a brand. However, all steps affect one another and eventually lead to the conversion. So how can we objectively assess the conversion path?
As an ad giant, Google offers multiple attribution solutions, from standard attribution models to advanced options with the possibility to track multiple channels. In particular, several products allow you to set up a Data-Driven Attribution model that will help you dive deep and accurately credit marketing channels.
But how can you decide which service will best fit your business? What’s the difference between Google Ads and Search Ads 360? In this article, we review and compare the most popular products that offer Data-Driven Attribution.
What’s Data-Driven Attribution?
Data-Driven Attribution (DDA) by Google focuses on your advertising account’s data as a unique starting point for analysis. Unlike standard models with predefined formulas, DDA uses algorithms to analyze every case differently and assess the mutual influence of channels in the funnel, even if it is complicated, inconsistent, and multi-step.
To satisfy various business needs, Google offers DDA in a range of services. The differences between these services are in the data analyzed, the algorithms applied, and the level of customization. Some of them are designed only to track ad clicks and optimize keywords and paid campaigns, whereas others provide a full analysis of a customer’s online journey.
Before selecting a particular product, consider the following:
What’s your advertising budget?
What are your business goals?
How many conversions do you have on average every month?
Now, let’s take a closer look at what products Google offers that include Data-Driven Attribution.
Data-Driven Attribution with Google Analytics 360
With Google Analytics 360, you can use Multi-Channel Funnels (MCF) Data-Driven Attribution based on the Shapley Value method. This algorithm analyzes the path of your users through existing touchpoints, then creates an alternative variant where one of the touchpoints is missing. This shows you exactly how a specific channel influences the probability of a conversion. Data-Driven Attribution assesses data from organic search, direct traffic, and referral traffic along with all the data that you’ve imported to Google Analytics, including data from other Google products (e.g., Google Ads, Campaign Manager 360). With DDA in Google Analytics 360, you get an overview of all users’ online actions in your funnel and how each channel influences conversions. This option is most suitable for large websites with a high volume of conversions.
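To make the counterfactual idea concrete, here is a minimal Python sketch of Shapley-value attribution. The channels and conversion rates below are invented for illustration, and Google's production algorithm is far more involved, but the weighting is the classic Shapley formula:

from itertools import combinations
from math import factorial

# toy conversion rates observed for each combination of channels on a path
conv_rate = {
    frozenset(): 0.00,
    frozenset({"search"}): 0.04,
    frozenset({"display"}): 0.01,
    frozenset({"email"}): 0.02,
    frozenset({"search", "display"}): 0.06,
    frozenset({"search", "email"}): 0.07,
    frozenset({"display", "email"}): 0.03,
    frozenset({"search", "display", "email"}): 0.09,
}
channels = ["search", "display", "email"]
n = len(channels)

def shapley(channel):
    # average marginal lift from adding `channel` across all subsets of the others
    value = 0.0
    others = [c for c in channels if c != channel]
    for size in range(n):
        for subset in combinations(others, size):
            s = frozenset(subset)
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            value += weight * (conv_rate[s | {channel}] - conv_rate[s])
    return value

for c in channels:
    print(c, round(shapley(c), 4))

Each channel's credit is its average marginal contribution across every possible combination of the other touchpoints, which is exactly the "what changes if this touchpoint were missing" comparison described above.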
Let’s check Google Analytics 360’s minimum requirements for using DDA along with the pros and cons of DDA in this tool.
Minimum requirements:
A Google Ads account with 15,000 clicks and 600 conversions during the past 30 days
Ecommerce Tracking or Goals must be set up
If you meet these requirements, you can start using DDA in Google Analytics 360. To keep using it, you have to meet the following minimum conversion threshold for the past 28 days:
400 conversions of each type with a path length of at least two interactions
10,000 interaction paths in a specific view
Pros of DDA in Google Analytics 360:
Get a full analysis of a customer’s online journey
See which ads, keywords, and campaigns have the biggest impact on conversions
Distribute credit for revenue based on past data for a conversion
The amount of credit assigned to each touchpoint depends on order of touchpoints
Data analysis starts immediately, and the report on your first model becomes available within 7 days
Cons of DDA in Google Analytics 360:
High cost of an account: starts at $150,000/year
Hidden calculation logic: no explanation in the report
Requires a consistently high number of clicks and conversions
Doesn’t include offline data (phone calls, transactions in CRM)
Requires a Google Ads account
Data-Driven Attribution with Google Ads
The default attribution model in Google Ads is last click, but if you meet the minimum requirements you can configure Data-Driven Attribution. By default, data-driven attribution analyzes all clicks on your ads but not the entire customer journey. Based on these clicks, the model compares users who purchase to those who don’t and identifies patterns among those ad interactions that lead to conversions. To increase the number of conversions, you can use an automated bidding strategy that’s optimized based on information from the DDA model.
In contrast to Search Ads 360, Google Ads doesn’t allow you to run marketing campaigns across multiple engines and provides less detailed reports.
This product is suitable for medium-sized and bigger businesses that need to optimize marketing campaigns and keywords.
Now, let’s get to the minimum requirements and compare the advantages and disadvantages of using DDA in Google Ads.
Minimum requirements:
3,000 ad interactions in supported networks in the past 30 days
300 conversions in the past 30 days
To continue using this model, you have to meet the following minimum conversion threshold for the past 30 days:
2,000 ad interactions
200 conversions
Pros of the DDA model in Google Ads:
Helps you optimize keywords and paid campaigns
Helps you optimize bidding
Shows which ads play the most important role in reaching your business goals
Cons of the DDA model in Google Ads:
Don’t get the entire overview of the online user journey
Need to maintain the necessary level of conversions and clicks for 30 consecutive days before you can see data in Google Ads
If your data drops below the required minimum, the attribution model will automatically be switched to Linear
Data-Driven Attribution with Search Ads 360
Search Ads 360 helps you manage marketing campaigns across multiple engines (Google Ads, Microsoft Advertising, Yahoo! Japan Sponsored Products, Baidu, and Yahoo! Gemini) due to native integration with the Google Marketing Platform.
By default, Search Ads 360 uses the last click attribution model, but you can also configure DDA if you meet the minimum click and conversion requirements. Unlike Google Analytics 360 and Google Ads, Data-Driven Attribution in Search Ads 360 analyzes activities in Floodlight, the conversion tracking system for the Google Marketing Platform. The attribution focuses on paid marketing campaigns and shows you how clicks on keywords influence conversions. You can also adjust or create a new bid strategy that will automatically optimize bids based on the model’s data.
The Search Ads 360 service is suitable for websites with a high number of conversions who need to optimize their paid campaigns.
Let’s see the minimum requirements for and the pros and cons of using data-driven attribution with Search Ads 360.
Minimum requirements:
15,000 clicks in the last 30 days
600 conversions in the last 30 days
Pros of using DDA in Search Ads 360:
Get reporting data in near real time
Optimize bids automatically using Smart Bidding technology together with DDA
Create up to five DDA models to compare data with different channel groupings
Possible to upload offline conversions
Accounts for cross-environment conversions
Cons of using DDA in Search Ads 360:
Ignores search and display impressions
Might not be fully accurate: Search Ads 360 uses machine learning and historical data to model the number of conversions when it isn’t possible to measure all of them
Only tracks the number of conversions attributed to paid search
Additional setup required to realize all advantages: Campaign Manager, a set of Floodlight activities, and Search Ads 360 Natural Search reporting
Impossible to analyze conversions tracked by Google Ads, Google Analytics, or other conversion tracking systems
Attribution with OWOX BI
Google’s Data-Driven Attribution model is one algorithmic model that can ensure a granular approach to analyzing your data. Just like data-driven attribution by Google, OWOX BI ML Funnel Based Attribution assesses the effectiveness of your advertising campaigns and channels on the customer’s way through the funnel. It also provides you with real-time reports and allows you to import calculations to optimize bids. Unlike the Google model, however, OWOX BI attribution is based on Markov chains — a sequence of events in which each subsequent event depends on the previous. Using this algorithm, OWOX BI attribution shows how difficult it is to move from one step to another: the higher the difficulty of moving on from a step, the greater the value a channel receives.
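As a rough illustration of the Markov idea, here is a simplified "removal effect" in Python. Real implementations (presumably including OWOX BI's) build a full transition matrix between funnel steps; this toy version, with invented journeys, only shows the intuition: remove a channel and see how much the conversion rate collapses.

def conversion_rate(paths, removed=None):
    # share of journeys that still convert once `removed` is taken out of play
    conversions = 0
    for path, converted in paths:
        if removed is not None and removed in path:
            converted = False  # any journey through the removed channel breaks
        conversions += converted
    return conversions / len(paths)

paths = [  # (touchpoints, converted?) for five invented customer journeys
    (["search", "email"], True),
    (["display"], False),
    (["display", "search"], True),
    (["email"], True),
    (["search"], False),
]

base = conversion_rate(paths)
for channel in ["search", "display", "email"]:
    effect = 1 - conversion_rate(paths, removed=channel) / base
    print(channel, round(effect, 3))

The larger a channel's removal effect, the more conversions depend on it and the more credit it receives.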
On top of that, due to transparent calculations, you get a solid understanding of the figures behind each report so you can safely reallocate your budget. Finally, in comparison with Google products, attribution by OWOX BI provides meaningful results with smaller amounts of data required for analysis.
Let’s take a look at what you get with OWOX BI attribution.
Minimum requirements:
The minimum number of conversions depends on the number of sessions. For objective results, we recommend the following correlation between sessions and conversions:
(Table of recommended session-to-conversion ratios; image courtesy of the author)
Pros of OWOX BI ML Funnel Based Attribution:
Track a user’s offline actions
Control purchases and returns in your CRM
Assess the effectiveness of each advertising channel
Customize your funnel according to your business needs
Exclude unmanaged channels from your assessment
Compare funnel stages and evaluate their effectiveness
Analyze data based on thousands of projects with machine learning
Figure out a specific approach for each user cohort
Get ready-made reports in OWOX BI Smart Data
Use gathered data to manage bids and audiences
Conclusions
Google products that offer Data-Driven Attribution allow you to track different channels, determine which online ads are the most and least effective in Google Search, and analyze users’ online journeys in detail. Even though Data-Driven Attribution by Google is generally considered one model, its implementation differs across products. To measure data effectively, you need to choose a service that fits your data type. Here are the primary focuses of each product:
Google Analytics 360 tracks all user actions, clicks, and displays based on multiple channels and their interrelations in the funnel.
tracks all user actions, clicks, and displays based on multiple channels and their interrelations in the funnel. Google Ads tracks ad clicks in Google Search.
tracks ad clicks in Google Search. Search Ads 360 tracks Floodlight activities and paid campaigns.
With OWOX BI, you don’t have to select among several services. You can get the benefits of Data-Driven Attribution by Google with transparent calculations on top and fewer minimum requirements all in one product. | https://medium.com/digital-diplomacy/data-driven-attribution-and-how-it-differs-across-google-products-22c644e57193 | ['Maryna Sharapa'] | 2020-12-08 14:45:25.165000+00:00 | ['Attribution', 'Data Driven', 'Technology', 'Google', 'Google Product'] |
2,989 | Balancing tech constraints and business solutions | Software development is at the heart of any organization, at any level, nowadays: from managing your customer loyalty program, to knowing when a specific purchase will reach your inventory, to enabling eLearning capabilities for your team, to calculating the chance that a particular retail branch will perform above quota next month. All these dynamic requirements have a say in modern business and are served in an omnichannel way, available 24–7, with increasing use of cloud-based solutions that provide the elasticity needed in this exponential world.
So, what is the role of a technology provider firm? At Arion we believe that companies of all sorts do not only require tech solutions… They need tools to solve business issues. And that will translate into a mix of new methodologies, a different understanding of constituencies and how to deal with them and the right technology development. Technology itself can also require a different approach. Let’s take one example from a recent project.
This customer wanted to innovate and reach better financial returns for their land freight transportation. They operate a large fleet of trucks, trailers, drivers, and warehouses that requires perfect dispatch decisions to maximize asset use, customer satisfaction, drivers’ efficiency, and state-of-the-art connectivity with customers’ real-time online requests. The customer approached Arion with an initial model solution: this software language, that database, this timeframe. Our team went back to understand more about the customer’s reality: their view of competitive advantages, the current focus of the management team, and their available time to devote to this new development.
We also went through a series of technology approaches. Finally, a fully new way of reaching results was presented. We suggested Constraint Programming (aka CP), a development paradigm that solves optimization problems (vehicle routing, scheduling and bin packing, to mention some examples). OR-Tools is a complete suite for CP, open-sourced and backed by Google. The architecture definition also included extract, transform and load (aka ETL, a data warehousing technique) to separate the current customer database from the set of tools and prepare them for future-proof solutions regardless of the data set.
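For a flavor of what constraint programming looks like in practice, here is a tiny OR-Tools CP-SAT sketch (pip install ortools). The shipments, truck count, and capacities are invented; the customer's real dispatch model is, of course, far richer:

from ortools.sat.python import cp_model

loads = [4, 7, 3, 5]   # tons per shipment (invented numbers)
capacity = 10          # tons each truck can carry
num_trucks = 2

model = cp_model.CpModel()
# assign[s][t] is 1 if shipment s rides on truck t
assign = [[model.NewBoolVar(f"s{s}_t{t}") for t in range(num_trucks)]
          for s in range(len(loads))]

for s in range(len(loads)):
    model.Add(sum(assign[s]) == 1)  # each shipment goes on exactly one truck

for t in range(num_trucks):
    model.Add(sum(loads[s] * assign[s][t] for s in range(len(loads))) <= capacity)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for t in range(num_trucks):
        on_board = [s for s in range(len(loads)) if solver.Value(assign[s][t])]
        print(f"truck {t} carries shipments {on_board}")

The same declarative pattern of variables, constraints, and a solver call scales up to the routing, scheduling, and packing decisions a dispatch system has to make.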
Finally, in order to add elastic resources, both operational and financial, we selected Amazon Web Services (AWS), using Elastic Beanstalk, RDS, SQS and Serverless Lambda Functions as infrastructure, where you just pay for computing time and no fixed associated costs. And the customer got the solution in 6 sprints of 3 weeks each. All business issues packed in a state of the art, dynamic and future proof solution. We do not like to tag that as a developer solution. This is digital transformation at work.
Companies today must cope with changing environments in both consumer behavior and talent acquisition strategies. Uncertain times add to the mix, and globalization sets the comparison universe at an extraordinarily complex level. This year, more than 120 billion dollars will be spent offshore on software development. And most of that money is spent on “keeping the lights on”, just making sure that the systems work.
But what happens to new consumer demands and expectations? According to Everest Group, a leading research and advisory firm on Global Services, Latin America is expanding its footprint mainly based on innovation sourcing. That means teams that work on new developments, moving efforts and tools from back to front lines, working on new UI/UX, integrating with cloud-based tools and, more than anything, focusing on product development. At Arion we deploy services from our two centers: one in Montevideo, the capital of Uruguay, which has been rated the best city to live in in Latin America and sits in a country that is the #1 IT exporter per capita in the region. We are also set up in Medellín, the second-largest city in Colombia and its innovation capital, which brings talent and more cost options. With two locations you immediately get a global flavor that is well connected to customer requirements.
What we have learned during these special times?
Agility is the name of the game. It’s very difficult to predict what is going to happen in the next three months. Companies need to adapt to sourcing changes; teams are learning how to deal with at-home work and remote conversations; technology is quickly adapting to serve a new set of requirements and you see digitalization growing everywhere. But this also requires novel processes, ways to capture the right data, creative environment, and fast answers. Arion employs design thinking methodology to assess customer requirements and collaborate on the best possible approach.
Digital Transformation is perhaps the best name for this virus. It has clearly demonstrated if companies were prepared to deal with the digital world. At Arion we work with our customers understanding the full customer journey. And with that we know which developments need prioritization. What areas should be automated. What technologies to leverage to make sure the customer is the center of everything.
And in an ever-changing world, it’s impossible to be the single source of the solutions. You should be able to quickly partner and enhance your abilities and skills while servicing a specific project. Our Alliances strategy is key to our business success. We have built partnerships with colleagues around Blockchain and Data Science. We became Salesforce consulting partners so we can help those looking for great, readily available tools to put the customer at the center of everything.
How can you engage Arion?
In all our initial meetings we bring senior members that empathize with your requirements, constraints, and dreams. We can do the tech talk and we can also understand business.
If your issue is Scale and Growth, we will work to make sure timing, technical readiness and experience are met. Sometimes we are asked to perform tests. Then together with our CTO we will review options and get you ready to start. This includes setting the programming environments, connectivity requirements as well as any special security, both digital and legal.
Some customers would like our services to help them define, refine, and execute a Product development. Or improve an existing customer journey. In this case, we will understand your business goals, talk about potential technology ways to solve this, and look at different scenarios. And then we propose agile methodologies with required sprints. You will get a constant understanding where we are and what we have accomplished. And if business changes, it will be easy to adapt the process to the new normal.
And then, Software Engineering. Because there is always room to write bad code or good code. We take pride in constantly and competitively looking for the best projects and talent. We can help support performance enhancements, Cloud interactions, API definitions. All that under the discipline of documented activities, release controls and using team working tools to make everything connected and smooth for you.
We believe in digital transformation. We believe in exponential growth. We believe that working together with our customers improves everyone’s results. Give us a call and let’s talk about your needs.
2,990 | How to remove items or duplicates from a list in Python. | How to remove items or duplicates from a list in Python. Don't run around in loops to edit lists if there are built-in functions to help you out.
Remove duplicates from a list
Remove elements from a list
Clear a list
Our list:
my_list = [1,1,1,1,2,3,4]
Remove duplicates from a list
Sets quickly kill duplicates. Remember to convert the set back to a list.
my_list = list(set(my_list))
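Bonus: a set does not guarantee the original order of your items. If order matters, dict.fromkeys() removes duplicates while keeping insertion order (dicts preserve it as of Python 3.7):

my_list = list(dict.fromkeys(my_list)) #dedupes and keeps the original order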
Remove elements from a list
remove() or pop() will remove elements from a list. pop() returns the value you remove.
my_list.remove(1) #note that this will only remove the first instance of 1.
use_number_later = my_list.pop() #by default removes the last item in the list; pass an index to pop another, e.g. my_list.pop(0)
Clear a list
This is easy, as lists in Python come with a method called clear(): calling my_list.clear() empties the list in place, leaving []. | https://medium.com/@martinaaberge/how-to-remove-items-or-duplicates-from-a-list-in-python-81b56baab7d7 | ['Martin Andersson Aaberge'] | 2020-12-16 06:09:02.821000+00:00 | ['Technology', 'Python', 'Software Development', 'Lists', 'Data Science'] |
2,991 | Christmas Dedicated Bare-Metal Deals | Photo by Kieran White on Unsplash
Happy holidays from the team at Spin Servers! With the pandemic, shipping issues, and global shortages, there is one thing that hasn’t changed: the valuable time spent with family and friends. We hope everyone across the globe makes time to spend with their loved ones and pushes through these tough times.
With this in mind, we would like to spread some holiday cheer of our own with insane discounts on some of our best dedicated bare-metal builds.
From now until January 2nd, 2022, take advantage of these deals while they last. There is no order maximum! Just hope you can snag a deal before someone else does.
Dallas, TX Location:
Dual Intel Xeon E5–2630L v2
12x 2.40 GHz
64GB DDR3
1.6TB SSD
10 TB / 10Gbps
In Stock
$79.00/month
PROMO CODE: 2630LV2XMAS21
Dual Intel Xeon E5–2630L v3
16x 1.80GHz
64GB DDR4
1.6TB NVMe
10 TB / 10Gbps
In Stock
$99.00/month
PROMO CODE: 2630LV3XMAS21
San Jose, CA Location: | https://blog.spinservers.com/christmas-dedicated-bare-metal-deals-1517b676e020 | ['Jesse Jacobs'] | 2021-12-16 22:35:15.559000+00:00 | ['Technology', 'Web Hosting', 'Dedicated Server', 'Ecommerce', 'Cloud Computing'] |
2,992 | Why Did Google Buy DoubleClick? | The Digital Advertising Landscape Then And Now
To understand why this acquisition was such a big deal, we need to first step back and understand the digital advertising landscape. Advertising is a huge industry. In 2019, total ad spend in just the U.S. across all channels (digital, TV, print, etc.) was estimated to be $240 billion. Of that, more than half ($130 billion) was digital — digital ads are ads served via the internet to people searching on Google, watching YouTube, browsing Facebook or Instagram, etc. and includes both mobile and desktop. Digital (especially the mobile portion of it) is also by far the fastest growing component of the advertising market, having relegated TV to second place a few years ago and it’s never looked back since.
But back in 2007, when Google acquired DoubleClick, digital ad spending was still in its relative infancy — digital made up just 12% of total ad spend compared to TV’s 39% (though digital was growing many times faster than all the other types of ad spending).
The makeup of digital ad spend was very different back in 2007 too — 40% of it was paid search. Paid search is when companies bid for the right to show their ads on a search engine’s search results page (by picking keywords of interest and an associated bid amount for each of those words). And Google, the number 1 search engine then just like now, dominated paid search with a 75% share of the market. Why did paid search have such a huge share back then?
The reason for paid search’s early success and prevalence is that it’s easy to use and measure. A little bit of background will help drive home this point.
Traditional ads are characterized by limited inventory — there are only so many billboards by the highway or so many time slots for commercials during that basketball game. And advertisers must show the same ad to the entire audience (everyone watching the Super Bowl sees the same ads) with some exceptions (the local programming portion of a national TV/radio network allows them to bombard us with local ads; but even then the entire local audience sees the same ad). This means that in order to get the best ROI, the ad and its product must appeal to the average audience member. In its nascent stages, digital advertising had the opposite problem. With internet ads, there’s a virtually unlimited supply of individualized ad slots. Every second that someone spends on the internet browsing websites or playing games is an opportunity to serve him or her an ad of a product that is uniquely relevant to that person’s specific interests — all of a sudden advertisers had more potential ad slots than they knew what to do with. The problem was targeting. Without knowing enough about a user, they had trouble identifying those interests and serving up relevant ads, which meant that digital ad campaigns often experienced lower than expected ROIs.
Photo by Edho Pratama on Unsplash
Paid search though was an exception. It’s much easier to target paid search ads because the user clearly states his subject of interest or intent with each search. When someone searches “best place to buy a used car”, that person’s intent becomes very clear (even with little or no prior data about this user). Ads for used cars placed in this person’s search results will have a very high likelihood of clickthrough (clickthrough is when a user clicks on an ad), allowing Google and its search engine competitors to charge a high price, a.k.a. cost per click, for each of these ads.
But paid search doesn’t work for all types of ads. At a high level, advertising (and marketing in general) tries to do one of two things — get people to perform a specific action (such as purchase product A) or build brand awareness. Paid search ads work really well with the former (because even though it’s an ad, it’s also related and relevant information) but not so much with the latter. If I have a reasonably clear idea of what I’m looking for when I perform a Google search, then I probably won’t spare the time to watch a touchy feely video designed to reinforce an existing brand or introduce me to a new one. Rather, I am likely to see these attempts as distracting and annoying. Most people don’t go to Google’s website to browse; rather, it’s the quick first step in their information finding journey.
Photo by Ylann Meyer on Unsplash
That’s where display and video ads come in (when I say display going forward, I mean video as well). Display ads are interactive images or videos that appear when we visit and browse websites (they could appear as a banner, or hover to the side, or even momentarily take over our entire screen). Display ads are the closest thing on the digital side to a traditional T.V. commercial. And today, many of the same companies that spend millions of dollars on television ads also spend just as much or even more on display ad campaigns (delivered via Facebook, Instagram, YouTube, or popular websites like IMDb). It’s the growth in video ads (and to a lesser extent other display ad formats) that’s allowed digital to take so much advertising market share from television during the past decade.
Today, Google is a major player in serving users display and video ads. Odds are that when you visit a random website on the internet (that seems to have nothing to do with Google) and see an ad, it was brought to you indirectly by Google-owned technology. But back in 2007, things were very different. | https://towardsdatascience.com/why-did-google-buy-doubleclick-22e706e1fb07 | ['Tony Yiu'] | 2020-05-06 03:26:17.848000+00:00 | ['Investing', 'Data Science', 'Business', 'Strategy', 'Technology'] |
2,993 | Ethereum Blockchain Changed The Gaming Industry In 2019! Here’s How | Blockchain is currently a widely trending topic. Several industries are moving towards adopting blockchain for their businesses, including banking, ride-sharing, academia, healthcare, and more. The Ethereum blockchain platform, especially, has been undergoing significant growth over the past year. Ethereum is a decentralized platform with global access that allows users to program smart contracts and Dapps that execute autonomously when the favorable conditions are met. This blockchain can be accessed anywhere in the world. Here’s a glance at the massive growth Ethereum went through in the past year:
In 2019, there were more than 20 million Ethereum accounts created.
There were about 4 million new active users of Ethereum in 2019.
The Ethereum developer community multiplied 4x more than for any other cryptocurrency.
There were 8,516 live Ethereum nodes over the year, and 520 new Ethereum Dapps were created.
The decentralized Ethereum platform has influenced the finance industry big time over the past year.
Ethereum is predicted to continue growing and evolve much more in the year 2020, in fields like DeFi, DAOs, supply chains, and more.
The stats mentioned above reflect some of the significant milestones Ethereum reached in the past year. Ethereum and blockchain together have influenced and benefitted various industries. In this article, we are going to focus on the gaming industry and how an Ethereum blockchain platform serves the gaming sector. Let’s dive in and find out.
The gaming industry and how Ethereum blockchain benefits gaming:
The gaming industry is a significant on-demand sector. It has made tremendous progress over the years. The demand and growth for this industry are never-ending, and the e-gaming industry is gradually becoming the future for gamers around the world. Ethereum blockchain offers a decentralized gaming platform that will highly benefit both gamers and developers. Let’s have a quick look at the benefits of Ethereum game development.
The smart contract integration feature on the Ethereum blockchain gaming platform enables fundraising and lets users sell their tokens to finance any game they want to create.
The Ethereum platform allows developers to connect to the liquidity network directly.
The Ethereum platform guarantees a secure payment process, thereby eliminating fraud and risk factors.
Games on the Ethereum platform are comparatively cheaper than on other platforms like Apple, Google, Microsoft, etc.
This platform helps to build and provide access to a broader community of gamers and crypto users all around the world.
The Gamers get 100% anonymity benefits.
Gamers can purchase anything within the game without any extra fees.
Gamers can get in touch with the developers directly for any queries.
Speaking of this, if you are a gamer or a developer looking to build your game on the Ethereum platform, you should be well aware of what is currently happening in the market: the competition is fierce, and you have to stand out from it to sustain. Want to find out how the blockchain era has been influencing the gaming industry? Let’s look at some impressive numbers!
How Blockchain Transformed the gaming industry: | https://medium.com/@sugiyanto.wale/ethereum-blockchain-changed-the-gaming-industry-in-2019-heres-how-92d6a730c6cb | ['Helix Cloud'] | 2020-02-23 15:09:01.432000+00:00 | ['Ethereum Blockchain', 'Blockchain Technology', 'Btc', 'Blockchain', 'Ethereum Wallet'] |
2,994 | Understanding Asynchronous JavaScript Callbacks Through Household Chores | If you’ve ever done the laundry, you can understand how callbacks work.
The other day I read Kevin Kononenko’s brilliantly funny and easy-to-understand guide to MVC frameworks. He explains the Model-View-Controller paradigm through ordering drinks at a bar, and it has been one of my favorite programming explanations I think ever!
I really appreciated it because it was written without an air of pretension or elitism, and it made me wonder why a lot of other experienced coders can’t help newbies without the l337er-than-thou attitude.
I teach English in South Korea at the moment, and we teachers have to think like Kevin all the time. Nowadays it is really frowned upon to explicitly explain grammatical concepts, so good teachers try to contextualize the target language (i.e. the grammar or vocabulary they want to teach) with stories, film, music, and images.
This teaching methodology was influenced by British linguists in the 1980s, which has informed modern foreign language pedagogy today. Maybe the same thing is happening right now for coding education!
Kevin is going to be a hard act to follow, but I would like to explain how asynchronous callbacks work in JavaScript through the context of doing common household chores.
Synchronous Honey-Do List
Shout out to my wife who has been doing double her share of the chores at home while I learn to code. I owe her big time!
I usually help out around the house on Sundays, and her honey-do list for me looks like this:
1. Do the laundry
2. Give dog a bath
3. Sort the recycling
4. Balance the budget
5. Figure out what we’re doing for dinner.
Technical aside: At the core, JavaScript is a synchronous programming language, meaning it runs one line of code at a time. It cannot move on to the next line of code until the current line has finished executing. Consider this example:
function syncChores() {
  console.log('Do the laundry');
  console.log('wash the dog');
  console.log('sort the recycling');
}

syncChores();

/* Output appears in the same order it was written:

Do the laundry
wash the dog
sort the recycling */
Now imagine if I did my chores synchronously in real life. What would happen? What would that look like?
If you go back to my list, you will see that doing the laundry is the first item. It takes about 35 minutes for a typical wash cycle to finish and an additional 45 minutes for a load of laundry to dry. So for 80 minutes, I am just sitting on my lazy butt, not doing any other chores, as I synchronously wait for the laundry to finish.
Here’s what that looks like with pseudocode:
function doLaundry() {
  startWashCycle();
  switchToDryer();
  foldAndIronClothes();
}

function washDog() {
  // imagine some dog-washing code here
}

function sortRecycling() {
  // and imagine some sorting code here
}

doLaundry();
// Now wait a full 80 minutes before completing other functions

// Now I can finally wash my dog!
washDog();
sortRecycling();
Not very efficient, is it? In real life, busy adults would tackle these chores asynchronously, meaning they would start the laundry, continue doing other tasks on the lists, and go back to the laundry when the wash cycle has finished.
This action of going back to the laundry when it’s ready is analogous to the JavaScript callback function, and our washing machines quite literally call us back with some alarm or buzzer! This allows us to go on and do other chores and then continue with the laundry chore when it is ready for us.
Asynchronous Honey-Do List
Let’s do the chores again, this time asynchronously. What would that look like in pseudocode?
function doLaundry(callback) {
  // imagine initial code that kicks off the wash cycle
  // takes 80 minutes to complete the wash cycle
  callback(err, cleanLaundry);
}

doLaundry(function(err, cleanLaundry) {
  // sometimes our washing machines break down
  // better handle that possible error
  if (err) throw err;

  // if no errors, switch to dryer after wash is complete
  // Tada! Our callback alerting us that washing is complete!
  switchToDryer(cleanLaundry);
});

// as we wait, JavaScript will run this stuff now!
washDog();

// still time for more chores!
sortRecycling();

// the following will be undefined because it is not yet ready
console.log(cleanLaundry);

// Now the laundry is ready!
// Let's go back and switch clothes to the dryer

// The clothes are drying. Let's continue doing more chores.
// Tanya will be impressed with my productivity!
balanceBudget();
Like Kevin’s article, this was only meant to clear up the concept of callbacks. If you want a more practical guide, check out Callback Hell.
Your Turn
It helps if you can apply abstract concepts to real situations. Can you think about what you do at home, school, or work that resembles synchronous and asynchronous code? Write them in the comments below! | https://medium.com/free-code-camp/understanding-asynchronous-javascript-callbacks-through-household-chores-e3de9a1dbd04 | ['Stephen Mayeux'] | 2016-06-10 07:26:28.517000+00:00 | ['Technology', 'Education', 'Programming', 'Tech', 'JavaScript'] |
2,995 | Selling Industry 4.0 — what I have learned — Part #1 | The universe of what we now call Industry 4.0 is rapidly expanding to encompass technologies including AI; process robotics and autonomous devices. Industry 4.0 started as the Fourth Industrial Revolution of manufacturing but the use cases are growing at pace into the energy; infrastructure and logistics industries to name just a few. Before the idea of Industry 4.0 was conceived I was leading the growth of some of these technologies in tech businesses and more lately, helping Industry 4.0 companies grow as an advisor. Here I would like to share some key things I have learned about what works — starting with an Industry 4.0 success story.
Ocado (https://www.ocadotechnology.com/) is an award-winning, multi-billion-pound technology company. To some it may be an unlikely exemplar of what is possible with Industry 4.0 today, as many will know it as an online retailer in the UK. However, while maintaining its public face online selling groceries and on the roads of Britain with freshly branded delivery vans, Ocado has been executing a revolution in how our groceries get into our kitchen cupboards and refrigerators. Over ten years ago, to optimise their grocery delivery business, Ocado started to develop their own software after failed attempts to integrate off-the-shelf applications. Ocado’s Smart Platform now integrates the upstream supply chain with customers’ online orders, warehousing and order fulfillment. This is delivered with AI, IoT, Process Automation and Robotics integrated on a cloud platform and enabled by 4G mobility. At the heart of the operation is “The Hive” warehouse where over 1000 robots work in a 3D matrix among a handful of human operators to sort and move over 10,000 grocery items every hour, ready for delivery to customers.
Outline of Ocado’s Smart Platform integrating key Industry 4.0 techs ©Ocado
Technology has now become Ocado’s core business with the announcement of deals to sell their platform and warehouse tech to US grocery giant Kroger and French supermarket Casino, among others. This transformation culminated in a £1.5bn Joint Venture spin-off of the online retail operation earlier this year. The result — at £8bn, Ocado is now valued more like Amazon than grocery rivals such as Tesco or Asda. So, what does Ocado’s success tell us about selling Industry 4.0?
What Industry 4.0 can do is much more important than what it is
Awareness of Industry 4.0 is growing very rapidly — as can be seen in the graphic below, related Google searches increased by ~70% globally in the last twelve months.
Google search trend for Industry 4.o (courtesy Google Trends)
But, when I talk to middle to senior managers in tech companies and end-users, there is a real lack of agreement on what Industry 4.0 really is and there is even less clarity about what it can do. When it comes to sales, what works is not talking about the tech but about what it can do for end-users — in terms of efficiency, operating cost reduction or safety for instance.
Be focused and relentless about the Value Proposition
The performance improvements mentioned above are the elements that are going to get the deal over the line. These are the things that your customer can attach real value to and will support a business case. This makes it essential to understand what business problems your tech addresses, how it does so and what results the customer can expect. This is more achievable when you are focused on few potential use-cases (see below). Be brutal about how the solution delivers value — is it direct value (or does it rely on factors out of your control?); is it easily measurable?; does it address your customer’s priorities?
Testing the value prop frequently is really important and customer interaction here is critical — end-users are experts in their business — you are probably not. Finding the right people, testing your ideas early, getting buy-in from one or two customers will build your domain knowledge, understanding of the business problem and critically the value of your tech. Executing well will also turn those first customers into the greatest ambassadors for your business!
Choose industry use-cases carefully and stick with them
In most Industry 4.0 solutions, the performance and value will be quite specific to some industry sectors or verticals. Each vertical has specific priorities; problems and needs, dependent on a huge range of factors. Understanding real-world problems can be difficult for very tech-focused companies but this is critical to gaining traction for your tech. You will understand what the tech does and what technical problems that solves. The key is to understand the operational and commercial value of that and to identify the verticals where it will get the best results for your business (in terms of speed v scale). For early phase companies, I would usually advise speed over scale.
In a recent assignment with an early phase AI driven start up, I identified the tower operations of mobile telecom and power transmission companies as potential sectors for their software. Although globally there are probably 10 times more power transmission towers than telecom towers, the telecom industry was the one that was really successful first because it was more commercially dynamic (not least due to the advent of 5G) and the solution has value across multiple use-cases.
It can be quite easy to “bounce-off” a vertical — common issues are not getting enough basic understanding to be clear or credible, struggling to find an entry point and engaging with the wrong decision roles in end-users. It is really worth engaging with a vertical from multiple angles to get a range of insights before you plan your next move. Once you have chosen a vertical, stick with it until you have solid evidence.
Reach beyond the business case with vision
While it’s critical to attach quantifiable value to get the deal over the line, it’s also important to give a vision which, while less simple to quantify, promises something more game-changing and is likely to get support from the C-level, e.g. “as well as doing the task 4 times faster with 80% less risk to people, we can harness all the data gathered to provide value-add services to your customers, improving margins, customer service and retention”. Do your research and make sure what you are suggesting is credible and that there are no show-stoppers to delivering it.
Conclusion
While there is a lot of excitement, there is a real lack of clarity and understanding about what Industry 4.0 really is. But that doesn’t matter — we are in the business of selling tech, not concepts.
Most decision makers in end-users want to understand the possibilities of Industry 4.0, what it can do for them more than trying to understand what it its. To achieve that we need to focus hard on use-cases where our Industry 4.0 tech is most valuable and to understand that value we need to learn the priorities and business problems — this is a perfect example of the importance of the Why and the How (1).
Using some of these approaches, you will win Industry 4.0 deals, build industry use cases with great customers and grow your business. By doing so, you will be among the people that define not just what Industry 4.0 is, but what it can do for industry, the economy and for all of us.
(1) “The Golden Circle”: Start with Why — Simon Sinek — 2009.
Look out for the next articles in the Sales 4.0 blog series covering topics to include: How is Industry 4.0 changing the B2B Sales Process?; Hot issues for startups selling Industry 4.0 and; Who make the best Industry 4.0 sales people?
Sales 4.0 is a new type of consulting business that helps B2Bs meet the challenges and opportunities of selling Industry 4.0. We help small to large enterprises grow fast and to compete in Industry 4.0, providing advisory services including commercial and sales strategy; sales performance improvement; sales infrastructure and outsourcing. Find us on LinkedIn and at sales4point0.com. | https://medium.com/@david.clitherow/selling-industry-4-0-important-things-i-have-learned-part-1-1b89dd2df8be | ['David Clitherow'] | 2019-06-18 08:12:38.383000+00:00 | ['Industry 4 0', 'AI', 'Sales', 'Technology', 'Startup'] |
2,996 | IG Drones: Revolutionizing the Transmission Line Industry | Image Credit: DJI
Technology is driving innovation. Technology is driving creativity. It’s technology that decides mankind’s sustainability.
In the 21st century, it’s indisputable that we are all dependent on technology for our survival. While technological advancements have been regarded as both a boon and a bane to society, let us throw some light on one of the biggest boons of technology - THE DRONE and its use in THE TRANSMISSION LINE INDUSTRY
With each passing day, the use of drones in business is significantly increasing especially in the transmission line industry. They have been providing the industries with efficient and practical solutions to the problems that commonly arise in that field.
Wondering what benefits a little device like a drone could possibly offer? Come on! Feed your curiosity with the super informative article below.
IG Drones: Reinventing Transmission Line Inspection
With the exponential growth of industries in recent times, a significant rise has been observed in the use of drones in the transmission line industry. They say, “Time is money”, so we’ve got to save it, right? Walking along the transmission lines to detect underlying defects is not only tedious but time-consuming as well.
IG Drones not only save a lot of time but also provide clear-cut, detailed information about the problems, along with high-resolution thermal images of the issues. This is the reason why many manual inspection providers are being replaced by drones, improving power delivery and reducing maintenance costs. IG Drones not only help carry out predictive maintenance of the towers effectively but also keep the transmission line industry safe, as this is a vital part of their work.
Transmission Line Tower captured in India by IG Drones
Inspecting Transmission towers
Tring tring…..(the telephone rings) “There is no current supply in our colony since last 5 hours. What’s the problem?”
This type of call is really common in electricity board offices. Solving the problem isn’t tough; its detection is. Conventionally, a lineman would climb up the towers to check where the actual problem lies. But the transmission tower will not always be in a convenient place: the only way up to the tower might be through private property, or the tower might be above a stand of trees, obstructing views from the ground. Without a drone, it is undoubtedly a dangerous task, and it may take numerous days to walk through private property. But with a drone, and a pilot already available, it can be done in a fraction of the time.
Drones are unstoppable. They can be used in any area, such as areas too close to homes and areas that are difficult to access through manual ground inspection. Without any working-hour hazards, one can easily get an exact look at the transmission tower, inspect the problem correctly, and hence solve it with versatility.
Regular ground patrols
During regular patrols, a team member can directly deploy a drone to capture a highly detailed inspection. Issues that might have been missed by the naked eye can easily be detected and classified efficiently. During routine maintenance, a lineman may spot potential issues with a transmission pole, but a company with trained ground patrol teams and drones can easily outdo the conventional methods of using bucket trucks or climbing towers.
Substation maintenance, upgrades, and inspection
Usually, the substations are easily accessible. But during maintenance or any sort of upgrade, a substation needs to be turned off, as it is not safe for a human to carry out the work with the current flowing. In rare situations, though, this might cause a power outage for consumers, leading to a lot of inconvenience. A drone, by contrast, can often inspect the equipment from a safe distance while the substation stays energized, avoiding unnecessary shutdowns.
Storm Restoration
Say, for instance, your area has been hit by a tornado or cyclone that has caused serious damage to several transmission towers. With the entire area affected, it’s not possible for linemen to assess the damage. Instead, a drone can carry out the inspection more effectively: with its software, we can inspect the damage by taking detailed photos, uploading them, and stitching them into one composite map. This, in turn, allows us to take the necessary measures to repair the damage rapidly and more efficiently.
CONCLUSION
We are living in a world where smart work counts more than hard work. Drones and their camera technology have the following advantages:
Make the task of inspection much easier
Capture detailed aerial and thermal photographs of the damage
Make the repair process more efficient
Are less time-consuming than manual inspections
Reduce the cost of manual labor
Significantly reduce the rate of occupational hazards
Clearly, this is the smart work every industry would love to adopt.
The versatility and work efficiency of this device are the main reasons behind its enormous popularity and demand in the transmission line industry today.
For drones, ‘the sky is the limit’, and for the industries equipped with this technology, ease of work and profit are limitless.
Watch our latest Youtube Video on Transmission Line Inspection
ABOUT US:
IG Drones provide specialist inspection services at height and difficult to access areas, via the use of drones. Technical end-to-end solution to help you ease your operation. Capture the smallest of details with ease. Raise your operational standards & Imagine more with us. Imagine Inspire Innovate.
Follow us on — Twitter |Facebook |Instagram | LinkedIn | YouTube Channel | https://medium.com/@igdrones/ig-drones-revolutionizing-the-transmission-line-industry-8592a1438554 | ['Ig Drones'] | 2020-11-25 20:48:04.420000+00:00 | ['Development', 'Drones', 'Powerline', 'Power Transmission', 'Technology'] |
2,997 | My Funding Picks For Last Week (W50) | Every Monday, I sit with my team to review the funding activity of the previous week. From that list, I pick out three companies that I would have loved to invest in or find founders doing similar things. Click here to know about my rationale behind this weekly exercise.
Last week, 23 startups raised $118 million.
The ecosystem is striking back!
The deal momentum sustains at a deal every 7 hours now, with the amount raised staying over $100 million 2 weeks in a row. As I said before, the revival is a harbinger for bigger things to come. We will see higher funding weeks as we close out the year and possibly a strong start into the new year.
This week, 18 deals were in the early-stage rounds (compared to 15 last week), making the cut for my weekly analysis. After sifting through the news (aggregated from Tracxn, Inc42, and YourStory), I picked three as my favorite funding news from last week!
Name: Sawo
Amount Raised: Rs 5.5 crore from StartupXseed.
What does Sawo do?
Edited from Tracxn: SAWO Labs offers a secure authentication solution for apps and websites, removing the need for password authentications and OTPs. It provides Authentication as a Service (AaaS) for app publishers and IT/software enterprises deploying authentication for their products. Some of the features include end-to-end encryption, device-based security, dashboard, etc.
Why do I like Sawo?
As a user who is irritated by entering OTPs multiple times a day, especially when I’m traveling to places where the network fluctuates, I can say there is an immense need for this service.
As someone who continually watches telecommunication costs balloon for our startups with growth in sales — the promised savings are welcome with open arms! Our teams have started connecting our FinTech companies to Sawo, and I will be super happy if we could cut these OTP verification costs across our portfolio.
Sawo could potentially be an acquisition target for Microsoft, Google, or other enterprise players — definitely, one to keep an eye on!
Name: Signzy
Amount Raised: $3 million from Vertex Ventures
What does Signzy do?
Edited from Tracxn: Signzy provides AI-based customer on-boarding & KYC solutions for NBFCs. Its features include identity verification, facial recognition, regulatory reporting, fraud detection, customer identification, data management, and more. Signzy offers an image and video-based fraud detection system along with smart contracts based on due diligence and algorithmic risk intelligence solutions.
Why do I like Signzy?
Another exciting startup that is streamlining the backend processes of the finance world. In a contactless world, the services of video KYC platforms like Signzy will find ready takers at banks that continue to operate on archaic IT infrastructures with outdated operating procedures.
I’ve been complaining about HDFC Bank’s confusing IT systems on my social media channels.
Besides, they’ve taken flak lately for providing broken digital journeys for their customers. Hence, companies like Signzy are a pressing priority. As the old war dogs start implementing services like this, they will realize that it is not only timesaving but also delivers better customer satisfaction.
Name: The ePlane Company
Amount Raised: Undisclosed from VC Speciale Invest and FirstCheque, JavaCapital and Sharechat Co-founder Farid Ahsan.
What does The ePlane Company do?
Edited from PitchBook: Manufacturer of an aerial vehicle intended to transform urban transportation by air — moving goods and people. The company’s vehicle is designed to lift a payload of up to 6 kg, with a range of 100 km and a cruise speed of 50 km/h, enabling users to take human-less flights in cities.
Why do I like The ePlane Company?
I do not know if their business model will make sense or whether the aerial vehicle could withstand India’s extreme weather conditions. Nonetheless, I picked ePlane purely for the SciFi factor and the fact that it will be uber-cool to watch one of these flying around. Is there a waiting list to be one of the first to get a delivery? | https://medium.com/@showmedamani/my-funding-picks-for-last-week-w50-49573b9a7f8a | ['Anirudh A Damani'] | 2020-12-08 04:42:21.023000+00:00 | ['News', 'Venture Capital', 'Startup', 'Funding', 'Technology'] |
2,998 | Explore these Video Streaming Services | HOME ENTERTAINMENT
Explore these Video Streaming Services
photo courtesy of Fran Jacquier on Unsplash
The video streaming service Netflix is now a household word, as ubiquitous a brand name as Kleenex. Amazon and Hulu are gaining in users too. When you get right down to it, there isn’t one big media company that isn’t in the streaming game. New players and genres are being added all the time. The pandemic has only reinforced this trend to the point where the very survival of the cinema is now in doubt. Oddly, the nearly extinct drive-in is suddenly making a comeback.
If you are a library member in your jurisdiction, you have probably discovered the many alternative forms of literature, entertainment and media that are being made available to card holders, such as DVDs, e-books, databases, news clips, reference libraries, video games, and online publications from behind a paywall.
Streaming video services are also available, although I have yet to hear of one of the giants like Netflix or Amazon being offered through a public library network. I discovered two video streaming services at the start of the pandemic whose content was a refreshing departure from the better known providers.
Hoopla Logo — Screenshot by author
Hoopla offers streaming content in the categories of television, music, comics, audiobooks, e-books and movies. The host website is www.hoopladigital.com. A link from the public library website of your city should connect you. You will have to create a separate account for accessing Hoopla content. Hoopla can also be downloaded as an app from your favourite app store for use on your wireless device.
TELEVISION
Top television categories include:
animation and cartoons
children
crime
documentary
drama
health and fitness
history
mystery
science fiction.
Closed captions are available if included in the original version, and users can rate content with a star system.
MUSIC
Music content categories include every well-known genre as well as:
music awards shows
faith based
Christmas specials
age specific
artist spotlights
genre playlists
karaoke
Broadway
era specific
meditation
fitness soundtracks
The audio experience is just like listening to a CD. If your home computer is not your household sound system for enjoying music, you might want to use your allocated monthly "library loans" from Hoopla on selections from media other than audio. Sitting in front of my home workstation is not my preferred way to enjoy music.
MOVIES
Movie categories include all popular genres as well as:
art house
anime
comics inspired
live performance
indie
health and fitness
mental health
sign language
age specific
The search function has many criteria to narrow down your search, including:
release date
date added
language
format
user rating
featured
popular
category
You can also search on all these criteria from within the children’s section.
E-book categories are largely non-fiction with a lot of cooking, how-to and children’s content.
There are more audiobooks than e-books, with more fiction categories as well as non-fiction genres like bestsellers, nutrition, and how-to.
As my library's agreement with Hoopla does not include comics, I was unable to browse selections from this category. Every jurisdiction will have its own policies as to what Hoopla content is available. | https://medium.com/illumination/explore-these-video-streaming-services-47b42ffcaae4 | ['Stuart Grant'] | 2020-12-22 13:09:34.736000+00:00 | ['Documentary', 'Public Libraries', 'Video Streaming Service', 'Technology', 'Film'] |
2,999 | Delivery Automation | Delivery Automation
In my previous post we discussed warehouse automation, including fulfilment and distribution center logistics. There we explored how objects are stored, inventoried, picked, and packaged for shipment, and how robotics will play an increasing role in that work going forward. Today I want to explore what happens to the packages after they leave the warehouse, and how robotics technologies are being leveraged to deliver them to other destinations. This includes autonomous freight shipping as well as last-mile delivery to businesses and residences.
As we look to automate more of the logistics and material-handling needed to operate a distribution center, it is natural to consider the warehouse operations of companies that carry packages from the warehouse to customers' doorsteps. Companies like FedEx, UPS and DHL are all utilizing robotics to ensure their operations are more resilient. Fetch Robotics and 6 River Systems, which are both active in the warehouse automation space, also count DHL as a customer. And DHL announced in March 2020 that they would be expanding their partnership with another AMR (autonomous mobile robot) developer, Locus Robotics, to 10 new locations this year (ZDNet). FedEx recently began using robotic arms from Yaskawa America, equipped with computer vision technology from startup Plus One Robotics, to address the volume of packages in its Memphis facility (TechCrunch). In 2019, UPS brought on 22 new automated facilities across the world (which yielded 25–35% efficiency increases) and said that 70% of its packages passed through automated facilities, an increase from 50% in 2017 (Business Insider).
In 2016, McKinsey projected that within a decade 80% of all items would be delivered autonomously, and COVID has only hastened this timeline. The need for social distancing and reduced contact between people has also led to increased demand for contactless delivery of packages, groceries, takeout and other goods. While there will probably always be a need for human delivery personnel, part of this last-mile delivery demand might soon be satisfied by autonomous, land-based robots or vehicles and by aerial drones.
Safety and Regulation
One of the biggest hurdles facing any robotic system operating out in the wild in an uncontrolled environment is maintaining the safety of the public, with the significant regulatory barriers (as well as liability concerns) that come along with that. The most pressing concern for any company designing a robot to operate in the vicinity of humans is that it must be *extremely* unlikely to hurt anyone. This is true for fully autonomous systems as well as tele-operated or remote-operated systems, and is one of the main hurdles holding back Level 3, 4, and 5 autonomous automobiles. The difficulty involved in designing autonomous and robotic systems for different environments can be divided into three categories, as sketched in the example below. The first, and easiest, involves operating a robotic system in a closed environment with no humans present, or where humans are excluded from dangerous zones by cages or barriers. The next level of difficulty occurs when robots and humans share physical space, but where the humans are instructed or trained to gain familiarity with the particular robotic systems in their vicinity, such as warehouse employees trained in what to watch for and how to respond. The third and most difficult situation to design for is when a robotic system will operate around people unfamiliar with the equipment, unpredictable in their responses, and potentially unaware that the robot is even present. This last category is what must be addressed for a delivery robot operating out in the open in an uncontrolled environment. One way this is addressed is by making the robot slow-moving or small, such that a collision is less likely to cause injury or property damage.
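To make that three-tier framing concrete, here is a minimal sketch of how one might encode it in software, for example as an input to a safety-requirements checklist. The tier names and the classification function are my own illustrative choices, not an industry standard.

```python
from enum import Enum

class DesignDifficulty(Enum):
    """The three environment tiers described above, easiest to hardest."""
    CONTROLLED = 1  # closed environment; humans absent or behind barriers
    TRAINED = 2     # shared space with instructed, familiar humans
    PUBLIC = 3      # untrained, unpredictable, possibly unaware bystanders

def classify_environment(humans_share_space: bool,
                         humans_trained: bool) -> DesignDifficulty:
    """Map an operating environment onto a design-difficulty tier."""
    if not humans_share_space:
        return DesignDifficulty.CONTROLLED
    return DesignDifficulty.TRAINED if humans_trained else DesignDifficulty.PUBLIC

# A sidewalk delivery robot shares space with the untrained public:
tier = classify_environment(humans_share_space=True, humans_trained=False)
print(tier)  # DesignDifficulty.PUBLIC -- the hardest case to design for
```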
Air-based drone technologies have some apparent safety advantages in this regard for urban delivery settings, in that they don't need to interact with humans much (if at all) to accomplish a delivery mission. However, a drone falling from the sky may be moving quite fast on an unpredictable trajectory, leading to a potentially dangerous impact and unpredictable consequences on the ground. There are also many considerations around commercial airspace and keeping drones out of the flight paths of larger aircraft.
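A quick back-of-envelope calculation shows why a falling drone is taken so seriously. The numbers here are my own assumptions (a 2 kg drone failing at 60 m altitude), and air resistance is ignored, so the true impact speed would be somewhat lower.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def impact_speed(drop_height_m: float) -> float:
    """Drag-free free-fall speed at the ground: v = sqrt(2 * g * h)."""
    return math.sqrt(2 * G * drop_height_m)

def impact_energy(mass_kg: float, drop_height_m: float) -> float:
    """Kinetic energy at impact; for a drag-free fall, E = m * g * h."""
    return mass_kg * G * drop_height_m

# Hypothetical 2 kg delivery drone suffering a failure at 60 m altitude:
v = impact_speed(60)        # ~34 m/s, roughly 77 mph
e = impact_energy(2.0, 60)  # ~1,180 joules at the point of impact
print(f"impact speed ~{v:.0f} m/s, impact energy ~{e:.0f} J")
```

That is on the order of a bowling ball dropped from several stories up, which is why regulators treat flight over people as a distinct risk category.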
Ground-based delivery is in some ways easier because it can be performed by smaller robots restricted to slower speeds. Further, robots that can operate on sidewalks, and so do not need to drive on the road with other motorists, can execute a delivery with less complexity and risk than delivery by air, especially in urban settings. The trade-off to be considered here is that a sidewalk environment (as well as an urban roadway) can be a chaotic setting for the robot to navigate, more so than the relatively clear airspace above.
Ground-Based Autonomous Delivery
FedEx has experimented with autonomous delivery of same-day orders via its Roxo robots (developed by the inventor of the Segway), which it began testing in 2019 (The Verge). There are also many startups in this space taking a variety of approaches to contactless delivery. Starship Technologies (founded by a pair of Skype co-founders) uses autonomous, 20-pound robots to deliver food and groceries in markets such as Tempe, Washington, D.C., Irvine and Milton Keynes, U.K. (TechCrunch). Nuro, which uses autonomous on-road vehicles to deliver food, groceries, and dry cleaning among other things, raised nearly $1 billion from SoftBank in 2019 (The Verge — Robot delivery startup Nuro raises nearly $1 billion from SoftBank); and is expanding into prescription delivery with its recent partnership with CVS (The Verge — Nuro's driverless delivery robots will transport medicine to CVS customers in Texas). At the start of the pandemic, Unity Drive Innovation (UDI) delivered meal boxes and produce via an autonomous van in the Chinese cities of Zibo, Suzhou and Shenzhen. Since 2018, UDI's vans have been used by Foxconn to transport parts inside its 200,000-worker Shenzhen campus (IEEE Spectrum). Chinese e-commerce company JD.com also began deploying autonomous van delivery services for last-mile delivery inside Wuhan when the pandemic first emerged and the city was under lockdown (RTE Ireland). Amazon is also in on the last-mile delivery robot game, with its cooler-sized, six-wheeled Scout robot being used in Irvine and Snohomish County, Washington for over a year now (DOGO News). Scout was also deployed in Atlanta, Georgia and Franklin, Tennessee starting in July 2020 (USAToday). In line with the movement towards a RaaS (Robotics-As-A-Service) business model, companies like Kiwibot are offering last-mile delivery robots as a service to restaurants, governments and delivery apps.
Aerial Drone-Based Autonomous Delivery
Aerial drone delivery is a tantalizing option for retailers and logistics companies because it reduces shipping costs, makes last-mile delivery less cumbersome, and results in quicker shipping times for customers. But drones are limited in the weight and dimensions of the packages they can carry, and the regulatory environment at the federal and local level can be tough (and sometimes impossible) to navigate. Nevertheless, there are many companies working on drone delivery, from startups, to large tech firms like Amazon and Google, to traditional retailers like Walmart, to logistics companies like UPS. Amazon's Jeff Bezos put drone delivery on the map with his 2013 interview on 60 Minutes, and Amazon continues to invest heavily in its drone delivery capabilities, known as Prime Air; it was originally slated to launch in August 2020, but has yet to materialize, most likely held back by FAA restrictions (BusinessInsider). BusinessInsider also notes the following about Walmart:
“In 2019 Walmart was on pace to file more drone patents than Amazon for the second year in a row. With drones having a fairly small range of about 15 miles, Walmart is perfectly positioned to dominate the commercial drone industry thanks to its giant network of stores in the US.”
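A rough coverage calculation illustrates why that matters. The assumptions below are mine: a round trip halves the usable radius, coverage is an idealized circle, and the ~4,700 count of US Walmart stores is approximate.

```python
import math

drone_range_miles = 15.0                 # range figure cited in the quote above
delivery_radius = drone_range_miles / 2  # the drone must fly back: ~7.5 miles
area_per_store = math.pi * delivery_radius ** 2  # ~177 square miles per store

us_stores = 4700  # approximate number of US Walmart locations (assumption)
max_coverage = us_stores * area_per_store  # upper bound, ignoring overlap

print(f"~{area_per_store:.0f} sq mi per store; "
      f"up to ~{max_coverage:,.0f} sq mi if every store participated")
```

Even allowing for heavy overlap between stores and partial participation, the arithmetic shows how a dense retail footprint turns a short-range drone into a wide-coverage delivery network.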
Walmart also partnered with Flytrex, a Tel Aviv-based drone delivery startup, to deliver goods from a local Walmart store to residents of Grand Forks, North Dakota in April 2020 to address the needs of sheltered-in-place shoppers (Forbes). Wing, Google's drone delivery service, is available in a number of locations, including Virginia, Finland and Australia, and saw its deliveries in Virginia double when the COVID outbreak began (Bloomberg). Wing's electric-powered drones also deliver certain FedEx packages and products from Walgreens (BusinessInsider). UPS's Flight Forward drone delivery service became the first of its kind to obtain FAA approval to operate as a commercial airline in 2019 (BusinessInsider). In May 2020, UPS announced that it would begin delivering prescriptions from CVS to a retirement community in Florida; the company had been testing the service in North Carolina prior to this (The Verge). UPS uses drones built by startup Matternet. Airbus completed the first shore-to-ship drone delivery in Singapore in early 2019, and is undertaking further trials of the service (Airbus). In May of this year, Zipline, a drone delivery unicorn focused on delivering medical supplies to healthcare providers, began delivering personal protective equipment via drone to Novant Health Medical Center in Charlotte, North Carolina, a process The Verge described as such:
The service has begun by delivering supplies to Novant Health’s Huntersville medical center from a depot next to its facility in Kannapolis, North Carolina. Once the drones reach their destination, they drop the supplies via parachute, meaning the center doesn’t need any additional infrastructure to receive deliveries. Zipline says its drones can carry almost four pounds of cargo and travel at speeds of up to 80 mph.
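Those specs make the appeal for time-critical medical delivery easy to quantify. In this rough illustration, the 80 mph top speed comes from the quote above, while the 20-mile leg distance and the road-courier comparison are my own hypothetical assumptions.

```python
top_speed_mph = 80   # cited top speed for Zipline's drones
leg_miles = 20       # hypothetical depot-to-hospital distance

minutes = leg_miles / top_speed_mph * 60
print(f"A {leg_miles}-mile leg at {top_speed_mph} mph takes ~{minutes:.0f} minutes")
# ~15 minutes point-to-point; a road courier in traffic would likely take
# substantially longer, and the parachute drop removes the hand-off step.
```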
Another name in the space to watch is Flirtey, which helped Domino’s Pizza execute the first drone delivery of pizza in New Zealand in August of 2016 (UAV Coach).
There are also companies pursuing larger-format semi-autonomous aircraft capable of delivering hundreds of pounds of cargo, using airframes more similar to a winged airplane or VTOL craft. One such company is Elroy Air, capable of delivering 250–500 lbs of cargo up to 300 miles, with autonomous cargo loading and unloading at the endpoints. Another such company is Sabrewing, boasting up to 5,500 lbs of payload with VTOL ascent, and a 1,000-mile range. Both of these companies are expecting to be significantly geographically restricted in where they can fly, intending to only operate their early systems above sparsely populated areas and warehouse-to-warehouse routes to comply with safety requirements. The entire autonomous-flight industry eagerly awaits the regulatory frameworks still in development by the FAA and other airspace organizations that will govern how such autonomous and semi-autonomous craft will be allowed to operate in the future, especially in the vicinity of urban areas.
Autonomous Trucking
Before items and packages can be dropped off at our doorsteps via drone, robot or autonomous vehicle, they need to get from factories, warehouses and other facilities to distribution centers. And autonomous trucking will soon be playing an important role in that part of the process. In some ways, autonomous trucking is more straightforward than autonomous cars, because "Unlike self driving cars, autonomous freight delivery is more predictable and easier to map since the services run on fixed routes and typically stick to major highways with few intersections or pedestrians" (Research and Markets). As with ground and drone deliveries, there are a number of large companies and smaller players jockeying for position in this market. UPS is piloting self-driving delivery vans and trucks in Arizona with both Waymo and startup TuSimple. In Arizona, Waymo, which is owned by Alphabet/Google, is delivering UPS packages from stores to sorting facilities, and is also delivering car parts for AutoNation. It plans to expand testing to New Mexico and Texas this year (VentureBeat). TuSimple, which "uses Navistar trucks outfitted with the startup's own self-driving tech, which sees the world largely through nine cameras" (The Verge), counts UPS, Nvidia and Navistar as corporate investors. The company is planning to expand to Texas this year, where it will service cities like Dallas, El Paso, Houston and San Antonio. Trucks equipped with TuSimple's technology still must have a human driver present to take over if needed. Ike, named for President Eisenhower and the interstate highway system he helped create, is a San Francisco-based autonomous trucking startup founded by former Apple, Google and Uber employees, and which started off by licensing technology from Nuro (Bloomberg). Ike raised $54.5m, including from Fontinalis Partners, whose founding partner is Bill Ford, the Executive Chairman of Ford Motor Company. I personally appreciate the systems-based philosophy that Ike is taking in their design work, where they are "focused on an entire system that accounts for everything in the self-driving truck, from its wire harnesses, alternator and steering column to durable sensors designed for the highway, computer vision and deep learning that allows it to see and understand its environment and make the proper decisions based on that information. That systems approach also includes proper validation before testing on public roads." (TechCrunch). Other companies in this space include Embark Trucks and Kodiak Robotics.
It is worth noting that despite the massive amount of development work happening in this space, fully autonomous vehicle technologies may still be far away. Uber, for instance, shut down its self-driving truck project a few years ago (MIT Technology Review). And YC-backed Starsky Robotics, which is credited as the first company to complete a 7-mile highway journey without a human onboard, shut down in March 2020. In a summary of several recent MIT papers, Supply Chain Digest noted that the MIT researchers concluded that fully autonomous trucking "is likely decades off, and the near term step in freight movement is likely semi-automated platooning for long haul moves." Semi-automated platooning involves a lead truck driven by a human with a self-driving truck or fleet of trucks following behind. This approach is employed by startups such as Peloton Technology, which has taken investment from corporates like UPS and Volvo; and Locomation, which completed a public road trial in August of this year (VentureBeat). This paradigm of tandem human/autonomous trucks is a likely stepping stone to fully autonomous trucking.
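For intuition, here is a minimal sketch of the control idea underneath platooning: a constant time-headway follower that tracks a gap which grows with speed. This is a textbook-style simplification of my own, not the actual control law used by Peloton or Locomation; a real system would add vehicle-to-vehicle communication, actuator limits, and string-stability analysis.

```python
def follower_accel(gap_m: float, ego_speed: float, lead_speed: float,
                   headway_s: float = 1.0, standstill_gap_m: float = 5.0,
                   k_gap: float = 0.2, k_speed: float = 0.5) -> float:
    """Constant time-headway following: hold a gap that scales with speed.

    desired_gap = standstill_gap + headway * ego_speed
    accel = k_gap * (gap - desired_gap) + k_speed * (lead_speed - ego_speed)
    """
    desired_gap = standstill_gap_m + headway_s * ego_speed
    return k_gap * (gap_m - desired_gap) + k_speed * (lead_speed - ego_speed)

# Follower trails the lead truck by 40 m, both cruising at 25 m/s (~56 mph).
# Desired gap is 5 + 1.0 * 25 = 30 m, so the follower closes in gently.
print(follower_accel(gap_m=40, ego_speed=25, lead_speed=25))  # 2.0 m/s^2
```

The commercial appeal is aerodynamic: trailing trucks that hold a tight, speed-scaled gap draft behind the leader and burn measurably less fuel than they would at a typical human following distance.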
Autonomous Watercraft
Autonomous watercraft for cargo transport is another way to reduce human intervention with goods being shipped. Even prior to COVID-19, water-based transport technologies were already becoming important for their ability to mitigate human error (and the consequent financial losses) in the shipping process. "It is estimated that 75% to 96% of marine accidents can involve human error," and between 2011 and 2016 human error in sea-based cargo transport accounted for $1.6 billion in losses (Allianz). As a result, companies have been working to reduce the potential for human error by developing autonomous watercraft for shipping cargo; in 2019, a vessel developed by SEA-KIT executed the "first commercial crossing of the North Sea to be made by an autonomous vessel" (BBC). In addition to reducing or eliminating the potential for human error, autonomous ships will yield additional benefits: "Free from crewmembers, ships will be redesigned to be more efficient, since ship builders can eliminate accommodation structures such as the deckhouse and living quarters, as well as energy-expensive functions like heating and cooking facilities. Crewless ships will undergo a radical redesign to eliminate excess features and increase efficiency and carrying capacity" (DigitalTrends — Autonomous ships are coming, and we're not ready for them). Rolls-Royce has been a leader in autonomous cargo shipping, and its VP of Marine Innovation, Oskar Levander, said in 2016 that "This is happening. It's not if, it's when. The technologies needed to make remote and autonomous ships a reality exist … we will see a remote-controlled ship in commercial use by the end of the decade" (Digital Trends — Rolls-Royce's cargo ship of the future requires no onboard crew). In 2019, shipping giant Kongsberg purchased Rolls-Royce's Marine division, which had been conducting tests of autonomous ships in Finland; Rolls-Royce netted $500m from the transaction (Maritime Executive).
While the technology to facilitate autonomous ships is being developed rapidly, its proliferation will be slowed by regulation. The International Maritime Organization (IMO) is an arm of the United Nations that sets regulations for international shipping. The chair of the IMO's working group on autonomous cargo shipping, Henrik Tunfors, has described the IMO as "a slow-moving organization," adding that "The pessimistic scenario is that regulation will fall in place between 2028 and 2032, but if we start working straight away, we may have some regulations by 2024. But that's very optimistic" (Wall Street Journal). That being said, the Journal does note that "Autonomous ships that do short trips on national waters will only need approval by local regulators." Nevertheless, it is unlikely that autonomous watercraft will proliferate quickly enough to have any impact on reducing the spread of COVID, but they may play an important role in ensuring resilient supply chains that can withstand another outbreak.