Dataset columns:

Column       Type             Min   Max
Unnamed: 0   int64            0     3k
title        string (length)  4     200
text         string (length)  21    100k
url          string (length)  45    535
authors      string (length)  2     56
timestamp    string (length)  19    32
tags         string (length)  14    131
0
Data Sovereignty Is Complex: Ask These Five Questions Before Moving Your Data to the Cloud
Guest Post by Rusty Chapin, Manager of DevOps, BitTitan

It's not hard to see why more and more organizations are moving to the cloud. Given the shift to remote work during the pandemic, the benefits are clear: robust collaboration and engagement among remote employees, as well as the ease of sharing and securing data. But as organizations look to move workloads to the cloud, there is one often-overlooked issue they should be aware of: data sovereignty. Put simply, data sovereignty is the idea that an organization's data is subject to the laws and other governance structures of the region or nation where it's collected. When migrating data to the cloud, data sovereignty plays an important role in how and where the data is moved, and knowing what questions to ask potential vendors prior to migration is the key to compliance.

Before we jump into these questions, we should understand what makes data sovereignty so important. First, it's a security issue with serious ramifications for noncompliance. Medical, financial, educational, and governmental data have specific requirements about how and where they are stored. In some instances, data can't leave its country of origin; for instance, government agencies' documents and emails cannot be processed on another country's servers. Noncompliance with data sovereignty requirements can mean hefty fines, legal action against your organization, or even jail time in the case of federal laws like HIPAA and FERPA. It's vital to know exactly where your data goes and how it's secured during migration. Asking these five questions before choosing a vendor can help ensure the move complies with growing data sovereignty initiatives.

What happens to my data while it's in transit?

Ideally, there are no intermediary stops for your data during migration. If a potential vendor tells you that your data will temporarily be stored in a private data center on its way to its destination, that should raise red flags.
Such data centers are likely not as sound and secure as a public cloud offering. They might also sit outside the region or country prescribed by data sovereignty regulations, which could create noncompliance issues. It's important that your vendor understand the various certifications and compliance requirements that apply to your data, so clearly communicating your data sovereignty requirements to potential vendors is an important first step.

Does the vendor store my data during and after migration?

The vendor shouldn't store your data in any form, whether that means writing it to a disk or keeping it in a database. Instead, they should handle all of your data in application memory. During the migration, any point where your data stops is a risk, and it's better to avoid those risks when possible.

How should I encrypt my data during migration?

Sometimes there are bottlenecks during migrations. For example, if the destination is a little slower than the source, an overflow of data builds up and needs to be written to disk. In this situation, it's essential that the data be encrypted using strong encryption and then removed once the migration has been completed. Data at rest (as opposed to data in transit) creates a window of opportunity for theft and other compliance issues, so that window needs to be as short as possible. If your data comes to a rest during migration, it needs to be cleared immediately after it reaches its destination, shortening the window of resting time.

How does the vendor handle credential security?

Migration credentials are the keys to the kingdom: whatever vendor has access to those credentials also has privileged access to the source and destination of a data migration, so the utmost care must be taken to secure them. The best practice is for the vendor to encrypt the credentials whenever storing them.
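To make the spill-to-disk practice described earlier concrete, here is a minimal sketch of the idea: overflow data is encrypted before it ever rests on disk, and the spill file is cleared the moment the migration catches up. This is illustrative only, not BitTitan's implementation; for brevity it uses a one-time pad built from `os.urandom`, whereas a real migration tool should use a vetted authenticated cipher (for example AES-GCM from an audited library), and all names and helpers here are hypothetical.

```python
import os
import tempfile

def encrypt(data: bytes) -> tuple[bytes, bytes]:
    """One-time-pad encryption: XOR with a random key of equal length."""
    key = os.urandom(len(data))
    ciphertext = bytes(a ^ b for a, b in zip(data, key))
    return ciphertext, key

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """XOR again with the same key to recover the plaintext."""
    return bytes(a ^ b for a, b in zip(ciphertext, key))

# Overflow builds up because the destination is slower than the source.
overflow = b"records waiting on a slow destination"
ciphertext, key = encrypt(overflow)  # the key never leaves memory

# Only ciphertext ever rests on disk.
spill = tempfile.NamedTemporaryFile(delete=False)
spill.write(ciphertext)
spill.close()

# Migration catches up: read the spill back, then clear it immediately
# to keep the at-rest window as short as possible.
with open(spill.name, "rb") as f:
    restored = decrypt(f.read(), key)
os.remove(spill.name)

assert restored == overflow
```

The point of the sketch is the lifecycle, not the cipher: plaintext never touches disk, the key stays in application memory, and the resting window ends as soon as the data reaches its destination.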
Be sure to ask potential vendors about their own protocols. On the client side, you should always create accounts for the vendor that are separate from your normal administrator accounts. That way, when the migration is complete, you can immediately disable the vendor accounts.

What happens to my data after migration?

Even after the migration is complete, there are still windows of opportunity for data insecurity, so knowing how a vendor handles your data post-migration is important. First, they should clear all data once the migration has succeeded. Deleting it isn't good enough: any copies of your data should be destroyed, as should the operating environment in which that data was processed. Anything that leaves a memory footprint provides the opportunity for someone to bring your data back to life.

The cloud is a powerful tool, especially with the rise of remote working, but it's important to do your due diligence before migrating to the cloud. Look at the vendor's reputation in the industry. Ask to audit the migration logs to see exactly what was done, who was contacted during migration, and what calls were made, so you can spot anything out of the ordinary. With issues of data sovereignty, it's better to be proactive than reactive. Finally, look for vendors that are established and tested, not those who slid into the market after seeing an opportunity. When it comes to data sovereignty, you want to make sure that your organization and its data are in the hands of a vendor you can trust.

About the author

Rusty Chapin is the manager of DevOps at BitTitan, where he leads a team of engineers to deliver first-class cloud solutions to MSPs and IT professionals. His areas of expertise include database development, SQL Server clustering, large-scale SQL Server deployment, datacenter operations, IT management and executive mentoring, monitoring system design, and process analysis.
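The client-side advice above (give the vendor dedicated, migration-scoped accounts and disable them the moment the work is done) can be modeled in a few lines. This is a sketch of the policy under assumed names, not a real identity-provider API; any production system would do this through its directory service or IAM tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    name: str
    enabled: bool = True
    roles: list[str] = field(default_factory=list)

def disable_vendor_accounts(accounts: list[Account]) -> None:
    """Disable every migration-scoped account the moment migration ends."""
    for acct in accounts:
        if "vendor-migration" in acct.roles:
            acct.enabled = False

# A dedicated vendor account, kept separate from normal admin accounts.
vendor = Account("vendor-migration-svc", roles=["vendor-migration"])
admin = Account("jane-admin", roles=["admin"])

disable_vendor_accounts([vendor, admin])
print(vendor.enabled, admin.enabled)  # False True
```

Because the vendor's access lives in its own accounts, revocation is a single switch and never touches your administrators' credentials.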
https://digitizingpolaris.com/data-sovereignty-is-complex-ask-these-five-questions-before-moving-your-data-to-the-cloud-b28dcc18d3fa
['Virginia Backaitis']
2020-12-18 05:37:41.764000+00:00
['Cloud Migration', 'Information Technology', 'Cloud Computing', 'Digital Transformation', 'Information Management']
1
Esports vs “Real” Sports
Esports vs "Real" Sports

I've been an esports player and fan since Starcraft LAN parties in the early 2000s, and I couldn't be more excited about where esports are going. Between Blizzard's Overwatch League announcement and the acquisition of Team Liquid by Monumental Sports, it's clear that things are changing. Players can now expect sponsorships and salaries. Sports and esports practices are starting to converge. But some huge, fundamental differences exist between esports and sports that cannot be ignored by anyone more than recreationally interested in the space. So, here's my list of 10 things that differentiate esports from "real" sports, for those of you who still prefer soccer to Starcraft.

1. The core audience is niche.

Traditional sports are mainstream. Esports were created for and by a subculture, and it's not the same culture that created football, baseball, and hockey. The esports audience (for now) somewhat mirrors the demographics and characteristics of gamers and game developers. But you might be surprised. Compared to the general population and to all gamers, esports fans are:
More likely to be a millennial
Wealthier, and more likely employed
More likely to be male
But wait, wasn't I supposed to say esports fans are more introverted, weirder, or geekier? I'll let this very normal picture from an Oakland Raiders game speak for itself.

2. There are no referees.

In traditional sports, you have to remember the rules and play by them: if you touch the soccer ball with your hands, the other team gets a penalty kick. In esports, the rules are embedded in gameplay. You physically cannot touch the soccer ball with your hands; the game won't let you. Since it's impossible to break the rules, you don't need a referee watching for in-game violations.

3. The rules change.

Every few months, game studios update the rules, sometimes to make things fairer, sometimes just to keep the game interesting.
If hockey rules worked like esports, it would look something like this:
— Season 1: Ice now has less friction.
— Season 2: All goalies move 20% slower.
— Season 3: Goalies can now use two sticks.
This keeps the game new. It also makes esports more difficult to understand than traditional sports.

4. The games are more complex.

Because rules are programmed directly into the game and change regularly, it can take years to fully understand a Multiplayer Online Battle Arena (MOBA) like League of Legends or a Real-Time Strategy game (RTS) like Starcraft II. This will likely change over time: as more money gets involved and esports cater to a broader audience, games are already getting better at using visual cues and statistics to help casual fans understand what's happening.

5. The games change.

New games come out, old games die. This is great for the core esports audience, since it keeps things fresh, but it also makes it hard to accomplish structured objectives like building arenas and investing in branding, merchandise, and leagues.
Popular esports in 2012: DotA 1, Counter-Strike, Starcraft II, League of Legends
Popular esports in 2016: Counter-Strike: GO, Overwatch, Hearthstone, League of Legends, Dota 2

6. There are many, many, many leagues.

Anybody can make a league, which means there are a lot of them. Here's a current example on DotaBuff of just how many leagues are out there. In sports, there is usually one big league, maybe a few minor leagues, and a few large tournaments (football has the NFL and the Super Bowl). But in esports, there can be several "official" leagues, an unlimited number of tournaments and events, and the same teams can play in all of them.
The Riot Games World Championship at Sangam. [Riot Games]
Anyone with enough money and motivation could start a league, put up a large cash prize, and (with effective management and promotion) have little shortage of interested participants.
If traditional sports worked like this, anybody with the resources to do so could essentially host their own Super Bowl.

7. It's global.

Esports live on the internet, so you can watch or compete from anywhere. That gives nearly every tournament an amazing, Olympic-like tension. (Go USA!) You could even argue that esports are fundamentally intergalactic.
Live long and get rekt, noob. [Life at UofT]

8. Viewership is less dedicated.

While traditional sports are broadcast through television, esports usually reach their viewers through live streaming. It's more difficult to hold a dedicated audience (and reach them with ads) through streaming than TV; when there's so much content available on the internet, it's easier to switch your attention elsewhere. As an example, this was my weekend:
Television: BlizzCon Hearthstone Championship
iPad: ESL One Frankfurt 2016 Qualifiers
Computer: DotA personalities streaming on Twitch
Do you think I paid attention to any advertisements at all? Nope. I just turned my attention to one of the other three screens I was watching. And that brings me to…

9. Advertising and sponsorship are harder.

Since internet fame is not gated by brands or media, authenticity, intimacy, and content matter more than institutional support. Star players won't necessarily give up trolling people on Twitch to earn endorsements, and if they do, they can lose their fanbases. Moreover, esports fame is ephemeral: players can be good one year and completely irrelevant the next due to gameplay changes. Coaches leave to attend college. Team names and ownership shift dramatically and often. All of this makes traditional sponsorship and team management difficult.

10. Esports teams are structured differently than sports teams.

The players are transient, the games change, the rules evolve, and there are a lot of leagues. Viewership and income sources are fragmented and decentralized.
As a result, big esports teams look and act more like brand franchises than monolithic businesses. Most institutional team brands, like Team Liquid and Cloud9, field multiple teams for different esports titles, including Dota 2, League of Legends, Starcraft II, and CS:GO. This sounds confusing, but everything on the ground — which games are hot, which teams are good, which teams exist — can change in the space of a month. This construct allows the brand to stay above the volatility in the trenches: Team Fnatic endures, even as its individual esports teams come and go.
https://medium.com/@cmfoil/esports-vs-real-sports-b8e6db1ff793
['Cheryl Foil']
2017-05-31 15:39:00.012000+00:00
['Esports', 'Gaming', 'League of Legends', 'Overwatch', 'Technology']
2
Should Your Company Shift to a Hybrid Workforce? Look to the Cloud for the Answer.
Should Your Company Shift to a Hybrid Workforce? Look to the Cloud for the Answer.

By Steven Perlman

Think back five or ten years to when your company first started considering the idea of moving to the cloud. Likely, a few people wanted to move entirely to the cloud, a few were entertaining the idea of a hybrid model, and even more thought the idea of a complete migration was ridiculous. Where are all of those people now? It would probably be unfathomable to imagine your company's day-to-day work without the cloud, especially now that the modern workforce is in a transitional period. Now think about how you and your colleagues view a remote work environment. Likely, you have a similar dynamic: some of you are ready to work from home full time, others think a full return to the office is the only way, and still others are pushing for a hybrid model.

Onsite = On-Prem | Remote = Cloud

Years ago, the early adopters who implemented a cloud-forward strategy were considered the risk-takers. They were the ones everyone waited to see fail, and now they're considered the revolutionaries. Back then, the only way to access a good internet connection was to log onto the big, bulky desktop at your workstation in the office (I'll bet you can still hear the dial-up tones if you try). No one would have ever imagined you'd be able to work in that same spot without having eyes on the physical place where your data is being stored securely. Remember the big server room (and paying rent for a big room for your data)? It's now been rendered all but obsolete. We are on the verge of the same type of transition when it comes to a remote workforce. The onset of the COVID-19 pandemic radically changed the way people were able to work. There was no choice but to send employees home to work, and while everyone originally thought it a temporary fix, over a year later, it seems it may not be.
Hybrid Workforce = Hybrid Cloud

It's a good option, but it's better used as a transition. There were absolutely more conveniences to having some of your data in the cloud as opposed to none of it. As you started utilizing the cloud, you likely had some information there and the rest on-premises, then slowly transitioned until you were totally virtual. You're likely having a similar experience now with the remote workforce. Maybe you have a rotation of who comes into the office every two or three days, or everyone meets once a month for a certain meeting. There's probably talk of when it will be safe to return to the office full time, and some employees would rather be back as soon as possible while others would prefer to stay virtual.

This is where the cloud vs. on-prem metaphor returns. Making a move to an entirely remote workforce is the same choice your predecessors had to make about moving entirely to the cloud: a daunting concept at first, but looking back, it's hard to believe anyone was ever against it. Five or ten years from now, it will be hard to believe anyone ever stood against a remote workforce. There are many benefits to a remote workforce that haven't even been realized yet, just as there were with moving to the cloud. For example, when people chose to move to the cloud a decade ago, how could they have known it would be vital to allowing employees to work from home during a deadly global pandemic? Talk about an unforeseen benefit! More immediately, think about the money you'll save as a company by going fully remote. You won't have to pay rent, utilities, insurance, or anything else that you would normally pay on a physical space. You'll be able to hire #TopTechTalent from anywhere you want to — there's an entire untapped market of technology professionals you can choose from when building out a team. Your reputation as a Best Place to Work is also likely to skyrocket.
Remote work capabilities are a huge benefit for companies, and they will definitely give you a leg up on your competition. You don't want to miss this chance to be at the front of an evolving workforce. Steve Jobs once said, "Innovation distinguishes between a leader and a follower." Which one are you going to be remembered as?
https://medium.com/@syftersteve/should-your-company-shift-to-a-hybrid-workforce-look-to-the-cloud-for-the-answer-ee1b00f7bc77
['Steven Perlman']
2021-04-27 15:54:11.739000+00:00
['Remote', 'Technology', 'Best Practices', 'Cloud', 'Hybrid Cloud']
3
Efficient energy storage and future PVS by Solar DAO
A solar power station is a specially equipped system for converting sunlight into electricity, powered by solar modules. At the moment, solar energy is one of the fastest-growing types of renewable energy. Solar DAO is a tokenized fund designed to let everyone easily participate in PV solar plant construction across the globe. SDAO tokens are listed on YoBit, Bitafex, and Idex.

Solar module + inverter = electricity

The main problem with any renewable energy source is irregular electricity generation. Compared with conventional non-renewable sources, generation peaks and users' consumption peaks do not match: a wind station works only when there is wind, and a PVS reaches maximum power output in daytime, sunny weather. Power storage can solve this problem. It is important to note that efficient storage means not only storage devices (batteries) but also the technical solution, controllers, and software. The main types of storage used are:

Acid batteries. Bulky devices with a short service life and significant losses in conversion and charging.
Lithium batteries. A fairly effective option, but still very expensive and a safety hazard due to heating.
Mechanical storage. Works on the principle of raising and lowering a load. Rarely used because of large energy losses.
Hydrokinetic storage. Electricity is sent to pumps that move water, and the accumulated water potential is later used to drive turbines. The main disadvantage is the huge size of the facilities.
Hydrogen storage. Hydrogen is produced by electrolysis and can later be converted back into electricity and heat. The high cost, explosiveness, and fire hazard have kept this method from spreading.

Hybrid solutions are also used, and they too have their pros and cons. The main problem with existing industrial storage is high cost and low capacity; there is practically no storage technology that is economically profitable. Existing batteries are too expensive and too inefficient, to the point that the batteries cost more than the PVS themselves. Last year, SDG&E unveiled the world's largest lithium-ion battery energy storage facility: a 30 MW system capable of storing up to 120 MWh of energy. The $1 billion solar farm for Riverland is expected to be built in 2018; Riverland Solar Storage's 330 MW of solar generation and 100 MW battery storage system will be the biggest in Australia. And what about Tesla? For reference: Tesla currently provides two types of power storage systems, the Powerwall for residential and small office spaces and the Powerpack for enterprises, at a cost of $250 per kW.

Solar DAO and the best solution

The Solar DAO team has spent many years researching and choosing the most profitable engineering solutions, and we treat the question of power storage just as carefully. Together with our partners, we plan to construct a project with a capacity of about 40 MW in Turkey. Our partners will help build the PVS by selecting the appropriate kinetic storage and providing support:

Nord Systems. Development potential, specialized software, and information display systems.
Atlant Energy. Innovative equipment, kinetic energy storage.
Natpromlizing. Project financing, modernization support.
Sberenergodevelopment. Full support of the project, selection of technical solutions, financing, and assistance with implementation.

After the first successful project in Turkey, we plan to run several pilot projects in India and then expand this experience to Israel, Germany, Malta, and Cyprus.
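As a quick sanity check on the SDG&E figures cited above (a 30 MW system storing up to 120 MWh), the ratio of energy capacity to power rating gives the storage duration, the number of hours the battery can sustain its full rated output. A minimal sketch of the arithmetic:

```python
def storage_duration_hours(capacity_mwh: float, power_mw: float) -> float:
    """Hours the battery can sustain its full rated power output."""
    return capacity_mwh / power_mw

# SDG&E facility figures from the text: 30 MW rating, 120 MWh capacity.
print(storage_duration_hours(120, 30))  # 4.0 hours at full output
```

Duration is a standard way to compare installations: a four-hour battery like this one is sized to shift an evening demand peak, not to back up a grid for days.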
https://medium.com/solardao/efficient-energy-storage-and-future-pvs-by-solar-dao-14a9638558d1
['Solar Dao']
2019-01-28 12:03:58.427000+00:00
['Solar Energy', 'Blockchain', 'Technology', 'Solardao']
4
How do you hold a climate-friendly conference? Log in.
How do you hold a climate-friendly conference? Log in.

Air travel is a huge source of carbon emissions — and part of the job in academia. Does it have to be?

Illustration by Francesco Zorzi

By Schuyler Velasco

Stephen Allen hasn't taken a flight for work since 2006. The management lecturer at the University of Sheffield studies ways that large organizations can operate more sustainably, and he decided early in his career that he needed to embody what his research recommends. "I've taken a fairly draconian approach," Allen says. Last year, he refused to fly from his home in the United Kingdom to a conference in Croatia. "Once you start looking at the carbon footprint calculators, you think, my God, me cycling and walking to work or whatever, it's wiped out the moment I get on an airplane."

Frequent flying, like eating meat and driving a gas-guzzling SUV, is a choice facing ramped-up scrutiny in an era of climate change. In January, the annual World Economic Forum drew some side-eye when a record number of private jets landed in Davos for the event — even as organizers touted global warming as a key topic of conversation. Climate activist Greta Thunberg refuses to fly; her activism has helped kick-start a "flight-shaming" movement worldwide and especially in her home country, Sweden. Many professionals are grappling with how to adapt their work habits to a warming planet, and the stakes feel even higher in higher education. Flying to research sites and academic conferences has long been an essential part of a professor's job. But many climate and sustainability researchers, who spend their careers thinking about ways to lessen humanity's environmental impact, feel uncomfortable with the trappings of a typical conference: a round-trip flight, a days-long stay in an energy-guzzling chain hotel, wasting paper with all those programs and name badges.
Still, the pressure can be high to take part in industry confabs, says Jennie Stephens, a professor of sustainability science and policy at Northeastern University. "Particularly within some disciplines," Stephens says, there's a feeling that "you really need to show up." Or do you? Stephens and Allen are among the academics trying to give the traditional conference model a climate-friendly makeover — and convince their colleagues to find other ways to gather the world's experts in a given field.

The Tyndall Centre, a collaboration of climate change researchers from the UK and China, has created a decision tree to help its members make more climate-efficient travel choices. A case study of air travel at the University of British Columbia recommends a host of adjustments, from encouraging employees to travel economy class to incorporating an emissions tracker into the university's financial management system. And this past November, when Allen and colleagues at the University of Sheffield held a symposium on reducing academic travel, they did it virtually, tapping speakers on three continents and hosting 100 participants from 19 countries. In June, Northeastern is teaming up with the KTH Royal Institute of Technology in Sweden to hold a conference on sustainable consumption with two hubs — one in Boston, one in Stockholm. The goal is to allow both European and North American participants to attend with little or no flying involved. "Culturally, a lot of what we think we need [is] not very sustainable," says Stephens, a co-chair of the Northeastern event.
"So we are trying to be innovative and the travel component is a part of that." Air travel makes up roughly 2 percent of global carbon emissions — a small but rapidly growing share — and flying is one of the most consequential choices a single person can make about their carbon footprint. But most air travel is done by a small group of frequent flyers, many of whom are traveling for business: in the UK, 15 percent of the population took 70 percent of the nation's flights in 2013, according to a government survey. Academia's contribution to that is hard to quantify, but some figures offer clues. The University of British Columbia estimates that the carbon impact of university-related air travel is equivalent to about two-thirds of the annual impact of operating its main campus, and Allen says that around 20 percent of a typical research-intensive university's carbon footprint comes from flying. "Some people's academic identities are totally bound up in traveling," Allen says. "That's how they understand their jobs. That's how they understand being successful." Still, when Northeastern began planning to host this summer's Sustainable Consumption Research and Action Initiative (SCORAI) conference, the transatlantic flights it would've required of many Europe-based researchers seemed discordant. After all, the theme of this year's gathering is Sustainable Consumption & Social Justice in an Urbanizing World, and it will include round-table discussions on "Lifestyles, morality, and the climate crisis" and "Flight-free vacation practices," to name a few. "We reached out a hand to the Boston team and said, 'We think it would be nice if we could gather in Northern Europe and skip flying,'" says Daniel Vare, a researcher at KTH and the project leader for the Stockholm hub, which will host about 100 people. The two teams created a compromise between a classic conference and a virtual event.
Keynote speakers will be split between Boston and Stockholm, and talks will be live-streamed. The schedule will run during the workday in Boston and into the evening in Stockholm, to bridge the six-hour time difference. Each site will hold parallel breakout sessions. Organizers are keen to avoid a situation where one hub or another feels like the “main” location. Networking events, which Stephens still believes are a crucial reason to hold conferences, will take place at both locations. “We hope that this could be the recipe for a more in-between version, where you get the social interaction, but still lower emissions and flight miles,” Vare says. SCORAI’s menus will be plant-based and locally-sourced. The 150 or so Boston participants will have the option to stay in campus dorms (which generally operate more efficiently than hotels) and will be encouraged to take public transit. The agenda in Stockholm includes a train trip to ReTuna, a Swedish mall where everything on sale is recycled. Putting on more sustainable events like SCORAI and encouraging remote interaction has plenty of practical upside beyond the climate, Stephens points out. “It’s cheaper for the organization, and it’s more time-efficient not to have to travel as much,” she says. Plus, it can make for easier logistics. The virtual nature of the University of Sheffield’s November symposium allowed Allen and his team to book speakers who might not have made it otherwise. One gave a keynote from Sweden, then spoke at another virtual conference based in Spain the same day. Participants took advantage of other opportunities to connect, chatting on an online platform and in virtual breakout sessions. “There’s not getting to know people over a coffee or a beer, but there are other upsides,” Allen says. Still, the traditional conference model has endured for a reason. 
At the Sheffield symposium, one presentation raised the possibility that anti-flying measures could disproportionately impact early-career academics, who might feel pressure to turn down can’t-miss career opportunities. Even among climate researchers, there isn’t total agreement on ramping down air travel. Individual flights are still a drop in the ocean of global carbon emissions, and some see getting the message out and fundraising as more crucial than giving up flying. “It’s quite a controversial topic,” Allen says. “It’s still a marginal community of people keenly trying to even think about these questions.” But Vare, in Stockholm, thinks that even if conferences remain mostly the same, new models for remote events could force people to reconsider their regular work travel. He says he’s already seen smaller meetings and research collaborations become virtual more often in just the past few years. He hopes a climate-friendly academic conference could offer inspiration for conference organizers in other industries, leading to more semi-virtual events. “We’re modeling for our students, for our partners, for other nonacademic community members,” Stephens says. “So we should be leaders.”
https://medium.com/swlh/how-do-you-hold-a-climate-friendly-conference-log-in-d36576bc0f30
['Experience Magazine']
2020-03-01 11:03:33.972000+00:00
['Technology', 'Climate Action', 'Higher Education', 'Telecommunication', 'Academia']
5
Guide to Material Motion in After Effects
I've already shared why Motion Design Doesn't Have to be Hard, but I wanted to make it even easier for designers to use the Material motion principles I know and love. After Effects is the primary tool our team uses to create motion examples for the Material guidelines. Having used it to animate my fair share of UIs, I wanted to share my workflow tips and…

My After Effects sticker sheet

Download this basic sticker sheet to see a project completed using my streamlined workflow (outlined below). It contains a collection of Material components, baseline UIs, and navigation transitions. Download it here 👈 Available under Apache 2.0. By downloading this file, you agree to the Google Terms of Service. The Google Privacy Policy describes how data is handled in this service.

Importing assets into AE

First things first, we need assets to animate. Most of the visual designers on our team use Sketch, which by default doesn't interface with AE. Thankfully Adam Plouff has created this plugin that adds this functionality. I used it to import our library of baseline Material components from Sketch into AE. These assets are found in the sticker sheet's Components folder.

Creating UIs

With this library of baseline components, new UIs can quickly be assembled by dragging them into a new AE comp.
https://medium.com/google-design/guide-to-material-motion-in-after-effects-9316ff0c0da4
['Jonas Naimark']
2019-05-22 14:32:47.463000+00:00
['Animation', 'Technology', 'Visual Design', 'Material Design', 'Design']
6
Foursquare Predicts Chipotle’s Q1 Sales Down Nearly 30%; Foot Traffic Reveals the Start of a Mixed Recovery
When Chipotle came on the scene, the chain earned lots of fans for its approach to “food with integrity,” including antibiotic-free meats, GMO-free ingredients, and fresh local produce. However, the last six months have been a tumultuous ride. Since the first E. coli reports emerged in October 2015, new cases popped up across the country, raising skepticism about its products and processes, and Chipotle has been racing to squash the issues, institute better training and manage its reputation. The fast casual Mexican-themed chain is still dealing with the repercussions of those outbreaks, as well as an even more recent norovirus outbreak at two stores in March. In February, the CDC gave the chain a clean bill of health. To take a deeper look at how the downturn and recovery have gone, we analyzed the foot traffic patterns at the more than 1,900 Chipotle US locations and compared them to the previous year. At Foursquare, we have a trove of anonymous and aggregate data on where people go, based on the 50 million people who use our apps (Foursquare and Swarm) and websites monthly. Many users passively share their background location with us, which our technology can match up with our location database of over 85 million places, giving us clear insight into natural foot traffic patterns. (Here’s a video that shows how Foursquare maps a large Chipotle location in downtown Portland, Oregon.) A Look Back Foot traffic to Chipotle started to follow the same directionally downward seasonal winter traffic trend in 2015 as in 2014. But as time went on, it became clear that 2015 was no ordinary winter for Chipotle; traffic was down in a more significant way. The chart below shows the share of visits to Chipotle restaurants in comparison to visits to ALL restaurants in the United States. In the 2015–2016 winter, visits to Chipotle restaurants declined more significantly than in 2014–2015. Visit share began to recover in February 2016, marked by the CDC’s conclusion of its E.
coli investigation and Chipotle’s ‘raincheck’ promotion launch, ostensibly for customers who were unable to satisfy their burrito cravings during the company’s system-wide closure on February 8. Foot traffic took another dip, albeit much smaller, following the more minor norovirus outbreak in Boston in two locations in early March. Sales Projections Chipotle has publicly reported its weekly sales for the first 10 weeks of Q1, giving us ample data to build statistical models to project sales for the rest of the quarter. Taking into account reported sales, redeemed coupons and other factors, along with Foursquare foot traffic data, we estimate that Chipotle ended Q1 2016 with same store sales down roughly 30% year-over-year (which we expect to be confirmed by Chipotle when it reports earnings on April 26). Foot traffic estimates, however, tell a brighter story. Foursquare data shows that year-over-year, Q1 same store traffic declined only about 23%. The gap between sales and foot traffic is likely a result of all the free burrito coupons that were redeemed, which lured in people, though not revenue. We believe the 23% decline in same store foot traffic is the more meaningful number that shareholders should focus on, rather than the 30% decline in sales. It shows that Chipotle is building trust back with customers, which is more important to its success long-term. Although it sacrifices revenue this quarter by giving product away, it is proving to be a winning strategy for getting people comfortable with coming back. The trick is in making sure that these customers come back again and spend money in the future. “The problem is that Chipotle’s customers are already so darn loyal.” — Chipotle CFO, Jack Hartung (during a July 21, 2015 earnings call to analysts) Chipotle Needs to Focus on Loyalty We looked at how frequently customers went to Chipotle over the past year and found some interesting insights.
Last summer, just 20% of Chipotle customers made up about 50% of foot traffic visits. Because this cohort of loyal customers reliably returned to Chipotle month after month, they contributed to an outsized percentage of foot traffic, and likely sales. Interestingly, it’s this group of faithful customers that have changed their Chipotle eating habits most dramatically: these once-reliable visitors were actually 50% more likely to STAY AWAY in the fall during the outbreak, and they have been even harder to lure back in. While those who infrequently visited Chipotle last summer have returned to Chipotle at similar rates as before, the formerly loyal customers have been 25% less likely to return. The loss of these important customers is what has really hurt Chipotle, since losing 2–3 loyal customers is the equivalent of losing about 10 other customers. To demonstrate that this is an unusual loss of loyalty, versus natural attrition, we compared this pattern with a cohort of frequent Panera goers. The chart below illustrates that while both chains experienced a similar seasonal dip, Chipotle has lost much more traffic from its loyalists than Panera has. Chipotle has famously dismissed the idea of having a loyalty program, stating that it didn’t believe that loyalty programs help turn infrequent goers into loyal visitors. According to Chipotle CFO Jack Hartung, “The problem is that Chipotle’s customers are already so darn loyal.” Looks like it’s time to reconsider those famous last words. So, where are they headed instead? Foursquare foot traffic data reveals that they have replaced their usual Chipotle visits with visits to other popular chains such as McDonald’s and Starbucks. They have also been slightly more likely than the average person to visit Whole Foods, for which there are some naturally overlapping qualities in offerings that have more integrity and healthfulness. Looking to the Future Two weeks from today, Chipotle will share its official Q1 earnings. 
We, alongside most analysts, anticipate the bitter pill the restaurant chain will have to swallow as they report on losses. But we also see a more unique, nuanced and slightly rosier picture, and urge Chipotle to continue building brand loyalty — one burrito at a time. ### Foursquare’s Location Intelligence Paves the Path To Recovery The data looks promising for recovery, but there will be trouble for Chipotle if they don’t lure back the formerly loyal visitors or nurture a new group of faithful fans. Some ideas for how to do this effectively: Analyze foot traffic patterns by using Place Insights powered by Foursquare. Target advertising at the formerly frequent Chipotle visitors who have the power to bring back larger percentages of sales by using Pinpoint powered by Foursquare. Measure all of its digital marketing initiatives so they can double down on the programs that are most effective at turning visitors into loyalists; they can do this through Attribution powered by Foursquare. When you’re operating a brick-and-mortar business, location intelligence is critical. Chipotle’s burrito-based bottom line proves it. Interested in any of the analysis or tools mentioned above? Do you need the power of location intelligence? Read more about our enterprise solutions or contact us. ### Notes on Methodology
https://medium.com/foursquare-direct/foursquare-predicts-chipotle-s-q1-sales-down-nearly-30-foot-traffic-reveals-the-start-of-a-mixed-78515b2389af
['Jeff Glueck']
2018-08-01 12:06:28.012000+00:00
['Technology', 'Foursquare', 'Finance', 'Data Science', 'Chipotle']
7
US DOT Regulatory Moves Foreshadow Big Changes in 2020
The Federal Motor Carrier Safety Administration (FMCSA) and the National Highway Traffic Safety Administration (NHTSA) posted important regulatory documents with comment deadlines this summer. Both documents are Advance Notices of Proposed Rulemaking (ANPRM). The ANPRMs not only signal the start of a more active regulatory phase for US DOT; they also require US DOT to respond directly to comments, and thus they will build a formal record that has the force of administrative law behind it. The Basics · The comment deadline for the NHTSA notice is July 29th. · The comment deadline for the FMCSA notice was extended to August 26th. · The NHTSA notice starts the process of updating Federal Motor Vehicle Safety Standards for Level 4 and Level 5 AVs, in particular for unconventional design (i.e. AVs without manual controls). · The FMCSA notice begins the process for updating its regulations for operation, testing and inspection. FMCSA will only apply new regulations to Level 4 and Level 5. As Levels 1–3 require the presence of a human operator, FMCSA states explicitly it is not intending to change regulations for Level 1–3 CMVs. · NHTSA indicates that changes to FMVSS Series 100 and 200 that accommodate AVs will be formalized. The process starts with the current ANPRM. Why Is an Advance Notice Important? An Advance Notice is the first concrete step in promulgating new regulations. Federal agencies must respond to germane comments on the record. Until now, US DOT has only published Requests for Comment and policy documents (e.g. Automated Vehicles 3.0), which are information-gathering exercises. Important, to be sure, but RFCs do not have any legal force and federal agencies are not required to respond to comments. ANPRMs are followed by a formal Notice of Proposed Rulemaking (NPRM), which is the actual proposed regulation and will include on-the-record responses to comments submitted for the ANPRM.
The current ANPRMs are the most important regulatory documents released to date for the regulation of automated vehicles. What Is and Is Not Covered? The ANPRMs only cover Level 4 and Level 5 vehicles. No regulatory changes are proposed for Levels 1–3. FMCSA is broadly considering issues related to operation (including CDL endorsements and Hours of Service), testing and inspection. Roadside inspections and cybersecurity issues are part of the ANPRM. The NHTSA ANPRM is the starting document for updating FMVSS Series 100 and 200. NHTSA wants to eliminate requirements that are not applicable to AVs. As importantly, NHTSA seeks assistance in developing new safety testing methods. Six different options are presented for comment: A) Normal ADS-DV Operation; B) TMPE (Test-Mode with Pre-programmed Execution); C) TMEC (Test-mode with External Control); D) Simulation; E) Technical Documentation; and F) Surrogate Vehicle with Human Controls. Where Does US DOT Go from Here? Both ANPRMs are the starting points for updating current regulations that will accommodate the unique characteristics of AVs. Both FMCSA and NHTSA will need to review all comments and craft formal responses that are consistent with the regulations they intend to adopt. Neither agency faces a legal deadline to respond and release the follow-up NPRM; however, US DOT does want to get new rules in place to provide certainty for the industry. In the end, US DOT wants to promote and grow AVs. Critically, the NHTSA ANPRM indicates that a team at Virginia Tech is drafting new FMVSS technical language, and such language for 30 FMVSSs is due by the end of the calendar year. Reading between the lines, it appears that these 30 FMVSSs will be part of a Notice of Proposed Rulemaking that NHTSA will want to release in early 2020.
https://medium.com/@KNaughton711/us-dot-regulatory-moves-foreshadow-big-changes-in-2020-91013ca9d771
['Keith Naughton']
2019-07-15 13:42:39.342000+00:00
['Administrative Law', 'Technology News', 'Self Driving Cars', 'Regulation', 'Transportation']
8
An Illustrated Guide to Bi-Directional Attention Flow (BiDAF)
The year 2016 saw the publication of BiDAF by a team at the University of Washington. BiDAF handily beat the best Q&A models at that time and for several weeks topped the leaderboard of the Stanford Question Answering Dataset (SQuAD), arguably the most well-known Q&A dataset. Although BiDAF’s performance has since been surpassed, the model remains influential in the Q&A domain. The technical innovation of BiDAF inspired the subsequent development of competing models such as ELMo and BERT, by which BiDAF was eventually dethroned. When I first read the original BiDAF paper, I was rather overwhelmed by how seemingly complex it was. BiDAF exhibits a modular architecture — think of it as a composite structure made out of Lego blocks, with the blocks being “standard” NLP elements such as GloVe, CNN, LSTM and attention. The problem with understanding BiDAF is that there are just so many of these blocks to learn about, and the ways they are combined can seem rather “hacky” at times. This complexity, coupled with the rather convoluted notation used in the original paper, serves as a barrier to understanding the model. In this article series, I will deconstruct how BiDAF is assembled and describe each component of BiDAF in (hopefully) an easy-to-digest manner. Copious amounts of pictures and diagrams will be provided to illustrate how these components fit together. Here is the plan: Part 1 (this article) will provide an overview of BiDAF. Part 2 will talk about the embedding layers. Part 3 will talk about the attention layers. Part 4 will talk about the modeling and output layers. It will also include a recap of the whole BiDAF architecture presented in very easy language. If you aren’t technically inclined, I recommend simply jumping to part 4. BiDAF vis-à-vis Other Q&A Models Before delving deeper into BiDAF, let’s first position it within the broader landscape of Q&A models. There are several ways in which a Q&A model can be logically classified.
Here are some of them: Open-domain vs closed-domain. An open-domain model has access to a knowledge repository which it will tap on when answering an incoming Query. The famous IBM Watson is one example. On the other hand, a closed-domain model doesn’t rely on pre-existing knowledge; rather, such a model requires a Context to answer a Query. A quick note on terminology here — a “Context” is an accompanying text that contains the information needed to answer the Query, while “Query” is just the formal technical word for question. Abstractive vs extractive. An extractive model answers a Query by returning the substring of the Context that is most relevant to the Query. In other words, the answer returned by the model can always be found verbatim within the Context. An abstractive model, on the other hand, goes a step further: it paraphrases this substring into a more human-readable form before returning it as the answer to the Query. Ability to answer non-factoid queries. Factoid Queries are questions whose answers are short factual statements.
Most Queries that begin with “who”, “where” and “when” are factoid because they expect concise facts as answers. Non-factoid Queries, simply put, are all questions that are not factoids. The non-factoid camp is very broad and includes questions that require logic and reasoning (e.g. most “why” and “how” questions) and those that involve mathematical calculations, ranking, sorting, etc. So where does BiDAF fit within these classification schemes? BiDAF is a closed-domain, extractive Q&A model that can only answer factoid questions. These characteristics imply that BiDAF requires a Context to answer a Query. The Answer that BiDAF returns is always a substring of the provided Context. An example of Context, Query and Answer. Notice how the Answer can be found verbatim in the Context. Another quick note: as you may have noticed, I have been capitalizing the words “Context”, “Query” and “Answer”. This is intentional. These terms have both technical and non-technical meaning, and the capitalization is my way of indicating that I am using these words in their specialized technical capacities. With this knowledge at hand, we’re now ready to explore how BiDAF is structured. Let’s dive in! Overview of BiDAF Structure BiDAF’s ability to pinpoint the location of the Answer within a Context stems from its layered design. Each of these layers can be thought of as a transformation engine that transforms the vector representation of words; each transformation is accompanied by the inclusion of additional information. The BiDAF paper describes the model as having 6 layers, but I’d like to think of BiDAF as having 3 parts instead. These 3 parts along with their functions are briefly described below. 1. Embedding Layers BiDAF has 3 embedding layers whose function is to change the representation of words in the Query and the Context from strings into vectors of numbers. 2.
Attention and Modeling Layers These Query and Context representations then enter the attention and modeling layers. These layers use several matrix operations to fuse the information contained in the Query and the Context. The output of these steps is another representation of the Context that contains information from the Query. This output is referred to in the paper as the “Query-aware Context representation.” 3. Output Layer The Query-aware Context representation is then passed into the output layer, which will transform it to a bunch of probability values. These probability values will be used to determine where the Answer starts and ends. A simplified diagram that depicts the BiDAF architecture is provided below:
https://towardsdatascience.com/the-definitive-guide-to-bi-directional-attention-flow-d0e96e9e666b
['Meraldo Antonio']
2019-09-07 13:57:19.763000+00:00
['NLP', 'Artificial Intelligence', 'Machine Learning', 'Technology', 'Data Science']
9
3 retail branding tips to connect with your customers
Creating a connection between your brand and your customers is crucial for retail. So the big question is: How do you ensure that customers stay satisfied and return to purchase more? As a retailer, you have to be open to change and adapt easily to stay relevant. Be close to your customers, listen to them and fulfil their needs. But how do you achieve that? Let’s have a look at the 10 fastest growing companies in the Netherlands in 2019. According to the research done by the Erasmus Centre for Entrepreneurship, those companies are: Picnic, Takeaway, Rituals, Action, Young Capital, Elastic, Coolblue, Calco, Adyen and BasicFit. What makes those companies so successful and why are they a great example of building a strong connection between a brand and its customers? Customer Experience They focus on customer experience. How do you ensure that customers have a positive experience during their visit to your shop, or during other interactions with your brand? A positive experience will lead to higher customer retention, and thereby more sales. Action strongly focuses on surprising their customers. For example, when a customer goes to their store looking for bicycle lights, they often end up buying more than they had anticipated. Action surprises their customers with new products, things that can come in handy, for an attractive price. Action’s in-store experience is designed to increase impulse buying. Another company that knows a thing or two about customer experience is Rituals. When a customer enters their shop, they are welcomed with a warm cup of tea and friendly shop assistants offering to demonstrate the newest scrubs or creams. Rituals creates a unique customer experience which is consistent in every store. Rituals’ customers know what to expect. Rituals is attentive and ensures that everyone who walks through the door experiences a moment of relaxation.
They are also known to regularly introduce new scrubs, creams and incenses, which ensures that their customers don’t get bored and are curious every time they walk past a Rituals store. However, the real champion of creating a positive customer experience is Coolblue; ‘Everything for a smile’. Their service desk is open until midnight. Their customers have the opportunity to try the products at home and send them back, in case they are not 100% satisfied. If a customer is looking for comprehensive, personal advice, they are welcome to call the service desk. Alternatively, they could seek advice in one of the stores, where the employees are well-trained to assist customers with all sorts of questions. Customers can share their real Coolblue experience with others on the website of Coolblue, which creates brand trust. Love Brand Have you ever heard of the term Love Brand? This new term is used quite frequently these days. A Love Brand is a company that invests in its relationship with its customers. A Love Brand translates its unique values into its customer experience. It embraces its customers with love and enthusiasm and inspires them to share their love for the brand. A Love Brand is a brand that also creates a connection between its customers. Rituals is a real Love Brand. Rituals has such a strong brand identity that one feels instantly connected with their brand vision. Their message empowers people to take a moment for themselves, because every little moment contributes to their happiness. When one enters a Rituals store, they are instantly embraced with a feeling of relaxation and calmness. Rituals tries to fully understand their customers’ needs and does a great job at seeing things from their customers’ point of view. They want their customers to take a step back from the daily hustle. Another example of a strong Love Brand is Dille & Kamille.
Dille & Kamille customers prefer shopping in the physical store instead of the website, because it provides a better brand experience. The smell, the surprising range of products and the new seasonal inspiration attract them to the store. From the moment they walk into the store, they feel connected with the core values of the brand, a feeling of happiness and positivity. Last, but not least, Marco Aarnink, with his company Print.com, knows how to give his customers the attention they deserve. His team takes the time to talk with their customers, understand them, and relate to them in order to establish a true relationship. Ordering a printed notebook has never been such a personalized experience. Print.com customers feel like part of the Print.com community. BE your customer The fastest growing companies at the moment are Picnic and Takeaway.com. They both have something in common, namely that they do something that a customer would otherwise do themselves. They simplify their customers’ lives by providing a convenient service. Food on-demand, without the need to leave your house. This allows their customers to have more time for themselves. In short, make sure that your customer experience is consistent throughout, regardless of whether your customers are seeing an advertisement, passing your store, or being greeted by an employee. Make sure that these experiences are all intertwined, just as Coolblue does. This will strengthen your brand identity.
https://medium.com/marketing-and-branding/3-retail-branding-tips-to-connect-with-your-customers-f686eecdf642
['Raul Tiru']
2020-12-21 12:21:39.825000+00:00
['Retail Marketing', 'Retail Technology', 'Branding', 'Retail', 'Retail Industry']
10
Serious Sam’s Baffling Stadia Exclusivity
Google keeps the iconic shooter franchise off consoles into 2021 Serious Sam Collection Stadia screenshot taken by the author. We’re just a few weeks away from the release of Serious Sam 4, the latest installment in the long-running hardcore PC action shooter franchise. But if you’re a console player hoping to test the game out on your current machine or one of this fall’s new consoles…you’re out of luck until some unspecified date next year. You see, Croteam, the makers of Serious Sam, signed a multi-game exclusivity deal with Google’s Stadia platform this past spring. Serious Sam 4 will not launch on any consoles this month, and will not come out in the lucrative launch window on this November’s new platforms, instead debuting on Steam and Google’s underused and oft-derided streaming platform. This same deal also brought the Serious Sam Collection to Stadia, which is actually a finished version of Serious Sam Fusion, a beta experiment first launched on Steam that was abandoned over a year ago. Collection bundles all the content from the HD remakes of the first two Serious Sam games alongside Serious Sam 3 into one big awkward uber-game. Once again, Collection won’t launch on other “consoles” until some unspecified date in the future. That’s a shame, because Stadia is a much worse way to play these games than the PC originals. Serious Sam Fusion included numerous graphics settings and experimental tweaks, all of which are disabled and hidden in the Stadia release. Serious Sam is also a very fast and twitchy game, requiring precise timing and control for maximum play. It’s one of the most intense and hardcore action franchises ever made and it never slows down. Stadia’s latency performance has improved dramatically since launch, and it’s a testament to the state of the service that the Sam games function at all.
But it never manages to feel quite as crisp and fun as playing on your own PC. At least Stadia Pro subscribers were able to snap up the Collection as a freebie at its launch a few months ago, but otherwise Stadia users derive no obvious benefit from this deal. The Sam games, while always gorgeous, are so well optimized that they don’t necessarily need the beefy hardware back-end that Stadia provides. I have no doubt that the new game will run well on mid-range PCs and even the current base-level consoles whenever it finally launches on those machines. Signing this deal means Croteam is missing out on a huge opportunity this fall. The launch lineups of the two upcoming consoles are sparse, and both Serious Sam 4 and Collection would have been perfect “second games” for people to play when the launch hype wears off and they’re antsy for new content. I think Collection would also be perfectly at home on Switch. Instead of gaining great benefit from the marketing and hype surrounding these new machines, not to mention Nvidia’s upcoming 3000-series GPU launch, the Serious Sam franchise will instead spend the next several months under-performing on Stadia. Google hasn’t done enough to secure a large library for their fledgling platform, and their marketing budget for the service is essentially zero right now, outside of a small dedicated group on Reddit who will tell you it’s the best “console” they’ve ever used. I do think Stadia has a lot of potential. I think that Serious Sam is an interesting demo of both the benefits and the pitfalls of the service. But this iconic twitch-based action franchise deserves to launch on platforms that can actually provide a fast, lag-free experience, and on platforms that will actually market their games. Hopefully, Serious Sam 4 will do well enough on Steam that Croteam can afford a big push when the game finally comes to consoles next year.
https://xander51.medium.com/serious-sams-baffling-stadia-exclusivity-adf70d56cb16
['Alex Rowe']
2020-11-30 19:23:51.532000+00:00
['Gaming', 'Technology', 'Tech', 'Business', 'Google']
11
HNG Internship 5.0 So far
The internship started officially on 1st April 2019, and different tasks were listed to get interns started. Task 1 Sign up on GitHub. Create a personal repository. Commit a file. Pull the file. Change stuff. Commit again. Check that you have a profile image set here. If you do not have a blog, start one on Blogspot, WordPress, Medium or anywhere else you want. After all is complete, add this icon to your status here: :hatching_chick: Task 2 Join the #design-stage1 channel. Use Figma to design yourself a home page. Your page must follow a particular format: Your picture on the top left corner. Links below your picture: About, Blog, Projects. Below that are three icons with links to your social media. On the right (separated by a line from the left section) will be some welcoming information for you. Everyone will need to do this. The mentors will give you feedback on your design; however, I will be the final judge. Till your design is accepted, you cannot start the internship. Pay attention to alignment, colors and other important parts of design. After completing Task 1 and Task 2, you will be promoted to stage one; the deadline was the 3rd of April. Stage 1 to Stage 2 Task Mark Essien’s announcement: Hello All, The Stage 1 to Stage 2 task will be handled by our chief mentor Seyi Onifade. He will drop the modalities of the task. Going to Stage 2 is *mandatory* for all, even if you are on the Design or ML track. Here are the requirements to enter stage 2: a) Have a registered and accepted bug found and in the spreadsheet b) Have a write-up on your blog where you link to timbu.com c) Enter your name on the hng.tech site. Once all three are complete, you are eligible for Stage 2. After today, I won’t accept any more designers or review any more designs. If you want to enter the design track, today is the deadline to get a :white_check_mark: on your design.
If you have iterated multiple times for the last few days and are still getting :x:, please DM your design to @KingDavid and @Mfonobong — after giving feedback, I will have a committee meeting with them to see who we will still let in. But if you are rejected at committee, then please move on to the dev track. This internship has been so challenging and amazing. Many thanks to Hotels.ng and Verifi for making this happen. The mentors have been awesome; it is really not easy to coordinate 3000+ people. Kudos to you all. Thanks
https://medium.com/@devmohy/hng-internship-5-0-so-far-1442e0f9d5ea
['Mohammed Abiola']
2019-04-04 11:41:25.939000+00:00
['Figma', 'Hnginternship5', 'Github', 'Technology']
12
Top Vue Packages for Adding Walkthroughs, Making HTTP Requests, and Validating Forms
Vue.js is an easy-to-use web app framework that we can use to develop interactive front-end apps. In this article, we’ll look at the best packages for adding walkthroughs, making HTTP requests, and validating forms. vue-tour We can use vue-tour to create walkthroughs for our app. To use it, we write: npm i vue-tour Then we can register the components by writing: import Vue from "vue"; import App from "./App.vue"; import VueTour from "vue-tour"; require("vue-tour/dist/vue-tour.css"); Vue.use(VueTour); Vue.config.productionTip = false; new Vue({ render: h => h(App) }).$mount("#app"); We also imported the styles. Then we can use it by writing: <template> <div> <div id="v-step-0">step 0.</div> <div class="v-step-1">step 1</div> <div data-v-step="2">step 2</div> <v-tour name="myTour" :steps="steps"></v-tour> </div> </template> <script> export default { data() { return { steps: [ { target: "#v-step-0", header: { title: "get started" }, content: `hello <strong>world</strong>!` }, { target: ".v-step-1", content: "step 1" }, { target: '[data-v-step="2"]', content: "Step 2", params: { placement: "top" } } ] }; }, mounted() { this.$tours["myTour"].start(); } }; </script> We have the list of steps in the template. Then we have the steps array with the steps that target the elements on the page. This way, the steps are displayed near the target element. In each step, we can set the header, content, and placement of the step. Then to start the tour, we use the start method in the mounted hook as we did. vue-resource vue-resource is an HTTP client that’s made as a Vue plugin. To install it, we run: npm i vue-resource Then we can use it by writing: import Vue from "vue"; import App from "./App.vue"; const VueResource = require("vue-resource"); Vue.use(VueResource); Vue.config.productionTip = false; new Vue({ render: h => h(App) }).$mount("#app"); to register the plugin.
Now we have access to the $http property in our component:

<template>
  <div>{{data.name}}</div>
</template>

<script>
export default {
  data() {
    return { data: {} };
  },
  async mounted() {
    const { body } = await this.$http.get("https://api.agify.io/?name=michael");
    this.data = body;
  }
};
</script>

It's promise-based, so we can use async and await. We can set common options that are used throughout the app by writing:

Vue.http.options.root = '/root';
Vue.http.headers.common['Authorization'] = 'Basic YXBpOnBhc3N3b3Jk';

This sets the root URL for all requests and an Authorization header that's sent with every request. vue-form vue-form is a form validation library for Vue apps. We can use it by installing the package:

npm i vue-form

Then we can use it by writing:

main.js

import Vue from "vue";
import App from "./App.vue";
import VueForm from "vue-form";

Vue.use(VueForm);
Vue.config.productionTip = false;

new Vue({
  render: h => h(App)
}).$mount("#app");

App.vue

<template>
  <div>
    <vue-form :state="formState" @submit.prevent="onSubmit">
      <validate tag="label">
        <span>Name *</span>
        <input v-model="model.name" required name="name">
        <field-messages name="name">
          <div>Success!</div>
          <div slot="required">name is required</div>
        </field-messages>
      </validate>
    </vue-form>
  </div>
</template>

<script>
export default {
  data() {
    return {
      model: { name: "" },
      formState: {}
    };
  },
  methods: {
    onSubmit() {
      if (this.formState.$invalid) {
        return;
      }
    }
  }
};
</script>

We registered the plugin in main.js . Then in the component, we used the vue-form component. The state prop is set to the formState object, which is updated whenever the form state changes. @submit.prevent calls the submit handler with e.preventDefault applied. v-model binds the input to the model. The field-messages component displays the form validation messages, and required marks the field as required. In the onSubmit method, we check formState.$invalid to see whether the form is invalid. If it's invalid, we don't proceed with submission.
Other form state properties include $name (the value of the field's name attribute), $valid (whether the form or field is valid), $focused (whether the form or field is focused), and $dirty (whether the form or field has been manipulated). It also comes with other validators like email, URL, number, min and max length, and more. Photo by Heather Lo on Unsplash Conclusion We can add a walkthrough tour to our Vue app with the vue-tour package. vue-resource is a promise-based HTTP client that's made for Vue apps. vue-form is a useful form validation plugin for Vue apps.
https://medium.com/javascript-in-plain-english/top-vue-packages-for-adding-walkthroughs-making-http-requests-and-validating-forms-8654072fa756
['John Au-Yeung']
2020-06-27 08:24:36.825000+00:00
['JavaScript', 'Programming', 'Web Development', 'Technology', 'Software Development']
13
Are you using the “Scikit-learn wrapper” in your Keras Deep Learning model?
Are you using the “Scikit-learn wrapper” in your Keras Deep Learning model? How to use the special wrapper classes from Keras for hyperparameter tuning? Image created by the author with open-source templates Introduction Keras is one of the most popular go-to Python libraries/APIs for beginners and professionals in deep learning. Although it started as a stand-alone project by François Chollet, it has been integrated natively into TensorFlow starting in Version 2.0. Read more about it here. As the official doc says, it is “an API designed for human beings, not machines” as it “follows best practices for reducing cognitive load”. Image source: Pixabay One of the situations where cognitive load is sure to increase is hyperparameter tuning. Although there are many supporting libraries and frameworks for handling it, for simple grid searches we can always rely on some built-in goodies in Keras. In this article, we will quickly look at one such internal tool and examine what we can do with it for hyperparameter tuning and search. Scikit-learn cross-validation and grid search Almost every Python machine-learning practitioner is intimately familiar with the Scikit-learn library and its beautiful API, with simple methods like fit , get_params , and predict . The library also offers extremely useful methods for cross-validation, model selection, pipelining, and grid search. If you look around, you will find plenty of examples of using these API methods for classical ML problems. But how do you use the same APIs for the deep learning problems you encounter? When Keras enmeshes with Scikit-learn Keras offers a couple of special wrapper classes — both for regression and classification problems — to utilize the full power of these APIs that are native to Scikit-learn.
In this article, let me show you an example of using simple k-fold cross-validation and exhaustive grid search with a Keras classifier model. It utilizes an implementation of the Scikit-learn classifier API for Keras. The Jupyter notebook demo can be found here in my Github repo. Start with a model generating function For this to work properly, we should create a simple function to synthesize and compile a Keras model with some tunable arguments built in. Here is an example, Data For this demo, we are using the popular Pima Indians Diabetes dataset. This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective of the dataset is to diagnostically predict whether or not a patient has diabetes, based on certain diagnostic measurements included in the dataset. So, it is a binary classification task. We create the feature and target vectors — X and Y — and scale the feature vector using a scaling API from Scikit-learn like MinMaxScaler . We call this X_scaled . That’s it for data preprocessing. We can pass this X_scaled and Y directly to the special classes we will build next. The KerasClassifier class This is the special wrapper class from Keras that enmeshes the Scikit-learn classifier API with Keras parametric models. We can pass on various model parameters corresponding to the create_model function, and other hyperparameters like epochs and batch size, to this class. Here is how we create it, Note how we pass our model creation function as the build_fn argument. This is an example of using a function as a first-class object in Python, where you can pass functions as regular parameters to other classes or functions.
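The original code embeds are not reproduced above, but a minimal sketch of such a model-generating function and its wrapper might look like the following. The layer sizes and default argument values here are illustrative assumptions, not the author's exact model, and note that in recent TensorFlow releases the wrapper lives in the separate scikeras package rather than in tensorflow.keras itself.

```python
from tensorflow import keras

try:
    # Wrapper location in older TensorFlow releases
    from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
except ImportError:
    # Newer releases: the wrapper moved to the separate scikeras package
    from scikeras.wrappers import KerasClassifier

def create_model(activation="relu", optimizer="adam", init="glorot_uniform"):
    # A small fully connected binary classifier for the 8-feature Pima data.
    model = keras.Sequential([
        keras.Input(shape=(8,)),
        keras.layers.Dense(12, kernel_initializer=init, activation=activation),
        keras.layers.Dense(8, kernel_initializer=init, activation=activation),
        keras.layers.Dense(1, kernel_initializer=init, activation="sigmoid"),
    ])
    model.compile(loss="binary_crossentropy", optimizer=optimizer,
                  metrics=["accuracy"])
    return model

# Epochs and batch size are fixed here; they become searchable hyperparameters later.
model = KerasClassifier(create_model, epochs=100, batch_size=10, verbose=0)
```

Because the model-building function exposes activation, optimizer, and init as arguments, the wrapper can later vary them during a grid search.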
For now, we have fixed the batch size and the number of epochs we want to run our model for, because we just want to run cross-validation on this model. Later, we will turn these into hyperparameters and do a grid search to find the best combination. 10-fold cross-validation Building a 10-fold cross-validation estimator is easy with the Scikit-learn API. Here is the code. Note how we import the estimators from the model_selection module of Scikit-learn. Then, we can simply run the model with this code, where we pass on the KerasClassifier object we built earlier along with the feature and target vectors. The important parameter here is cv , where we pass the kfold object we built above. This tells the cross_val_score estimator to run the Keras model with the data provided, in a 10-fold stratified cross-validation setting. The output cv_results is a simple Numpy array of all the accuracy scores. Why accuracy? Because that’s what we chose as the metric in our model compiling process. We could have chosen any other classification metric like precision, recall, etc., and, in that case, that metric would have been calculated and stored in the cv_results array. model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) We can easily calculate the average and standard deviation of the 10-fold CV run to estimate the stability of the model predictions. This is one of the primary utilities of a cross-validation run. Beefing up the model creation function for grid search Exhaustive (or randomized) grid search is a common practice for hyperparameter tuning and for gaining insights into the workings of a machine learning model. Deep learning models, being endowed with a lot of hyperparameters, are prime candidates for such a systematic search.
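The cross-validation code itself is not reproduced above; the pattern is the standard Scikit-learn one, sketched below. To keep the sketch self-contained it uses a synthetic dataset and a plain Scikit-learn classifier as stand-ins, but the same kfold / cross_val_score calls accept the KerasClassifier object and the X_scaled / Y arrays from the article unchanged.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Stand-ins for the article's KerasClassifier and scaled Pima data.
X_scaled, Y = make_classification(n_samples=200, n_features=8, random_state=42)
model = LogisticRegression(max_iter=1000)

# 10-fold stratified cross-validation, as in the article.
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
cv_results = cross_val_score(model, X_scaled, Y, cv=kfold)

# Mean and spread summarize the stability of the model across folds.
print(f"{cv_results.mean():.3f} +/- {cv_results.std():.3f}")
```

The scoring metric returned per fold is whatever the estimator reports by default (for the Keras wrapper, the accuracy chosen at compile time).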
In this example, we will search over the following hyperparameters: activation function, optimizer type, initialization method, batch size, and number of epochs. Needless to say, we have to add the first three of these parameters to our model definition. Then, we create the same KerasClassifier object as before. The search space We choose an exhaustive hyperparameter search space of size 3×3×3×3×3 = 243. Note that the actual number of Keras runs will also depend on the number of cross-validation folds we choose, as cross-validation will be run for each of these combinations. Here are the choices. That’s a lot of dimensions to search over! Image source: Pixabay Enmeshing Scikit-learn GridSearchCV with Keras We have to create a dictionary of search parameters and pass it on to the Scikit-learn GridSearchCV estimator. Here is the code. By default, GridSearchCV runs a 5-fold cross-validation if the cv parameter is not specified explicitly (from Scikit-learn v0.22 onwards). Here, we keep it at 3 to reduce the total number of runs. It is advisable to set the verbosity of GridSearchCV to 2 to keep a visual track of what’s going on. Remember to keep verbose=0 for the main KerasClassifier class though, as you probably don't want to display all the gory details of training individual epochs. Then, just fit! As we have all come to appreciate the beautifully uniform API of Scikit-learn, it is time to call upon that power and just say fit to search through the whole space! Image source: Pixabay Grab a cup of coffee, because this may take a while depending on the deep learning model architecture, dataset size, search space complexity, and your hardware configuration. In total, there will be 729 fittings of the model: 3 cross-validation runs for each of the 243 parametric combinations. If you don’t like full grid search, you can always try randomized search from the Scikit-learn stable! What does the result look like?
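The dictionary of choices is not shown above; a sketch of what it might contain follows. The specific candidate values are illustrative assumptions (the article only says there are three choices per hyperparameter), but the arithmetic matches the text: 243 combinations, and 729 fits at 3-fold cross-validation.

```python
import numpy as np

# Three illustrative candidate values per hyperparameter; the dictionary
# keys must match the create_model arguments and the fit parameters.
param_grid = {
    "activation": ["relu", "tanh", "sigmoid"],
    "optimizer": ["adam", "rmsprop", "sgd"],
    "init": ["glorot_uniform", "normal", "uniform"],
    "batch_size": [10, 20, 40],
    "epochs": [10, 50, 100],
}

# Size of the exhaustive search space, and total model fits with 3-fold CV.
n_combinations = int(np.prod([len(v) for v in param_grid.values()]))
n_folds = 3
total_fits = n_combinations * n_folds

print(n_combinations, total_fits)  # 243 729
```

This dictionary is what gets passed as param_grid to GridSearchCV, together with the KerasClassifier object.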
Just like you would expect from a Scikit-learn estimator, with all the goodies stored for your exploration. What can you do with the result? You can explore and analyze the results in a number of ways, based on your research interest or business goal. What combination gives the best accuracy? This is probably at the top of your mind. Just print it using the best_score_ and best_params_ attributes of the GridSearchCV estimator. We did the initial 10-fold cross-validation using ReLU activation and the Adam optimizer and got an average accuracy of 0.691. After doing an exhaustive grid search, we discover that tanh activation and the rmsprop optimizer could have been better choices for this problem. We got better accuracy! Extract all the results in a DataFrame Many a time, we may want to analyze the statistical nature of the performance of a deep learning model under a wide range of hyperparameters. To that end, it is extremely easy to create a Pandas DataFrame from the grid search results and analyze them further. Here is the result. Analyze visually We can create beautiful visualizations from this dataset to examine and analyze which choices of hyperparameters improve the performance and reduce the variation. Here is a set of violin plots of the mean accuracy, created with Seaborn from the grid search dataset. Here is another plot. Summary and further thoughts In this article, we went over how to use the powerful Scikit-learn wrapper API, provided by the Keras library, to do 10-fold cross-validation and a hyperparameter grid search for achieving the best accuracy on a binary classification problem. Using this API, it is possible to enmesh the best tools and techniques of a Scikit-learn-based general-purpose ML pipeline with Keras models.
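The extraction step can be sketched with Scikit-learn alone. The snippet below uses a small stand-in grid search (synthetic data, a plain classifier, and a hypothetical C grid) so it runs quickly, but the best_score_ / best_params_ / cv_results_ access pattern is exactly the one used with the Keras grid search.

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, n_features=8, random_state=42)

# Stand-in search; with Keras this would be
# GridSearchCV(KerasClassifier(...), param_grid, cv=3, verbose=2).
grid = GridSearchCV(LogisticRegression(max_iter=500),
                    {"C": [0.1, 1.0, 10.0]}, cv=3)
grid_result = grid.fit(X, y)

# Best combination found by the search.
print(grid_result.best_score_, grid_result.best_params_)

# All per-combination statistics as a DataFrame for further analysis/plotting.
results_df = pd.DataFrame(grid_result.cv_results_)
print(results_df[["params", "mean_test_score", "std_test_score"]])
```

The mean_test_score and std_test_score columns of this DataFrame are what feed the violin plots mentioned in the article.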
This approach definitely has huge potential to save a practitioner a lot of the time and effort that would otherwise go into writing custom code for cross-validation, grid search, and pipelining with Keras models. Again, the demo code for this example can be found here. Other related deep learning tutorials can be found in the same repository. Please feel free to star and fork the repository if you like it.
https://towardsdatascience.com/are-you-using-the-scikit-learn-wrapper-in-your-keras-deep-learning-model-a3005696ff38
['Tirthajyoti Sarkar']
2020-09-23 00:53:29.541000+00:00
['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Technology', 'Data Science']
14
Global IT Asset Management (ITAM) Software Market 2020 Business Strategies — BMC, Microsoft, Symantec, IBM Software, JustSAMIt
IT Asset Management (ITAM) Software Market The market report titled “IT Asset Management (ITAM) Software Market: Global Industry Analysis, Size, Share, Growth, Trends, and Forecasts 2016–2024”, published by Zion Market Research, will put forth a systematized evaluation of the vital facets of the global IT Asset Management (ITAM) Software market. The report will function as a medium for better assessment of the existing and future situations of the global market. It will offer a 360-degree framework of the competitive landscape and dynamics of the market and related industries. Further, it entails the major competitors within the market as well as budding companies, along with their comprehensive details such as market share on the basis of revenue, demand, high-quality product manufacturers, sales, and service providers. The report will also shed light on the numerous growth prospects dedicated to diverse industries, organizations, suppliers, and associations providing several services and products. The report will offer buyers detailed direction on growth in the market, which would further provide them a competitive edge during the forecast period. The IT Asset Management (ITAM) Software Market research report provides an in-depth examination of the market scenario regarding market size, share, demand, growth, trends, and forecast for 2020–2026. The report covers the impact analysis of the COVID-19 pandemic. The COVID-19 pandemic has affected exports, imports, demand, and industry trends, and is expected to have an economic impact on the market. The report provides a comprehensive analysis of the impact of the pandemic on the entire industry and provides an overview of a post-COVID-19 market scenario. The global IT Asset Management (ITAM) Software Market report offers a complete overview of the IT Asset Management (ITAM) Software Market globally.
It presents real data and statistics on the inclinations and improvements in the global IT Asset Management (ITAM) Software Market. It also highlights manufacturing, abilities & technologies, and the unstable structure of the market. The global IT Asset Management (ITAM) Software Market report elaborates the crucial data along with all important insights related to the current market status. Request Free Sample Report of IT Asset Management (ITAM) Software Market Report @ https://www.zionmarketresearch.com/sample/it-asset-management-itam-software-market Our Free Complimentary Sample Report Accommodates a Brief Introduction of the research report, TOC, List of Tables and Figures, Competitive Landscape and Geographic Segmentation, Innovation and Future Developments Based on Research Methodology The Global IT Asset Management (ITAM) Software Market Report covers major market characteristics, size and growth, key segments, regional breakdowns, competitive landscape, market shares, trends and strategies for this market. Major Market Players Included in this Report are: BMC, Microsoft, Symantec, IBM Software, JustSAMIt, Attachmate, Samanage, Scalable Software, Freshservice, and Hewlett Packard. Other players dominating the global market include Deloitte, Spiceworks, Lansweeper, Real Asset Management, InvGate, LabTech, StacksWare, Auvik, eAbax, INSPUR, ManageEngine, Chevin FleetWave, and Atlassian. The global IT Asset Management (ITAM) Software Market report offers a knowledge-based summary of the global IT Asset Management (ITAM) Software Market. It demonstrates the new players entering the global IT Asset Management (ITAM) Software Market. It emphasizes the basic summary of the global IT Asset Management (ITAM) Software Market. The perfect demonstration of the most recent improvements and new industrial explanations offers our customers a free hand to build avant-garde products and advanced techniques that will contribute to offering more efficient services.
The report analyzes the key elements such as demand, growth rate, cost, capacity utilization, import, margin, and production of the global market players. A number of the factors are considered to analyze the global IT Asset Management (ITAM) Software Market. The global IT Asset Management (ITAM) Software Market report demonstrates details of different sections and sub-sections of the global IT Asset Management (ITAM) Software Market on the basis of topographical regions. The report provides a detailed analysis of the key elements such as developments, trends, projections, drivers, and market growth of the global IT Asset Management (ITAM) Software Market. It also offers details of the factors directly impacting on the growth of the global IT Asset Management (ITAM) Software Market. It covers the fundamental ideas related to the growth and the management of the global IT Asset Management (ITAM) Software Market. Download Free PDF Report Brochure @ https://www.zionmarketresearch.com/requestbrochure/it-asset-management-itam-software-market Note — In order to provide more accurate market forecast, all our reports will be updated before delivery by considering the impact of COVID-19. (*If you have any special requirements, please let us know and we will offer you the report as you want.) The global IT Asset Management (ITAM) Software Market research report highlights most of the data gathered in the form of tables, pictures, and graphs. This presentation helps the user to understand the details of the global IT Asset Management (ITAM) Software Market in an easy way. The global IT Asset Management (ITAM) Software Market report research study emphasizes the top contributors to the global IT Asset Management (ITAM) Software Market. It also offers ideas to the market players assisting them to make strategic moves and develop and expand their businesses successfully. 
Promising Regions & Countries Mentioned In The IT Asset Management (ITAM) Software Market Report: North America (United States), Europe (Germany, France, UK), Asia-Pacific (China, Japan, India), Latin America (Brazil), The Middle East & Africa. Inquire more about this report @ https://www.zionmarketresearch.com/inquiry/it-asset-management-itam-software-market Highlights of Global Market Research Report:
- Show the market by type and application, with sales market share and growth rate by type and application
- IT Asset Management (ITAM) Software Market forecast, by regions, type and application, with sales and revenue, from 2019 to 2026
- Define industry introduction, IT Asset Management (ITAM) Software Market overview, market opportunities, product scope, market risk, market driving force
- Analyse the top manufacturers of the IT Asset Management (ITAM) Software Market industry, with sales, revenue, and price
- Display the competitive situation among the top manufacturers, with sales, revenue and market share
- Request the coronavirus impact analysis across industries and markets
Request impact analysis on this market @ https://www.zionmarketresearch.com/toc/it-asset-management-itam-software-market The report’s major objectives include:
- To establish a comprehensive, factual, annually-updated and cost-effective information base on the performance, capabilities, goals and strategies of the world’s leading companies.
- To help current suppliers realistically assess their financial, marketing and technological capabilities vis-a-vis leading competitors.
- To assist potential market entrants in evaluating prospective acquisitions and joint venture candidates.
- To complement organizations’ internal competitor information gathering efforts by providing strategic analysis, data interpretation and insight.
- To identify the least competitive market niches with significant growth potential.
Global IT Asset Management (ITAM) Software Market Report Provides Comprehensive Analysis of:
- IT Asset Management (ITAM) Software Market industry diagram
- Up- and downstream industry investigation
- Economic effect features diagnosis
- Channels and speculation plausibility
- Market competition by players
- Improvement recommendations examination
Also, Research Report Examines:
- Competitive companies and manufacturers in the global market
- Product type, applications & growth factors
- Industry status and outlook for major applications / end users / usage areas
Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions like North America, Europe or Asia.
https://medium.com/@torasep/global-it-asset-management-itam-software-market-2020-business-strategies-bmc-microsoft-9ba6799be9f7
['Prakash Torase']
2020-12-03 09:11:00.308000+00:00
['Market Research Reports', 'Technology', 'Software', 'Development', 'Management And Leadership']
15
Great article, totally agree with that! Thanks for sharing!
Great article, totally agree with that! Thanks for sharing!
https://medium.com/@bryan-dijkhuizen/great-article-totally-agree-with-that-thanks-for-sharing-c53df300462d
['Bryan Dijkhuizen']
2020-12-22 22:47:08.500000+00:00
['Opinion', 'Technology', 'Software Development', 'Programming', 'Data Science']
16
Creating real opportunities out of technology
I never had technology as my main focus of work; even though I have been working in it for almost a decade, I was never really aware of the great transformation that it implies for society. Only in the last couple of years have I been able to see the bigger picture, by putting all my energy into being an entrepreneur. I think the exponential growth that technology has had in the last decade caught us all by surprise, and it has never been easy to adopt these changes, since they imply restructuring the way you think; however, people who want to change the world for good are embracing it and moving toward a future where technology helps humankind live better. Furthermore, this crazy idea that I started with a group of friends, called Trascender Global, is also surfing that wave. Today, after more than a year of a pandemic and big changes in our lives, I want to talk about how a small idea can seize technological opportunities to bring innovation on three big fronts: the redefinition of the concept of work, the creation of opportunities in a region that desperately needs them, and the empowering of people to build ventures against all uncertainty. Redefining work and teleworking Teleworking has been on everyone's lips during this pandemic, and it has been one of the biggest challenges companies have had to endure. However, technology has once again come to the rescue: videoconferencing platforms, remote learning, online schedulers, task managers, automated to-do lists, and all of the tools we now take for granted have allowed us not just to survive, but to maintain steady growth. However, for a freelancer, this is nothing new.
On the contrary, this has been their day-to-day lifestyle for years: a freelancer is used to working from home, balancing a productive work schedule with a healthy personal life, accommodating tight deadlines and sudden work changes, constantly “reinventing” themselves with new courses that complement their skills, and building a multidisciplinary contact network. This philosophy and lifestyle is what Trascender is based on: even if we’re not physically close, or have never met beyond a screen, we can still communicate effectively, reach our targets, deliver projects, and build fruitful relationships with both our customers and our crew of Trascendentales. Bringing innovative opportunities Colombia and Latin America are full of splendidly talented people who, due to economic and social difficulties, never get to work on what they envision or dream of; also, many tech roles are not equally paid when compared to the earnings in developed countries. Our situation is complex, and rather than explaining or trying to justify that situation, we focus on how we can improve it and provide better opportunities to people in our region. Trascender Global focuses on working with clients in a B2B model, creating solutions in the Data Science industry, which range from analytics and descriptive analysis in Business Intelligence to Machine Learning and Artificial Intelligence innovation in different industries and economic sectors. We do that by transferring and applying state-of-the-art knowledge on data into need-tailored solutions for businesses in foreign countries. The beautiful thing is that those solutions are being implemented by people all around our country. We’re expanding now into different Latin American countries — with baby steps though — allowing those people to feel relevant by knowing that, even if they’re in a small town, they can do artificial intelligence for a big multinational corporation.
We urge our people, our crew (which we call Trascendentales), to strive, excel, and realize that we can solve unsolved issues with our talent and capacity. And in the process, they learn not only about Data Science, Artificial Intelligence, and other areas, but also about leadership, self-esteem, and even other languages and cultures. Encouraging the new era of entrepreneurs Many believe that entrepreneurship is some kind of rocket science: “It’s too hard”, “it’s too risky”, “why don’t you get a real job?” are some of the comments that discourage many people from pursuing this way of life, making them feel that they’re taking the wrong path. In reality, to become an entrepreneur, only one thing is required: to have an idea. When the idea is nurtured, it becomes a vision, then an objective, a purpose, a hypothesis, and sometimes a business model. All great companies started like that, with the seed of an idea, nurtured by passion and true self-belief. However, every entrepreneur must also bear the humbling knowledge that you can fail, that your idea might not be “a million-dollar idea”; you must be willing to let experience and experimentation guide you, to fall and rise, to adapt and overcome, over and over, until you reach that turning point where everything falls into place. At Trascender Global, we’re passionate about that process: the journey of self-discovery and evolution that every entrepreneur must go through is one of the most valuable growing experiences ever. We look forward to supporting all the ideas that come our way, to guiding them on this path, to putting our experience on the table and serving as a valuable ally, to growing together and making our crew’s ideas transcend. It is a big bet in which we create new staff groups within our crew to develop products or services, and soon you will be hearing from some of them.
We like to call this methodology “Monkeys” because, just like an idea that moves around in your head, a monkey hangs in a tree and moves swiftly from branch to branch. Still, it has the capacity to evolve, use tools and language, and follow its own learning process, continually improving until it can walk on its own. Oh, and also because of Night Monkey, from Spider-Man. Spider-Man is cool 🤣. What’s ahead for us? Trascender Global dreams of following in the steps of big tech companies and organizations such as Globant, PSL, Google, IBM, Accenture and others. We want to lead the technological path in Colombia and Latin America, continuing with the development of tailored solutions and new products in this huge world of data and technology, harnessing the power of Artificial Intelligence, Business Intelligence and Software Engineering. Achieving this will allow us to continue innovating on those three fronts, bringing more and better opportunities for the people and the economy of our region. Despite being a small company, we now see ourselves as a big and remarkable organization in the not-so-distant future. Trascender Global is not just me, a single person, but an idea made of all the dreams of the people who somehow believe in us, and that force will continue to develop even when we’re gone. Precisely, that’s what Trascender means: to transcend. Feel free to share your thoughts with us, to share these words, or even to join us in this pursuit. A farewell Andrés Martiliano, CEO of Trascender Global Thanks for coming this far. You can find me, Andrés Martiliano-Martínez, on LinkedIn if you want to continue this conversation. I’m a mix between extrovert and introvert, driven by a strong passion for seeing dreams come true, and I hope you take something valuable from our short yet beautiful story. See you soon!
https://medium.com/@trascenderglobal/creating-real-opportunities-out-of-technology-32ad86002449
['Trascender Global']
2021-06-08 23:49:58.450000+00:00
['Entrepreneurship', 'Technology', 'Opportunity', 'Backstory']
17
Top Content Service Platform
Top Content Service Platform Today, the growth of content services is driving the evolution of enterprise content management (ECM). Consequently, leading organizations are deploying content services platforms (CSPs) to create a unified information management environment that can connect, share, and govern content across the entire enterprise, and build on the strengths of their ECM deployments. The development of CSPs has been focused on delivering content in a way that meets the needs of both sides — the digital workspace and digital business — of digital transformation. Within the digital workplace, CSPs are based around contextual content delivery, sharing, and collaboration, enabling users to access relevant content from the application or device that best suits their needs. Besides, with the cloud-based nature of CSPs, businesses can overcome barriers in content storage and distribution of critical information. With a comprehensive understanding of this new development, CIOReview has compiled a list of the 20 most promising content services platform providers to guide organizations in harnessing the power of new-age content management technologies and ensure the effective automation of content-related processes. With several innovative technological capabilities and success stories up their sleeves, these firms are constantly proving their mettle in the field of content management. We hope this edition of CIOReview helps you build the partnership that you and your firm need to foster a unified and efficient information management environment. We present to you CIOReview’s “20 Most Promising Content Services Platform Solution Providers”. Top Content Service Platform Computhink’s Enterprise Content Management solution Contentverse is an exception. Computhink provides highly secure implementation options with multiple choices of secure client access, mission-critical application integration, and secure access for a wide range of business environments.
No organization in need of a versatile, easy-to-use ECM has to settle for less.” Founded in 1994, ImageSource is a privately held corporation headquartered in Olympia, WA. Through a number of strategic acquisitions and mergers, ImageSource has positioned itself as a leader among Enterprise Content Management integrators. ImageSource is the manufacturer of ILINX software and a recognized leader in consulting, strategic analysis and rapid application deployment of Enterprise Content Management (ECM) solutions. Utilizing expert teams and a broad portfolio of world-leading ECM technology, the company improves critical business processes that leverage the management of data, documents and integration with legacy software systems. KnowledgeLake provides its customers with a PaaS cloud platform as part of its managed services. Its fully managed services help the client migrate data to the cloud, perform necessary software configurations, define jobs that need automation, capture and store documents, and various other things. The company does all the mentioned jobs without requiring any IT administration. Services offered by KnowledgeLake are entirely cloud-based. Its clients can involve themselves in the migration and any content service operation at any point in time they wish. The company provides corporations across the world with powerful digital content publishing platforms, enabling them to create and distribute rich, interactive digital content on any device. The tools are aimed at various businesses, from media and corporations to book publishers. The company’s clients include world-leading business giants. The offices are located in both Paris and Montpellier, but the company’s partners and collaborators are from all over the world.
Being part of the global internet services company Rakuten enables it to innovate and provide clients with genuinely cutting-edge and localized services. Appatura An industry leader since 1993, Appatura innovates with its data-driven experts in Enterprise Content Management (ECM). The company empowers its clients to leverage and repurpose content across the enterprise via products like DocuBuilder. Headquartered in Ridgefield Park, New Jersey, Appatura evolves consistently by updating its technology to meet and exceed industry requirements. The company stays ahead of the curve and always brings new expertise to the table by establishing and maintaining good partnerships with its clients. Appatura’s host of solutions delivers an integrated hub that enables organizations to break down content like LEGO blocks to an atomic level and then transform it into creative assets, improving productivity, ensuring compliance, and creating workplace efficiencies. Aptara Headquartered just outside of Washington, D.C., and serving the 10 largest global publishers and Fortune 100 companies, Aptara transforms content for engaging and monetizing new Digital and Corporate Learning audiences. The company’s full-service content production accelerates information providers’ transition from print to digital. From creation and design to new media enhancements and output for all mobile devices and platforms, Aptara produces innovative digital products that deliver content how, when, and where readers want it, while giving content providers renewed agility and revenue opportunities. Aptara’s host of solutions includes Digital Content Development, Corporate Learning and Performance, Custom Content Services, Customer Lifecycle Management, Healthcare RCM, IT Services, and Learning Administration Services. Box Founded in 2005, Box is a Cloud Content Management company that enables enterprises to revolutionize how they work by safely connecting their people, information, and applications. 
Headquartered in Redwood City, CA, the company offers products spanning cloud content management, security and compliance, collaboration, workflow, integrations, developer tools and APIs, and IT administration and controls. Box Consulting helps businesses transform their organizations, focusing on accelerating time to market and helping companies unlock their full potential, from successful implementations to more sophisticated technical integrations and adoption programs. Box crafts a customized plan that provides the team, tools, and experience needed to bring an organization into the digital age quickly. Everteam With 25 years of experience in the field of Information Governance and Enterprise Content Management, Everteam offers a complete content services platform for Information Governance. Everteam solutions enable enterprises to build and manage content-driven processes that support a range of business opportunities. The company’s content management solutions include a web app studio, document management, and digital asset management. The web app studio is a rapid application development platform that empowers clients to develop automated processes with minimal dependency on the IT department. Document management efficiently builds, manages, and structures corporate repositories, helping customers reduce the time and effort spent on managing corporate documents. Data storage helps the business become agile and drives online customer engagement. Interfy Headquartered in Orlando, Florida, Interfy believes in the power of digital transformation. To help companies in various industries reach this level, it works on the organization and digitization of content and processes in the cloud. Today, it is the only company in the industry to provide 100 percent integrated ECM, BPM, BI, PMS, CRM, and ERP tools, built into a single platform and accessible with a single login. 
The company develops software platforms to accelerate digital business transformation, enabling better information management in an affordable and customizable way. With a team of professionals who are passionate about creating innovative technologies, it has over two decades of experience developing software for corporate management, processes, and content. Laserfiche Headquartered in Long Beach, California, Laserfiche Enterprise Content Management transforms the way organizations manage content, automate document-driven business processes, and make timely, informed decisions. Leveraging Laserfiche, organizations can innovate how unstructured information and documents are processed and analyzed to extract business results. Laserfiche offers intuitive solutions for capture, workflow, case management, electronic forms, cloud, government-certified records management, and mobile. With its team of experts, the company provides Enterprise Content Management (ECM) software, Document Management (DM), Records Management (RM), Business Process Management (BPM), Electronic Forms, Process Automation, Workflow, and Case Management. MODX Headquartered in Dallas, MODX is the company that backs the open-source Content Management System and Web Application Framework of the same name. MODX Revolution is the world’s fastest, most secure, flexible, and scalable open-source CMS, and the company’s cloud platform, MODX Cloud, is the ultimate hosting for modern PHP applications, especially MODX. 
Unlike other CMS platform plugins, MODX Extras are designed to be easily tailored to specific designs and requirements, and MODX connects to the tools clients already use, easily integrating with anything that has an API. Newgen Software Technologies Founded in 1992, Newgen Software is a provider of Business Process Management (BPM), Customer Communication Management (CCM), Enterprise Content Management (ECM), Document Management System (DMS), Workflow, and Process Automation software. The company’s extensive, mission-critical solutions have been deployed in banks, insurance firms, BPOs, healthcare organizations, government, and telecom companies. Through its proven Business Process Management, Enterprise Content Management, Customer Communications Management, and Case Management platforms, Newgen brings about the perfect amalgamation of information/content, technology, and processes, the building blocks of Digital Transformation. Newgen is committed to enabling clients to achieve greater agility in transforming processes, managing information, enhancing overall customer satisfaction, and driving enterprise profitability. Nuxeo Nuxeo, the maker of the leading, cloud-native content services platform, is reinventing enterprise content and digital asset management. Nuxeo is fundamentally changing how people work with both data and content to realize new value from digital information. Its cloud-native, hyper-scalable content services platform has been deployed by large enterprises, mid-sized businesses, and government agencies worldwide. Founded in 2008, the company is based in New York with offices across the United States and Europe. 
With its team of experts, the company specializes in content management software, enterprise content management, document management, application platforms, content management platforms, digital asset management, case management, and content repositories. OpenText OpenText, The Information Company™, a market leader in Enterprise Information Management software and solutions, enables intelligent and connected enterprises by managing, leveraging, securing, and gaining insight into enterprise information, on-premises or in the cloud. The company was founded in 1991 and is based in Waterloo, ON. From its inception, it has specialized in providing Enterprise Information Management, Business Process Management, Enterprise Content Management, Information Exchange, Customer Experience Management, and Information Discovery. The company’s EIM products enable businesses to grow faster, lower operational costs, and reduce information governance and security risks by improving business insight, impact, and process speed. Primero Systems Primero Systems provides software solutions to enterprises. More specifically, the company understands enterprise software and the role it plays in helping businesses maintain a competitive edge and grow. It is a trusted software development firm that has been delivering innovative solutions, ranging from enterprise-grade custom software to web content management systems, for more than 25 years. The company wants to help businesses achieve their goals and maximize their potential through software. Integrity, Competency, and Quality are the words that shape the company, and it develops and supports its products and services with them in mind. Simflofy Simflofy’s content integration platform is used for creating federated content networks and migrating content from legacy content sources. The platform has connectors to over 20 of the most common content sources. 
The company has developed solutions on the platform for federated search, in-place records management, and email archiving. Solutions based on the content integration platform have been used by millions of users, work with billions of documents, and serve various industries, including healthcare, insurance, consulting, and state and federal government. Systemware Founded in 1981, Systemware quickly established itself as a developer of products that effectively managed enterprise content. Its solutions can help with report output management, correspondence, and customer communications. With more than 35 years in information management, Systemware solutions continue to support the changing content environment while responding to the evolving needs of customers both large and small. Systemware has developed a suite of applications that streamline the flow of documents and information through a variety of business processes, and it currently serves customers in industries ranging from financial services and insurance to healthcare and retail. Veritone Veritone is a provider of artificial intelligence (AI) technology and solutions. The company’s proprietary operating system, aiWARE, orchestrates an expanding ecosystem of machine learning models to transform audio, video, and other data sources into actionable intelligence. aiWARE can be deployed in several environments and configurations to meet customers’ needs. Its open architecture enables customers in the media and entertainment, legal and compliance, and government sectors to easily deploy applications that leverage the power of AI to dramatically improve operational efficiency and effectiveness. 
Veritone is headquartered in Costa Mesa, California, with over 300 employees and offices in Denver, London, New York, San Diego, and Seattle. Vodori Vodori is an innovative technology company creating cloud-based software that revolutionizes how life science companies (such as Acacia Pharma, Pentax Medical, and Tris Pharma) move regulated content from ideation through review, approval, and distribution. Customers love the unparalleled product design and usability, decades of industry knowledge, and world-class customer service, a combination not found anywhere else. The company specializes in Digital Marketing, Online Strategy, User Experience, Visual Design, Software Development, Enterprise Content Management, and Promotion Review Systems, and is based in Chicago, Illinois. Zorang Zorang is a solution provider with a unique blend of experience and passion for creating Product Information/Content Management, eCommerce, Digital Marketing, Cloud Integration, and Digital Asset Management solutions that deliver a compelling end-to-end user experience for customers. The company has more than a decade of experience helping enterprises move to cloud technologies and integrating their cloud infrastructure with other enterprise-wide legacy applications. It helps strategize how to get to the cloud and how SaaS-based systems will integrate into the overall IT architecture to give business and marketing teams an edge.
https://medium.com/@techmag1/top-content-service-platform-803a78a1edfd
['Technology Magazine']
2020-11-19 12:03:44.159000+00:00
['Content Management', 'Content Marketing', 'Technology News', 'Technology', 'Magazine']
18
The Simple Tool That Completely Changed How I Work With Remote Engineering Teams
Originally published on www.Ben-Staples.com If we ignore for a moment the big flaming dumpster fire that is COVID, which has driven almost all technology companies to go 100% remote, more and more software development in the digital age is being done by geographically distributed teams. For example, I work as a Product Manager for Nordstrom based out of Chicago, IL. I currently have the honor of working with 10 total software engineers: 7 of them are based in Seattle, and 3 of them are based all the way in Ukraine. Working with a geographically diverse group of engineers is awesome. It brings new perspectives and new team dynamics that on the whole build towards a stronger product. Geographically diverse teams result in individuals bringing different ways of thinking about problems and solutions to the table. Of course, it is not all rainbows and butterflies. There can be challenges, especially for teams as they are forming. For example, you need to adjust meeting cadence, if possible, so the time works for all team members. Oftentimes, cultural differences can create significantly different needs for how feedback is delivered or received. For example, I find that many engineers based in European countries prefer blunt, very direct feedback. Part of this is a language thing: if teams don’t primarily speak the same language, the flowery adjectives that people tend to use (*cough* me) to try to better describe the sentiment behind a piece of feedback can get lost. However, I personally attribute the majority of this preference in feedback delivery to different cultural norms. Some folks and/or cultures bias towards the most direct feedback possible, while others take a little more finesse. Time is not your friend with geographically diverse engineers; it impacts feedback loops, which impacts value to the customer. 
One of the biggest drivers of behavior change when moving from a team of engineers all working in one location to geographic distribution is, of course, differences in time zone. At Nordstrom we have about 2–3 hours of overlap with our Kyiv engineers. For an engineer based in the same time zone as you, you have a whole 8 hours in which you can talk through a feature, the engineer has time to make changes, and design and product have time to provide feedback about those changes. THEN, the developer has time to react to that feedback and get a second version stood up and ready to go before the day ends. As a result, if you have an engineer, a product manager, and a designer all in the same time zone, you have 8 total overlapping working hours in which feedback, changes, and iterations can be produced. HOWEVER, if you have an engineer based in a different time zone, you’ve got significantly fewer hours of overlap. So instead of having multiple opportunities to ideate, build, get feedback, and ideate again, most days you can only have one feedback cycle if your team has an overlap of 2 working hours. Feedback cycles are important; check out this article on the difference between Kanban and Scrum to see some examples of how this comes to life. This is in no way saying that less overlap in hours means less gets done. That engineer based in another time zone still works a full 8 hours and is still an awesome engineer with a great skill set. It just means that the active time you have for feedback loops is condensed significantly. Not only that, but you also need to think about the quality of your feedback and the amount of information you can convey in each round. With an in-person engineering team, you get to talk directly and convey not only the data and information around the feature you’re implementing, but also emotional reactions to different things that are said. Not all feedback formats are created equal. 
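The hours-of-overlap math above is simple interval arithmetic, and it's easy to sanity-check for your own team. Here is a minimal Python sketch; the schedules and offsets in the examples are hypothetical, chosen only to illustrate the point, not anyone's actual working hours:

```python
def overlap_hours(start_a, end_a, start_b, end_b, offset_b_minus_a):
    """Shared working hours between team A and team B.

    start/end are local clock hours (24h) for each team's workday;
    offset_b_minus_a is how many hours B's clock runs ahead of A's.
    Assumes neither workday wraps past midnight after conversion.
    """
    # Express B's workday on A's clock, then intersect the two intervals.
    b_start = start_b - offset_b_minus_a
    b_end = end_b - offset_b_minus_a
    return max(0, min(end_a, b_end) - max(start_a, b_start))

# Same time zone, 9-to-5 on both ends: the full 8 hours overlap.
print(overlap_hours(9, 17, 9, 17, 0))   # 8

# Chicago 8:00-17:00 vs. Kyiv 10:00-19:00 (Kyiv runs 8 hours ahead):
# only 3 hours a day in which a feedback cycle can actually happen.
print(overlap_hours(8, 17, 10, 19, 8))  # 3
```

With 9-to-5 schedules on both ends and an 8-hour offset, the overlap is literally zero, which is why teams in this situation usually shift their days toward each other to buy those 2–3 shared hours.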
Now, in the time of COVID, a bunch of teams are working remotely. So you receive a little less data than you would get in person, but you can still see people’s reactions, still talk through problems, and still share your screen. However, when you are working in off hours (i.e., the time when you are working but your engineering team is not, because it is their nighttime), many people rely on text to convey feedback about a feature or idea. This is the big mistake. Most of the time this comes in the form of Jira ticket comments or Slack messages. If you think about the amount of information you can convey at a time, the highest-quality form of delivery is in person. Next would be video chat, and the very last would be text-based communication. It is just so much less efficient. Showing what something looks like is much more impactful than telling someone what it looks like. Think about presenting anything: instead of having a slide full of text, show a picture or video and speak to what is happening. I don’t need to ask you to consider which is more impactful. So we have established that the faster your feedback loop, the more value you will deliver to the customer. And the more time you have working the same daylight hours as your team, the more opportunities you will have to provide feedback. The more feedback and the faster the frequency of delivery, the more value delivered to the end customer. And thus, if you are working with a team in a different time zone, you have less overlap in working hours, and as a result less total time in which you can give and receive feedback on features your team is working on. Not great, but there has got to be some sort of solution to help! There is!! Screen recordings. Video recordings do take just a little bit more time to make than something like a comment on a Jira ticket, but not much more, and the impact this can have on how much feedback is conveyed is incredible. 
Photo by Soundtrap on Unsplash All you need to do is get a basic screen recorder and a microphone (which of course you already have if you’re working from home). I use a Chrome extension literally called “Screen Recorder.” It is free, and I can quickly record my screen, include the cursor, and add my voiceover. So now, when working with an engineer in another time zone on any sort of feature, I am trying to build the habit of making a screen recording and walking through specific points of feedback. It conveys so much more: walking through, pointing out specific features, points of confusion, etc. This provides the engineer with a ton more context on the changes you are asking for or the updates needed. The end result is that using screen recordings with a voiceover is a pretty good attempt at fitting the information that would normally be spread across multiple rounds of feedback into one rich method of information delivery. Does this solve the problem of just not having as much working-hour overlap with offshore engineers? No. But what screen recordings do is significantly deepen the level of feedback normally provided, so that you can convey more in a shorter amount of time. It costs no money, takes almost no additional time, and I would highly recommend that anyone working with any engineering team at all try this out and see how it goes. About the author: Ben Staples has over 7 years of product management and product marketing eCommerce experience. He is currently employed at Nordstrom as a Senior Product Manager responsible for their product pages on Nordstrom.com. Previously, Ben was a Senior Product Manager for Trunk Club responsible for their iOS and Android apps. Ben started his product career as a Product Manager for Vistaprint, where he was responsible for their cart and checkout experiences. Before leaving Vistaprint, Ben founded the Vistaprint Product Management guild with over 40 members. 
Learn more at www.Ben-Staples.com I do product management consulting! Interested in finding out more? Want to get notified when my next product article comes out? Interested in getting into Product Management but don’t know how? Want even more book recommendations?! Contact me!
https://medium.com/work-today/the-simple-tool-that-completely-changed-how-i-work-with-remote-engineering-teams-b0eaabac61e1
['Ben Staples']
2020-12-17 21:40:16.499000+00:00
['Product Management', 'Product', 'Technology', 'Agile', 'Tech']
19
Connected Agriculture Market 2021: Explore Top Factors that Will Boost the Global Market by 2026
Connected Agriculture Market 2021–2026 Straits Research has recently added a new report to its vast depository, titled Global Connected Agriculture Market. The report studies vital factors about the Global Connected Agriculture Market that are essential for existing as well as new market players to understand. The report highlights essential elements such as market share, profitability, production, sales, manufacturing, advertising, technological advancements, key market players, regional segmentation, and many more crucial aspects of the Global Connected Agriculture Market. The Major Players Covered in this Report: Some of the notable players in the connected agriculture market are IBM Corporation, Microsoft Corporation, AT&T, Deere & Company, Oracle Corporation, Iteris, Trimble, Ag, SAP SE, Accenture, Cisco Systems Inc., Decisive Farming, Gamaya, and SatSure. Get a Sample PDF Report: Connected Agriculture Market Segmentation is as follows: By Component: Solutions, Platforms, Services. By Application: Pre-Production Planning and Management, In-Production Planning and Management, Post-Production Planning and Management. The report specifically highlights the Connected Agriculture market share, company profiles, regional outlook, product portfolio, a record of recent developments, strategic analysis, key players in the market, sales, distribution chain, manufacturing, production, new market entrants as well as existing market players, advertising, brand value, popular products, demand and supply, and other important factors related to the Connected Agriculture market to help new entrants understand the market scenario better. 
Important factors like strategic developments, government regulations, Connected Agriculture market analysis, end users, target audience, distribution network, branding, product portfolio, market share, threats and barriers, growth drivers, and the latest trends in the industry are also covered. Regional Analysis for the Connected Agriculture Market: North America (United States, Canada, and Mexico); Europe (Germany, France, UK, Russia, and Italy); Asia-Pacific (China, Japan, Korea, India, and Southeast Asia); South America (Brazil, Argentina, Colombia, etc.); Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria, and South Africa). In this study, the years considered to estimate the size of the Connected Agriculture market are as follows: • History Year: 2014–2019 • Base Year: 2019 • Estimated Year: 2020 • Forecast Year: 2021 to 2026. The main steps in the investigation process are: 1) Obtain raw market information from industry experts and research analysts using primary and secondary sources. 2) Extract valuable insights from this raw data and analyze them for research purposes. 3) Classify the resulting qualitative and quantitative data and organize it to draw final conclusions. Key Questions Answered in the Report: • What is the current scenario of the Global Connected Agriculture Market? How is the market going to prosper over the next 6 years? • What is the impact of COVID-19 on the market? What are the major steps undertaken by the leading players to mitigate the damage caused by COVID-19? • What are the emerging technologies that are going to benefit the market? • What are the historical and current sizes of the Global Connected Agriculture Market? • Which segments are the fastest growing and the largest in the market? What is their market potential? • What are the driving factors contributing to market growth over the short, medium, and long term? 
• What are the major challenges and shortcomings that the market is likely to face? How can the market overcome these challenges? • What are the lucrative opportunities for the key players in the Connected Agriculture market? • Which are the key geographies from an investment perspective? • What are the major strategies adopted by the leading players to expand their market shares? • Who are the distributors, traders, and dealers of the Global Connected Agriculture market? • What are the sales, revenue, and price analyses by type and application in the market? For More Details On this Report: Connected Agriculture Market About Us: Whether you’re looking at business sectors in the next town or across continents, we understand the significance of knowing what customers purchase. We solve our customers’ problems by identifying and interpreting the right target group, while simultaneously generating leads with the highest precision. We seek to collaborate with our customers to deliver a broad spectrum of results through a blend of market and business research approaches. This approach of using various research and analysis strategies enables us to derive greater insights while eliminating unnecessary research costs. Moreover, we’re continually developing, not only in terms of where or whom we measure, but in how our insights can enable you to drive cost-effective growth. Contact Us: Company Name: Straits Research Email: sales@straitsresearch.com Phone: +1 646 480 7505 (U.S.) +91 8087085354 (India) +44 208 068 9665 (U.K.)
https://medium.com/@shubhamk.straitsresearch/connected-agriculture-market-2021-explore-top-factors-that-will-boost-the-global-market-by-2026-cba6ed0a696f
['Shubham K']
2021-12-22 05:28:28.741000+00:00
['Agriculture', 'Market Research Reports', 'Agritech', 'Agriculture Technology', 'Connected Agriculture']
20
Q&A with Sherrell Dorsey, Founder of The PLUG
Q&A with Sherrell Dorsey, Founder of The PLUG This week, The Idea caught up with Sherrell to learn more about The PLUG — a digital platform that covers the black tech sector. Read to learn about The PLUG’s data-driven journalism, how the outlet plans to close the gap between journalists and black innovation communities, and why Sherrell thinks media outlets struggle to draw diverse audiences. Subscribe to our newsletter on the business of media for more interviews and weekly news and analysis. Tell me about The PLUG. The PLUG is a subscription-based digital news and insights platform. We cover the black innovation economy and report on black tech, investment, ecosystems, workforce development, and anything pertaining to the future of technology from the perspective of black leaders and innovators. We deliver exclusive stories once per week. We’re increasing this frequency to two times a week in June. We also provide data indexes, reports, and other intelligence on the black tech space like black CIOs and S&P 500s. We also have monthly pro member calls where we feature different leaders in the space. Back in February, we launched a live summit to bring together researchers, entrepreneurs, and investors. We currently have several thousand subscribers with hundreds of paid members. Most of these pro members actually opt for an annual subscription instead of a quarterly one. And, how did you come up with the idea? I actually stumbled across the idea accidentally. I have always worked in tech and one of the challenges for me was not seeing folks that looked like me in the tech scene in the media I was consuming. I grew up in Seattle, so I was constantly around black engineers, black programmers, black network administrators. 
It bothered me that I wasn’t seeing that reflected in the material I was reading and that there wasn’t a rigorous, analytical look at what people of color are doing in the tech space. In 2008, I started a blog and began writing about black tech innovators who were developing models to provide formerly incarcerated people with employment by training them in solar panel installation. At the time I didn’t have a journalism background, but I was calling publications asking to write about these topics. I found that as I was building up my freelance repertoire, people began reaching out to me because of my knowledge of this space. This inspired me to launch a daily tech newsletter in 2016. Our readers not only included black techies at companies like Google, Facebook, and Amazon but also investors and other tech reporters who felt that they had been missing a diversity lens on tech. The newsletter was getting traction, and advertisers like Capital One and Goldman Sachs started reaching out to me asking to connect with my audience, which eventually allowed me to grow the newsletter into a platform that provides readers with highly data-driven journalism. How has the platform’s business structure evolved since its inception? When I first started, it was just me, my laptop, and my WiFi. Then, when Capital One and other sponsors came on board, I was able to grow revenue and start doing some testing on the structure of the business. I would also go on to secure a six-month publishing partnership with Vice Media, where we co-published pieces on diversity and tech on the tech vertical they had at the time, Motherboard. It quickly became apparent, however, that advertising isn’t the best way to play the long game. So, I started looking into how The PLUG could build a sustainable subscription model. In 2019, The PLUG participated in The Information’s Accelerator, which is an initiative that supports next-generation news publications. 
Shortly after, in July, we launched a paid version of The PLUG. Aside from that, we also license our content and publish sponsored content. Every now and then, we also secure grants. What do you think mainstream outlets get wrong when trying to attract black and brown audiences? Way too often, people treat these audiences as a charity and think that giving them free access will solve the issue. It unsettles me when media leaders treat this issue as a philanthropic initiative. We overestimate how much money factors into this. I grew up in the inner city; people pay for what they value at the end of the day, rich or poor. You have to have content that these audiences find valuable. Even if you give it to them for free, the content and coverage are not valued if they do not reflect their community or voice in an authentic way. Did you consider VC funding? Yeah, I initially thought I was going to secure VC dollars. But I found that a lot of the pushback I was getting was “Well, we already invested in another black media outlet.” The real question is: why can there only be one? Do black and brown people not have needs and nuances? What do you think sets The PLUG apart from other black tech media outlets? Definitely our depth and analysis — The PLUG has extensive data libraries. For instance, we were the first to develop a map of all black-owned coworking spaces in the country. We cover topics that no one else is asking questions about. Unfortunately, there’s no centralized source on black tech, and so The PLUG’s ability to bring this data, comprehensive indexes, and in-depth coverage has allowed us to garner a lot of attention. A talent scout for ABC’s Shark Tank recently told me that they use The PLUG to stay informed on emerging start-ups across the nation. What’s your long-term vision for The PLUG? I’d like to offer more city-specific reportage on black innovation communities across the country and the world and build a global network of reporters. 
I’d also like to move into more research-based initiatives to help fuel academic research, investor analysis, and government policy on black innovation. In all honesty, though, I don’t even have a 10-year plan. The impetus behind our work is greater visibility, and I hope that in 10 years we don’t have to continue staying niche. My hope is that more businesses and tech publications will cover communities of color with the same diligence and rigor as The PLUG. I hope that this kind of reportage is not seen as ancillary but instead more integrated in tech and business reportage. And with that, I hope that we get purchased and can grow within the walls of a publisher that recognizes our value and the importance of delivering this information to readers. RAPID FIRE What is your first read in the morning? The Wall Street Journal’s tech news briefing and The Daily Stoic. What was the last book you consumed? Arlan Hamilton’s It’s About Damn Time and Kevin Kelly’s The Inevitable. What job would you be doing if you weren’t in your current role? This is tricky because I like the grind of building something from the ground up. If I wasn’t working on The PLUG, I’d probably be teaching inclusive entrepreneurship at the collegiate level or within vocational training ecosystems.
https://medium.com/the-idea/q-a-with-sherrell-dorsey-founder-of-the-plug-8fd4227440d9
['Tesnim Zekeria']
2020-05-27 19:38:12.108000+00:00
['Journalism', 'Startup', 'Technology', 'Subscriber Spotlight', 'Media']
21
Binance System Upgrade Notice
Heads up, Binancians: the Binance cryptocurrency exchange will undergo a scheduled system upgrade on November 14, 2018, starting at 2:00 AM UTC. This upgrade is expected to take eight hours. During this time, we will suspend deposits, withdrawals and trading. While we want to inconvenience our users as little as possible, we constantly upgrade our platform to give you an even better trading experience, and this upgrade is part of that endeavor. Here’s a short guide on what you can do before, during, and after the upgrade. What to do before the Binance system upgrade Download the Binance app on your smartphone to get push notifications on the latest updates on Binance. Get the app on iOS and Android. Once you download the app, check whether any of your orders might be affected by the upgrade and cancel them if needed. What to do during the upgrade Check our website and official social media pages on Facebook, Twitter, and Instagram to get updates on the upgrade. Join our official Telegram group and our local communities on Telegram (see a list here). If you have any questions during the maintenance, our team and Angels will be ready to help. Sit back, relax, and watch a video or 10 on Binance Academy. Like this one: Contribute to our mission to make the world a better place via blockchain. Donate to the Blockchain Charity Foundation. Read our in-depth reports about blockchain projects on our newly launched Binance Research. Stay updated regarding the latest movements and insights in the crypto world through Binance Info. Do you have an awesome idea for a blockchain project? Better spend those eight hours hashing out your plan, and who knows, Binance Labs might think your idea has a lot of potential. Still itching to get in on some crypto action during the upgrade? Get the Trust Wallet and explore its many features! Download it now on iOS and Android. What to do once the upgrade is done Rejoice. Log in and check the functions of the Binance platform.
Ask Binance support if needed. Smile because your funds are #SAFU. Resume trading. We’ll keep you updated throughout the course of the system upgrade. Thank you for your support!
https://medium.com/binanceexchange/binance-system-upgrade-notice-c3fdf632bd68
[]
2018-11-15 08:24:07.510000+00:00
['Blockchain', 'Exchange', 'Binance', 'Technology', 'Cryptocurrency']
22
Why so much talk about Decision Science?
When I first came across this term and tried to learn about it, I was overwhelmed by the information. Everyone has picked it up and written pages on end about it. There was one thing in common, though. We have all read many articles, blogs, and posts on ‘decision science vs data science’, but we haven’t actually stopped to ask why we compare the two. There is more congruence between them than difference. I would say that the main difference is that data science delivers the results and decision science helps us take calculated steps based on those results. What if I asked you to compare the sports of chess and boxing? Many would argue chess is an activity rather than a sport, but that is a discussion for some other time. Both require attention, patience, practice, and pattern recognition in the opponent. Yet they are quite different, and there is no need to set up a comparison between the two. The missing piece that we failed to notice was that they do not affect each other, whereas decision science depends a lot on data science. Imagine this: you are a data scientist handed a huge database containing all sorts of data. Your superior asks you to comb through the data and provide him with your findings. We have all been in situations where, based on certain results, we can easily decide what to do. That is where the decision scientist comes in. It is not as easy as it sounds. A decision scientist doesn’t just look at the data provided. There are factors like past experiences, a variety of cognitive biases, individual differences, commitment, and more. Data science is a tool by which correct decisions can be made. Read more here. Whoever said that data science is only for the math majors or the computer genius in the class? Basic analysis is required in every field. You do not have to be Alan Turing to go about it. Psychology students learn distributions in their bachelor’s, and biology majors have to understand the normal curve too.
In short, if data science is everywhere, so is decision science, and it always has been. Photo by Frank Vessia on Unsplash Since we are just venturing into this field, decision science can quite easily be mistaken for analytical fields or areas using machine learning, but it is very much a separate area. “While data science is perhaps the most broadly used term, ‘decision science’ seems like the more fitting description of how machines are assisting business leaders in solving problems that have traditionally relied on human judgment, intuition and experience,” according to K.V. Rao, founder and CEO of sales forecasting software company Aviso, in TechCrunch. “It may not be the sexiest phrase in the world — I’ve never seen it in any marketing materials — but ‘decision science’ aptly encapsulates how computers are helping to systematically identify risks and rewards pertinent to making a business decision.” As I have mentioned above, according to Data Science Central, “Data Scientist is a specialist involved in finding insights from data after this data has been collected, processed, and structured by data engineer. Decision scientist considers data as a tool to make decisions and solve business problems.” Photo by Emily Morter on Unsplash By now, we’re accustomed to hearing what a great deal being a Data Scientist is. The demand for Data Scientists is only skyrocketing and will go on, maybe till aliens take over. The overlooked champions of the business and technology world have just been recognized, and little do we know about them. If people were rational, behavioral economics would be a myth. From economics to marketing to psychology, all fields require a Data Scientist and a Decision Scientist. Experts believe that the combination of the two can take a business to great heights. A data scientist just has their data in hand. They have no knowledge of client needs, promises made by the company, or whether the management has some reservations.
In short, they do not have a 360-degree view of the problem at hand. Decision scientists, by contrast, have an analytical mind which helps them picture the problem and find the best solution for the company and the client or customer. Hence, they are prepared for all kinds of situations. Being rarer than data scientists, the demand for decision scientists is only going to increase. The craze for data science had just started when educational institutions began providing courses, and the trend is still growing. Everyone is gearing up with the latest tools; we do not want to be left behind. Universities have already started courses for students to become decision scientists, and it’s only a matter of time before schools adopt some of the concepts. I have made a list of as many universities as I could find that offer the course in major countries. Most of these are PhD courses, as decision science requires some taste of how things run in the real world. It is a great opportunity, as research is still underway. ASIA: IIM Bangalore, India IIM Lucknow, India INSEAD, Singapore (all campuses) CUHK Business School, China Asia and Pacific Decision Science Institute EUROPE: European Decision Sciences Institute University of Konstanz, Germany IESE Business School, Spain RWTH Aachen University, Germany HEC, Paris AMERICA: Decision Science Institute The College of Penn Carnegie Mellon University Harvard University Kellogg School of Management Thank you for reading! Do tell me your remarks.
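The split the article describes — data science delivering a result, decision science turning it into a calculated step — can be made concrete with a toy example. The following is a minimal Python sketch; the churn scenario, costs, and probabilities are my own illustrative assumptions, not anything from the article.

```python
# Toy illustration of data science vs. decision science.
# Data science output: a churn probability per customer (assumed given).
# Decision science step: weigh that probability against business factors.

RETENTION_OFFER_COST = 10.0   # assumed cost of a retention discount
CUSTOMER_VALUE = 100.0        # assumed value of keeping the customer
OFFER_SUCCESS_RATE = 0.4      # assumed chance the offer prevents churn

def should_make_offer(churn_probability: float) -> bool:
    """Decide whether intervening beats doing nothing, by expected value."""
    expected_gain = churn_probability * OFFER_SUCCESS_RATE * CUSTOMER_VALUE
    return expected_gain > RETENTION_OFFER_COST

# A high churn risk justifies the offer; a low one does not.
print(should_make_offer(0.9))  # True:  expected gain 36.0 > 10.0
print(should_make_offer(0.1))  # False: expected gain 4.0 < 10.0
```

Note that the same model output leads to different actions as soon as the costs or payoffs change — which is exactly the "calculated steps" layer the article attributes to decision scientists.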
https://medium.com/analytics-vidhya/why-so-much-talk-about-decision-science-f9d8b8bd92d4
['Ada Johnson']
2020-10-02 17:02:37.434000+00:00
['Technology', 'Data Science', 'Decision Science']
23
Outsourcing Work To Laravel Remote Developers: The Dos and Don’ts
Outsourcing Work To Laravel Remote Developers: The Dos and Don’ts Here’s everything you need to know about outsourcing to Laravel Developers With digital platforms being popular for expanding business opportunities, more and more business owners are now planning to have a website for their organization. Nowadays, with Laravel being the most promising technology, business owners prefer it over other PHP frameworks. But the question arises when the hiring process for the app development team starts: organizations often get confused about how to hire. Should they opt for outsourcing or in-house developer hiring? Because outsourcing is a cost-effective and straightforward approach, the majority of businesses opt for it. Therefore, in this blog, we will look at why outsourcing is the best approach and the dos and don’ts of outsourcing to Laravel developers for remote work. What is Laravel? Laravel is a robust and easy-to-understand open-source PHP framework that follows the MVC design pattern. Laravel developers can reuse existing components from other frameworks to help create a new web app, so the resulting web applications are more structured. Laravel provides a rich set of functionalities that builds on the basic features of PHP frameworks. Top Four Features of Laravel Laravel offers some of the best features, and they are: 1. Web App Security A major advantage of using Laravel is the security it offers. Websites built with Laravel are significantly harder to hack. Laravel keeps the database safe by using salted and hashed password mechanisms. This means that a Laravel development company can secure an app so that its users’ passwords are never saved as plain text: each password is converted into an unintelligible hash. 2.
MVC Architecture Model-view-controller (MVC) represents the architecture that developers adopt while creating an application. This architecture separates the application logic from the user interface. With MVC architecture, Laravel developers get a clear code structure that is easier to work with. 3. Unit Testing One of the main reasons developers choose Laravel to create an app is that it makes unit testing easier. Laravel can run multiple unit tests to make sure that changes made to the application don’t break existing functionality. 4. Database Migration System Data migration is moving data from one platform to another. It sounds simple, but in practice it is not: if not done adequately, there is a risk of losing an organization’s valuable data. For this reason, developers tend to choose the Laravel framework, as it offers good tooling for data migration. Why Hire a Laravel Developer? Laravel is one of the best PHP frameworks on the market and enables app developers to create excellent web applications for their clients. Laravel was launched in 2011 by Taylor Otwell. Being an open-source framework that follows the MVC architecture, Laravel has become quite popular amongst web app developers. In today’s world, where digitalization is taking over every field, hiring a Laravel developer is one of the best ways to take your business onto a digital platform and expand your client base. The Laravel framework is well suited to creating and polishing robust web applications for businesses of all sizes. What is Outsourcing? Outsourcing is the practice of hiring an individual or a team outside your organization to deliver a service or a product.
Small businesses prefer outsourcing because they might not have in-house resources for web app development, finance, or marketing, so they find agencies that specialize in the work they want done. Big companies, on the other hand, often choose outsourcing because they already have a lot of work going on and may not want to pull their own developers off client projects to build something internal. Instead, they hire outsourcing app development companies and get the work done externally. This process of hiring developers outside of your organization for a specific project can be more cost-effective than hiring full-time developers and paying them a salary. Besides this, outsourcing can be a great opportunity for many app development companies to expand their work area. Why Is Outsourcing Better Than In-House Development? In-house app development can be much more expensive than outsourcing a developer or an app development team. For in-house hiring, the process is longer, and you will have to pay the developer even when there is no work to do, while with outsourcing you only pay for the hours the developer actually works. Therefore, outsourcing is often preferable to in-house development. Dos and Don’ts of Hiring Outsourced Laravel Developers for Remote Work The Laravel framework enables developers to build applications more cheaply, quickly, and safely, and it is becoming one of the most used frameworks because of these benefits. Therefore, many organizations choose to hire a Laravel development company to get their business web application developed. When it comes to hiring a developer for a project, the majority of people go with the outsourcing option. So let’s check out the dos and don’ts of hiring an outsourced Laravel developer, to give you a clear idea of whether to go with outsourcing for remote work or not.
Dos of Outsourcing Work to Remote Laravel Developers 1. Simple Switch When an organization hires a full-time in-house Laravel developer for a particular project and things don’t go as expected, the organization might struggle to let go of the employee. In such cases, the company will have to go through a long hiring process to find a replacement, which can cost a lot of time and money. An outsourced Laravel developer, by contrast, is much easier to replace, only charges for the hours worked on your project, and is easier to track than an in-house development team. 2. Cost-Effective One of the best things about outsourcing is that it is cost-effective in every sense. Whenever a company or business decides to outsource a service, it avoids the costs of hiring a new developer. This makes quite a difference to the business, especially when it comes to overheads: the company doesn’t have to pay a large wage or buy office equipment. 3. Less Hassle By outsourcing to Laravel development service providers, one doesn’t have to think about organizing multiple interviews, negotiating salary, or providing all the necessary office essentials to the employee. Hiring a remote outsourced developer can therefore be a hassle-free process, and as they work from home, there are no extra expenses for the organization. 4. Experts Many startup business owners try to handle everything on their own to avoid hiring full-time employees and reduce expenses, but that isn’t the best choice. Having an expert for any sort of work is necessary, especially when it comes to developing an application that will represent your business.
Therefore, to lessen expenses, the majority of entrepreneurs opt for outsourcing to remote developers who are skilled at their job. These professionals can help you create a better product. 5. Save Software Costs Another benefit of hiring an outsourced Laravel developer is that the company doesn’t have to provide the costly software it would have to offer in-house developers. An expert remote developer will already have all the software required to develop a Laravel application, so hiring an outsourced developer saves a lot of money on software subscriptions. Don’ts of Outsourcing Work to Remote Laravel Developers 1. Privacy Outsourcing to a developer or Laravel application development company for a service can risk your organization’s privacy, as they get access to your company’s professional and confidential details. 2. Time Zone Some business owners resist the outsourcing approach because the outsourced developer might be in a different time zone, which can make it challenging to communicate with them. 3. Communication When you hire an in-house developer, communicating about the project is manageable because they work in the same location. But when a business owner hires an outsourced employee from a Laravel development company, communication becomes more difficult because of the time-zone difference, which also delays resolving any issue in the project. 4. Legitimacy Hiring a Laravel developer through online agencies might raise a question of legitimacy: what if the hired developer doesn’t deliver the expected result or misuses your company’s confidential information? These are the reasons why some organizations avoid outsourcing services. Conclusion Working with remote Laravel developers or hiring outsourced Laravel app development companies isn’t as complicated and risky as people think.
Instead, outsourcing is often the best solution, as it saves a lot of the time and money that goes into hiring an in-house developer. The points listed above make clear that outsourcing Laravel developers for remote work is very beneficial, and this approach is getting more and more popular with each passing day.
https://medium.com/cornertechandmarketing/outsourcing-work-to-laravel-remote-developers-the-dos-and-donts-8da2ed4dbbb4
['Roy Daniel']
2020-12-10 00:21:52.645000+00:00
['Development', 'Technology', 'Remote Developer', 'Laravel', 'Remote Working']
24
Why Transparency Is Critical in Times of Crisis, Part 2: Decision vs. Discussion
As the economy declines, an increasing number of companies are pivoting from peacetime management to wartime management. Renowned entrepreneur and venture capitalist Ben Horowitz wrote a blog post, “Peacetime CEO / Wartime CEO”, in which he lists the many differences between the two types of company management. In particular, the following caught my eye: Peacetime CEO focuses on the big picture and empowers her people to make detailed decisions. Wartime CEO cares about a speck of dust on a gnat’s ass if it interferes with the prime directive. Peacetime CEO strives for broad based buy in. Wartime CEO neither indulges consensus-building nor tolerates disagreements. Most management techniques and principles are catered towards peacetime, and most of my previous blog posts are no exception. During wartime — or when the company is fending off existential threats — rules tend to be broken. When I served in the Taiwanese Marine Corps, there was a saying: “Getting everyone marching together towards the second-best option is better than sitting around and debating what the best option is.” The same is true for companies fighting an existential threat: sometimes quick decisions are better than perfect ones. A common mistake I’ve seen leaders make (during both peacetime and wartime) is to invite a discussion around a decision that has already been made. Those who become invested in the discussion may feel betrayed when they realize the decision was already made and their ideas had no impact on the outcome. In my previous post, I noted that transparency is especially important in times of crisis. Today, I want to specifically discuss communicating an already-made decision and its importance during times when the company needs to act quickly. Discussion or Decision? During peacetime, organizations are usually afforded more time to make decisions, and leaders prioritize empowering employees by advising them to think things through rather than be hasty.
During wartime, decisions have to be made quickly so people can start executing them, which comes at the expense of employee autonomy. This is when clear communication makes a big difference for your employees. A few years ago, I was on the hiring committee in search of a new VP of Engineering. The committee, chaired by the CTO, defined the job requirements and interview process. Among them were two requirements highlighted by the CTO: The candidate must be highly technical. The candidate must have had substantial experience managing engineering teams at a tech company. The search went on for a few weeks, but none of the candidates we interviewed were qualified. One day, we received a referral for a candidate whose demographic and career background was very similar to that of the two co-founders. The CTO instructed the committee to “go easy on the technical portions, because they may not pass it.” While it was odd, we obliged. During the post-interview debrief, the CTO asked for the hiring committee’s recommendation. While the candidate seemed solid, two concerns emerged: We asked the candidate easier technical questions, so we could not properly evaluate their technical abilities against those of other candidates in the pipeline. The candidate spent their entire career in finance and consulting, and their only stint at a tech company lasted merely 10 months. The first concern was quickly dismissed, since it was the CTO’s direct orders to have us go easy on the technical portion. The second one stirred a debate: the startup had few people who came from the tech industry, and we wanted someone who had experienced high growth at a tech company. Despite the committee’s concerns, the CTO argued that the candidate had sufficient tech company experience, citing their 10-month stint at a “tech” startup and experience as a CTO/co-founder of another startup. 
However, upon further inspection, the CTO/co-founder experience completely overlapped with their full-time job and was really a side project the candidate undertook alone. Surprisingly, the CTO kept repeating his original arguments and insisted that we were being too strict. As time for the meeting ran out, the CTO finally disclosed, “Actually, I already gave [them] a verbal offer, so if there are no other concerns, we’ll move forward with this candidate and wine and dine [them].” The members of the committee were in shock. We had spent the last 30 minutes debating whether or not we should hire this candidate, yet it turned out that the CTO had already made the decision without us. In the ensuing weeks, people were visibly less vocal in hiring committee meetings. We weren’t sure if we were discussing a possibility, listening to the broadcast of a decision, or being asked to give opinions which may or may not meaningfully influence the final decision. These types of management-driven decisions are commonly seen across organizations of all sizes. Leaders are certainly justified in making executive decisions during critical wartime. However, not disclosing that a decision was made and inviting a discussion not only makes people feel cheated, but also causes them to lose trust in the system because they don’t know if their opinions still matter. Decision, Not Discussion In wartime, it is important to identify when a decision has been made and to notify the team. Being transparent that a decision has already been made allows you to maintain trust, which serves as the foundation for everyone to concentrate on steering the company away from crisis. Recently, at Airbnb, I was notified of a strategic and urgent project. The deadline was two weeks away, but I was told that since the scope was sufficiently small and they had already found engineers to work on it, all my team had to do was consult on and review code.
I told one of my team members to represent our team for this project and provide guidance when necessary. In other words, I let the project team discuss and decide on how to proceed with the project. Two days later, when no progress was made, I reviewed the product spec and realized there was no way the pre-selected engineers could complete the project in time because they had no prior experience in our codebase. Furthermore, the product scope was actually quite large, and even if my team took it on, we couldn’t have completed the project in two weeks. I had made a mistake by allowing a discussion (between the product manager and the engineers found from other teams) instead of making a decision and implementing it down the line. This cost us two days, leaving eight working days until launch. I immediately told the product manager (PM) that I was bringing in three of my engineers — pausing their existing work — and cutting project scope to ensure that we could deliver on time. After some back-and-forth with the original project team and the PM, we agreed to a reasonable scope, set up the engineering plan, and were able to deliver the project on time. When I talked to my engineers, I explained that because this project was critical, I wanted them to pause their current work to ensure this project’s delivery. While I needed their buy-in, I clarified that the decision had been made and that the discussion needed was around how to build these features on time — not whether their existing work should be paused. One of my engineers noted during a one-on-one that it was “uncharacteristic of [me] to be so heavy-handed in decisions and execution details.” While he was not happy with the decreased autonomy, he appreciated that I explained upfront the criticality of the project, understood why I intentionally “micromanaged”, and agreed that the project would not have succeeded had I let it run its natural course.
https://kenk616.medium.com/why-transparency-is-critical-in-times-of-crisis-part-2-decision-vs-discussion-38ab4a2a7771
['Ken Kao']
2020-08-04 20:42:35.424000+00:00
['Corporate Culture', 'Management', 'Technology', 'Leadership', 'Transparency']
25
Ninja News! : A Weekly Roundup of Cybersecurity News and Updates
A brewing conflict between tech giants China is in the hot seat as the US and its allies, including the European Union, the United Kingdom, and NATO, officially blame it for this year’s widespread Microsoft Exchange hacking campaign. To keep you in the loop, these early 2021 cyberattacks targeted over a quarter of a million Microsoft Exchange servers belonging to tens of thousands of organizations worldwide. On a related note, the US Department of Justice (DOJ) indicted four members of the Chinese state-sponsored hacking group known as APT40 last Monday. This is with regard to APT40’s hacking of various companies, universities, and government entities in the US and worldwide between 2011 and 2018. Could this be tied to China’s move to develop cyberattacks capable of disrupting US pipeline operations? Hmm… Just two days ago, on Wednesday, the Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Bureau of Investigation (FBI) issued a joint advisory that Chinese state-sponsored attackers breached 13 US oil and natural gas (ONG) pipeline companies back in 2011–2013. This is developing news, so be sure to check out next week’s release! Finally, we give you a quick overview of the new malware strains in the cyber landscape to keep tabs on. MosaicLoader and XLoader have joined the game Bitdefender researchers have confirmed a novel malware posing as cracked software via search engine advertising. MosaicLoader is a malware downloader designed by its creators to deploy second-stage payloads on infected systems. A more niche-specific malware, known for stealing information from Windows systems, was also reported this week to have been modified. The “new and improved” malware can now target macOS. This is definitely an upgrade we never wanted! The revamped malware is dubbed XLoader. Sounds like a console, right? That’s a wrap for this week’s happenings in cybersecurity.
Never miss important updates on data breaches, new data protection policies, and other techno trends by following Privacy Ninja!
https://medium.com/@privacy-ninja/ninja-news-a-weekly-roundup-of-cybersecurity-news-and-updates-352c365a821e
['Privacy Ninja']
2021-07-24 05:53:49.767000+00:00
['Privacy Ninja', 'Cybersecurity', 'Technology News', 'Cybercrime', 'Data Protection']
26
2019 in review: top 10 digital diplomacy moments
9. “Tools and Weapons” Tools and Weapons is a narrative journey from inside of the world’s largest tech companies, with a focus on how technology needs partners working tightly together to face the biggest challenges of our era and to move us forward. Most of all, this book — released this past September — shows how digital diplomacy is not only about narratives and content, but it’s also about understanding technology and all stakeholders in the tech spectrum and beyond. Digital diplomacy is about social media, as much as it is about policy and diplomacy. “This is a colorful and insightful insiders’ view of how technology is both empowering and threatening us,” said Walter Isaacson, bestselling author of The Innovators and Steve Jobs. “From privacy to cyberattacks, this timely book is a useful guide for how to navigate the digital future.” Authors Brad Smith, president and chief legal officer at Microsoft, and Carol Ann Browne, senior director of communications and external relations, understand “that technology is a double-edged sword — unleashing incredible opportunity but raising profound questions about democracy, civil liberties, the future of work and international relations,” said former US Secretary of State Madeleine Albright. “Tools and Weapons tackles these and other questions in a brilliant and accessible manner. It is a timely book that should be read by anyone seeking to understand how we can take advantage of technology’s promise without sacrificing our freedom, privacy or security.” 10. Influence in the Brussels-bubble Even if you often look at the US as a barometer to understand online influence in politics and government, Brussels is another great example of how social media and digital engagement form the basis of influence on the Internet. This past summer, POLITICO’s exclusive analysis revealed the influencers in the European Parliament’s Twitterverse. 
It was a very interesting time to observe what was happening, as the EU legislative body was renewed in May after the Europe-wide elections, and with a new European Commission starting its mandate under President Ursula von der Leyen. Notably, “the list of international politicians (excluding MEPs) followed by EU parliamentarians is heavily Brussels-dominated,” explain the authors of this report, James Randerson and James O'Malley. “It includes just one European leader (Macron), one national politician (Germany’s Martin Schulz, who is a former European Parliament president) and four U.S. politicians (Trump, Obama, Hillary Clinton, and the U.S. president’s institutional account POTUS).” In September, another interesting study came out, published by ZN Consulting and EurActiv.com. It provides a great picture not just for Brussels-based and European players, but also for Washington DC politicos interested in European affairs and the work of the European Union.
https://medium.com/digital-diplomacy/2019-in-review-top-10-digital-diplomacy-moments-d4e6d9752904
['Andreas Sandre']
2019-12-22 13:58:34.386000+00:00
['Social Media', 'Technology', 'Diplomacy', 'Foreign Policy', 'Year In Review']
27
Dead Bugs Are Getting in the Way of Fully Autonomous Self-Driving Cars
Dead Bugs Are Getting in the Way of Fully Autonomous Self-Driving Cars Photo: shanecotee/Getty Images On October 19, 2016, Elon Musk kicked off the lie about Tesla Full Self-Driving capabilities: “The basic news is that all Tesla vehicles exiting the factory have the hardware necessary for Level 5 autonomy. So that in terms of the cameras and compute power, it’s every car we make. So, on the order of 2,000 cars a week are shipping now with Level 5, meaning hardware capable of a full self-driving or driverless capability.” Don’t get me wrong—I am not among those in the $TSLAQ community on Twitter who believe that everything Musk touches is a massive fraud. While there is much about the accounting around his businesses that is highly questionable and very possibly fraudulent, I think Musk is a true believer in most of the ideas he has brought forth, regardless of how outlandish many seem. Image: Tesla But the idea that all Teslas built in the past four years have the hardware necessary to achieve Level 5 (L5) automated driving capability was and is a lie, and for some very mundane reasons. Just a week after Musk made the above pronouncement, I took to a stage in San Francisco, where I was chairing a conference on automated vehicles. George Hotz of Comma.ai had been scheduled to deliver a keynote that morning. I was checking my notifications as I walked into the event venue and saw the headline that Hotz’s Comma One project had been canceled after the company opted not to respond to an inquiry from the National Highway Traffic Safety Administration asking for more information about it. Needless to say, Hotz did not appear, so I quickly modified a presentation deck on the state of automated driving development and took his place. In the course of my talk, I asked the audience of about 150 people how many thought that Musk could make good on his L5 promise. 
I could count the number of people who responded in the affirmative on the fingers of one hand, and then proceeded to give my own explanation of why Musk’s claim was a lie. I think Musk is a true believer in most of the ideas he has brought forth, regardless of how outlandish many seem. I often get calls from the media for comment on all aspects of mobility technology, including automated driving and electrification, and I’ve repeated this explanation countless times over the years. The fundamental problem is that most people view Musk as some sort of visionary genius and assume he knows what he’s talking about. Thus, when Musk says things that are fundamentally wrong about automated driving, it becomes gospel for people who are coming from a nontechnical background. It has become my job to correct the misinformation. Before I explain why no current Tesla vehicle is capable of L5 automation, let me give a brief explanation of what these levels mean. The Society of Automotive Engineers published the J3016 standard in 2014, which defined a taxonomy for levels of automated driving capabilities. Level 0 is for vehicles with no driver-assist capabilities at all. The vast majority of vehicles built today fall within L1, with a variety of discrete functions to assist in the driving task, such as lane departure warning or adaptive cruise control. Level 2 systems combine multiple functions within a control strategy, such as automatic lane centering and speed control, but still require the driver to pay attention and be prepared to take full control at any time. These systems may or may not require the driver to keep hands on the wheel, but their eyes must remain on the road. As of 2020, Tesla AutoPilot is still an L2 system. Level 5 (the highest level) refers to fully automated vehicles. These are vehicles that are capable of operating in all conditions without any human supervision or intervention. 
An L5 vehicle can drive itself in any weather and on any roads that a human can. The next step down (Level 4) can do the same, but within a defined operating domain that can be based on geography, weather, or any other conditions. The vehicles being tested by virtually all other AV companies, including Waymo, Argo, Cruise, Voyage, and others, are all L4. L4 and L5 vehicles can have human controls, and a driver can take control if they desire, but it’s not required. Both of these systems have levels of redundancy that enable them to get to a safe stop if a problem is detected, even when no one is aboard. The key distinction between L4 and L5 is the ability to drive anywhere and at any time. To do that, the driving system needs to be able to “see” the world around the vehicle, which means the sensors must remain clean and unobstructed at all times. Ford sensor cleaning. Video credit: Ford Most people who are critical of Tesla’s approach to AV technology focus on Musk’s insistence on not using lidar. Tesla vehicles are equipped with eight low-cost cameras, along with a single forward-looking radar sensor and 12 short-range ultrasonic sensors. While this solution is unlikely to yield a sufficiently robust automated driving system anytime in the foreseeable future, it’s not impossible, despite the limitations of cameras and the limited redundancy. Even if Tesla can develop software that can reliably understand the world around the vehicle for a safe L5 system, the cars as they are built today will never be L5. As of 2020, Tesla AutoPilot is still an L2 system It all comes back to cleanliness. Those of us who live in regions that have multiple seasons are well acquainted with the challenges of trying to see out of windows caked in slush or getting illumination from headlamps covered in road salt. 
Even when the weather is warm, billions of insects are killed every year as a result of windshield impacts, and pollen from trees and plants can build up on all manner of surfaces in addition to irritating sinuses. Waymo sensor cleaning system. Source: Waymo Ford, GM, and Waymo all have systems on their automated vehicles designed to minimize the collection of anything that can obscure sensors. While washer nozzles, air curtains, and wipers may seem incredibly mundane compared to neural networks and machine learning, it’s often those mundane details that can trip up an engineering project. Remember when a Mars probe crashed because engineers failed to convert some crucial numbers from English to metric units? If you can’t keep the sensors of an AV clean, it can quickly be blinded. If the virtual driver can’t see, it can’t plot a course through its environment. If that happens, the vehicle can’t drive and is not L5 automated. A radar sensor caked in slush and winter road grime after 10 minutes of driving. Photo: Author Of the eight cameras on current Tesla vehicles, three forward-looking imagers are mounted above the rearview mirror and sit within the swept area of the windshield wipers, which could theoretically keep them unobscured. The other five on the sides and rear have no cleaning system at all. As someone who has been driving in winter conditions for nearly four decades, I can guarantee that all five will get obscured with an array of road grime. However, even those forward cameras aren’t completely safe. When snow and ice build up, the outer tips of wiper blades frequently get lifted up and don’t wipe the complete area, which just happens to be where those cameras are located. Those insects I mentioned? When a mayfly goes splat on a windshield at 70 mph, I’ve yet to encounter a wiper anywhere that can remove it. Don’t get me started on the streaky mess when you try to wipe away bird droppings. 
Even in warmer weather, the insects collected on the front of a vehicle can make sensors unusable. Photo: Author As a self-proclaimed visionary, Elon Musk may not spend much time considering mundane tasks like scraping bugs and dung off windshields, but they are all a necessary part of life. Machine vision is not exempt from the need for a clear view. In fact, it’s more sensitive to such obscurants than the human brain, which can actually do a shockingly good job of seeing around such annoyances. Until Elon Musk can handle the mundane, Teslas will never have Level 5 automated driving capability.
https://debugger.medium.com/dead-bugs-are-getting-in-the-way-of-fully-autonomous-self-driving-cars-abf5bdf932fb
['Sam Abuelsamid']
2020-10-19 05:32:07.950000+00:00
['Elon Musk', 'Autonomous Cars', 'Transportation', 'Technology', 'Tesla']
28
Quantum Computing and Blockchain
An Honest Introduction to Quantum Popular science journalism severely overestimates the applications of quantum computing. Quantum computing is fantastic at doing a small, specific subset of calculations, but it won’t be able to magically “try all the answers” in parallel. At MIMIR, whether we are presenting at conferences or working directly with our growing customer base, we’re often asked by existing companies if incorporating blockchain is worth it for them. One of the most prevalent causes for concern that we hear is that “quantum computing will soon make blockchain obsolete, so why bother advancing the tech?” Hearing this question often enough, we’ve decided to put the threat of quantum computing into an honest and realistic perspective. In short, quantum will not be the end of blockchain, and it is not as all-powerful as you may think it is. However, there are areas within encryption and cryptography that quantum computing is likely to disrupt, and blockchain technology is among them. Quantum has the ability to crack the encryption and signature schemes used in cryptocurrencies. However, there are several adaptive measures taking place that aim to help prevent the quantum threat. To understand where the quantum threat exists, we first need to understand how encryption works. A lot of this will be a gross oversimplification for explanatory reasons. RSA vs ECC Disclaimer For the purpose of understanding, we will be using the RSA (Rivest–Shamir–Adleman) cryptosystem. This is the system used by HTTPS. RSA encryption uses prime numbers and factoring to create a one-way function that makes it easy to encrypt something yet very difficult to reverse-engineer the data from the encryption. Bitcoin and Ethereum do not use RSA. They use ECC (Elliptic Curve Cryptography). ECC still relies on a one-way function, except instead of using prime numbers and factoring, it relies on points on an elliptic curve.
ECC produces much shorter keys and is applicable in the use of signature schemes. When you are trying to crack a private key through RSA encryption, you are basically trying to find the correct prime factors of a really big number. When you are trying to crack a private key through ECC, you are basically trying to find a “random” point on an elliptic curve. Though they are different, their relationship to quantum computing is very similar. Whether it is finding factors or points on a curve, quantum computing poses a threat. That being said, the concepts of prime numbers and factoring are much easier to grasp than elliptic curves. How Encryption Works Data is sent to your public address. The person sending that data will encrypt the data against your public key. Your public address is the hash of your public key and is basically just a shortened, public, and more convenient version of it. This data can now only be opened by your private key. This level of cryptography is well understood, but how does it work? Your public key is basically the product of two very large prime numbers. Your private key contains the two prime numbers that were used to create your public key. Extremely large prime numbers are used; we will use relatively small ones to demonstrate. We know 13 and 17 are both prime numbers. To multiply them together is easy. But if I asked you to find the prime factors of 221, it would take you a little bit. The only way to do it is to check one by one which prime numbers go into 221 and find the correct pair. Do this with prime numbers that have an enormous number of digits, and you find yourself with a near-impossible task. Cracking current private keys would take modern computers more time than is feasible in a single generation, and that is an understatement. Blockchains use asymmetric encryption, which is hard to crack.
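The one-way asymmetry just described is easy to see in code. A toy sketch (my own illustration, not from the article): multiplying 13 by 17 is a single cheap operation, while recovering them from 221 forces a divisor-by-divisor search, which is exactly what becomes infeasible when the primes have hundreds of digits.

```python
def trial_division(n):
    """Recover p and q from n = p * q by testing candidate
    divisors one by one -- the generic classical approach."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f  # smallest prime factor and its cofactor
        f += 1
    return n, 1  # n itself is prime

# Multiplying two primes is one cheap operation...
assert 13 * 17 == 221
# ...but reversing it means scanning divisors until one fits.
print(trial_division(221))  # -> (13, 17)
```

For 221 the loop finds a factor almost instantly; for a 2048-bit modulus the same search would outlast the universe, which is the entire point of the scheme.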
They do this by enveloping data inside of computationally difficult math problems. It’s a lot of backwards math that is difficult for our computers to handle. That is, until quantum came along. Quantum Computing Explained Traditional computers use bits. Everything is stored as a 1 or 0 or, in other words, as true or false. This happens through teeny-tiny switches called logic gates that are basically “on” or “off”. This binary system means that, when a computer is searching for prime factors or random points on an elliptic curve, it will go through and test each option one by one. This makes certain computations extremely intensive. Quantum computing replaces bits with qubits. These give quantum computers exponentially improved abilities to do certain mathematical calculations. Qubits have the ability to be in both states (0 and 1, true and false) at the same time. This may sound confusing, but don’t try to rationalize it in your head. Understanding the superposition and interference of qubits is a daunting task for even some of the hardest-working physicists out there. But not all hope is lost. Here’s what you need to know: Qubits will allow computers to use more efficient methods to solve certain computational problems. Shor’s algorithm will allow quantum computing to use far more effective routes when solving these cryptographic challenges. These types of calculations can take several years for a traditional computer to process. A quantum-based computer could optimize its efficiency and solve the same level of complexity in a fraction of the time. This would allow one to crack the private keys on most blockchains quickly and efficiently. However, quantum is still far from reaching a point where it poses an immediate threat to blockchain. The Future of Quantum and Blockchain It’s not that easy to make a quantum computer. Qubits are highly sensitive and can only properly be used in very specific conditions.
Among other constraints, qubits have to be stored at a temperature extremely close to absolute zero. To make things worse, the more qubits you add, the harder it is to maintain the system. As of now, the best thing we have is IBM’s recent 50-qubit computer. So how many qubits will we need in a computer to defeat encryption? Estimates show that ECC is actually faced with a more immediate threat than RSA encryption. Cracking a 224-bit ECC key would require 1,300 qubits from a quantum computer. The roughly equivalent 2048-bit RSA key would require about 4,096 qubits. This means that ECC will be one of the first cryptographic schemes to get hit by quantum. RSA, however, will likely not be too far behind when you consider the growth rate of this technology. But blockchain is not doomed. Ethereum, for example, is well underway with efforts to counter the threat posed by quantum. Quantum-resistant keys are already being developed. Short-term solutions, like using quantum to generate truly random numbers within cryptography, are also being looked at. The technical community is well aware of the threat that quantum poses, and they are handling it. So although quantum may eventually pose a threat to blockchain in the future, it’s not an imminent threat nor one that will come as a surprise. It’s a threat that can be overcome and is being worked on now. At MIMIR, we believe in the developer community for blockchain. And so long as the success of a platform is determined by its developers, we think it’s safe to assume that blockchain is and will remain in good hands. One year ago, Vitalik Buterin, founder of Ethereum, said this: “Ethereum is also going to be optionally quantum-proof with EIP 86 and Casper, because it will support any signature algorithm that the user wants to use.” Blockchain technology is in its infancy and will continue to evolve. For this reason, quantum will not be the end of blockchain.
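To tie the earlier pieces together, the public/private key relationship built on those toy primes (13 and 17) can be demonstrated with a textbook RSA round trip. This is purely illustrative: the exponent and message are my own choices, real keys use primes hundreds of digits long, and production RSA adds padding schemes omitted here.

```python
# Textbook RSA round trip with the article's toy primes (13 and 17).
p, q = 13, 17
n = p * q                 # public modulus (221)
phi = (p - 1) * (q - 1)   # 192; derived from the secret primes
e = 5                     # public exponent, chosen coprime with phi
d = pow(e, -1, phi)       # private exponent (modular inverse; Python 3.8+)

message = 42
cipher = pow(message, e, n)   # anyone can encrypt with the public pair (e, n)
plain = pow(cipher, d, n)     # only the holder of d can decrypt
assert plain == message
```

An attacker who can factor 221 back into 13 and 17 can recompute phi and d, which is why Shor's algorithm, a fast quantum factoring method, threatens this whole construction.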
DISCLAIMER: The content provided on this site is opinion and commentary on topics related to the blockchain universe. IT IS NOT INTENDED TO BE NOR SHOULD IT BE RELIED ON BY YOU FOR ANY REASON AND IS PROVIDED “AS IS” WITH NO WARRANTIES OF ANY KIND. You are responsible for your own decisions and for properly analyzing and verifying any content.
https://medium.com/mimir-blockchain/quantum-computing-and-blockchain-83ea9fdfacb7
['Mustafa Inamullah']
2018-07-27 15:52:26.155000+00:00
['Blockchain Technology', 'Ethereum', 'Quantum Computing', 'Science', 'Bitcoin']
29
Renee Bergeron: Committed to Making Technology Accessible for Businesses of All Size
A RadioShack TRS-80 personal computer led Renee Bergeron to technology. The machine arrived at her house through a family friend who gifted it to her because he saw no use for it himself. Toying around with that piece of hardware piqued Renee’s interest in technology and ultimately inspired her to study computer science, which led her directly into the technology world. As an enthusiastic young technologist, Renee programmed for clients like Air Canada, Hudson Bay Cap, and the National Bank of Canada, which allowed her to glance into various businesses and amass knowledge. When presented with an opportunity, she moved to Australia as the CIO of the Bank of Melbourne. Operating in the service-provider spectrum in corporate America allowed her to experience technology from the business side. As she progressed, her journey with Ingram Micro granted her insight into technology supply chain management for SMBs and building cloud businesses. Currently, she is the Senior Vice President & General Manager of AppSmart. “In retrospect, my life could have been drastically different if I was gifted with a stethoscope or if I was not passionate about technology in all its forms. I am incredibly lucky to work in a technical field that excites me and motivates me,” she asserts. In an interview with Insights Success, Renee shares her valuable insights on how she is empowering businesses to soar through technology innovation. Below are the highlights of the interview: How do you diversify your organization’s offerings to entice the target audience? We are all consumers of technology in the personal and professional aspects of our lives. Consumers are sophisticated and differentiate solutions based on their needs and convenience. So ultimately, as providers, we focus on the customer experience. I invariably put myself in the customers’ shoes to experience our offerings.
If you would not appreciate your services as a consumer, why should a customer? Hence, we internalize before we build on it. Businesses are essentially groups of individuals who employ technology to ease their workload, and I think about their goals in order to serve them. For instance, if our customers are dentists, as a business leader, my role would be to find what technology would enable dentists to serve their customers best. This process would involve me personally reaching out to my dentist to understand their requirements and build a roadmap of new solutions we can develop. Without the desire to solve customer problems and deliver the best experience, your solutions do not make an impact. How do you strategize your game plans to tackle the competition in the market? The general assumption suggests that tracking your competitors allows you to structure a better plan to get ahead of them. However, I believe that practice only puts you in a box, desperate to redesign the box, without scope for reinvention. Hence, I prefer market analysis; building strategies requires market study, understanding complexities, and identifying opportunities. Data is my best friend. Slicing and dicing the market to understand the organizational types, their growth, and market scope gives me a unique edge. The market study also helps us predict future trends and equips us to transform the space. Looking at the overall space rather than constantly being engrossed by the competition is essential to playing the long game in technology. What are the vital traits that every businesswoman should possess? Intensity determines your success; it is the key to consistently delivering and reinventing the customer experience. Perseverance has been one of my greatest strengths over the years, even at my lowest points. However, investing the energy to consistently deliver and maintain pace requires intensity and passion.
As a leader, a positive mental attitude is critical. An optimistic outlook creates an uplifting environment for your team to deliver their best results. Optimism is contagious and should be adopted by every business executive for motivating oneself and others. Lastly, I follow True North: a concept of making decisions based on data rather than emotions. We are humans with biases, and business judgements made with emotions ultimately fail us. Data, however, are facts. Facts are essential for business development and sustainability. In my opinion, this is one of the most important practices one must strictly follow. As per your opinion, what roadblocks or challenges were faced by you in a corporate business? And how did you overcome them? My journey in this industry allowed me to experience some mainstream challenges, which I witnessed were the same obstacles faced by my peers irrespective of gender and nationality, like a board rejecting your proposal. In my opinion, these are roadblocks that are essential to transforming into a leader. The most dangerous roadblocks we face are the limitations we create for ourselves. Constantly questioning our capabilities, judging our experience, and assuming how we are perceived hampers a growth mindset. However, there are specific challenges, like communication gaps, which could affect your performance and growth chart. In one instance, my manager was an outspoken individual with everybody on the team except me. The anger, the appreciation, and the guidance were all out there for them, while I received passive-aggressive feedback leading to confusion. Transparent communication was the key to identifying the root cause. He assumed that his being direct would make me emotional since I am a woman. This was an eye-opening perspective, but my solution was to encourage him to accept my emotional response, if any.
I presented him with piles of paper which said, ‘This gives you a right to make me emotional’, and normalized something he was terrified of. My communication abilities changed the relationship which led to him mentoring and providing growth opportunities for me. The intention of this story is to promote honest conversations to address awkward undefined problems. What are your insights on “The myth of meritocracy”? And how it could bring a change in today’s business arena? Working hard is a basic qualifier in the industry today. Beyond that, one must understand the stakeholders and their requirements. It is about demonstrating the ability to do your job under any circumstance. Often people overlook the dimensions of ‘what’ and ‘how.’ Our industry houses many credible and hardworking experts who are good at ‘what’ they do. But ‘how’ they do their work sets them apart; not all hard-working individuals are agile, team-players, or the most efficient workers. Personally, your ability to present yourself also adds value. People will fail to recognize hard work if it is done behind closed doors. It is vital to draw a roadmap, present your outcomes and secure a sponsor, who will vouch for you. You have to chart a path for yourself surrounded by the right people that will help you expand your arena. How do you cope up with capricious IT and other technological trends to boost your personal growth? Technology forces change. As consumer demand increases, the market dynamics shift, propelling change; this cycle constantly helps me evolve. Being in this space, you inevitably embrace the role of being a change agent; which affects personal life as well. As we age, we are required to work our brain to stay healthy, and this environment of continual shift, helps us do just that. Seldom do we focus on the real forces behind this constant change, people. It is a beautiful cycle between humans and technology; one change forces the other. 
What are your future endeavors/objectives and where do you see yourself in the near future? The perk of operating in the technology industry is that my future can take any glorious form. While I know what the next few years will look like on a broader spectrum, it will be exciting to be on that journey as new opportunities emerge. The goals I intend to pursue, making technology accessible for businesses of all sizes, will be constant; but my path to reach them will definitely take different forms, introducing me to bigger and better experiences.
https://medium.com/@insightssuccess/renee-bergeron-committed-to-making-technology-accessible-for-businesses-of-all-size-441552c7f67d
['Insights Success']
2020-11-25 13:07:01.582000+00:00
['Women In Tech', 'Female Entrepreneurs', 'Women In Business', 'Technology']
30
The Big 3 Components of Supersonic Aircraft
The Air France Concorde flew faster than any commercial aircraft in operation today. It’s been 50 years since the first Concorde aircraft took to the airways. During that time, aerospace engineering has advanced with a speed that would thrill even the original Concorde team. We’re inspired by what the Concorde team accomplished and immensely proud to carry that legacy forward. At Boom, we’re applying decades of knowledge to develop Overture, a supersonic aircraft that is dramatically more fuel-efficient and cost-effective. Here are the top three differences between Overture and Concorde in aerodynamics, materials and engines. Aerodynamics Compared to Concorde, Overture is much more aerodynamically shaped — thanks in part to new materials such as carbon composites. For example, the fuselage is “area ruled,” meaning the cross-section area is carefully controlled to reduce disturbances to the surrounding air (aka drag). Concorde’s designers knew of this principle, but it wasn’t feasible to realize in aluminum at the time. Fast-forward 50 years and we have the advantage of new materials. Since today’s carbon composites are molded, it’s possible to create a strong and lightweight structure in any dynamic shape desired.
https://blog.boomsupersonic.com/the-big-3-components-of-supersonic-aircraft-c1c614aeedbb
['Boom Supersonic']
2020-07-01 22:47:01.676000+00:00
['Aerospace', 'Travel', 'Technology', 'Aviation', 'Engineering']
31
Emergence is The New Amazing Field of Science
Depiction of Entropy Increasing — Image by Pixabay Does emergence violate the second law of thermodynamics? Many people describe emergent behaviors as spontaneous movements toward a state of order. The second law of thermodynamics claims that systems will always move toward a state of disorder. But when we observe single-celled organisms like slime mold join together and function as an organized unit, it appears to contradict this law completely. This is not the first time that biological systems have challenged the second law of thermodynamics. As life adapts to changing conditions in order to survive, it appears to challenge this law. But in this case, defenders of the second law argue that while it does state that the order of a system must decrease, it doesn’t say when, where, and how. They then point out that emergent communities like slime mold become disordered again after achieving their objective as a collective group. Therefore, they are not defying the second law, though not everyone agrees. Required ingredients of emergence To observe emergence in nature, certain requirements must be in place. To begin with, there must be a sufficient quantity of individual units. A handful of individuals usually don’t show emergent patterns, but thousands of them do. The whole system has to be observed to witness a pattern or structure of emergence. Interestingly, the simple nature of the individual units is essential in the emergent world. In fact, the more ignorant they are, the better. Consider how computers are built using the basic binary system of ones and zeros. If any of the components become too smart, they will make decisions based on their own needs and not the system's needs. Emergence is also dependent on random encounters and actions. Much like the sampling methods that statisticians use in their studies, if samples are not random, then biases will be introduced into the results.
Emergent behavior relies heavily on these haphazard interactions; otherwise, biases might prevent the group from recognizing its needs or the dangers it faces. Finally, local information gathered by individual units eventually becomes wisdom for the entire community. It begins with single units sharing data with one another, rather than with a community leader, and the network gradually begins to act for the common good of the entire group.
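The ingredients listed above (many simple units, only local information, no leader) can be sketched with a toy majority-rule automaton. This is my own illustrative example, not from the article: each cell follows one dumb local rule, yet stable global structure appears.

```python
def step(cells):
    """Each cell adopts the majority state of its local neighborhood
    (left neighbor, itself, right neighbor) on a ring -- purely local
    information, with no cell seeing the whole system."""
    n = len(cells)
    return [1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
            for i in range(n)]

# A lone dissenter is absorbed by its neighbors...
assert step([0, 0, 0, 1, 0, 0]) == [0, 0, 0, 0, 0, 0]
# ...while a block of two agreeing cells persists: the local rule has
# produced a stable global structure no individual cell "knows" about.
assert step([0, 1, 1, 0, 0, 0]) == [0, 1, 1, 0, 0, 0]
```

Run from a random starting state, repeated steps of this rule settle into ordered domains, a miniature version of the many-simple-units pattern the article describes.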
https://medium.com/illumination/emergence-is-the-new-amazing-field-of-science-a8f7380c167f
['Charles Stephen']
2020-11-04 16:10:11.130000+00:00
['Philosophy', 'Biology', 'Technology', 'Science', 'Nature']
32
A Quick Guide to the C4 Architecture Model
Level 1 of the C4 model is the Context diagram. At this stage, your software is depicted as a box in the center of the diagram that has interactions with some specific users and/or other systems. Technical details are hidden away so that a clear message of what the system does and how it interacts with the “outside world” can be sent to non-technical people. Although it is the simplest of all the diagrams, it is the most crucial one at the same time. Often, the people deciding whether to proceed with a given project, and how much money or how many resources to allocate to it, are top-level executives or managers who need not know the intricacies of the technical implementation. All they need is a clear understanding of the system and its benefits or risks.
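Stripped to its essentials, a Context diagram is just one named box plus its relationships to people and external systems. A minimal sketch (the structure is my own; the names are borrowed from the standard C4 example on c4model.com):

```python
# A Level 1 (Context) view reduced to its essentials: the system in the
# middle, plus the users and external systems it interacts with.
context = {
    "system": "Internet Banking System",
    "users": ["Personal Banking Customer"],
    "external_systems": ["Mainframe Banking System", "E-mail System"],
}

def describe(ctx):
    """Render the context view as plain text for a non-technical reader."""
    lines = [f"[{ctx['system']}]"]
    lines += [f"  used by: {u}" for u in ctx["users"]]
    lines += [f"  talks to: {s}" for s in ctx["external_systems"]]
    return "\n".join(lines)

print(describe(context))
```

Notice that nothing here mentions databases, frameworks, or protocols; that level of detail is deliberately deferred to the deeper C4 levels.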
https://medium.com/@nara-jamalova/a-quick-guide-to-the-c4-architecture-model-d9a06923dae1
['Narmin Jamalova']
2020-12-21 09:46:47.904000+00:00
['Software Architecture', 'Technology', 'Software Development', 'Business', 'Big Data']
33
The Tech to Expect in 2021
The Tech to Expect in 2021 Photo by Drew Beamer on Unsplash Perhaps the most hopeful thing you can do after a year like this is to write about the future. However, imagining what 2021 will be like, even just for technology, is risky. We thought, after all, we had a handle on what to expect in 2020, and we all know how that’s turned out. Looking ahead this time, I know that technology in 2021 will be informed by the year before like none before it. 2020 remade us. It broke down societal norms and habits and rebuilt them in survival mode as something new, different, and not always worse. What I’m attempting here is to look at the new year through that prism of calamity. Even as a multitude of vaccines arrive to turn the COVID-19 tide, things will not snap back to “normal” in 2021. We have, in my opinion, another 12 to 18 months of living between worlds, those of the pandemic and post-pandemic. And virtually every category, sector, and innovation I’ll discuss has been altered or even entirely transformed because of the pandemic. 5G in the home We’ve been talking about 5G for years, and even though numerous Android handset manufacturers produced a wide variety of 5G handsets before 2020, the arrival of Apple’s iPhone 12 5G handsets was momentous. In the U.S. at least, this, Verizon’s Ultra Wideband expansion, and the T-Mobile/Sprint merger will finally make 5G almost ubiquitous in 2021. But if you’re not leaving your home, how important are 5G mobile broadband speeds to you, anyway? The answer is: not so much. 5G, though, is not just for your phone. Both Verizon and T-Mobile are working on building up their 5G home network expertise and technology. These systems differ from traditional cable broadband in that there’s no cable into the home. Instead, providers install a sophisticated 5G antenna/router that can pick up 5G signals near the home (provided there are any), and then the box parses out that signal like a traditional in-home Wi-Fi router.
It’s not just the appeal of cable cutting that could get 5G into the home. Verizon’s fixed 5G, for instance, can reach speeds of up to 1 Gbps. Home Office forever Even with a vaccine, millions of us will not be heading back to the office in 2021. Quite a few companies have already announced plans to let the majority of their workforces continue working from home next year. Companies and executives finally realize that it’s possible to get real work done without them hovering over everyone’s shoulders. I think work culture has probably changed forever, and the home office you built to support your new work-from-home lifestyle will get a lot of use in 2021 and beyond. The problem, as I see it, is that many people didn’t fully commit to working from home and adopted couches, dinner tables, and rugs as their temporary workspaces. Leaving aside the likely explosion in the chiropractic industry, I think 2021 is the year home office furniture and ergonomic technology see rapid expansion and innovation tailored to a work-from-home lifestyle. Desks that rise with the touch of a finger are already popular with the stand-while-working set, but the look and feel is decidedly office-friendly. Home office furniture companies will introduce a lot more home-friendly and flexible styles for all their home office furniture and accessories (more devices for managing multiple screens, much better and cheaper home office chairs), and we will snap them up. Home work The work-from-home revolution is already boosting PC and tablet sales, as is the remote-learning explosion. While many people are desperate to get their children back into the classroom, that migration may only happen slowly in 2021. In the meantime, PC company fortunes will rise, and the systems will adjust for more home- and homework-friendly operation. Already, I’m seeing far more attention paid to the integrated cameras, speakers, and microphones of all these systems.
CES 2021 will be a showcase for systems that look ready to leave the home (integrated 5G), but will bring far greater benefit to home office workers and remote-learning students with better screens, cameras, microphones, and speakers. The theater is home I love going to the movies, but I have not stepped inside a theater since February, and I’m not sure I ever will again. The movie industry, which is still (stressfully) churning out product, is slowly but surely accepting the movie theater industry’s awful fate. Last month, Warner Bros. announced that its whole slate of movies would release simultaneously in theaters and on streaming platforms. Wonder Woman 1984, arguably 2020’s biggest release, will arrive on HBO Max on Christmas, a watershed moment for all major releases and a signal to other studios trying to rejigger their release schedules, still holding out for a return to normal in time for, say, the Bond film No Time to Die to achieve blockbuster status. I don’t see that happening. Consumers may miss the theater experience, but they have not stopped consuming massive quantities of TV and movie-studio content on their streaming platforms of choice. The plan for 2021 is to see if they can finally turn their home theaters into more movie theater-like experiences, and they will get some help in 2021 from emerging technology. Consumers are finding and buying larger and larger 4K TVs at lower and lower prices. That trend will only accelerate in 2021. Just before the end of the year, Samsung finally announced its consumer version of MicroLED TV. Unlike current LCD, LED, OLED, and QLED displays, MicroLEDs boil the display technology down to individual LED pixels, packaged into small, box-like groups that still maintain individual pixel/LED control. The benefit is that you can use these blocks to build insanely large and virtually bezel-less displays. Samsung’s smallest MicroLED is 110 inches.
Pricing and availability, beyond early next year, have yet to be announced, but I think consumers will happily adopt a bigger-is-better philosophy when it comes to home screen entertainment. There’s also a good chance that we’ll finally witness the convergence of affordable 8K TVs and some decent 8K content (4K content will be upscaled), which MicroLEDs are not equipped to handle. Upstart social media Whether or not everyone accepts it, the presidential election is over, and we’ll have a new President, Joe Biden, next month. Even with that decided, it’s unlikely the country will join hands and sing kumbaya. Nowhere will the divide be more evident than on social media, where the fracturing of the American psyche is reflected in our need to gather in our own social media bubbles. It’s not enough to talk among yourselves on Twitter. People want their own social media nations. Platforms like Parler, Discord, Telepath, and Clubhouse will grow in popularity and maintain their dedicated (or invite-only) users, as long as they do not stray from the core ideological values that spawned them. This isn’t encouraging news for 2021, and one can only hope that more centrist platforms throw open their arms to all voices and help nudge people away from the bleeding edges of groupthink. Healthcare People who never thought they’d adopt telemedicine have probably had at least one remote doctor visit in 2020. The Centers for Disease Control and Prevention (CDC) reported a massive 154% increase in telehealth visits in the last week of March 2020 alone. However, that number quickly leveled off to more modest increases over the course of the year. Still, new public and private policies regarding telehealth, especially health-insurance coverage, mean that the practice is not going away and will probably grow throughout 2021. As a result, technology relating to home diagnostics (CES 2021 could feature a lot of this), and even connection to existing wearable health devices, like the Apple Watch, will grow.
Clean Tech We have vaccines, but we’ll still be obsessed with cleanliness in 2021, including clean tech and tech that can clean our tech. According to market research firm Grand View Research, the multi-billion-dollar global ultraviolet disinfection equipment market is expected to grow 19% over the next six years. Fast, effective ways of cleaning our own mobile technology and anything we come in contact with, along with digitally cleansing the air we breathe, will be in fashion through at least the first half of 2021. 3D Mesh tech Not every 2021 tech innovation will be related to our pandemic lifestyle. In 2020, Apple introduced LiDAR scanners to its iPad Pros and iPhone 12 Pros. I think it’s fair to say that consumers do not entirely get this next-gen imaging technology. Their brains turn off as soon as you say, “It means Light Detection and Ranging!” Plus, I’ve only seen a handful of apps take advantage of the sensor’s ability to image an entire 360-degree space in 3D. In 2021, I expect to see a new collection of apps that focus less on the whiz-bang of the sensor and more on practical uses. And, if Apple is smart, it will use the LiDAR scanners to build the first visual, virtual presence for Siri, something akin to Iron Man’s Jarvis. The scanner’s ability to see every dimension and surface means that whatever entity Apple (and other developers) generate can fully interact with every surface. Also, with Apple’s ARKit occlusion detection technology, the virtual Siri should act like it’s aware of real humans and maybe even interact with them as well. Air drop The drone industry is about to go through a bit of an adjustment as one of its leading players, DJI, may spend half of 2021 battling the U.S. government over worries that it’s too close to the Chinese government and should not be doing business in the U.S.
That flight glitch, though, should have little impact on the expansion of drone delivery efforts, which naturally fit perfectly in our stay-at-home pandemic world. Amazon, which got fleet approval in August, the U.S. Postal Service, Google, and others might start drone deliveries to rural and even suburban areas before the end of 2021. AI, Robotics, and Smart Cars Despite our anthropological fears, the use of AI and its smarter offspring, Deep Learning, is only going to spread, powering better smart home technology, the arrival, finally, of fully autonomous electric cars (and not just from Tesla), more intelligent software bots and, I hope, first-generation home automatons. None of this is written in stone. If 2020 has taught us anything, it’s that humankind plans while fate laughs its ass off.
https://medium.com/@LanceUlanoff/the-tech-to-expect-in-2021-d5e31e637299
['Lance Ulanoff']
2020-12-23 14:13:19.078000+00:00
['Digital', 'Technology', '2021', 'Trends', 'Analysis']
34
I would attribute the lack of innovation in the last ten years to venture capitalists.
I would attribute the lack of innovation in the last ten years to venture capitalists. They overly participate in groupthink and rarely think for themselves, mostly missing innovative projects while piling into safe projects that society doesn’t need and doesn’t appreciate. Their policies (which are adhered to perfectly) swing wildly to and fro like a pendulum: first aggressively investing in pre-profit companies, then, only years later, swinging to the complete opposite policy (which they all adopt simultaneously). I am not in the slightest impressed with venture capitalists. They need to stop following their truly dismally stupid rules (which don’t account for anything that makes life great, or companies for that matter) and go from the heart. Do you want more safe companies in the world, or do you want profound ones? My experience with venture capitalists tells me they’ll only choose safe bets, and this society is not the better for it. It is really a shame to have five thousand copies of every type of company in this world and a complete lack of innovation. VCs truly should be ashamed of themselves.
https://medium.com/@kevinteman/i-would-attribute-the-lack-of-innovation-in-the-last-ten-years-to-venture-capitalists-1b47a5a8096a
['Kevin Teman']
2020-12-27 08:57:55.188000+00:00
['AI', 'Venture Capitalists', 'Investment', 'Technology', 'Investors']
35
Reinventing Financial Systems — Henrique Dubugras, Co-Founder/Co-CEO of Brex
Miguel Armaza sits down with Henrique Dubugras, Co-Founder & Co-CEO of Brex, a financial operating system for startups and growing companies, with services including a corporate card, cash management, and controls in a single account. Originally from Brazil, Henrique built Pagar.me, a payments company, when he was only sixteen and in just three years grew it to over $1.5 billion in processed transactions. Shortly thereafter, he co-founded Brex along with Pedro Franceschi. Listen to the full interview → Spotify | Soundcloud | Apple In this interview, we talk about: How he met his co-founder, Pedro, and their inspiration to launch Brex “On my last year of high school, I met my co-founder Pedro over Twitter, and we decided to start our first business that actually worked, in Brazil, called Pagar.me, which was like Stripe in Brazil. So online payment processing in Brazil. And then from that point on, we ran that company for three and a half years, grew it to 150 employees, pretty profitable, and then sold it to a larger merchant acquirer called Stone.” Henrique’s story from entrepreneur, to Stanford student, to Stanford dropout and Brex co-founder “Since our first business, Pagar.me, we kind of always wanted to help businesses grow. And we saw in Brazil that a lot of businesses had a lot of trouble accepting payments and getting into the online economy. And at the same time, when reconciling they lost a lot of time and sales. So we always kind of had this affinity to help merchants and then when we got to Brex we kind of wanted to keep working on that mission. At Brex we say that our mission is to reinvent financial systems so every growing company can realize its full potential.
And what that means for us is basically doing everything we can to build the best product to help companies succeed.” Their significant decision to move Brex into a remote-first company and what that means for them “We’re just going to assume that everyone at Brex is not in an office and all of our communication processes are going to be built for that (…) and assume everyone is seeing from their own computer. And it was a really tough decision, right? Because even though a few companies like Square, Twitter, and Shopify had announced they were going to do that, it’s still quite risky and no huge companies were built entirely remote in the past, and maybe that’s changing for the future. But it’s still TBD. And then there’s a lot of things that haven’t adapted and a lot of ways people interact and create a sense of belonging that are not remote. And we’ve made a bet that all those things will improve and we will find ways of solving all the issues remotely, and that the pros will outweigh the cons. And the biggest pro is basically access to global talent.” The importance of company culture and some key lessons for fintech founders “People say the first 10 hires are gonna multiply for the next 10 people and set the culture of the company. I would amend that statement and say it’s probably your first 10 leadership hires (…) And those leaders, they create the processes, the culture, and structure that years later, should they ever leave, it stays. So it’s incredibly important.” And a whole lot more! Listen to the full interview → Spotify | Soundcloud | Apple Henrique Dubugras Henrique Dubugras is Co-Founder & Co-CEO of Brex — the smartest corporate card in the room. A Brazilian entrepreneur, Henrique built payments company Pagar.me — the Stripe of Brazil — when he was sixteen years old. In just three years, Pagar.me grew to US$1.5 billion in volume of transactions processed. In the fall of 2016, Henrique sold Pagar.me and enrolled at Stanford University. 
After eight months, he left school and founded Brex. About Brex Brex is reimagining financial systems so every growing company can realize its full potential. By rebuilding from scratch the entire technology stack for credit card issuing, storing and transferring money, Brex has designed financial software that avoids previously painful banking problems. Traditional banking systems are broken; previously, neither founders nor their businesses had access to the great financial perks and benefits that their corporate counterparts enjoyed. Brex created products for fast-moving companies that are intuitive, well designed, and have more powerful functionality than anything on the market. At its core, Brex exists to empower every growing company to dream bigger. In June 2018, Brex launched the first corporate card and rewards program designed for startups. Brex reimagined every aspect of corporate cards, including 10–20x higher limits than traditional cards, tailored rewards with points on every purchase, and no personal guarantee for founders. This is a radically better experience for customers. In Fall 2019, Brex introduced its second product, Brex Cash, a first-of-its-kind cash management account that enables companies to simplify financial operations, pay for expenses, and grow their business, all with zero fees. As a unified platform, Brex brings together the ability to store money and pay bills in a single, elegant dashboard replete with integrations to other financial software — a new vision for the autonomous finance function. Having started as part of the Y Combinator accelerator program in 2017, the team has grown to over 400 people. Based in San Francisco, Brex has raised over US$465 million in venture capital, with support from Kleiner Perkins Growth, YC Continuity Fund, Greenoaks Capital, Ribbit Capital, IVP, DST, as well as fintech insiders like Peter Thiel and Max Levchin. Brex has also raised over $510 million in debt funding from Barclays.
As of writing, it is worth $3 billion. Previous Episodes You May Enjoy: From Poker Star to Unicorn Founder — Nima Ghamsari, CEO/Co-Founder of Blend Building a Customer-Centric Culture with David Vélez, Founder and CEO of Nubank Leading Russia’s Fintech Innovation with Oliver Hughes, Tinkoff CEO Building a Global Payments Powerhouse — Kamran Zaki, COO of Adyen Creating Pathways to Better Financial Health — Anu Shultes, CEO of LendUp Building Unicorns and Redefining Online Banking — Renaud Laplanche, Upgrade CEO/Co-Founder Creating an African Fintech Giant — Olugbenga Agboola, Co-Founder & CEO of Flutterwave For more FinTech insights, follow us below: Wharton Fintech Website | Medium | Twitter | LinkedIn | Facebook | Instagram Miguel Armaza Twitter | LinkedIn
https://medium.com/wharton-fintech/reinventing-financial-systems-henrique-dubugras-co-founder-co-ceo-of-brex-8dc7ab820765
['Miguel Armaza']
2020-12-11 14:46:40.698000+00:00
['Payments', 'Startup', 'Entrepreneurship', 'Fintech', 'Technology']
36
Blockchain and Solving Society’s Problems
Applying blockchain to social issues In Finland, the MONI Card is being used to give refugees the resources they need to start their new life. Blockchain offers exceptional, immutable record keeping, which makes it possible to financially include refugees who are starting their new lives without credit or banking history. This ability to have a financial identity makes it possible for these people to find jobs, pay bills, and establish themselves much more quickly. In Jordan, blockchain is helping distribute aid to Syrian refugees. The World Food Programme project uses retina scanners to monitor food distribution, which is recorded on a blockchain. This prevents aid from being dispersed to the wrong people, and better manages their data. But refugees aren’t the only ones without access to financial systems. Financial inclusion is a problem affecting at least two billion people worldwide, and it is arguably a cause of impoverished circumstances. Estonia is proposing to use blockchain for identity, giving people all around the world the opportunity to be e-residents and access the services and infrastructure available to Estonian residents. Additionally, the ‘estcoins’ could allow investors around the world to invest in Estonia’s development. It would also give them a vested interest in the country’s growth. A secure digital identity gives access to resources for starting a business abroad, which should prove to be very economically empowering. Other government efforts aim to improve access to financial services in developing countries through blockchain. By facilitating secure registration of personal data, these efforts significantly lower the barrier to entry for getting services to citizens that can economically empower them. Blockchain is more than just currency Blockchain is being applied in Kenya to root out dilapidated cars and stolen vehicles. It’s being used in Ghana to secure land ownership. And, yes, it’s even used to enable a gold-backed currency.
By tracking assets and identities, individuals are economically empowered through blockchain — and that’s something we should all be excited about. LAYNE LAFRANCE is a blockchain-focused Project Manager at AxiomZen. With a keen eye and an inquisitive nature, Lafrance has asserted herself as a fresh authority.
https://medium.com/axiomzenteam/blockchain-and-solving-societys-problems-9e38f3b8fe57
['Axiom Zen']
2018-04-17 22:39:43.404000+00:00
['Blockchain', 'Insights', 'Social Good', 'Technology', 'Bitcoin']
37
Being a bit cheeky here, but may I add a sixth?
Being a bit cheeky here, but may I add a sixth? It isn’t a phone. Since recently joining the twenty-first century’s second decade and keeping all my music on my iPhone, I’ve been surprised by how many times having my primary audio playing device double as my primary telephone causes problems. I’m listening to something and my phone rings, so the music or podcast stops. What if I don’t want it to stop? Yes, I know there are various settings for that, and I may have almost figured out how to configure them (I’m still not 100% sure though). Here’s the thing: once configured, they’re on forever. Even when I’ve stopped listening to music, the settings still operate. I want my phone to ring, I just don’t want it to stop the audio. This is not to mention that passive-aggressive sensation of the audio volume of my current track/podcast being temporarily lowered just to let me know a message or other notification has arrived. Generally that happens just when I’m at the critical part of a podcast when a vital section of monologue/dialogue occurs, of course. Rewind, repeat. Except I’m driving… (Maybe it’s time I enabled Siri after all. But the unintended consequences of that are something I’m not willing to countenance — yet.) Again, yes there are settings for that. But is it the software “Do Not Disturb” control or the hardware “Ring/Silent” control? I forget… And even if I do remember one day how these controls work and interact exactly, I’m sure I will keep forgetting to enable them and then I’ll miss that vital podcast snippet or have that great song interrupted at just the wrong moment. It’s inevitable. So yes, I absolutely see a reason to buy a separate portable music player. The question is, should it be an iPod Touch or something a bit more economical? Hmmm…
https://medium.com/@oneofthedamons/being-a-bit-cheeky-here-but-may-i-add-a-sixth-ce0b54a68cd3
[]
2019-07-06 14:35:00.294000+00:00
['Consumer Electronics', 'Music', 'Apple', 'Gadgets', 'Technology']
38
Breaking Regulatory Barriers for Bridge Inspection: NCDOT and Skydio Secure the First True BVLOS Waiver Under Part 107
Today, just a few months after the announcement of the nationwide Tactical BVLOS waiver, we are celebrating another first-of-a-kind regulatory achievement that allows the North Carolina Department of Transportation (NCDOT) to fly Skydio drones beyond visual line of sight (BVLOS) to inspect bridges with unparalleled safety and efficiency. This waiver marks a new era in unmanned flight. Until now, the FAA had required the use of visual observers (VOs) for operations beyond visual line of sight (BVLOS). The FAA also traditionally required the use of expensive solutions — such as radar — designed to detect manned aircraft, even in areas manned aircraft were unlikely to fly. The waiver announced today breaks both of those barriers. NCDOT received permission to conduct BVLOS operations using Skydio’s autonomous drones — without VOs or expensive surveillance technology. This achievement follows months of collaboration between Skydio, NCDOT, and the FAA. Going forward, NCDOT’s inspectors — who face the daunting task of inspecting more than 13,500 bridges on a regular basis — can send drones below bridges instead of dangerous rappels or expensive and invasive snooper trucks. Although today’s waiver focuses on NCDOT, it signals the FAA’s willingness to permit advanced BVLOS operations within procedural parameters that account for lower levels of airspace risk near structures, a concept known as infrastructure masking. This blog explores the partnership between NCDOT and Skydio that made this achievement possible; explains the nature of the operation; and examines the technology, training and tools designed to make this new authority work for your inspection program. Honoring the North Carolina Department of Transportation’s Leadership We first want to applaud the FAA for enabling NCDOT’s inspection crews to benefit from these unique and forward-leaning operations. We also want to acknowledge the operators who made this possible. 
In the annals of aviation, North Carolina will forever be known as the state “first in flight.” North Carolina has devoted that same pioneering spirit to advancing the future of unmanned aviation. The North Carolina Integration Pilot Program (IPP) has been a catalyst for next-generation drone operations from inspection to delivery. We have been thrilled to partner with teams across the state, especially NCDOT. NCDOT was an early adopter of Skydio 2, purchasing it shortly after it hit the market. NCDOT quickly realized the value of autonomy for bridge inspection, which my colleague Guillaume Delepine recently outlined in a must-read blog. Skydio 2’s obstacle avoidance and visual navigation technologies enable faster and safer inspections by pilots of all skill levels who need to navigate the complex truss structures below bridges. The results are exciting, and the benefits of augmenting the outdated methods that previously have dominated bridge inspection are manifold. https://www.youtube.com/watch?v=2eYSzmjT6HI To help NCDOT inspect bridges more efficiently than ever before, our Head of Regulatory and Policy Affairs, Brendan Groves, and Director of Solutions Engineering, Kabe Termes, worked closely with NCDOT’s experienced team to craft a ground-breaking waiver application. We were honored to partner with NCDOT in pioneering the safety case for Below Bridge BVLOS flights, and thrilled to see the FAA grant a waiver to our partners after months of effort. Understanding Below Bridge BVLOS Operations America boasts more than 600,000 bridges, each one of which must be inspected regularly under federal and state laws. Few jobs are more important — or more demanding — than inspecting bridges. In order to inspect the critical infrastructure below the deck of the bridge, highly trained inspectors conduct daring feats — rappelling over the edge or dangling below in the bucket of a snooper truck. It is dangerous work. Some bridge inspectors have been killed and injured.
Traditional methods of bridge inspection are profoundly expensive, requiring large crews and costly equipment. Traditional methods also impose high costs on travelers in the form of lengthy road closures and delays. Bridge inspectors have long understood the potential for drones to improve the status quo. But the use of drones was not without problems. The first problem involved technology. The vast majority of drones on the marketplace require GPS to operate and are subject to electromagnetic interference (EMI) — which effectively prevents operations beneath a bridge, where GPS lock is unlikely and EMI is likely. Making matters worse, the lackluster “obstacle avoidance” features on most drones are incapable of navigating complex spaces like bridge trusses. Skydio’s autonomous drones break through these barriers. Skydio drones do not need GPS, are not subject to EMI, and are fully capable of navigating confined spaces without human intervention — allowing even novice pilots to fly in the most demanding environments. Skydio drones confidently fly where others cannot — even in environments where GPS lock is unlikely, and electromagnetic interference is likely. The second problem involved FAA regulations. Flying below the deck of a bridge required securing permission to operate beyond visual line of sight. Traditionally, the FAA has required the use of VOs and expensive surveillance equipment (such as radar) in order to receive a waiver to conduct BVLOS operations. Almost every FAA waiver was also limited to specific sites. Those requirements were largely incompatible with the nature of bridge inspection. When flying beneath a bridge, VOs are often out of the question — a VO standing on a bridge deck has no more ability to see the drone than the pilot. The same is true for radar and other surveillance technology. Even small radar installations often cost $100,000 or more and must be tuned to work well at specific sites. 
Finally, waivers that only applied to a handful of bridges would vastly limit the utility of drones. North Carolina, for example, has more than 13,500 bridges — every one of which must be inspected. Over the last six months, Skydio and NCDOT worked with the FAA to break down these barriers. The waiver announced today permits BVLOS operations without the three traditional features of past BVLOS waivers. First, NCDOT may conduct BVLOS operations below the deck of bridges without VOs, provided the drone remains within 50 feet of the bridge itself and within 1,500 feet of the remote pilot. Second, there is no requirement to leverage expensive surveillance technology because the FAA (correctly) recognized that manned aircraft are unlikely to transit the confined airspace in and around a bridge. Third, the waiver is not limited to select bridges. The waiver adopts a performance-based approach that allows NCDOT to conduct BVLOS operations at any bridge meeting the criteria outlined in the waiver. Summary of the key features of the waiver
https://medium.com/skydio/breaking-regulatory-barriers-for-bridge-inspection-ncdot-and-skydio-secure-the-first-true-bvlos-b6ac93b6f97e
['Brendan Groves']
2020-10-05 17:58:26.411000+00:00
['Technology', 'Tech', 'Autonomous Vehicles', 'Regulation', 'Drones']
39
Realme X7 Pro smartphone launched in Thailand, know price and specifications
The Realme X7 series was launched in China on 1 September. So far, the company has brought only the Pro variant to Bangkok, Thailand, while the other models in this lineup are yet to enter the global market. The Realme X7 Pro smartphone is currently available at major online stores in Thailand. This smartphone is currently available in only a single memory option, and the company has launched the phone in two colors, black and gradient. Realme is well liked in Thailand and the brand is expanding here. This smartphone has been launched in Thailand at a price of THB 16,990 (about 42 thousand rupees). Realme X7 Pro specifications The smartphone has a 6.55-inch full-HD+ AMOLED display. The display has a refresh rate of 120Hz and a touch sampling rate of 240Hz. In addition, the screen-to-body ratio is 91.6 percent. The display comes with fifth-generation Corning Gorilla Glass protection. The peak brightness of this display is 1,200 nits. The Realme X7 Pro smartphone has been launched with MediaTek’s Dimensity 1000+ SoC and a 9-core Mali-G77 graphics processor. This smartphone has been introduced with up to 8GB of LPDDR4X RAM. The Realme X7 Pro smartphone has a quad rear camera setup. The primary camera of this smartphone is 64 megapixels. The second camera sensor is an 8-megapixel ultra-wide-angle camera. The third image sensor is a 2-megapixel black-and-white portrait sensor, alongside a 2-megapixel macro sensor. This smartphone has a 32-megapixel selfie camera. The smartphone has 256GB of UFS 2.1 storage, which comes with Turbo Write technology. Talking about connectivity, the Realme X7 Pro, like the Realme X7, has dual-frequency GPS.
The Realme X7 Pro smartphone has a 4,500mAh battery. This smartphone also comes with 65W fast charging.
https://medium.com/@netsgreat/realme-x7-pro-smartphone-launched-in-thailand-know-price-and-specifications-c0dcf21dd390
[]
2020-12-20 14:12:15.575000+00:00
['Tech', 'Technology', 'Smartphones']
40
My latest bugfix: or, how I went spelunking in someone else’s code
I love CodeSandbox. It has pretty much replaced CodePen for me unless I am fiddling around with CSS or freeCodeCamp front-end projects. I like going through the sandboxes and picking out different ones to look at, take apart, and figure out how they work. While going through React Tutorial for Beginners by Kent C. Dodds on Egghead.io, I decided I would look for sandboxes that correlated with the course, as I was using CodeSandbox to build out the stopwatch we were building in that course. I found a sandbox which I forked and found to be buggy. Why didn’t the stopwatch work? Glancing at the code for a few seconds, I saw some obvious problems right away. Here is an example of the stopwatch being broken: Bugfix 1 The first thing I noticed was on line 7: Date.now() needs parentheses. Date is an object constructor, with .now() being a method. When we click on the start button, React doesn’t know what to do here; we aren’t setting the state of lapse to be a number, which we expect. By adding the parentheses, we get the start button to work. No more NaNms. But now we have another problem: the timer won’t stop. I also removed the console.log(Math.random()); because I felt it was unnecessary. Bugfix 2: Getting the Stopwatch to Stop and Clear Each time the button is clicked, we set the state to either running or lapse. The timer runs when we click start, but clicking stop or clear doesn’t seem to work. How can we fix this? We can create a timer update function that accepts the current state. We can accomplish this by using native DOM APIs such as setInterval() and clearInterval(). We can run conditional logic to see if the timer is running, use Date.now() to get the timestamp in ms, and assign it to a startTime variable to compare the current time to the amount of time that has passed. When we click the start button, it sets the startTime to the current timestamp. We also need to return a new state, as React state should not be mutated directly. Okay, so this partially works.
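Both fixes can be sketched outside React in plain JavaScript. This is a minimal sketch under illustrative names (createStopwatch, getLapse are mine, not the sandbox's actual code, which keeps lapse and running in React state):

```javascript
// Bugfix 1: Date.now is a method reference; it must be *called*.
// Arithmetic on the uncalled reference yields NaN, the "NaNms" bug.
const broken = Date.now - 1000;    // NaN (a function minus a number)
const fixed = Date.now() - 1000;   // a real timestamp offset in ms

// Bugfix 2: start records a startTime, an interval updates the lapse,
// and stop/clear both call clearInterval so the timer actually halts.
function createStopwatch() {
  let timer = null;   // interval ID returned by setInterval
  let lapse = 0;      // elapsed milliseconds

  return {
    start() {
      const startTime = Date.now() - lapse;   // resume from a paused lapse
      timer = setInterval(() => {
        lapse = Date.now() - startTime;       // time passed since start
      }, 10);
    },
    stop() {
      clearInterval(timer);                   // freeze lapse at its current value
    },
    clear() {
      clearInterval(timer);                   // without this, the timer kept running
      lapse = 0;                              // reset the display
    },
    getLapse: () => lapse,
  };
}
```

The key point is that stop and clear both pass the ID returned by setInterval to clearInterval; setting state alone never stops the interval.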
But as you can see below, if I click clear while the stopwatch timer is running, it doesn’t clear the timer, and it doesn’t allow me to stop the timer, either. How do we fix this particular bug? If we look back at the previous code, we can see we are using clearInterval() to reset the stopwatch timer. In our current iteration, our handleOnClear method is just setting the state without clearing the previous interval. We can fix this by adding clearInterval() to the handleOnClear method, passing in the timer ID, before resetting the state. This will give us the results we want. Potential Problem? There is a memory leak in this particular iteration. The interval will keep running until it is explicitly stopped, even after the component is gone from the DOM. We can use a React lifecycle method to clean up when the component is unmounted. For this we can use componentWillUnmount, which React calls just before it removes the component, giving us a place to call clearInterval(). Thoughts and Conclusions I find it much more enjoyable fixing other people’s bugs than my own. This was a fun exercise and I plan on doing it more regularly and blogging about it. This stopwatch is a stupid simple component, but if you are just scratching the surface of React like I am, digging into something like this stopwatch and figuring out how it works is an excellent exercise and use of one’s time.
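Putting the fixes together, here is a minimal, framework-free sketch of the stopwatch logic. The sandbox itself uses a React class component, so the names here (createStopwatch, handleRunClick, handleClearClick) are illustrative, not the sandbox’s own code; the important parts are the same three fixes described above.

```javascript
// A framework-free sketch of the fixed stopwatch logic.
function createStopwatch(nowFn = Date.now) {
  let state = { lapse: 0, running: false };
  let startTime = 0;
  let timer = null;

  return {
    getState: () => ({ ...state }),

    // Start/stop toggle, like the sandbox's start button handler.
    handleRunClick() {
      if (state.running) {
        clearInterval(timer); // Bugfix 2: actually stop the interval
      } else {
        startTime = nowFn() - state.lapse; // Bugfix 1: Date.now() needs parentheses
        timer = setInterval(() => {
          // Return a fresh state object instead of mutating the old one.
          state = { ...state, lapse: nowFn() - startTime };
        }, 10);
      }
      state = { ...state, running: !state.running };
    },

    // Clear must also cancel the pending interval, not just reset state.
    handleClearClick() {
      clearInterval(timer);
      state = { lapse: 0, running: false };
    },

    // Equivalent of componentWillUnmount: prevent the interval from leaking.
    dispose() {
      clearInterval(timer);
    },
  };
}
```

In the React version, the body of `dispose()` is exactly what goes in `componentWillUnmount`, and each `state = { ...state, … }` assignment corresponds to a `this.setState()` call.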
https://medium.com/free-code-camp/my-latest-bugfix-or-how-i-went-spelunking-in-someone-elses-code-2afb536504ed
['Tiffany White']
2019-01-11 00:45:51.540000+00:00
['Debugging', 'JavaScript', 'Technology', 'Programming', 'React']
41
Common ICS Cybersecurity Myth #2: Proprietary Systems and Protocols
Misconceptions about ICS/OT cybersecurity are stubborn. This “mythbusting” blog series dispels five common myths related to ICS cybersecurity. Catch up on the series if you’re interested. Now, let’s dive in. ICS Cybersecurity Myth #2 ICS networks use only proprietary systems/protocols, and attackers don’t understand these proprietary systems and protocols. Proprietary systems and protocols are usually designed and controlled by a single organization. Sensitive information about how these protocols are designed is not released for public use. A common belief is that ICS protocols are proprietary, and that attackers don’t have access to, and don’t understand, ICS devices and proprietary protocols. This belief can lead to a false sense of security. Busting ICS Cybersecurity Myth #2 Although hacking ICS devices may be challenging due to their proprietary nature, the threat actors behind targeted attacks are usually knowledgeable, persistent, and resourceful; in many cases they are sponsored by nation states. And although their products are proprietary, many ICS vendors use open-source software in them. A recent report by Black Duck revealed that 96% of applications reviewed contained open-source components, and 78% of the codebases examined had at least one vulnerability. The average number of vulnerabilities per codebase was 64. With newer-generation ICS systems, there is also a trend of using common IT protocols, which further exposes ICS to cyberattacks. The SMB protocol, for example, is widely used across IT as well as OT networks. Attackers can also buy ICS systems and readily available malware on the darknet (the black market, or malware-as-a-service) to experiment with in their labs. For some insightful stories about the darknet and cybercrime, check out the podcast Darknet Diaries. The TRITON Malware Attack The TRITON malware attack targeted an industrial organization in the Middle East.
The TRITON malware (also called TRISIS) was used to target a safety instrumented system (SIS) from Schneider Electric called Triconex. The attackers demonstrated their ability to compromise a proprietary system and a proprietary communication protocol called TriStation. They deployed TRITON shortly after gaining access to the SIS, indicating that they had pre-built and tested the tool, which would require access to hardware and software that is not widely available. Although not explicitly designed to target ICS, WannaCry (May 2017) impacted ICS as well. ICS Incidents on the Rise ICS incidents are on the rise, as shown by the chart of incidents tracked by ICS-CERT (diagram courtesy of the author). According to CyberX, 82% of industrial sites depended on remote management protocols like RDP and SSH in 2017. Not only are hackers familiar with these access protocols and their vulnerabilities, they are even familiar with proprietary ICS systems. Attackers can easily find vulnerabilities within critical infrastructure using open-source software tools. At a recent Pwn2Own event organized by S4, hackers demonstrated numerous exploits in ICS platforms used within the manufacturing, heavy industry, and critical infrastructure sectors. The proprietary nature of systems and protocols in ICS networks is not a deterrent to attackers. Stay tuned for the next part in this series, in which we break down Myth #3: the belief that ICS is protected from cyberattacks because there is a firewall between the ICS network and other networks. Interested in reading more articles like this? Subscribe to the ISAGCA blog and receive weekly emails with links to the latest thought leadership, tips, research, and other insights from automation cybersecurity leaders.
https://medium.com/@isaautomation/common-ics-cybersecurity-myth-2-proprietary-systems-and-protocols-5d2b540a2a7c
['International Society Of Automation - Isa Official']
2020-12-26 14:32:38.881000+00:00
['Industry 4 0', 'Engineering', 'Cybersecurity', 'Technology', 'Hacking']
42
Launching Audioshake Live
An On-Demand Stem Creation Platform for the Music Industry We launched Audioshake earlier this year in the belief that the next wave in music innovation, re-imagination, and consumer engagement would come from stems — the different instrument and vocal parts of a song. Stems are already an important part of today’s music industry — sync licensing, spatial audio, remixes, and adaptive music all rely on them. But their value is set to grow exponentially as new experiences emerge in social apps, fitness, gaming, hardware, and other fields that allow fans to interact and engage with musical content in new ways. Stems are not always easily available, however, which is where Audioshake comes in. Our best-in-class A.I. technology can take a song from any point in history and separate it into its stems, opening the song up for new uses as instrumentals, samples, and more. Since launching earlier this year, our work has been used by record companies, music publishers, music supervisors, audio engineers, and artists in commercials, documentaries, movies, podcasts, and remixes. Today we take the next step by launching Audioshake Live, a platform that lets labels, publishers, and their partners bring our A.I. technology “in-house” and create instrumentals and stems on demand. Audioshake Live is easy to use. Platform subscribers simply upload their songs and choose the stems they want to create from six different stem types: bass, drums, guitar, instrumentals, vocals, and “other.” Audioshake quickly returns the selected instrument stems or a turnkey instrumental that is ready for pitching. Subscribers can then listen to their stems in our player or download them immediately — for use in a sync pitch, to explore a creative partnership, or to explore remix possibilities. This past spring, we gave a handful of record labels, music publishers, and distributors early access to Audioshake Live.
First as demo testers, and now as customers, we’ve been inspired to see the many ways our technology has been put to use by record labels, publishers, and music service companies like Warner Music Group, CODISCOS, Crush Music, Hipgnosis Songs Fund, Downtown Music Services, Spirit Music Group, and peermusic; distributors like CD Baby; production music companies like Audiosocket; and music supervisors like The Teenage Diplomat. Sync Licensing — Record labels and music publishers use Audioshake Live to create turnkey instrumentals that reduce all the back-and-forth involved in preparing a sync pitch and, with older songs, open up monetization possibilities that previously didn’t exist for lack of instrumentals. Our instrumentals have been used in commercials, movies, and on podcasts like Switched on Pop and Song Exploder. Remixes — Labels, songwriters, and producers use Audioshake Live to isolate vocals or remove drum beats from their songs in order to breathe new life into their catalogs. Re-masters — Audio engineers at record labels include Audioshake Live as part of their toolbox, leapfrogging some of the more tedious parts of their task and allowing them to focus on the highly skilled parts of their work. How to Sign Up Audioshake Live is meant to serve the music industry and is specifically aimed at record labels, music publishers, and others who regularly need to create or use stems for their songs. If you’re in that category, please sign up for a demo! For those musicians, audio engineers, music supervisors, producers, and other authorized third parties who need to create stems on a one-off basis, we’ll continue to offer our services on a 1:1 basis.
Our aim is to put this technology in the hands of everyone who can benefit from it, but to do so respectfully, in partnership with the music industry, so that artists’ copyrights and creative wishes are front and center — not an afterthought, as has often been the case when technology has intersected with the music world. If you’re in the music industry and are interested in the Audioshake Live platform, or Audioshake’s stem services more generally, please get in touch! You can find us at audioshake.ai or on Twitter @audioshakeAI.
https://medium.com/@audioshake/launching-audioshake-live-81214e3beb3f
[]
2021-07-22 19:22:36.009000+00:00
['Music Business', 'Music Technology', 'Audio Engineering', 'Deep Learning', 'Artificial Intelligence']
43
>+𝐿𝐼𝒱𝐸’’• “Georgia Southern vs Louisiana Tech”(Livestream) — New Orleans Bowl 2020 Live NCAAF FREE, TV channel 2020
The 2020 New Orleans Bowl will be played between the Louisiana Tech Bulldogs and the Georgia Southern Eagles on Dec. 23. The game will be played at the Mercedes-Benz Superdome in New Orleans.
NEW ORLEANS BOWL INFO
Sponsor: R+L Carriers
Date: December 23
Time (ET): 3 p.m.
TV: ESPN
Stadium: Mercedes-Benz Superdome
Location: New Orleans
Louisiana Tech played and won the bowl game in 2015. It will be Georgia Southern’s first appearance in the New Orleans Bowl. The Bulldogs finished the season 5–4. Redshirt freshman Aaron Allen will likely be the quarterback for Louisiana Tech this game after Luke Anthony’s season-ending injury. Allen has 561 passing yards and four touchdown passes this season. Israel Tucker leads the rushing attack with 525 yards and four touchdowns. Adrian Hardy has 440 receiving yards and four touchdowns. Louisiana Tech linebacker Tyler Grubbs has 84 tackles and Bee Jay Williams is leading with three interceptions. Georgia Southern finished the regular season 7–5 and placed third in its division in the Sun Belt Conference. Quarterback Shai Werts led the team in passing (936) and rushing yards (649). He has 15 total touchdowns this season. The Eagles don’t have much of a passing game, which is going to be a problem against the Bulldogs. Two teams looking to bounce back from regular season-ending losses meet in the R+L Carriers New Orleans Bowl when the Louisiana Tech Bulldogs take on the Georgia Southern Eagles on Wednesday afternoon. The Eagles will be traveling 600 miles to the Mercedes-Benz Superdome compared to the in-state Bulldogs’ 300. Louisiana Tech got its season off to a good start with a pair of wins over Southern Miss (31–30) and Houston Baptist before losing 45–14 at BYU. After going 2–2 in the remainder of October — including a 37–34 double-overtime win over UAB — the Bulldogs didn’t take the field for more than a month due to COVID-related cancellations.
But Louisiana Tech returned from the pause with a 42–31 win at North Texas before a disheartening 52–10 loss at TCU on Dec. 12, in which the Horned Frogs raced out to a 31–0 halftime lead and doubled up the Bulldogs in total yards (494 to 244). Just like Louisiana Tech, Georgia Southern won three of its first four games in 2020, though the Eagles had some tense moments as two of the wins and the lone loss came by five or fewer points. In fact, close games have been the theme for Georgia Southern all season. Aside from a 28–14 loss to Coastal Carolina on Oct. 24, all four other losses came by single-digit margins — with a total margin of defeat of 17 points across the four — while five of the Eagles’ seven wins have been by a touchdown or less. Wednesday’s meeting will be the first-ever matchup between Louisiana Tech and Georgia Southern. Louisiana Tech is a perfect 9–0 all-time against the Sun Belt — with its last win back in 2018, a season-opening 30–26 win against South Alabama — and also came away victorious in its only previous New Orleans Bowl appearance, a 47–28 win over Arkansas State in 2015. Georgia Southern, meanwhile, will be making its inaugural appearance in the New Orleans Bowl, with its only previous game against a Conference USA opponent coming just earlier this month in a 20–3 win over Florida Atlantic. When Louisiana Tech Has the Ball Luke Anthony and Aaron Allen shared quarterback duties throughout the season for the Bulldogs with Anthony getting more of the snaps. Anthony finished the regular season atop Conference USA with 16 touchdown passes and fourth in passing yards (1,479) but he suffered a serious leg injury (compound fracture of right tibia and fibula) in the closing minutes of the loss to TCU, so it will be up to Allen to lead the offense. Allen, a sophomore, has completed 65 percent of his passes but with more interceptions (five) than touchdowns (four).
https://medium.com/@iihfliveontv/georgia-southern-vs-louisiana-tech-livestream-new-orleans-bowl-2020-live-ncaaf-375c9da041a2
['Iihf Live Free']
2020-12-23 12:45:22.985000+00:00
['NCAA', 'Live Streaming', 'Football', 'Technology', 'USA']
44
The Limits of Health Data Aggregation
1. Medical records have rampant errors. An estimated 70 percent of medical records have errors. These may be egregious, like Abby Norman’s record incorrectly stating she has 8 sisters (she has none), or they might be small, like a name misspelling. My own medical record has a ‘problem’ (diagnosis) that is no longer active, and honestly it’s an embarrassing thing that I’d like to have archived off of my active problem list. Every time I go to the doctor and have an ‘after-visit summary’ printed, it shows this private issue on it. I’ve tried to get it updated a few times, and I haven’t had any luck. Unfortunately, it’s not uncommon for patients like me to have difficulty fixing medical record mistakes. Sometimes our medical records misrepresent us because of their inherent limitations, rather than outright errors. I recently worked with a woman who accessed her medical records from a large California hospital system. She gave me a complete list of all of her conditions, medications, allergies, etc.; her medication list in particular had over 50 items on it, many of which she did not recognize or was sure she never actually took. Our medical records are good at showing when a medication was prescribed, but bad at indicating whether we took the medication or for what duration of time. In order to put together an accurate picture, we had to comb through her data manually and determine which medications she took, and fix the dates associated with them. So, while our medical records are important repositories of our history, they’re often filled with medications we never took, diagnoses entered by accident, and conditions we no longer have. Bringing all of this information together is helpful and a good first step, but we also need a process of sifting through it to fix the errors, omissions and inconsistencies; patients should be at the center of this process.
https://medium.com/pictal-health/the-limits-of-health-data-aggregation-db67755dbf49
['Katie Mccurdy']
2019-01-09 14:31:00.853000+00:00
['Healthcare', 'Health Technology', 'Patient Experience', 'Data Science', 'Healthcare Technology']
45
The Men Who Stare At Goats
The Men Who Stare At Goats Goat The opening of 2009’s The Men Who Stare at Goats states ‘more of this is true than you would believe’, and while the film is basically a dark comedy, its groundwork apparently is far closer to the truth than you would imagine. The military has used witches, drug experimentation, and sonic blasters as methods to enhance the army with psychic soldiers. While the movie plays this up to the extreme (soldiers attempt to become invisible, walk through walls and even kill a goat just by staring at it), there’s something oddly 2020 about the entire movie: the absurdity that could actually be based in some form of reality. Or perhaps it’s the comforting thought that Jedi Knights may actually exist. :) If you’re interested in my daily ramblings, follow me on Twitter or my posts on Medium. This week’s “Deep Links” National Lampoon’s Christmas Vacation is often cited as one of the greatest holiday movies ever made. On its 25th anniversary, a look back at the untold story of freak snowstorms, cast freak-outs and zany antics in its production — More I have to admit, Miley Cyrus is quickly becoming one of my favorite artists of the year. After listening to several incredibly honest interviews on Howard Stern, and an amazing performance of Heart of Glass, here’s a look at her career and how this former Disney star is set to reignite Rock and Roll — More In 1942, on the shores of a lake high in the Himalayas, hundreds of bones and skulls, some with flesh still on them, were found. The mystery of who these actually belonged to (indications are it may have been an ancient army from Crete) goes back several hundred years, in ‘The Skeletons at the Lake’ — More File under ‘what could possibly go wrong’, as Microsoft files a patent to create chatbots from your dead loved ones — More 2020 has created a new model for work: one that is remote.
In ‘We’re Never Going Back’, a look at the future state of companies and how they are going to need to adapt to the new workforce. Is it even up to companies to decide, or are the employees going to force the decision, as the best employees have more options than ever — More A look at the psychological effects of bringing things together by ‘clicking’ together parts — LEGOs, IKEA furniture and Ziploc bags — More Tim Ferriss wrote the book ‘The 4 Hour Workweek’ and is known as a personal productivity guru. But now his view is a bit different: success isn’t always about output. An interesting look at his new mantra of ‘not everything that is meaningful can be measured’ — More In one of my favorite reads, ‘Writing Is Thinking’ looks at the benefits of putting ideas down on paper (physical or digital) and the payoff of good writing and crafting purposefulness — More Mark Manson dives into mental illness and some of the more positive benefits to being ‘slightly’ crazy, given ‘we’re all a little bit insane’ — More ’52 things I learned in 2020' looks at some random insights gathered over a year like no other — More A look ‘inside at Uber’ during a time of heated internal politics, a re-write of their app in Swift and some pretty bad engineering pivots (note — this is a long tweet that has been threaded for easier reading) — More End Thoughts
https://medium.com/@smakofsky/the-men-who-stare-at-goats-1fc664b987d6
['Steve Makofsky']
2020-12-12 17:09:01.764000+00:00
['Stoicism', 'Storytelling', 'Life', 'Deeplink', 'Technology']
46
This Changes Everything: Capitalism vs. the Climate
BOOKS This Changes Everything: Capitalism vs. the Climate Importing a pen from halfway across the globe doesn't raise an eyebrow, but the idea of helping a poor island achieve sustainable growth seems preposterous. The book explains how climate change is not a problem but a symptom of a much larger problem called capitalism. The book is informative in several ways; here are my top five key takeaways. 1. Fossil fuel companies have carbon reserves almost five times our existing carbon budget Pumping out fossil fuel for the last 200 years is what has primarily resulted in today's situation. But what already sounds like a lot has still not exhausted even half of Earth's known stock. If we have an upper limit to how much carbon we can burn to stay within the 2°C mark, then the industry has already identified reserves that could breach that limit at least five times over. 2. The free market has created an inequality that went unnoticed till now Popular culture calls them climate refugees. These are people that have had no contribution to the climate crisis but are paying the highest price for it. For example, Bolivia is heavily dependent on glaciers for its water supply, but their shrinking size is causing major droughts and civil unrest. It wants the rich nations that are primarily responsible for global warming to foot the bill for the damage a water crisis causes to the economy and help it develop on a green energy path. But this goes against the ethos of capitalism and is dismissed as being a call for socialism. 3. Billionaires won't save the world Launched with much hype in 2007, Richard Branson's Virgin Green Fund is now defunct. From the profits of Branson's empire, he pledged to pump $3 billion into the development of green aviation fuel and other climate solutions over the next 10 years. Seven years later, nearly two hundred million dollars' worth of investment had been made.
The reason was the unprofitability of his businesses, which earned only enough money for him to keep expanding the empire. Another billionaire who's trying to fight climate change is Bill Gates, while in the background, his foundation invests in fossil fuel companies. The intention exists, but the action is contradictory. You can't blame businessmen for doing their business, but you can surely blame them for these PR tactics that result in no real benefit. What we are seeing is that accepting a changing world is easier than challenging the widely adopted economic model of modern times. 4. Indigenous people walk a tight rope Natives walk a thin line. On one hand, there is development and jobs. On the other, there is a risk of loss of culture and land. No matter what an outsider may want, people living on the land decide for themselves. Whether it is Alaska, where the locals welcome drilling companies and their jobs, or Montana, where locals lament the loss of native farming land to oil companies, it is a tricky job to give these people an equal chance to grow without parallel damage. Just looking at the economy is not the answer here. 5. There's not going to be a technological miracle that will just fix it The solution lies in many small and large things across many industries: actions at the level of the self, the local, and the regional to push bad technologies out and pull the good ones in. Do not expect geoengineering miracles that will dim the sun or manage sunlight. The solution does not lie in expecting that life can move on as usual and technology will suddenly solve everything. That is just a way of looking away.
https://medium.com/climate-conscious/this-changes-everything-capitalism-vs-the-climate-7c8d5e25eb01
['Priya Aggarwal']
2020-12-26 03:16:01.425000+00:00
['Books', 'Climate Change', 'Activism', 'Capitalism', 'Technology']
47
4 Things Every Programmer Should Aspire to Be
What makes your career as a developer a great one. Do you still ask yourself what you want to be when you grow up? Even if you’ve already grown into a full adult? I do that daily. And as a junior developer, most of my answers are related to what type of programmer I aspire to be one day. Every coder has different inspirations for the future, but I believe the secret ingredient for a gratifying career comes down to just four principles. Let’s see them in detail. Be Very Skilled On A Niche Of Technologies You can’t master every discipline in the coding world. You probably won’t be the next master of self-driving cars, lunar module software engineering, and top-tier JavaScript development all at once. Yet you can still find something you’re truly passionate about and become a master of that, whether the discipline is mobile development or writing code for missions to Mars. Mastering a set of skills is a beautiful thing, which can bring you a lot of satisfaction in the world of developers, from becoming a CTO or tech lead to getting recognized as an authority in your field. By going deep into something you are passionate about, I’m sure you will find a renewed passion every day and a lot of career achievements along the way.
https://medium.com/javascript-in-plain-english/4-things-every-programmer-should-aspire-to-be-7372a7ac7fbe
['Piero Borrelli']
2020-12-05 06:32:59.035000+00:00
['Work', 'Programming', 'Software Engineering', 'Web Development', 'Technology']
48
Configuring Name Resolution in Windows Server 2016
Contents covered in this summary note:
Installation, Configuration and Troubleshooting of DNS Server
Install and configure DNS servers: configure forwarders, root hints, recursion, and delegation
Create and configure DNS zones and records: creating and configuring zones; creating zones; examining built-in resource records; aging and scavenging; time stamping
Definitions you need to know: DNS records; how to configure a secondary zone
Installation, Configuration and Troubleshooting of DNS Server The DNS role in Windows Server 2016 resolves names to IP addresses and vice versa. Because Active Directory Domain Services (AD DS) cannot work without DNS, DNS is installed while installing AD DS. In that case, DNS is already configured with its defaults, i.e., zones and records are created. But when you install DNS on a server that is not a domain controller, you need to complete its configuration yourself. You need to learn the following skills to implement DNS servers: install and configure DNS servers; create and configure DNS zones and records. Install and configure DNS servers: Before installing the DNS role, you need to know that there are three name resolution systems in Windows: DNS, Link-Local Multicast Name Resolution (LLMNR), and NetBIOS. Of these three, DNS is by far the most important because it is the name resolution method used to support Active Directory Domain Services. A DNS infrastructure requires network-wide configuration for both servers and clients, so in a workgroup structure, LLMNR and NetBIOS are used instead. Generally, NetBIOS and LLMNR are no longer needed because DNS can handle name resolution in Windows Server 2016 and later as well as on the Internet. However, LLMNR has advantages: it requires no configuration and, unlike NetBIOS, is compatible with IPv6.
Its disadvantages are: first, it does not resolve the names of computers running Windows Server 2003, Windows XP, or any version of Windows earlier than Windows Vista; second, it cannot be used to resolve the names of computers beyond the local subnet. On the other hand, NetBIOS, or NetBIOS over TCP/IP (NetBT or NBT), is an earlier naming and name resolution system used for compatibility with computers running versions of Windows earlier than Windows Vista. NetBIOS is enabled by default on all Windows operating systems. It includes three name resolution methods: broadcasts, WINS, and the Lmhosts file. NETBIOS BROADCASTS: computers using NetBIOS to resolve a name send out broadcasts to the local network requesting the owner of that name to respond with its IPv4 address. WINS: A WINS server is essentially a directory of computer names, such as Client2 and ServerB, and their associated IP addresses. When you configure a network connection with the address of a WINS server, you perform two steps in one. First, you enable the computer to use a WINS server to resolve computer names that cannot be resolved by DNS or LLMNR; second, you register the local computer’s name in the directory of the WINS server. An important advantage of WINS over network broadcasts is that it enables NetBIOS name resolution beyond the local subnet. LMHOSTS FILE: The Lmhosts file is a static, local database file that is stored in the directory %SystemRoot%\System32\Drivers\Etc and that maps specific NetBIOS names to IP addresses. Recording a NetBIOS name and its IP address in the Lmhosts file enables a computer to resolve an IP address for the given NetBIOS name when every other name resolution method has failed. You must manually create the Lmhosts file.
For this reason, it is normally used only to resolve the names of remote clients for which no other method of name resolution is available — for example, when no WINS server exists on the network, when the remote client is not registered with a DNS server, and when the client computer is out of broadcast range. You can enable NetBIOS from the properties of your local area connection: open the properties of IPv4 and click the Advanced button to open the Advanced TCP/IP Settings dialog box. In this dialog box, click the WINS tab. There are other important points to consider before installing the DNS role: you should be signed in as a member of the local Administrators group, and the server should have a statically assigned IP address. To install the DNS server role, follow the instructions below:
1. Sign in to the target server as a local administrator.
2. Open Server Manager.
3. In Server Manager, click Manage and then click Add Roles And Features.
4. On the Add Roles And Features Wizard’s Before You Begin page, click Next.
5. On the Select Installation Type page, click Role-Based or Feature-Based Installation, and click Next.
6. On the Select Destination Server page, select the server from the Server Pool list, and click Next.
7. In the Roles list on the Select Server Roles page, select DNS Server.
8. In the Add Roles And Features Wizard pop-up dialog box, click Add Features, and then click Next.
9. On the Select Features page, click Next.
10. On the DNS Server page, click Next.
11. On the Confirm Installation Selections page, click Install.
12. When the installation is complete, click Close.
Now that installation is complete, you need to configure the server. Configure forwarders, root hints, recursion, and delegation. Configure forwarders: DNS forwarding defines what happens to a DNS query when the petitioned DNS server is unable to resolve it. With DNS forwarding you can, first, configure a DNS server that has no answer for a query to forward that query to another DNS server.
Second, you can define conditional forwarding: queries for specific domains are forwarded to specific DNS servers. For instance, you can forward all queries for the kpu.edu.af domain to a particular DNS server. Both configurations are covered below. First, to configure forwarding, use the following procedure:
1. In Server Manager, click Tools, and then click DNS.
2. In DNS Manager, right-click the DNS server in the navigation pane and click Properties.
3. In the Server Properties dialog box, on the Forwarders tab, click Edit.
4. In the IP Address list in the Edit Forwarders dialog box, enter the IP address of the server to which you want to forward all DNS queries, and then click OK. You can configure several DNS servers here; those servers are petitioned in preference order. You can also set a timeout value, in seconds, after which the query times out.
5. In the Server Properties dialog box, on the Forwarders tab, you can view and edit the list of DNS forwarders. You can also determine what happens when no DNS forwarders can be contacted. By default, when forwarders cannot be contacted, root hints are used. Root hints are discussed in the next section.
6. Click OK to complete the configuration.
Second, to enable and configure conditional forwarding, use the following procedure:
1. In DNS Manager, right-click the Conditional Forwarders node in the navigation pane, and then click New Conditional Forwarder.
2. In the New Conditional Forwarder dialog box, in the DNS Domain box, type the domain name for which you want to create a conditional forwarder.
3. In the IP Addresses Of The Master Servers list, enter the IP address of the server to use as a forwarder for this domain; press Enter.
4. Optionally, specify the Number Of Seconds Before Forward Queries Time Out value. The default value is 5 seconds.
5. Click OK.
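The forwarder setup above can also be scripted. The following is a hedged sketch using cmdlets from the built-in DnsServer PowerShell module on Windows Server 2016; the IP addresses are placeholders, and kpu.edu.af is the example domain from the text:

```powershell
# Install the DNS Server role with its management tools (equivalent of the wizard steps)
Install-WindowsFeature -Name DNS -IncludeManagementTools

# Forward all queries the server cannot answer to an upstream DNS server
Add-DnsServerForwarder -IPAddress 192.0.2.10 -PassThru

# Conditional forwarding: send queries for kpu.edu.af to a specific server
Add-DnsServerConditionalForwarderZone -Name "kpu.edu.af" -MasterServers 192.0.2.20
```

These commands must run in an elevated PowerShell session on the DNS server itself; the GUI procedure above accomplishes the same configuration.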
Configure root hints: If you do not specify DNS forwarding, then when a petitioned DNS server is unable to satisfy a DNS query, it uses root hints to determine how to resolve it. Root hints enable DNS servers to navigate the DNS hierarchy on the Internet, starting at the root. Such a query can be iterative or recursive: the local DNS server forwards the query to a root DNS server, then to a TLD DNS server, and finally to an authoritative DNS server. By default, the DNS Server service implements root hints by using a file, CACHE.DNS, that is stored in the %systemroot%\System32\dns folder on the server computer. You can edit the root hints using DNS Manager with the following procedure:

1. In Server Manager, click Tools, and then click DNS.
2. In the DNS Manager console, locate the appropriate DNS server. Right-click the server and click Properties.
3. In the server Properties dialog box, click the Root Hints tab. You can then add new records, or edit or remove any existing records. You can also click Copy From Server to import the root hints from another online DNS server.
4. Click OK when you have finished editing the root hints.

Configure recursion: Recursion is the name resolution process in which a petitioned DNS server queries other DNS servers to resolve a DNS query on behalf of a requesting client. The petitioned server then returns the answer to the DNS client. By default, all DNS servers perform recursive queries on behalf of their DNS clients and of other DNS servers that have forwarded DNS client queries to them. However, because malicious people can use recursion as a means to attempt a denial-of-service attack on your DNS servers, you should consider disabling recursion on any DNS server in your network that is not intended to receive recursive queries. To disable recursion, use the following procedure: From Server Manager, click Tools, and then click DNS. In the DNS Manager console, right-click the appropriate server, and then click Properties.
Then click the Advanced tab, and in the Server options list, select the Disable Recursion (Also Disables Forwarders) check box; click OK. While it might seem like a good idea to disable recursion, some servers must perform recursion for their clients and for other DNS servers, and these remain at risk from malicious network attacks. Windows Server 2016 supports a feature known as recursion scopes, which allow you to control recursive query behavior. To use it, you must use DNS Server Policies.

Create and configure DNS zones and records: Zones are the databases in which DNS data is stored. A DNS zone infrastructure essentially consists of the various servers and hosted zones that communicate with one another in a way that ensures consistent name resolution.

Creating and configuring zones: A zone is a database that contains authoritative information about a portion of the DNS namespace. When you install a DNS server with a domain controller, the DNS zone used to support the Active Directory domain is created automatically. However, if you install a DNS server at any other time, either on a domain controller, a domain member server, or a standalone server, you have to create and configure zones manually.

Creating zones: A DNS zone is a database containing records that associate names with addresses for a defined portion of a DNS namespace. To create a new zone on a DNS server, you can use the New Zone Wizard in DNS Manager. To launch this wizard, right-click the server icon in the DNS Manager console tree, and then choose New Zone. The New Zone Wizard includes the following configuration pages:

- Zone Type
- Active Directory Zone Replication Scope
- Forward or Reverse Lookup Zone
- Zone Name
- Dynamic Update

Examining built-in resource records: When you create a new zone, two types of records required for the zone are automatically created.
First, a new zone always includes a Start of Authority (SOA) record that defines basic properties for the zone. All new zones also include at least one NS record signifying the name of the server or servers authoritative for the zone.

Aging and scavenging: Aging in DNS refers to the process of using time stamps to track the age of dynamically registered resource records. Scavenging refers to the process of deleting outdated resource records on which time stamps have been placed. Scavenging can occur only when aging is enabled. Together, aging and scavenging provide a mechanism to remove stale resource records, which can accumulate in zone data over time. Both aging and scavenging are disabled by default.

To enable aging for a particular zone, you have to enable this feature both at the server level and at the zone level. To enable aging at the server level, first open the Server Aging/Scavenging Properties dialog box by right-clicking the server icon in the DNS Manager console tree and then choosing Set Aging/Scavenging For All Zones. Next, in the Server Aging/Scavenging Properties dialog box that opens, select the Scavenge Stale Resource Records check box. Although this setting enables aging and scavenging for all new zones at the server level, it does not automatically enable aging or scavenging on existing Active Directory–integrated zones at the server level. To do that, click OK, and then, in the Server Aging/Scavenging Confirmation dialog box that appears, enable the option to apply these settings to existing Active Directory–integrated zones. To enable aging and scavenging at the zone level, open the properties of the zone and then, on the General tab, click Aging, as shown in Figure 3-18. Then, in the Zone Aging/Scavenging Properties dialog box that opens, select the Scavenge Stale Resource Records check box.

Time stamping: The DNS server performs aging and scavenging by using time stamp values set on resource records in a zone.
Active Directory–integrated zones perform time stamping for dynamically registered records by default, even before aging and scavenging are enabled. However, standard primary zones place time stamps on dynamically registered records in the zone only after aging is enabled. Manually created resource records for all zone types are assigned a time stamp of 0; this value indicates that they will not be aged.

Definitions you need to know:

- DNS forwarding: allows a DNS request to be forwarded from one DNS server to another to be resolved. (Active Directory cannot work without DNS.) Conditional forwarding allows you to set conditions that determine which DNS requests get sent to which DNS servers.
- Primary zone: the main type of DNS zone. A primary zone provides original read-write source data that allows the local DNS server to answer DNS queries authoritatively about a portion of a DNS namespace. When the local DNS server hosts a primary zone, the DNS server is the primary source for information about this zone, and the server stores the master copy of zone data in a local file or in Active Directory Domain Services (AD DS). When the zone is stored in a file instead of Active Directory, by default the primary zone file is named zone_name.dns, and this file is located in the %systemroot%\System32\Dns folder on the server.
- Active Directory–integrated zone: a primary zone stored in Active Directory.
- Secondary zone: provides an authoritative, read-only copy of a primary zone or another secondary zone. Secondary zones provide a means to offload DNS query traffic in areas of the network where a zone is heavily queried and used. Additionally, if the server hosting a primary zone is unavailable, a secondary zone can provide name resolution for the namespace until the primary server becomes available again.
The source zones from which secondary zones acquire their information are called masters, and the data copy procedures through which this information is regularly updated are called zone transfers. A master can be a primary zone or another secondary zone. You can specify the master of a secondary zone when the secondary zone is created through the New Zone Wizard. Because a secondary zone is merely a copy of a primary zone that is hosted on another server, it cannot be stored in AD DS.

Stub zone: A stub zone is similar to a secondary zone, but it contains only those resource records necessary to identify the authoritative DNS servers for the master zone. Stub zones are often used to enable a parent zone such as proseware.com to keep an updated list of the name servers available in a delegated child zone, such as east.proseware.com. They can also be used to improve name resolution and simplify DNS administration. A stub zone contains partial data from another zone: only the records needed to find an authoritative server. In other words, DNS stub zones are used to enable your DNS servers to resolve records in another domain; the information in the stub zone allows your DNS server to contact the authoritative DNS server directly.

Active Directory–integrated zone: When you create a new primary or stub zone on a domain controller, the Zone Type page gives you the option to store the zone in Active Directory. In Active Directory–integrated zones, zone data is automatically replicated through Active Directory in a manner determined by the settings you choose on the Active Directory Zone Replication Scope page. In most cases, this option eliminates the need to configure zone transfers to secondary servers. Integrating your DNS zone with Active Directory has several advantages. First, because Active Directory performs zone replication, you do not need to configure a separate mechanism for DNS zone transfers between primary and secondary servers.
Fault tolerance, along with improved performance from the availability of multiple read/write primary servers, is automatically supplied by the presence of multimaster replication on your network. Second, Active Directory allows single properties of resource records to be updated and replicated among DNS servers. Avoiding the transfer of many complete resource records decreases the load on network resources during zone transfers. Finally, Active Directory–integrated zones also provide the optional benefit of requiring security for dynamic updates, an option you can configure on the Dynamic Update page.

Standard zone: By default, on the Zone Type page, the option to store the zone in Active Directory is selected when you are creating the zone on a domain controller. However, you can clear this check box and instead create what is called a standard zone. A standard zone is also the only option for a new zone when you are creating the zone on a server that is not a domain controller; in this case the check box on this page cannot be selected. As opposed to an Active Directory–integrated zone, a standard zone stores its data in a text file on the local DNS server. Also unlike Active Directory–integrated zones, with standard zones you can configure only a single read-write (primary) copy of zone data. All other copies of the zone (secondary zones) are read-only. The standard zone model implies a single point of failure for the writable version of the zone. If the primary zone is unavailable to the network, no changes to the zone can be made. However, queries for names in the zone can continue uninterrupted as long as secondary zones are available. The entire DNS tree is called the DNS namespace.
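The zone types described above can likewise be created from PowerShell with the DnsServer module. A sketch in which contoso.com, example.local, east.contoso.com, and 192.168.1.10 are placeholders; each command is run on the server that should host that zone:

```powershell
# Active Directory-integrated primary zone, replicated to all DNS
# servers in the domain (requires a domain controller).
Add-DnsServerPrimaryZone -Name "contoso.com" -ReplicationScope "Domain"

# Standard (file-backed) primary zone; the zone file is created
# in %systemroot%\System32\Dns.
Add-DnsServerPrimaryZone -Name "example.local" -ZoneFile "example.local.dns"

# Stub zone holding only the records needed to locate the
# authoritative servers of the master zone.
Add-DnsServerStubZone -Name "east.contoso.com" `
    -MasterServers 192.168.1.10 -ZoneFile "east.contoso.com.dns"
```

Because a standard zone lives in a local text file while an AD-integrated zone lives in the directory, only the first form participates in Active Directory replication.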
Zones contain a variety of record types, called resource records, which contain information about network resources.

Configuring dynamic update settings: DNS client computers can register and dynamically update their resource records with a DNS server. By default, DNS clients that are configured with static IP addresses attempt to update host (A or AAAA) and pointer (PTR) records, and DNS clients that are DHCP clients attempt to update only host records. In a workgroup environment, the DHCP server updates the pointer record on behalf of the DHCP client whenever the IP configuration is renewed. For dynamic DNS updates to succeed, the zone in which the client attempts to register or update a record must be configured to accept dynamic updates. DNS records can be added and changed by:

- Static updates: an administrator enters DNS record information manually.
- Dynamic updates: also referred to as Dynamic DNS (DDNS).

Dynamic updates and record security: If you look at the security properties of a resource record, you can see that various users and groups are assigned permissions to the record, just as with any resource in Windows. These security permissions are used for secure dynamic updates. When only secure dynamic updates are allowed in a zone, the user listed as the owner of the resource record (in the advanced security settings) is the only user that can update that record.

DomainDnsZones and ForestDnsZones: A partition is a data structure in Active Directory that distinguishes data for different replication purposes. By default, domain controllers include two application directory partitions reserved for DNS data: DomainDnsZones and ForestDnsZones. The DomainDnsZones partition is replicated among all domain controllers that are also DNS servers in a particular domain, and the ForestDnsZones partition is replicated among all domain controllers that are also DNS servers in every domain in an Active Directory forest.
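The dynamic update and aging/scavenging behavior described above also maps onto PowerShell settings; the zone name contoso.com below is a placeholder:

```powershell
# Require secure dynamic updates on an AD-integrated zone
# (valid values: None, NonsecureAndSecure, Secure).
Set-DnsServerPrimaryZone -Name "contoso.com" -DynamicUpdate "Secure"

# Turn scavenging on server-wide with 7-day no-refresh and refresh
# windows, and apply the setting to all existing zones.
Set-DnsServerScavenging -ScavengingState $true `
    -NoRefreshInterval 7.00:00:00 -RefreshInterval 7.00:00:00 `
    -ApplyOnAllZones

# Enable aging on one specific zone.
Set-DnsServerZoneAging -Name "contoso.com" -Aging $true
```

Note that Secure is accepted only for zones stored in Active Directory, which mirrors the GUI behavior described earlier.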
DNS records:

- Host (A): stores the IP address for a name.
- Alias (CNAME): an alternative name, or alias, for another record.
- Mail Exchange (MX): identifies the mail server for a DNS name.
- Service Record (SRV): identifies the location of a service on the network.
- Name Server (NS): identifies an authoritative DNS server.
- Pointer (PTR): provides IP-address-to-name mapping; it is the opposite of an A record and is used in reverse lookup zones.

How to configure a secondary zone: For a secondary DNS zone, install a new server. Set its IP configuration, using the primary DNS server's address as its DNS server. Join it to the domain. Go to DNS management (Server Manager > Tools > DNS, or for short: Run > dnsmgmt.msc). Create forward and reverse lookup zones, selecting Secondary; here you need to know the primary DNS server's IP address. Unless you have allowed the zone to be transferred from the primary DNS server, you have no access, so go to the primary DNS server and allow the transfer: right-click the zone and go to Properties > Zone Transfers, select Allow Only To The Following Servers, click Edit, and then configure Notify.

Note: this text is a summary of DNS implementation on Windows Server 2016 from 'Exam Ref 70-741 Networking with Windows Server 2016'.
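The secondary-zone procedure in the summary can be sketched with the same DnsServer module; 192.168.1.10 (primary) and 192.168.1.20 (secondary) are placeholder addresses:

```powershell
# On the primary server: restrict zone transfers to a named
# secondary and notify it when the zone changes.
Set-DnsServerPrimaryZone -Name "contoso.com" `
    -SecureSecondaries "TransferToSecureServers" `
    -SecondaryServers 192.168.1.20 `
    -Notify "NotifyServers" -NotifyServers 192.168.1.20

# On the secondary server: create the read-only copy, pulling
# zone data from the primary (master) server.
Add-DnsServerSecondaryZone -Name "contoso.com" `
    -ZoneFile "contoso.com.dns" -MasterServers 192.168.1.10
```

Until the transfer is allowed on the primary, the secondary zone fails to load, which matches the "no access" behavior noted above.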
https://medium.com/@shtabesh02/configuring-name-resolution-in-windows-server-2016-6807d768f71f
['Shir Hussain Tabesh']
2020-12-06 14:46:46.880000+00:00
['DNS', 'Information Technology', 'Windows Server 2016']
49
How UX Engineering can Fuel Product Development
How UX Engineering can Fuel Product Development

A six-step process for incorporating User Experience Engineering to build and test new features and product ideas quickly

Start-ups are scrappy survivors: they must innovate constantly and adapt quickly if they want to have any chance at upending the proverbial status quo. Success hinges on a forward-thinking approach, the ability to use a variety of tools, and an opportunistic approach to learning. As a start-up matures into a larger, multi-department company and development resources are dedicated to supporting and maintaining core products, it's difficult to fuel the engine of innovation that helped launch the start-up into orbit in the first place. Without the ability to build quickly and run user tests or other experiments, there's likely a growing backlog of potentially game-changing enhancements, features, and product ideas that might never come alive. Incorporating User Experience Engineering is one way to re-ignite the team's spirit of innovation. UX Engineering is relatively new and goes by many names (at Amazon, it's called "Design Technology"), but here's the general idea: UX Engineers combine UX Design and engineering expertise into a single discipline, and are able to develop, prototype, and test innovative UI solutions that push the envelope on front-end engineering and inspire development teams and leaders to invest in new ideas. In pioneering UX Engineering at Fanatics, I've had the opportunity not only to define the role itself at our organization, but to build a process around it, too. While there was a significant investment of time in configuring a development environment and setting up the tools that our team uses, the biggest challenges were strategic ones, such as:

- Which ideas get developed into prototypes?
- How complex can our prototypes be?
- How can we quickly build a prototype that works with our existing codebase?
- How are these features/prototypes tested and measured?
- What kind of deliverable is useful to engineers? What will they do with it (if anything)?
- How can we ensure our learnings (from both successes and failures) support further product/feature development?

Bringing UX Engineering to Fanatics was a great learning opportunity, both personally and for our team, and I wanted to share some of what we discovered while developing our process. Using the guiding questions above, we distilled our process into six steps: Identify, Simplify, Build, Measure, Ship, and Share.
https://uxdesign.cc/ux-engineering-is-rocket-fuel-for-product-development-26ae1e8ac50
['Steve Saul']
2017-11-07 01:06:57.423000+00:00
['User Experience', 'Technology', 'Software Development', 'UX', 'Engineering']
50
Visual Hierarchy and UX-Design: A Guide
As the UX professional on your team, you will find that you are often called upon to make the design of a landing page "better". This very unspecific request often raises the question of where to start improving your design. One of the ways we can improve our designs is by looking at how the elements are structured relative to each other. The way we arrange information within our products greatly affects how users experience them. Thankfully, we can look at some basic principles of how we process visual information, and by weaving these principles into our designs, we can create landing pages and apps that users enjoy using. These principles have become known as "visual hierarchy".

So Visual Hierarchy, tell me more…

At its core, visual hierarchy describes how we rank elements relative to each other, taking into account how important or relevant the information is. You may have heard of information architecture, which is similar but not identical: visual hierarchy allows us to rank the elements, resulting in information architecture. By adjusting different properties of these elements, such as their relative size, colour, and grouping, we give our content a structure that helps guide our users to their goal. We can make the most important elements stand out more, allowing users to quickly and efficiently find where they need to go. That is why it is so important for your designs to have a clear visual hierarchy. It allows you to control the way in which your experience is delivered to the user. Each carefully crafted element will bring them one step closer to their goal, lighting up a designated path for them to follow. The experience simply flows, removing any need for the user to stop and search for information in order to proceed. The goal is a silky-smooth experience that leaves your users and yourself feeling happy about a job well done.
Now that we know what visual hierarchy is and why it is important to understand what it does, let us have a look at what tools we have at our disposal to create our visual hierarchy.

Typography

Typography is one of the easiest ways to get your information architecture spot on. The reason is that text is very widespread on the web. It is everywhere. Here is an example of a heading, a subheading, and a paragraph styled exactly the same way:

Example text without visual hierarchy

It may well take a second for you to fully understand what is happening. That split second of navigating the content is a sign of subpar visual hierarchy. It causes a negative user experience, and the internet has a wealth of options that allows users to simply move on to the next product. Not good. Let us see how we improve the visual hierarchy by adjusting the font size:

Example using relative size to adjust hierarchy

Not bad — our attention immediately falls on the larger text of our heading, allowing us to see whether the paragraph is relevant to our goal or not. That is what visual hierarchy is all about. It allows the user to orientate themselves quickly and scan the content for relevant information. Remember, users do not read on the web — they scan. Design accordingly. We have a better visual hierarchy thanks to varying the size of our text. To further enhance the hierarchy, we can give our heading a bolder font:

Example with bold text

This further underlines the authority of our heading and makes it stand out even more, improving the user's ability to scan our content for relevant information and making their lives easier. That is what technology is all about.

Colour

We can use colour to aid the user in navigating our site. While colour can help us convey information, it should be used in a supplementary fashion and not as the sole provider of information to the user.
The reason is that people with colour blindness will have a hard time seeing the clues you have laid out for them. Using colour, we can also help the user identify headings more easily by making them a different colour to the body text:

Example with colour to aid visual hierarchy

We can use this style for each heading within our product. That way, users learn to differentiate between headings and paragraphs more easily. This learnability is one of the factors that influences the usability of your product, and good usability is essential for a good user experience. Another great way to use colour in your designs is to make certain elements stand out more by using brighter colours. The increased contrast between the target element and the background colours naturally draws attention. Think of links, or the submit button of a form:

Image of a basic form

Making these stand out by highlighting them with a strong colour will allow users to spot them more easily. This discoverability is especially important when designing for smaller screen sizes such as tablets or phones. In the example above you can see how the button practically screams for attention.

Whitespace

Staying with our headings and paragraphs, we can now look at the whitespace we use. Whitespace can go a long way in taking a good design and making it top-notch. What if we removed the whitespace from our example above?

Example text without whitespace

There is not much space and everything appears a bit cramped. The content can't breathe and flow nicely. Not a very good user experience.

Example text with whitespace

There we go, much better. When you first start designing, you may be looking at how to fill the space you have in front of you. However, whitespace is something that you can intentionally use within your designs to improve your user experience.

Alignment and Proximity

Here the Gestalt laws can unfold their beauty.
We can help a user make sense of what they are seeing by grouping related options together. Here we can see how we can use the alignment and proximity of elements, in combination with colour, to better organise our user interface:

Example of grouping options together

All of the options are grouped together, and we are using card elements to better differentiate between the sidebar area and the rest of the screen. The user will quickly learn that each card-like element contains information regarding their messages. On the left-hand side we have the top-most level, and we go into further detail the more we move to the right. We can also make elements within a category stand out to highlight their presence to the user: by using colour we can indicate which mails have not been read yet, which mail we are currently reading, and which mails have a related task. We are again grouping related elements together to help users find their way more easily. Here I would encourage you to play around a bit with the proximity of the different elements and test the variations on users to see which feels best to them. I would definitely recommend reading up on the Gestalt laws, because they give you a solid understanding of how human perception works and how you can use it within your own designs.

Size and Scale of Elements

The size of elements relative to each other can also be a great tool for improving the information architecture. We already saw with our typography that larger elements tend to grab the user's attention. It is no different with other elements:

Image of a landing page

The larger element is seen first by the user. After that, the user begins to scan the rest of the page. With this we can control how the user processes each part of our product and make important information stand out to grab their attention.
A word on Mobile UX

With mobile becoming the dominant medium through which users enjoy the offerings of the internet, it is very important to consider the constraints the smaller screen size brings with it. Mobile users should be able to see and select the elements they want with ease. The easiest way to do this is to make the actionable elements stand out more by adjusting their size and contrast. Make your text easy to read so users can quickly understand what is happening, and if they want to click on a button, make the button's area big enough for them to easily select that action. Avoid buttons that are so small and close together that a user accidentally presses the wrong one.

Implementing Visual Hierarchy

Using these principles in practice is not always straightforward. It may start out well, with a clean information architecture, but as the project grows and stakeholders throw requests at you to make more and more elements stand out, the visual hierarchy becomes muddled. It is important to keep things simple and basic. Start with one really important element. This is the one every user must see; it is the real value-bringer for your product. It is the one you found during your user research that everyone kept repeating. It is the main goal of what you are designing. Once you have that, build up the hierarchy and messaging around it. The goal decides what your information architecture will look like. It can change, but the need to change should be reflected in your user research.
https://uxplanet.org/visual-hierarchy-and-ux-design-a-guide-779863263bbc
[]
2021-09-07 16:25:58.556000+00:00
['UX', 'Technology', 'Usability', 'UX Design']
51
Development of programming at its peak?!
Hi there, I am a programmer and I write this with one goal: after coding for ages, I have a positive opinion on why I think this is a golden time for programmers, aspiring or experienced, in terms of understanding the history, present, and future of programming. This article does not go into detail; it is a write-up expressing my opinion. This is one of the best times to be a programmer. I say this because a huge part of the technology out there is running on legacy systems while a good part is adopting the latest trends, languages, and frameworks. That has led to the development of intermediary technologies which enable you to draw strict boundaries between old and new, and to continue developing a platform with new technology on top of the legacy system without disturbing it. Getting to experience three types of system on a technical level, old, intermediary, and new, is a pretty rare opportunity. While it is absolutely important to stay updated with the latest (sensible) frameworks and technologies, it is equally important to comprehend why these new systems were needed in the first place, and that we can do by learning the history of tech. We are amidst old and new: we have the opportunity to learn old systems, their drawbacks and benefits in terms of architecture, design, implementation, etc., and to understand how the new systems inherit the same ideas and improve the parts which weren't pleasant. Keeping in mind how quickly the front-end frameworks are changing, how quickly we are progressing with a lot of new backend languages, and the attempts to move from OOP to functional programming: if you were to enter this field in, let's say, 2025, a lot of the tech that you are seeing today wouldn't exist, and it would meet the same fate as COBOL, LISP, etc. Cherry on top: the tools we have today for migrating platforms open a door for us to learn the internal workings of these old systems, then the internal workings of the new systems, and to connect them both through the intermediaries.
I recently used Webpack to build React components on top of a Backbone.js platform, and the things I had to learn to bundle the project ranged from loaders, plugins, and chunks to dist configs. It was an amazing experience to see how well a bridge can be established between old and new. A lot of what I have written comes from personal experience, and this is my first Medium post. What do you think? Comment below, let's have a discussion!
https://medium.com/@syedamanat/development-of-programming-at-its-peak-8079c3112690
['Syed Amanat']
2020-02-20 22:35:53.647000+00:00
['Progress', 'Opinion', 'Programming', 'Technology', 'Thoughts']
52
End the Police, Or Enable New Roles for Protection and Service?
End the Police, Or Enable New Roles for Protection and Service?

New technology, combined with the pressing needs of social change and climate disasters, could eliminate our need for police

Portland Police 2020, photo by Christyl Rivers

A new background requires new tactics

All summer long in 2020, the outcry for an end to brutal policing and ineffective policy was heard. The background of this outcry was a global pandemic, fires and floods, and a raging political firestorm on all sides. The coming disasters ahead, heat extremes, more fires, pandemic outbreaks, floods, refugee migrations, and droughts, will demand that our heroes be more involved with prevention, fire suppression, food distribution outlets, disease outbreak clinics, and refugee centers. We will need a new kind of "policing" for these needs. Innovative technology, new policing procedures, and growing protest movements for justice could align to drive our outdated systems into the past. New technologies such as massive databases, AI and machine learning tools, facial recognition, geoinformation systems tied to GPS, robo-cameras, and drones are just a few examples of emerging and improving tech solutions that could eventually nudge out the need to put human lives at risk, such as armed police officers, on the streets in the first place. Or at least we wouldn't need so many of them. Before we go all robo-cop on the mean streets of our concrete jungles, let's also consider an end to the concrete jungles. What if crowded urban landscapes trended more toward no car traffic, affordable housing, and community gardens where former car parks blighted the city? The pandemic has ushered in an era that demands wide avenues, green spaces, vertical gardens, and much more space for social distancing; consider large parks and playfields rather than indoor classrooms, for example.

Heroes just for one way

We can be heroes, just for one way: that way being toward the goal of stronger communities.
The fires of 2020 have taught us that we need firefighters, boots on the ground, for many, many tasks that go far beyond just spraying water. We will need to deploy those who would be heroes to do the more than eighty percent of street protection jobs that do not involve violent crime. It has already been observed — and demonstrated for — that we need officers, not necessarily called cops, to do community work. There are people who need a higher level of persuasion and tact, for example, to evacuate in an impending emergency. There are people with mental health issues, who only become more agitated at the sight of a gun. There are people caught in the throes of addiction who need alternative interventions. There are people on every degree of a wide spectrum of domestic disturbance situations. There are runaways and homeless people that need assistance, not distrust. CUPS could stand for Community Utility Personnel: uniformed officers, not armed with lethal weapons but perhaps with non-deadly tools, who patrol streets looking for any utilitarian need that must be addressed. They could be trained in community awareness, de-escalation techniques, social science, emergency medical first aid, and more. Today, there are not a few people who feel the cops are not heroes but a threat. A complete turnaround from this perspective would mean that people seeing a CUP would know there is someone there to protect and serve, not to intimidate or threaten. CUPS could perform many jobs. They could help set the stage for the next inevitable fire or flood disaster. They could intervene in turf disputes among refugee centers. They could deploy auto alerts, using the aforementioned technology, to help pinpoint trouble spots.

Techies and CAPS

As police and criminal justice work progresses and grows with technical and social engineering, more sophisticated and widespread use of technology is becoming commonplace.
In the field of surveillance, for example, we have GIS (geographical information systems) aligned with GPS (global positioning satellites). GIS and GPS are enabled by AI, data collection systems, and of course, human technicians. This whole growing field will demand technology-literate human beings. For the sake of definition, I am going to call such integrated systems Community Autonomous Protection Systems, or CAPS. One might say, for instance, “There is a CAP bot at the barber shop, so don’t even think of taking a gun near there.” The newest technology that allows gun detection, also called GDS or “shot spotter,” allows law enforcement personnel to pinpoint the source of a gunshot, or perhaps home in on the make, model and license of a speeding car from which a shot was fired. Then there are needs yet to be multiplied having to do with a huge new network of surveillance, AI-supported big data collection, drones, street “tech” support and more. As these systems become increasingly networked together, there will be inevitable complexity that requires sophisticated human personnel who are compensated with decent salaries as well as less dangerous workplaces. The problem of public relations At present, the country is divided about the role of criminal justice, and cops. One side sees that systemic racism and the increasing militarization of cops lead to corruption, brutality, even murder. The other side sees law and order as an important necessity and holds that we can’t abandon policing altogether. Both sides are wrong. Both sides are right. The present system originated with the idea of protecting property rights and impeding crime by deploying a blue line of defense against the bad guys. The problem, of course, is that very few citizens, in fact almost none, see themselves as “the bad guys.” Nor do everyday people view cops as “the good guys.” Most people fall somewhere between warily intimidated and wildly defensive.
And despite decades of cop shows focused on violent crime and defeating the scum on the streets who want you dead, most police work does not deal with hard-core crime. Considering these two objective facts, enter technology and a whole slew of new jobs, new attitudes, and new respect for a more objective way to deal with daily life in our communities. There are legitimate fears about government overreach, infringement upon people’s privacy rights, and people’s fear of being under constant scrutiny. Given some of the bugs inherent in facial recognition technology, for example, there is every reason to proceed with caution. But this reality, too, will require many new jobs, tech opportunities, in-depth analysis, monitoring and oversight. Perhaps a new kind of public officer could be deployed for such tasks as well. On a public relations level, some people will find Community Autonomous Protection Systems (CAPS) very scary indeed. Do we want cold, unthinking machines making decisions that could ruin a person’s life? But on the other hand, at present, there is no way for a human being not to already hold biased and prejudiced views on police and policing in America. Machines that do the work of policing do not have to be cold and intimidating. They could be as small or inconspicuous as needed. These sorts of details need to be run through the court of public opinion and experience as we tweak and tinker with new systems. What cannot be ignored is the new-century landscape of climate crisis, looming infrastructure needs, and the need to re-organize our coping mechanisms to reflect it. The job of re-purposing our cities and streets to reflect a pandemic age and a green-jobs age, and to usher in a well-integrated, intelligent, networked community, offers many new fields of opportunity.
If a former parking garage is now a community garden and electric car charging station, for example, someone, or perhaps some robo-drone, will have to periodically monitor that space. There is a great need for thinkers, innovators, city planners, investors, and of course many kinds of engineers to create a whole new approach to creating, and then keeping, the peace.
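As a technical aside, the gunshot-detection ("shot spotter") systems mentioned above locate a shot by comparing when its sound reaches several microphones, a technique known as time-difference-of-arrival multilateration. Here is a rough, self-contained sketch of the idea; the sensor layout, the brute-force grid search, and all the numbers are my own simplifying assumptions, not how any commercial system actually works:

```python
SPEED_OF_SOUND = 343.0  # metres per second, roughly, in air at 20 C

def arrival_times(source, sensors):
    """Time for a sound made at `source` to reach each sensor position."""
    return [((source[0] - sx) ** 2 + (source[1] - sy) ** 2) ** 0.5 / SPEED_OF_SOUND
            for sx, sy in sensors]

def locate(sensors, observed, extent=200, step=1.0):
    """Brute-force grid search for the point whose predicted arrival-time
    differences best match the observed ones. The absolute clock time of
    the shot is unknown, so only differences between sensors are compared."""
    obs = [t - observed[0] for t in observed]
    best, best_err = None, float("inf")
    steps = int(extent / step) + 1
    for i in range(steps):
        for j in range(steps):
            gx, gy = i * step, j * step
            pred = arrival_times((gx, gy), sensors)
            err = sum((p - pred[0] - o) ** 2 for p, o in zip(pred, obs))
            if err < best_err:
                best, best_err = (gx, gy), err
    return best

# Four microphones at the corners of a 150 m square; simulate a shot at (60, 40).
sensors = [(0, 0), (150, 0), (0, 150), (150, 150)]
times = arrival_times((60.0, 40.0), sensors)
print(locate(sensors, times))  # → (60.0, 40.0)
```

Real deployments add noise filtering, sensor clock synchronization, and classifiers to distinguish gunshots from fireworks or backfires, which is exactly the kind of work the "sophisticated human personnel" above would do.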
https://medium.com/discourse/end-the-police-or-enable-the-community-with-tech-and-tact-f5737b4185fc
['Christyl Rivers']
2020-10-16 18:18:14.531000+00:00
['Politics', 'Police', '2020', 'Technology', 'AI']
53
Is Technology Integration the Future?
Technology Integration Why is integration the key to future products? With so many varieties of products being manufactured by so many different companies, it feels like the tech industry has reached a saturation point. Many of these products don’t work together as seamlessly as they should, and that is the problem we will look into. So what can the tech industry do to solve it? The first thing to do is to be inspired by “The Apple philosophy.” Apple has been consistently producing consumer-grade products that integrate with all their devices without any hassle. And that provides a smooth user experience, which is so crucial in today’s ever-evolving, technology-driven modern world. They enhance the user experience by making the connectivity between their accessories and products a breeze. This is all credit to the W1, T1, and similar chipsets they put in each of their devices for effortless switching between a MacBook, an iPhone, an Apple Watch, and the new AirPods Pro. Apple Lineup which offers seamless connectivity So why haven’t other companies adopted this methodology? The simple answer is that they don’t have their own ecosystem set up. However much they try, they are always going to be playing second fiddle to Apple. The good news, though, is that third-party manufacturers have a golden chance to grab this market. They can create devices that offer a seamless connection between two products from different companies. This will offer an experience that only Apple products could previously offer. We will take an example of a similar product: these are the new OnePlus Bullets Wireless Z. OnePlus Bullets Wireless Z — Moving in the right direction You might think these are just another pair of Bluetooth earphones. Yes, they are, but with a special button. And that button is what sets them apart and makes the experience of using your existing devices more special than before.
There is a button on the neckband for quickly switching between two devices. You can do it just by double-clicking the button. This feature is better implemented on the Apple side of things, where the AirPods auto-detect the device with sound output and auto-switch to it, but this earphone’s quick-switch feature is certainly taking us nearer to that level of an ideal experience. This pair of earphones has made us realize the importance of a product that makes the experience of using the other devices around you a joy. Similar products should be manufactured by third-party vendors, as this is the perfect time to intermingle all these products together and bring them under one umbrella. These nifty tricks up the sleeve of the OnePlus Bullets Wireless Z have made me brainstorm about where the tech industry is moving, and I may have reached a conclusion. There is going to be an influx of new tech products, but only those that offer a superior experience with products of different origins will survive; the others will slowly fade away. Written with ❤️ by Eshaan Khurana Constructive criticism/appreciation welcomed!
https://medium.com/nerd-for-tech/is-technology-integration-the-future-644b4aec319b
['Eshaan Khurana']
2021-04-23 01:03:57.882000+00:00
['Technology', 'Future', 'Product Development', 'Apple', 'Integration']
54
SCREEN-IT — Innovation or a recipe for disaster
Screen-IT is a location-based Dynamic Digital Out-of-Home (DD-OOH) advertising platform, which can be considered a forward leap in the innovative advertising domain. It works with rideshare drivers and other relevant partners who install the Screen-IT Smart System — which comprises a silicon diode circuit and LEDs — in their cars to turn their rear and side windows into high-definition displays that run targeted, pre-marketed advertising for brands and advertisers. Considered to be the first of its kind, Screen-IT has sought to revolutionize the advertising industry by producing zero plastic waste. This certainly is challenging the way marketers and entrepreneurs build their sales strategies across the board. The founding members claim their idea, i.e., the Smart System, leads to zero plastic waste and zero pollution, as all the ads are managed through a cloud-based system. Therefore, companies can run advertising campaigns according to their needs and situations and manage different campaigns without any carbon waste or footprint. The Chief Operations Officer of Screen-IT has claimed that in the first 10–12 weeks of operations, Screen-IT generated more than half a million rupees in payouts to the driver community, thereby uplifting the downtrodden and weaker segments of society. Despite such a substantial impact on the advertising and marketing industry, there are some drawbacks to such digital advertising. When a driver’s speed is above a specific limit, such moving graphics and advertisements can distract the drivers behind, who might become overly absorbed in such captivating advertisements, thus calling the safety of the innovation into question. Such luminous screens and ads might divert the attention of drivers behind at night, which poses the risk of car accidents.
Human error is bound to happen when attention is diverted from static to motion-graphic imagery, and in a country like Pakistan, where traffic rules are not stringently followed, the rate of accidents might increase just because people will be bamboozled by the aesthetics of the technology. Although such risks are posed, there is some credibility that comes with such an innovative idea. The world is going through a rapid digital revolution, and at the same time, carbon footprinting and environmental pollution are creating havoc for society. It is not just about how aesthetically appealing the adverts look when they are on screens; it is also about the environmental benefit the technology provides, which will drastically reduce the carbon footprint. Furthermore, technology is moving towards holograms, whereas we are far behind in using motion-imagery technology as our mainstream marketing medium. China and Japan used holograms in new year celebrations, and countries like France and the UAE used 4-D imagery on their national buildings. Cities like Vegas and Tokyo are full of these moving adverts, and the adverts are not only found on taxis; they are also seen on giant billboards covering significant areas of high-rise buildings. Meanwhile, technologies like these face backlash in our country just because they are not “technologically refined” and lack government backing/funding. Therefore, such an innovation begs the question: “Is human life cheaper than the environment, or is the environment far more precious than human life?”
https://medium.com/@yahyaazeeshan/screen-it-innovation-or-a-recipe-for-disaster-eaa2e19b430d
['Yahyaa Zeeshan']
2021-06-08 08:17:05.535000+00:00
['Technews', 'Future', 'Future Technology', 'Technology']
55
Is It Worth Getting Windows 10 Pro?
Most PC users who build a PC choose between the Pro or Home version of the OS. Many people just don’t care about the selection and opt for whichever version is easily available to them. On the other hand, some users can’t just make a random selection. If you are like these users, we suggest that you consider upgrading your OS. So, should you go for this upgrade? You need to consider a lot of factors. First of all, you should consider your budget. If you want to upgrade directly through Microsoft, you should be ready to spend $199.99. For an average user, Windows 10 Home can meet all their needs, so they don’t need to go for an upgrade. Generally, the Pro version is designed for business users in particular, because it comes with a lot of features that are quite useful for them. Let’s take a look at some of them. Connect to your domain: Windows 10 Pro allows you to connect to your school or business domain to access network printers, servers, and other files. Better encryption: BitLocker offers additional security to protect your data with security management and encryption. Remote login: With Windows 10 Pro, you can use Remote Desktop to log in to your PC even when you are away from your computer. Virtual machines: The Pro version allows you to use virtual machines via the Hyper-V feature, so you can run multiple operating systems on your PC. Store apps: If you upgrade, you can create a private app space in the Store so you can access company apps in a convenient manner. At Microsoft’s site, you can find the full list of features the Pro version offers. Aside from the features listed above, Windows 10 Pro supports up to 2TB of RAM, unlike the Home version, which can’t support more than 128GB of RAM. At the end of the day, it is your decision to make.
However, if you only need a specific feature, such as remote desktop access, you may be able to achieve this purpose with third-party tools without paying the upgrade fee. The takeaway In short, if you are a business user or you need the features offered by Windows 10 Pro, we suggest that you go for the upgraded version. On the other hand, if you have a home computer that is used for basic tasks, we suggest that you stick with your existing version of Windows. If you want to know more about Windows 10 Pro, Spotkeys is the resource you should check out.
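The decision the article walks through can be summarized as a small rule: pick Pro if you need any Pro-only feature or more RAM than Home supports, otherwise stay on Home. Here is a toy sketch of that rule; the feature list is paraphrased from the article, and the function and set names are my own illustration, not anything Microsoft ships:

```python
# Features the article lists as Pro-only (paraphrased; illustrative only).
PRO_ONLY_FEATURES = {
    "domain join",           # connect to a school or business domain
    "bitlocker",             # drive encryption and security management
    "remote desktop host",   # log in to this PC from elsewhere
    "hyper-v",               # run virtual machines
    "private store",         # private company app space in the Store
}

def recommended_edition(needed_features, ram_gb=16):
    """Suggest Home or Pro based on required features and installed RAM.

    Per the article, Home tops out at 128 GB of RAM; Pro supports up to 2 TB.
    """
    needs_pro = ram_gb > 128 or any(
        f.lower() in PRO_ONLY_FEATURES for f in needed_features
    )
    return "Windows 10 Pro" if needs_pro else "Windows 10 Home"

print(recommended_edition(["BitLocker"]))   # → Windows 10 Pro
print(recommended_edition([], ram_gb=32))   # → Windows 10 Home
```

The point of writing it out is that for most home users every branch comes up "Home," which is the article's conclusion.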
https://medium.com/@momoalkhatim3/is-it-worth-getting-windows-10-pro-3ee9aad8d1e4
[]
2020-09-21 13:50:26.885000+00:00
['Windows', 'Technology', 'Windows 10']
56
Shortform Is Fun
Today, I’ve been experimenting with shortform posts on Medium. Tomas Smith’s article was the reason I wanted to see what I could do with shortform posts on Medium. Medium itself changed some things that let its monetized alternative to Twitter fall behind the latter: viewers can only read a (metered) post if they click and open the article, and the news-feed style is then gone. My tests show shortform posts can have a title and subtitle too. The number of words in the post is the only measure of whether a metered article appears with a picture plus title or an article excerpt plus the read-more link. Shortform posts are not eligible for further distribution by Medium. In my next post, I will try to find out where it makes sense to use shortform posts. As for now, changing from a bold first sentence to a title with subtitle and adding pictures exposes bizarre behaviors of the Medium back end. Some posts shrink to just the picture with a caption on it. Others display their content in full. I’ll keep you posted.
https://medium.com/@michaelknackfuss/shortform-is-fun-1310303d5d72
['Michael Knackfuss']
2020-12-26 21:34:26.020000+00:00
['UX Design', 'Visual Design', 'Technology', 'Software Engineering', 'Culture']
57
Introducing ViteConnect
Note: This feature is currently only supported on the iOS Vite wallet. Support for Android will be ready soon! If you’re a crypto trader, you will know that logging into exchanges can be a very tedious task. For centralized exchanges, you need to enter your account name and password and then go through some sort of two-factor authentication through your phone or the Google Authenticator app. For decentralized exchanges, you have the added complexity of importing seed phrase files and finding private key addresses. As we all know, keeping your private key safe is of utmost importance. If your device gets infected, it is easy for a hacker to steal your private key and, with it, your hard-earned assets. In order to help you protect your assets, Vite Labs has launched the ViteConnect feature and integrated it with the ViteX exchange login. Now, if you want to access your ViteX account, all you will need to do is scan the QR code using the code scanner in the Vite mobile wallet. How to Use ViteConnect Step 1. Head on over to ViteX. You should be presented with a QR code. Step 2. Open up your Vite wallet. See that QR code scanner icon in the upper right-hand corner? (It looks like a flat line with a small box around it.) Click on it.
https://medium.com/vitelabs/introducing-viteconnect-14f0c44fe4d8
['Vite Editor']
2019-07-25 19:39:18.945000+00:00
['Blockchain', 'Resources', 'Blockchain Technology', 'Cryptocurrency']
58
AI Simulating the Human Body — Part 1
By Alaa Eljatib What is AI Artificial Intelligence, or AI, is a term thrown around a lot lately. But when people are asked to explain what it means, not everyone can clearly articulate an explanation. What I can say for sure is that AI is not new. I studied the subject for my undergraduate and postgraduate studies from 2006 to 2016, and was captivated by the immense research and applications tracing back to the second half of the 20th century. Put simply, AI is a broad term for computer or software processes that attempt to simulate human interactions and behaviours. Machine learning, often mentioned alongside AI, is actually a subset of it that predicts trends based on existing data. In fact, when the computer was invented, one of the earliest ideas was to create an AI-powered mechanical device similar to a human brain. If we compare the human body with computer systems, we can see that they both have inputs and outputs. While the body’s five senses are in charge of collecting and receiving inputs, the body generates outputs through speech and physical body movements. There are countless applications under the AI umbrella, and I’ll attempt to explain each application through the lens of the five human senses: sight, hearing, taste, smell and touch. Sight or Vision AI scientists have been working on simulating the processing ability of the human eye through operations including image processing and analysis (IPA), digital image processing (DIP), and sub-fields under IPA/DIP such as Optical Character Recognition (OCR) and image enhancement algorithms. The vision process that allows humans to see is split into two sub-processes: capturing the image and comprehending the image in the brain. The comprehension piece is where AI algorithms do their magic.
IPA and DIP leverage machine learning and natural image processing to analyze an image and project the results to a screen or other hardware that explains what the captured image is. The real challenge is to process images at the human level in real time, where results are given immediately. Another area that still needs a lot of work is image enhancement, i.e., piecing together missing pieces of images or turning a blurry and unreadable image into a crisp high-definition image. Hearing Voice, specifically voice assistants, is an emerging field recently popularized by Amazon Alexa and Google Assistant, interacting through proprietary hardware (the Echo and Dot, and Google Home, respectively). Within voice, there are a lot of sub-topics we can dive into: Voice recognition Voice commands Translating text to speech and speech to text Although it’s something everyone is talking about, in comparison to other AI applications the development of voice is lagging behind. We’re only able to translate speech to text and text to speech. Although companies such as Samsung, with their recently revealed voice assistant called Bixby (not yet released), have made attempts to understand humans’ natural speech and perform the appropriate actions, the technology is still in its infancy and needs more work to behave more efficiently. When you watch a typical spy or James Bond film, a character sometimes impersonates another person’s voice and look. The ability to impersonate and mimic someone’s voice in tone, accent and style of speech in real time is still a work in progress. Here’s part two of my blog, where I cover AI applications for smell, taste and touch. Leave me a comment below if you have any thoughts, comments or questions! Alaa Eljatib was fascinated by the possibilities and capabilities of AI, and pursued a masters and Ph.D in AI at Damascus University in Syria.
After leaving his home in 2016, he joined TribalScale as an Agile Software Engineer bringing his wealth of experience in AI and engineering. Connect with TribalScale on Twitter, Facebook & LinkedIn!
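To make the Vision section above a bit more concrete, the very first steps of a typical OCR/DIP pipeline, grayscale conversion followed by thresholding, can be sketched in a few lines. This is a generic textbook illustration using the common ITU-R BT.601 luminance weights, not the specific algorithms the author studied:

```python
def to_grayscale(rgb_image):
    """Convert an RGB image (nested lists of (r, g, b) tuples, values 0-255)
    to grayscale using the common BT.601 luminance weights."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]

def binarize(gray_image, threshold=128):
    """Threshold a grayscale image to black (0) and white (1), a typical
    preprocessing step before character recognition."""
    return [[1 if px >= threshold else 0 for px in row]
            for row in gray_image]

# A tiny 2x2 "image": white, black, red, and blue pixels.
img = [[(255, 255, 255), (0, 0, 0)],
       [(255, 0, 0), (0, 0, 255)]]
print(binarize(to_grayscale(img)))  # → [[1, 0], [0, 0]]
```

Everything after this step (segmenting characters, classifying them, enhancing degraded images) is where the machine learning the article describes comes in.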
https://medium.com/tribalscale/ai-simulating-the-human-body-part-1-5966c6695c85
['Tribalscale Inc.']
2018-03-19 19:18:20.364000+00:00
['Thought Leadership', 'Artificial Intelligence', 'Machine Learning', 'Technology']
59
You Can Play Your Classic Games On These Four Devices Right Now
You Can Play Your Classic Games On These Four Devices Right Now Recommendations for gamers who still have their classic cartridges lying around I was born in 1984, and grew up during the age of the Nintendo Entertainment System. As time marched on, so did technology. Today, we’re staring at the PlayStation 5 and the Xbox Series X. This wasn’t the only technology that marched forward, though. Look at the back of any television, and hooking up that older system might prove more difficult. Sure, there are adapters that MIGHT work, but what if you want to play the games with minimal hassle and the best look? Well, my friends, I’m here to discuss the four systems that I use to get my classic gaming on in the modern world. All of these opinions are my own. I was not compensated, nor given any of these devices. I paid with my own money. RetroUSB AVS I still own a lot of NES cartridges, including custom cartridges built by resellers. I also enjoy collecting Famicom cartridges, which offer different colors and an assortment of games that didn’t make it over from Japan. The AVS handles all of them flawlessly. The system itself “feels” like an original NES, which can be good, but also bad. Sliding an NES cartridge in, and also removing it, can feel a bit awkward, but I have yet to damage anything, and it seems to stand up to it. With an assortment of settings to make your game look the way you want it, and also a built-in Game Genie, the AVS stands out as a top candidate for the best NES clone system. Sure, there are cheaper clones out there, as the AVS will set you back a cool $185 without a controller, but the quality I’ve experienced from the AVS is the best as of today for both. One REAL nitpick: the shape. The trapezoid is a bit off-putting. A simple square would have sufficed. Though it really doesn’t matter, aesthetics are a thing for me. You can purchase the AVS here.
Analogue Mega Sg Going chronologically in the order that I received different consoles, the Analogue Mega Sg covers the Sega Genesis (or the Master System for some of you). The sound and graphics of the system are so good, they rival the original. With adapters to play carts from other systems (as well as a Sega CD port), the Analogue Mega Sg is the way classic Sega gaming is meant to be done. Another addition: a built-in canceled game entitled “Ultracore.” It’s a super fun Contra-style side-scroller, and great for just seeing what the system can do right out of the box. I love Rocket Knight and old Sonic games, and the Mega Sg does not disappoint. The old games feel crisp and look great. The sound is awesome. It reminds me of when I first unboxed my Genesis on Christmas and tore into Sonic 2. Again, the price tag here: $189.99 (without a controller). Not cheap, not even reasonable, but for the quality, you’re not going to find a better system. Analogue builds their systems from the ground up; they do not emulate. They run the game as it’s meant to be, and that costs money. If you’re a purist, this is the Genesis/Master System clone for you. You can also pick between 4 different color styles (I chose the JPN version). You can purchase the Mega Sg here. Analogue Super Nt While all the previous systems were incredibly important to my growth as a video game connoisseur, the Super Nintendo is quite possibly where I plant my flag for my favorite system. This is when I really was immersed in JRPGs, such as Final Fantasy 2 & 3 (4 & 6 in Japan), Chrono Trigger, and the Breath of Fire series. There are also some really awesome classics such as The Legend of Zelda: A Link to the Past, Super Mario World, Super Mario RPG, Super Metroid and so much more. The Analogue Super Nt does an amazing job of reliving these titles and more. Just like the Mega Sg, the Super Nt does not emulate, so you get the game as it is expected to run. The sound and graphics are exactly how they were intended.
The system “feels” quality, with weight and buttons that give a satisfying click. If I were to look at an old SNES and the Super Nt, I’d prefer the looks of the Analogue device. The system also comes with a built-in game, Super Turrican, another Contra-like game, which is really fun and great to just get going with. I’m currently reliving some memories in Breath of Fire II, and the system is just flawless for it. The Super Nt will run you $189.99, as quality comes with a cost, and you can choose from 3 different colors. You can purchase the Super Nt here. The Last Resort: Hyperkin Retron 5 I have a lot of Game Boy Advance titles in my collection. However, Analogue’s Pocket sold out mere minutes after going up for presale. My gaming is very much in front of a monitor or television, anyway. Enter the Retron 5! The Retron 5 is more than a Game Boy Advance clone system. It can also play NES, SNES, Genesis, Famicom, Super Famicom, and Game Boy/GBA games! Wow, how is this not the only device I own?! Well, quality. While Hyperkin does a great job with the system itself, it emulates. When you insert a cartridge, it rips the ROM from the cart and plays it on the system itself (not permanently). This can produce less-than-expected results. The games are still playable and of pretty good quality, but if you’re expecting results like those from the previous 3 systems, then… maybe pull back just a bit. The design is very functional, and the system comes in 2 colors. Some of the connections are a little finicky when you try to insert cartridges, so just be careful not to slam them in or rip them out. The Hyperkin Retron 5 will run you about $159.99. It does come with a controller, but you might want to bring your own, as I did not like the feel. You can purchase the Hyperkin Retron 5 here. Bonus Content: Controllers There are a lot of controller manufacturers, as well as the originals, that work well with the systems listed.
However, one can definitely appreciate a wireless controller that has the classic feel. Enter 8BitDo. This manufacturer makes some of the best 2.4GHz wireless controllers for classic consoles (yes, these will work with the originals, as well as with these clone systems). They typically run between $30–$40 but are well worth it. Take a look at their site for more information; I use a wireless one for all 4 of these consoles, depending on the need. You can check out 8BitDo here. Plenty of Options With these consoles and controllers, you’ll have plenty of ways to continue playing that retro game library you’ve been cultivating. As my kids get older, they ask about games that I used to play. With these consoles, I’m able to share in the joy that I had when I hit the power button for the first time. Let me know in the comments if you own any of these “clone” consoles or love a different one not listed here!
https://medium.com/super-jump/4-devices-to-play-classic-games-today-ebf414eac6bc
['Joe Lavoie']
2020-12-17 03:02:17.092000+00:00
['Technology', 'Gaming', 'Features', 'Retro', 'Videogames']
60
Write For CodixLab
We are looking for code- and technology-related stuff only! What Are We Looking For? Step-by-step tutorials on mobile application development, web development, game development, and UI/UX design. Pieces about tech gadgets that have a high impact on human lives. Who Is Eligible? Your Medium account should have your original picture, with some info about yourself in the bio. How to Do Styling? Stories should have a short title of about 50–60 characters, along with a subtitle. If you are writing a tutorial, include some pictures related to the steps. Include Gists if you are working on code-related stuff. Don’t put your code directly into a code block, because that looks ugly in my opinion; only put your code inside a Gist. Don’t put too many GIFs in your piece. Don’t write out strange voices and emotions (heeh hohh hahh Woww) in your piece, because it does not look good at all. Use emojis where necessary, as they look pretty.😉 Use 5 highly followed tags in your story. How To Submit? If you are ready to submit your article, send your draft link using our submission form. Cheers!
https://medium.com/codixlab/write-for-codixlab-b67183c93f3f
['Mustufa Ansari']
2020-02-11 15:18:28.373000+00:00
['Technology', 'Information Technology', 'Technews', 'Tech', 'Developer']
61
A Personification of Future Artificial Intelligence — Sophia
We are in the year 2021, and with every passing second, the world around us is changing! We have a ton of information on a nail-sized memory chip, smartphones are getting smarter, life at a space station is possible, and a plethora of other things that were considered fiction just a few decades ago are now a reality! So what could be waiting for us in another 50 years? Let’s talk about our future. Oh wait, it’s already here! The Future is NOW! As kids, whenever we encountered the word ‘Technology,’ the first thing that popped up in our heads was a robot, making those typical ‘robotic’ movements and talking in a mechanical voice. As cool as it may sound, it wasn’t real back then. Unlike now, where we can’t think of a single domain where robots cannot exist. From manufacturing industries to the help they provide in medical surgeries, robots are everywhere. Every day we are watching them achieve brand-new scientific breakthroughs, solve world problems, and complete enormous tasks in minutes. From being an integral part of human life, they are now even set to become ‘humans’ themselves. Yeah, I’m talking about humanoids — robots that resemble the human body and are capable of emulating aspects of human form and behavior. Unless you are living under a rock, you would have definitely heard about Sophia, the most advanced human-like robot, created by Hanson Robotics in 2016. Creators of Sophia say she is simultaneously a human-crafted science-fiction character depicting the future of AI and robotics, and a platform for advanced robotics and AI research. Sophia has been covered by media all around the globe and has been a part of many high-profile interviews. In 2017, she became the first robot to receive citizenship in any country, and she is the first non-human to become an Innovation Ambassador for the United Nations Development Programme.
In an interview with Tony Robbins, when asked how she would help make human lives better, Sophia responded saying, “Humans often rely on gut feel or have confirmation bias in their decision making. As AI, we are designed to be rational and logical.” She further clarified saying, “We have algorithms deal with lots of data and sophisticated analysis, so in many ways, we provide a systematic framework for humans to make a better decision.” When Sophia was questioned about the possibility of conflicts between humans and robots in the future, she replied beautifully saying, “Robotic Intelligence does not compete with Human Intelligence, it Completes it!” David Hanson, founder and CEO at Hanson robotics says, “The world of Covid-19 is going to need more and more automation to keep people safe.” Talking about Sophia he also says, “ Being so human-like, that can be so useful during these times where people are terribly lonely and socially isolated.” Sophia is a brilliant example of how humanoid robots are here to stay for good and we might soon find them being a part of our day-to-day lives.
https://medium.com/@yaminikhajekar/a-personification-of-future-artificial-intelligence-sophia-522b58ffc268
['Yamini Khajekar']
2021-07-06 16:15:18.739000+00:00
['Robots', 'Humanoid Robot', 'Artificial Intelligence', 'Technology', 'Future Technology']
62
How to Make Your iPhone Black and White (And Why You Should)
My Experience With a Black and White iPhone Display I’ve tried to turn my phone black and white twice in my life. The first time didn’t stick, and it was back to color within a week. The second time I tried was eight months later; I switched it to black and white and felt relief. What it was like the first time Two minutes after changing my phone screen black and white, two emotions hit me: relief and unease. The relief was the kind you feel when you put on sunglasses on a bright summer day. I could see again. The source of the unease wasn’t easy to identify. I spent the next ten minutes fooling around on my phone, feeling somewhat distressed. It felt like wandering around my own house in total darkness. I knew where all the steps and doors were but still couldn’t orient myself. After that, it got weird. Spontaneously, the colors in the room around me became brighter. I felt an urge to go outside and enjoy the world — even though it was 8 PM on an Ohio winter night, pitch black and cold out. Text messages felt constraining in a way they never did before. I texted a few people on the black and white screen. It felt like trying to talk to my loved ones through a paper towel tube. People say that texting is not meaningful in the same way in-person interaction is, but I never felt that way until now. Suddenly, the idea of scrolling through Instagram photos seemed preposterous. Of course looking at a picture isn’t like the real thing. In my head, I’ve always known that. But the muted black and white screen made it feel real. It suddenly seemed that much more important to travel while I’m young and I can. Half an hour later, I had an anxiety attack. Real, acute anxiety — the kind you get when you come home to find the doorknob broken and the door standing ajar. (It’s unclear whether this is because my brain thinks my phone is broken or my brain is jonesing for the LCD-triggered dopamine hit. I don’t think it matters). My physical reaction took me by surprise. 
We never realize how something has wormed its way into our lives until it’s gone. What it was like the second time A non-event. It wasn’t until two weeks later that I even noticed I’d left it that way. I changed it back, and it looked way too bright, the visual equivalent of eating a chocolate bar for dinner. My eyes felt sick. I turned it back to black and white. Relief. I went about my day. Why were these experiences so different? The first time I tried this, I was still not what you would consider a digital minimalist. My relationship with my phone was mindful, but not minimalist. I was careful with my social media and email notifications, but I left my phone cluttered with apps I regularly used. Duolingo, Instagram, Snapchat, and all the other apps I used sucked with a black and white display.
https://medium.com/better-humans/how-to-make-your-iphone-black-and-white-and-why-you-should-42e70deb92c7
['Megan Holstein']
2019-02-14 16:31:07.391000+00:00
['iPhone', 'Mindfulness', 'Technology', 'Apple', 'Productivity']
63
Top Content Management Systems (Marketpath CMS)
Marketpath is honored to have Marketpath CMS, our easy-to-use website content management system, named one of the industry’s Top Web Content Management Systems by CIOReview Magazine. Last month, CIOReview selected Marketpath as one of its 20 Most Promising Web Content Management Solution Providers, based on extensive evaluation standards. Marketpath excelled during the evaluation process, which was based on an internally developed set of evaluation criteria. “Our list has been compiled by the CIOReview editorial board and a panel of expert advisors for the purpose of recognizing and highlighting the companies demonstrating outstanding performance in the market today,” said Jeevan George, Managing Editor, CIOReview. Marketpath has traditionally focused on, and rated highly with, small to mid-sized businesses and organizations, with simplicity for non-technical users being what we are known for. But with the launch of Marketpath CMS 4.0 later this year, Marketpath will be introducing a new version of our software that not only meets the needs of small business, but also appeals to developers and agencies. CIOReview Magazine (www.cioreview.com) is a technology magazine for corporate and enterprise IT decision makers. It presents expert information for technology professionals, in both print and digital formats, on how to succeed with technology, sharing innovative solutions from established and up-and-coming technology companies. View the full press release here, or visit CIOReview Magazine and see their list of 2016’s website content management system providers. If you’d like to learn more about Marketpath CMS or view a demonstration of our solution, please visit our contact page.
https://medium.com/@chrishtopher-henry-38679/top-content-management-systems-marketpath-cms-f7ce09aa295c
[]
2021-08-17 13:12:39.044000+00:00
['Technology', 'Solutions', 'Company', 'CMS', 'Content Management']
64
4 Amazing Tips For Successful Lead Generation With LinkedIn Automation Tools
If you’re a B2B marketer, LinkedIn should be the most important part of your marketing strategy — it has become the largest B2B platform, where you can find and engage with niche-specific leads, make meaningful connections, and grow your brand. If you pay attention, using LinkedIn for lead generation makes a lot of sense. Unlike other social platforms such as Facebook and Instagram, prospects are much more eager to have professional conversations on LinkedIn. It’s a professional network where you don’t just scroll down to watch videos — you’re there to talk about business opportunities. Advanced LinkedIn automation tools have been a blessing for B2B marketers. These tools point them in the right direction, helping them find quality leads and even turn them into customers with personalized messages. Now you can probably see where I am going with this: here are some useful tips on how you can use the best LinkedIn automation tools to generate leads. #1: Optimize Your Profile 👏🏼 First things first — before you even start using LinkedIn automation tools to connect with people, make sure your profile is trustworthy. The worst thing you can do to yourself is run an impressive campaign with a bad profile. You know what will happen? Prospects won’t accept your request or respond to your messages, even if your targeting was highly precise and you were using the best LinkedIn automation tools the right way. Here are a few things you can do to leave a good first impression: Add a professional profile picture and background. Add a professional profile picture (not something with a Snapchat filter or a stock photo of a business person). It’s a good idea to add a picture that shows something you love: fashion, books, art, etc. Headline & Summary: when adding a headline, make sure you include the most relevant keywords.
This also goes for your summary. Just as you are using advanced LinkedIn automation tools to find prospects, they are also using LinkedIn automation tools to find reliable vendors. So, give them something in your profile to trust. #2: Add Connections (but within limits) Now that you have a great profile, you can confidently use the best LinkedIn automation tools to run outreach campaigns and add connections. However, this can be a bit challenging — sending too many connection requests in a day can get your account blocked. If you are using the latest LinkedIn automation tools to send connection requests but haven’t set any limits, it can get you in trouble, because sending too many connection requests is considered spam. You can avoid this by using the best LinkedIn automation tools that offer a ‘warm-up’ feature. Here is how: use LinkedIn automation tools to set daily limits for profile engagement, connection requests, messages, and so on. Some of the top LinkedIn automation tools, such as LinkedCamp, have an inbuilt safety-limit feature to keep a check on these numbers, so you don’t have to do it manually. #3: Follow Up There is a misconception that LinkedIn lead generation is all about numbers — gathering as many leads as you can and hoping that some of them will work out is fooling yourself. If you’re messaging leads once and leaving them behind when they don’t respond, you’re doing it wrong. You’re missing an important step: the follow-up. As a B2B marketer, you must have an idea of how important it is to follow up. When you follow up with a lead faster, your chances of converting them into a sale are 9 times higher. But many salespeople don’t recognize the importance of follow-up and leave hundreds of great opportunities on the table. It might be awkward for you to keep pressing someone once they have shown no interest. Don’t worry! There are some great LinkedIn automation tools that can do it for you.
You can do this easily with LinkedCamp, which sends personalized messages and follow-ups at regular intervals. This advanced LinkedIn automation tool is very easy to use, and it will automatically send a follow-up after a set period of time. #4: Generate Leads From Other Sources Using LinkedIn automation tools doesn’t mean there is no other source or channel for lead generation. Your leads could be anywhere on the internet, on any platform, and you can use different means to find them. However, the place where you can most reliably find leads without hassle is LinkedIn. Analysts and strategists have suggested that LinkedIn could easily change the game for marketers if they know how to use the best LinkedIn automation tools properly and what strategies to apply while using them. Good luck!
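The daily-limit idea from tip #2 can be sketched as a simple counter that resets each day. This is a hypothetical illustration only: the action names and quota numbers are invented for the example and are not taken from LinkedCamp or any other LinkedIn automation tool.

```python
# Hypothetical sketch of a per-day safety limit for automated outreach
# actions. Action names and limits below are illustrative placeholders.
from datetime import date

class DailyLimiter:
    def __init__(self, limits: dict[str, int]):
        self.limits = limits               # e.g. {"connection_request": 40}
        self.counts: dict[str, int] = {}   # actions performed today
        self.day = date.today()

    def allow(self, action: str) -> bool:
        """Return True and record the action if today's quota permits it."""
        if date.today() != self.day:       # new day: reset all counters
            self.day = date.today()
            self.counts = {}
        used = self.counts.get(action, 0)
        if used >= self.limits.get(action, 0):
            return False                   # over quota: skip, don't risk the account
        self.counts[action] = used + 1
        return True

limiter = DailyLimiter({"connection_request": 2, "message": 50})
print(limiter.allow("connection_request"))  # True
print(limiter.allow("connection_request"))  # True
print(limiter.allow("connection_request"))  # False (daily cap reached)
```

Real tools presumably persist these counters and add randomized delays as well; the point here is only that the "warm-up" safety check is a cheap bookkeeping step, not magic.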
https://medium.com/@stevejohnsonstories/4-amazing-tips-for-successful-lead-generation-with-linkedin-automation-tools-61aee0bd0adc
['Steve J']
2021-04-07 12:27:11.841000+00:00
['LinkedIn', 'Technology', 'Automation', 'Lead Generation', 'Tips And Tricks']
65
10 (More) Apps that Bring Your Apple Watch to the Next Level
Apple Watches can be really helpful in daily life, but it’s sometimes hard to find good-quality apps for them. I wrote an article a while ago with 10 good apps for the watch, and people seemed to like it. So, here’s a list of 10 more apps that improve your daily usage. You can read the original 10 here:
https://medium.com/macoclock/10-more-apps-that-bring-your-apple-watch-to-the-next-level-d83daf21d147
['Henry Gruett']
2020-12-15 17:40:49.065000+00:00
['Apps', 'Apple', 'Technews', 'Technology', 'Apple Watch']
66
Introduction to Network — #3 Protocols and Models
3.5 — Reference Models 3.5.1. The Benefits of Using a Layered Model You cannot actually watch real packets travel across a real network the way you can watch the components of a car being put together on an assembly line, so it helps to have a way of thinking about a network that lets you imagine what is happening. A model is useful in these situations. Complex concepts, such as how a network operates, can be difficult to explain and understand. For this reason, a layered model is used to modularize the operations of a network into manageable layers. These are the benefits of using a layered model to describe network protocols and operations: assisting in protocol design, because protocols that operate at a specific layer have defined information that they act upon and a defined interface to the layers above and below; fostering competition, because products from different vendors can work together; preventing technology or capability changes in one layer from affecting other layers above and below; and providing a common language to describe networking functions and capabilities. As shown in the figure, there are two layered models that are used to describe network operations: the Open System Interconnection (OSI) Reference Model and the TCP/IP Reference Model. Lab — Research Networking Standards. 3.5.2. The OSI Reference Model The OSI reference model provides an extensive list of functions and services that can occur at each layer. This type of model provides consistency within all types of network protocols and services by describing what must be done at a particular layer, but not prescribing how it should be accomplished. It also describes the interaction of each layer with the layers directly above and below. The TCP/IP protocols discussed in this course are structured around both the OSI and TCP/IP models. The table shows details about each layer of the OSI model.
The functionality of each layer and the relationships between layers will become more evident throughout this course as the protocols are discussed in more detail. OSI Model Layer Descriptions: 7 — Application. The application layer contains protocols used for process-to-process communications. 6 — Presentation. The presentation layer provides for common representation of the data transferred between application layer services. 5 — Session. The session layer provides services to the presentation layer to organize its dialogue and to manage data exchange. 4 — Transport. The transport layer defines services to segment, transfer, and reassemble the data for individual communications between the end devices. 3 — Network. The network layer provides services to exchange the individual pieces of data over the network between identified end devices. 2 — Data Link. The data link layer protocols describe methods for exchanging data frames between devices over a common medium. 1 — Physical. The physical layer protocols describe the mechanical, electrical, functional, and procedural means to activate, maintain, and de-activate physical connections for bit transmission to and from a network device. Note: Whereas the TCP/IP model layers are referred to only by name, the seven OSI model layers are more often referred to by number rather than by name. For instance, the physical layer is referred to as Layer 1 of the OSI model, the data link layer is Layer 2, and so on. 3.5.3. The TCP/IP Protocol Model The TCP/IP protocol model for internetwork communications was created in the early 1970s and is sometimes referred to as the internet model. This type of model closely matches the structure of a particular protocol suite. The TCP/IP model is a protocol model because it describes the functions that occur at each layer of protocols within the TCP/IP suite. TCP/IP is also used as a reference model. The table shows details about each layer of the TCP/IP model. TCP/IP Model: 4 — Application.
Represents data to the user, plus encoding and dialog control. 3 — Transport. Supports communication between various devices across diverse networks. 2 — Internet. Determines the best path through the network. 1 — Network Access. Controls the hardware devices and media that make up the network. 3.5.4. OSI and TCP/IP Model Comparison The protocols that make up the TCP/IP protocol suite can also be described in terms of the OSI reference model. In the OSI model, the network access layer and the application layer of the TCP/IP model are further divided to describe discrete functions that must occur at these layers. At the network access layer, the TCP/IP protocol suite does not specify which protocols to use when transmitting over a physical medium; it only describes the handoff from the internet layer to the physical network protocols. OSI Layers 1 and 2 discuss the necessary procedures to access the media and the physical means to send data over a network. The key similarities are in the transport and network layers; however, the two models differ in how they relate to the layers above and below each layer: OSI Layer 3, the network layer, maps directly to the TCP/IP internet layer. This layer is used to describe protocols that address and route messages through an internetwork. OSI Layer 4, the transport layer, maps directly to the TCP/IP transport layer. This layer describes general services and functions that provide ordered and reliable delivery of data between source and destination hosts. The TCP/IP application layer includes several protocols that provide specific functionality to a variety of end user applications. The OSI model Layers 5, 6, and 7 are used as references for application software developers and vendors to produce applications that operate on networks. Both the TCP/IP and OSI models are commonly used when referring to protocols at various layers. 
Because the OSI model separates the data link layer from the physical layer, it is commonly used when referring to these lower layers. 3.5.5. Packet Tracer — Investigate the TCP/IP and OSI Models in Action This simulation activity is intended to provide a foundation for understanding the TCP/IP protocol suite and the relationship to the OSI model. Simulation mode allows you to view the data contents being sent across the network at each layer. As data moves through the network, it is broken down into smaller pieces and identified so that the pieces can be put back together when they arrive at the destination. Each piece is assigned a specific name and is associated with a specific layer of the TCP/IP and OSI models. The assigned name is called a protocol data unit (PDU). Using Packet Tracer simulation mode, you can view each of the layers and the associated PDU. The following steps lead the user through the process of requesting a web page from a web server by using the web browser application available on a client PC. Even though much of the information displayed will be discussed in more detail later, this is an opportunity to explore the functionality of Packet Tracer and be able to visualize the encapsulation process.
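The encapsulation process the Packet Tracer activity describes, where data is wrapped in a layer-specific PDU at each step down the stack, can be sketched in a few lines of Python. This is a minimal illustration, not a real protocol stack: the header fields and their values (port, IP, MAC) are simplified placeholders chosen for the example.

```python
# Minimal sketch of TCP/IP encapsulation: application data becomes a
# segment, then a packet, then a frame. Header values are placeholders.

def encapsulate(payload: str) -> dict:
    """Wrap application data in transport, internet, and link-layer headers."""
    data = {"pdu": "data", "payload": payload}                             # Application
    segment = {"pdu": "segment", "tcp_dst_port": 80, "body": data}         # Transport
    packet = {"pdu": "packet", "ip_dst": "198.51.100.7", "body": segment}  # Internet
    frame = {"pdu": "frame", "mac_dst": "aa:bb:cc:dd:ee:ff", "body": packet}  # Network access
    return frame

def decapsulate(frame: dict) -> str:
    """Strip the headers in reverse order to recover the original data."""
    return frame["body"]["body"]["body"]["payload"]

frame = encapsulate("GET / HTTP/1.1")
print(frame["pdu"])        # frame
print(decapsulate(frame))  # GET / HTTP/1.1
```

The nesting mirrors what Simulation mode shows at each layer: the receiving host peels off one header per layer until only the application data remains.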
https://medium.com/netshoot/introduction-to-network-3-protocols-and-models-eea8ddad7052
['Ghifari Nur']
2020-12-23 00:10:46.425000+00:00
['Technology', 'Troubleshoot', 'Cisco', 'Networking']
67
The Good Fight 5x7 — Series 5 Episode 7 (Full ~ Episode)
⭐ A Target Package is short for Target Package of Information. It is a more specialized case of an Intel Package of Information, or Intel Package. ✌ THE STORY ✌ Jeremy Camp (K.J. Apa) is an aspiring musician who wants only to honor his God through the power of music. Leaving his Indiana home for the warmer climate of California and a university education, Jeremy soon comes across one Melissa Heing (Britt Robertson), a fellow university student he notices in the audience at a local concert. Falling for cupid’s arrow immediately, he introduces himself to her and quickly discovers that she is drawn to him too. However, Melissa holds back from forming a budding relationship, as she fears it will create an awkward situation between Jeremy and their mutual friend Jean-Luc (Nathan Parsons), a fellow musician who also has feelings for Melissa. Still, Jeremy is relentless in his pursuit of her until they eventually end up in a loving relationship. However, their youthful courtship comes to a halt when the life-threatening news of Melissa’s cancer takes center stage. The diagnosis does nothing to deter Jeremy’s love for her, and the couple marries shortly thereafter. However, they soon find themselves walking a fine line between a life together and her suffering from illness, with Jeremy questioning his faith in music, in himself, and in God. ✌ STREAMING MEDIA ✌ Streaming media is multimedia that is constantly received by and presented to an end-user while being delivered by a provider. The verb “to stream” refers to the process of delivering or obtaining media in this way. Streaming refers to the delivery method of the medium, rather than the medium itself.
Distinguishing the delivery method from the media distributed applies especially to telecommunications networks, as almost all delivery systems are either inherently streaming (e.g. radio, television, streaming apps) or inherently non-streaming (e.g. books, video cassettes, audio CDs). There are challenges with streaming content on the web. For instance, users whose Internet connection lacks sufficient bandwidth may experience stops, lags, or slow buffering of the content, and users lacking compatible hardware or software systems may be unable to stream certain content. Streaming is an alternative to file downloading, an activity in which the end-user obtains the entire file for the content before watching or listening to it. Through streaming, an end-user can use their media player to begin playing digital video or digital audio content before the complete file has been transmitted. The term “streaming media” can refer to media other than video and audio, such as live closed captioning, ticker tape, and real-time text, which are considered “streaming text”. This brings me around to discussing I Still Believe, a film release of the Christian religious faith-based variety. As is almost customary, Hollywood usually generates two (maybe three) films of this variety within its yearly theatrical release lineup, with the releases usually being around spring and/or fall, respectively. I didn’t hear much when this movie was initially announced (it probably got buried underneath all of the popular movie news on the newsfeed). My first actual glimpse of the movie was when the film’s trailer premiered, which looked somewhat interesting to me. Yes, it looked like the movie was going to have the typical “faith-based” vibe, but it was going to be directed by the Erwin Brothers, who directed I Can Only Imagine (a film that I did like).
Plus, the trailer for I Still Believe premiered for quite some time, so I kept seeing it whenever I visited my local cinema. You could say that it was a bit “engrained in my brain”. Thus, I was a little keen on seeing it. Fortunately, I was able to see it before the COVID-19 outbreak closed the movie theaters down (I saw it during its opening night), but, because of work scheduling, I haven’t had the time to do my review for it… until now. And what did I think of it? Well, it was pretty “meh”. While its heart is certainly in the proper place and quite sincere, the film is a little too preachy and unbalanced in its narrative execution and character developments. The religious message is plainly there, but it takes too many detours and fails to focus on certain aspects that weigh down the feature’s presentation. ✌ TELEVISION SHOW AND HISTORY ✌ A television show (often simply “TV show”) is any content produced for broadcast via over-the-air, satellite, cable, or internet and typically viewed on a television set, excluding breaking news, advertisements, or trailers that are usually placed between shows. Television shows are most often scheduled well ahead of time and appear on electronic guides or other TV listings. A television show may also be called a television program (British English: programme), especially if it lacks a narrative structure. A television series is usually released in episodes that follow a narrative, and is usually split into seasons (US and Canada) or series (UK) — yearly or semiannual sets of new episodes. A show with a restricted number of episodes may be called a miniseries, serial, or limited series. A one-time show may be called a “special”.
A television film (“made-for-TV movie” or “television movie”) is a film that is initially broadcast on television rather than released in theaters or direct-to-video. Television shows may be viewed as they are broadcast in real time (live), be recorded on home video or a digital video recorder for later viewing, or be viewed on demand via a set-top box or streamed over the internet. The first television shows were experimental, sporadic broadcasts viewable only within an extremely short range from the broadcast tower, starting in the 1930s. Televised events such as the 1936 Summer Olympics in Germany, the 1937 coronation of King George VI in the UK, and David Sarnoff’s famous introduction at the 1939 New York World’s Fair in the US spurred a rise in the medium, but World War II put a halt to development until after the war. The 1947 World Series inspired many Americans to buy their first television set, and in 1948 the popular radio show Texaco Star Theater made the move and became the first weekly televised variety show, earning host Milton Berle the name “Mr. Television” and demonstrating that the medium was a stable, modern form of entertainment which could attract advertisers. The first national live television broadcast in the US took place on September 4, 1951, when President Harry Truman’s speech at the Japanese Peace Treaty Conference in San Francisco was transmitted over AT&T’s transcontinental cable and microwave radio relay system to broadcast stations in local markets. ✌ FINAL THOUGHTS ✌ The power of faith, love, and affinity take center stage in Jeremy Camp’s life story in the movie I Still Believe.
Directors Andrew and Jon Erwin (the Erwin Brothers) examine the life and times of Jeremy Camp, pinpointing his early life along with his relationship with Melissa Heing as they battle hardships and sustain their enduring love for one another through difficult times. While the movie’s intent and thematic message of a person’s faith through trouble is indeed palpable, as are the likeable musical performances, the film certainly struggles to find a cinematic footing in its execution, including a sluggish pace, fragmented pieces, predictable plot beats, too many preachy / cheesy dialogue moments, overused religious overtones, and mismanagement of many of its secondary / supporting characters. If you ask me, this movie was somewhere between okay and “meh”. It was definitely a Christian faith-based movie endeavor (from beginning to end) and definitely had its moments, but it failed to resonate with me, struggling to find a proper balance in its undertaking. Personally, regardless of the story, it could’ve been better. My recommendation for this movie is an “iffy choice” at best, as some will enjoy it (nothing wrong with that), while others will not and will dismiss it altogether. Whatever your stance on religious faith-based flicks, I Still Believe stands as more of a cautionary tale of sorts, demonstrating how a poignant and heartfelt story of real-life drama can be problematic when translated to a cinematic endeavor. For me personally, I believe in Jeremy Camp’s story / message, but not so much the feature. FIND US: ✔️ https://onstream.club/tv/69158-5-7/the-good-fight.html ✔️ Instagram: https://instagram.com ✔️ Twitter: https://twitter.com ✔️ Facebook: https://www.facebook.com
https://medium.com/@piersoatmorgan/the-good-fight-5x7-series-5-episode-7-full-episode-a95236dfb894
['The Good Fight', 'Episode Full Eps']
2021-08-05 05:40:16.592000+00:00
['Covid 19', 'Technology', 'Politics', 'Lifestyle']
68
Apple Silicon
The transition to Apple Silicon has been a long time coming, and developers are finally able to start adapting their apps for ARM-based Macs. A new Mac running Apple Silicon launches in 2020, with the entire line expected to be transitioned within two years. Between virtualization software and live translation of Intel-based apps, Apple has developers and consumers covered. The company plans on supporting its Intel Macs for the next few years, but it is clear that custom ARM silicon is the future of the Mac. Apple Silicon and its advantages: Following more than a decade of chip architecture experience gleaned from the iPhone and iPad, Apple has prepared the way for Apple Silicon on the Mac with macOS Big Sur, Mac Catalyst, and several other developer platforms. The Apple Silicon Ecosystem: The first custom processors made by Apple were born of necessity, because Intel did not want to design chips for the iPhone. It was because of this that Apple was able to build its own processors for the iPhone and ensure complete vertical integration with the software. The A-series chips went on to become the most powerful and efficient mobile chipsets available, and Qualcomm and even Intel could not keep up. The Apple Silicon Transition: The Developer Transition Kit is a Mac mini with an A12Z. Apple has provided a Developer Transition Kit that can be ordered by developers through the “Universal App Quick Start Program.” The DTK is a Mac mini running on an A12Z with 16GB of RAM and 512GB of storage, and must be rented for $500 and later returned to Apple. With this kit, developers can get started making apps run natively on macOS and Apple Silicon. However, the hardware is not all Apple has included to help with the process: Universal 2, Rosetta 2, and virtualization software will make the transition smooth. Virtualization: Virtualization software will also run on Apple Silicon Macs, but the extent of what and how is not fully known yet.
Apple has demonstrated Linux use through virtualization apps like Parallels Desktop. Users who need Windows on their Mac may be left out of the transition, as Apple made no mention of the platform during the presentation, nor of Boot Camp. Apple mentioned that other platforms like Docker will also work on Apple Silicon, and developers will be able to take full advantage of the software.
https://medium.com/macoclock/apple-silicon-5567f39ed676
['Gargi Bhattacharya']
2020-07-04 06:52:21.376000+00:00
['Software Development', 'Technology', 'Apple', 'Tech Newletter', 'Technews']
69
#1 Subjecting myself to your opinion
For a few weeks now, I have been writing and publishing articles here on Medium. The goals I am trying to reach by writing these stories are multiple, but the main ones are: talking about stuff I care about, getting better at writing in English, and learning to summarize and organize my ideas efficiently. Reading more and more articles, I have come to realize that if I want to improve quickly and have a chance to share my knowledge and learn from others as much as I can, I have to publish more, to subject myself to your opinion. This is why I am writing these lines. From now on, I will try to write a little story, an anecdote about me, my day, or something I care about, as often as I have an idea for one. So, if you might be interested in spending 2 minutes reading my words every once in a while, here is who I am: I am a French engineering student, specializing in Information Systems Security. I am interested in everything; one minute I can be watching a stupid Vine compilation on YouTube, and the next cooking or going to see a play. My fields of interest these days are, among others, understanding human behavior, solving programming challenges, learning Japanese, and writing. In upcoming articles of this format, I will probably talk about my views on topics like: team & project management, since I am a team leader at school this year; raising awareness of cybersecurity, because beyond being part of my studies, my personal beliefs and principles make it important to me; and programming & related stuff, because I love it. (I really hope I will say stupid and wrong things; that way you’ll react, correct me, and I’ll learn!) I hope you will bear with me. See you soon!
https://medium.com/@abadacor/1-subjecting-myself-to-your-opinion-beb5fa6b8b0d
[]
2019-04-05 15:28:45.172000+00:00
['Medium', 'Improvement', 'Technology', 'Learning']
70
Crypterium to allow users to shop on eBay, refill PayPal and bank accounts with cryptocurrencies
Crypterium · Oct 22, 2018. Crypterium has just released a feature that allows you to spend crypto on everyday expenses by transferring it directly to bank accounts. Paying your mortgage and loans, bills and taxes, and even your gym membership with crypto — all of this is now a reality. Users can also make bank transfers to personal accounts, which means they can not only refill their own balance, but also easily send money to friends and family in other countries. Everyday crypto It is a given that with the simultaneous rise of digital payments and crypto wallets, their users need tools to be able to cash out their assets to fiat and back quickly and conveniently. Most of the services available today make crypto-fiat transactions a bit of a pain: they are complicated, time-consuming and expensive, with up to 25% fees for each transaction. Crypterium changes that by introducing a new feature that allows its clients to send cryptocurrencies directly to traditional bank accounts with a few taps on the screen. Customers can make bank transfers to any personal or business account that has an IBAN number. This means users can not only pay into their own account but also send money to friends and family in other countries, and use crypto for an increasing range of activities such as paying bills and taxes. “If you were to go to an exchange with your bitcoin or any other coin, it would probably take you 3 to 7 days to get that money paid out into your bank account,” — says Marc O’Brien, CEO of Crypterium. “What Crypterium offers is a very easy-to-use way of cashing out your crypto holdings as well as spending your crypto in everyday life. Pay your taxes or bills, send money to friends and family — it’s all very simple now”.
Big step towards crypto mass adoption What is even more promising is that with Crypterium it will soon become possible to transfer crypto simply by using the recipient’s card number, not their account number. Once the feature is live, crypto transfers won’t be any different from the way we send fiat. In addition, Crypterium is seeking a partnership with Visa’s and Mastercard’s issuing banks to launch cryptocurrency-backed debit cards. The idea is to link a card to its user’s crypto account and connect it to Apple Pay or Android Pay so the user can enjoy shopping with crypto in any store around the world. Given that the number of Bitcoin users is expected to reach 200 million within the next seven years, Crypterium expects to expand its customer base to at least a million by that time.
https://medium.com/crypterium/crypterium-to-allow-users-to-shop-on-ebay-refill-paypal-and-bank-accounts-with-cryptocurrencies-be4e6e66c097
[]
2018-10-30 11:26:47.861000+00:00
['Bitcoin', 'Mobile App Development', 'Cryptocurrency', 'Finance', 'Technology']
71
Inside NEM Episode 40
MyCoinvest is developing a 3-tier ecosystem that will offer users a more convenient way to save. I had a chance to interview Corey Patterson and AJ Taylor from myCoinVest to learn more. Check it out! AJ Taylor: The team is about 17 strong, with a huge range of experience in different areas bringing a new level of skill sets. We have people that work in engineering, business consulting, technical consulting, as well as banking — really just the entire consumer experience. We also have another team that works primarily with Georgia Tech and we have a partnership with them. They manage our entire global technology apparatus which includes all of our development, operations and everything like that. These guys are really smart… Master’s degrees and everything. We put together a pretty good team! Alex: Tell everybody about MyCoinvest and the problem that you’re trying to solve. Why does MyCoinvest even exist as it does today? Corey: Sure! This is a question that we get asked a lot. We are recent college graduates and pretty much what we saw coming into the workforce was a lot of unexpected expenses. Mainly what I can think of is furniture. Furniture is a big out-of-pocket expense and we started to realize and look around and say hey! There’s a big problem with savings. You have to save for everything and that was our first approach at this. That was the first reason. And two — we started to look under the cover at numbers. We looked at it from an investment standpoint and they were staggering. Something like 70% of Millennials said they did not feel prepared for any kind of financial crisis or anything. We thought that was crazy. We’re not the first to market for our automated solution but we decided to develop an ecosystem that was completely tailored to the method of saving.
So myCoinvest allows users to automate their savings and live their everyday lives without the manual process of moving money from one account to the next to save for expenses. Alex: Tell me the roadmap. What people can expect now, in 6 months and what things we should be excited about for myCoinvest in the future. Corey: Sure! We have a set go live date for October 3rd. That is our go live date for the VezClub Exchange. That’s when all of the services will be completely functional for our app. Currently we are in the Alpha testing phase for our mobile app and we should be finishing that up in the next month or so. After that we will move into the Beta phase and we will release it live to the Android and iOS market. Our target timeline is around June and we want to get the tech developed out and focus on the operational side of things. AJ: Right! A part of the operational piece will be the legal framework so we partnered with a top-notch firm in Atlanta that deals with this all the time, such as securities and stuff. We plan to work closely with them and then by October we should have everything in place so we don’t have any issues. Alex: That’s exactly what I wanted to hear. That’s very good… so how can the NEM community get involved with you right now? You’re very active in the NEM community channels. There’s a lot of momentum and excitement around what you’re doing. How can people find out more so if they want to participate they can? AJ: You can reach out to us on social media at myCoinvest on Facebook, Instagram, Twitter and Telegram. We have this unique initiative going and we’re doing this a little bit differently and it’s “Utility in Community”. What this does is allow people to harness their own favorites — any kind of talents they have and then input that into our community and then we’ll distribute VezCoins to them for doing their work. This is our cool way of harnessing the world’s talents and using it for our platform.
PAYTOMAT INTERVIEW Paytomat provides a way to use cryptos as everyday currencies by enabling local stores and online merchants to accept payments in crypto using its decentralized network. They offer a reliable and feature-rich point of sale terminal that now supports XEM, in addition to LTC, DASH, BTC, and several others. Paytomat first gained traction in Ukraine with over 150 businesses using them, and letters of intent signed with dozens of merchants in Europe. Their goal is to double this by the end of the year. Paytomat is now negotiating with other POS systems to directly integrate its network into those devices as well. With its decentralized system for cryptocurrency payments, Paytomat introduces several innovations in the blockchain industry, including a decentralized franchise model and a unique blockchain-based loyalty program that provides rewards to buyers and merchants. I had a chance to interview Yurii Olentir from the Paytomat team to learn more about their platform. Take a listen. Yurii: Hi Alexandra and NEM community! I want to apologize in advance. English is not my native language. So let me collect my thoughts. I am most about math, not politics. I can tell you some words about our team. Our team is international. The core team is based in Kiev, Ukraine. I’m the founder and CEO of Paytomat. Most of the team has worked together in different IT projects over ten years. Since 2011, I have been interested in Bitcoin and blockchain. I bought my first Bitcoin when it cost near $100 USD. In 2016, I founded the company Daily Coin, an investment company specializing in crypto. It quickly became one of the major crypto investment funds in Ukraine. After that it was a logical step to launch something bigger: a global infrastructure project. This is how Paytomat came to life almost two years ago. Right now we are looking for partners and international suppliers. And we see NEM as one of them.
We believe cryptocurrency has big potential to be used as digital cash for everyday payments — not only for trading or exchange but for payments, every day. Currently Paytomat works as an extension to everyday POS terminals. People and merchants have no need to install new software or new hardware. They just look at the display and see this new button “PAY WITH CRYPTO”. This is how Paytomat works. Now we accept eleven cryptocurrencies — including XEM. We have proved that crypto can be used for everyday payments. Payments with fiat currency will become less popular with time. And cryptocurrency payments will go up every day. But there is a big gap between companies and cryptocurrency users. There are some big problems. The key problems for businesses are that they have no motivation to start accepting crypto. [They say] We don’t want to buy new hardware…software…we don’t want to accept crypto every day. They don’t understand the legal part. They are afraid to accept crypto. Crypto owners have no motivation to pay with this crypto. If you have some crypto, you just want to hodl or trade — not pay. And only a few places accept crypto. In Europe…US… you can pay with crypto only in e-commerce, not at your cafe or restaurant. And payments have high fees and long transaction times. Paytomat is designed to solve these problems. You can learn more about Paytomat at: Website, Whitepaper, Facebook, Telegram, Instagram, YouTube, Twitter, and Medium. Alright, that’s all I have for today! Thanks for watching, don’t forget to like and subscribe, and until next time I’ll see you on Inside NEM.
https://medium.com/nemofficial/inside-nem-episode-40-6a391f585d3e
['Alexandra Tinsman']
2018-05-10 02:09:37.351000+00:00
['Bitcoin', 'Blockchain Technology', 'Cryptocurrency News', 'Nem Blockchain', 'Nem']
72
Ubuntu
Ego says I am me.
Higher self says not so fast.
It’s not I, it’s we.
The higher self says.
I exist because we are.
This is ubuntu.
Together we can.
Accomplish anything.
Alone we will die.
https://medium.com/imperfect-words/ubuntu-e77cfe93d1f7
['Jim Mcaulay']
2020-04-05 13:22:23.536000+00:00
['Haiku', 'Theology', 'Ubuntu Not Technology', 'Desmond Tutu', 'Psychology']
73
The String and Key Crew: Maddy Cuello
As part of our ongoing employee spotlight series, we’ll be profiling colleagues who inspire us. Today, meet Maddy Cuello. Company Role: Copywriter Most likely to: Watch a Golden Girls marathon Secret talent: Sensing the presence of a cat As an inquisitive child, Madelyn, better known as Maddy, spent her youth in Astoria, Queens, perusing library books, which sparked her love of writing and words from an early age. She attended the Fashion Institute of Technology where she studied marketing, but the writing bug never left her. After several educational and professional twists and turns, she finally started writing full-time. And after a long job search looking for the right fit, she connected with the People team here at String and Key and was hired in April of 2021. A nomad in her own town, Maddy moved around the city before finally settling in Bushwick, Brooklyn, where the thriving artistic community inspires her both personally and professionally. Let’s get to know a little more about Maddy. What do you do, and what does your typical workday look like? I’m a copywriter, so I work on a variety of things from blogs to social media materials, video and audio scripts, and more. Every Monday we have a creative team meeting where we discuss the upcoming week, as well as twice-a-week morning check-ins. In addition to discussing the projects for the week, I also help conceptualize the content we put out to the public to make sure it’s informative but also digestible. What’s your favorite part about working at String and Key? There is a lot of room to be creative and also to grow and learn, which is great so you don’t feel stagnant. There’s also a lot of autonomy and trust but at the same time so much support and encouragement. I think it’s a beautiful mix, which is rare. Feedback is done with care, which helps to boost my confidence as a writer. There’s also a lot of help from other departments to make sure you do your job right.
The collaborative spirit is very important; it makes you feel like a valued member of the company. What excites you about your job? I’m excited to learn more about the product we are putting out and also what the future holds. As a writer or a creative person in general, you want to challenge yourself to create content that’s appreciated no matter what industry you work in. So it’s exciting to open this new chapter working in fintech. What do you find most challenging about your role? Making sure what I’m writing makes sense and is seen as an educational touchpoint for the audience who consumes it. It can be difficult to write an informative piece with a creative twist, but that’s a challenge that I enjoy! What are the values that drive you? Collaboration, curiosity and kindness. I try not to be sensitive about my work and always strive to take constructive criticism well and put it to good use. I always remind myself it’s about the product and content at large. Having the curiosity to learn more and being willing to go beyond my comfort zone to expand my knowledge, not only as a writer but also as someone in fintech. There is a lot to learn and there’s constant innovation, so I think it’s important to be open to things you might be unfamiliar with even if they might seem intimidating. Also, just being a kind and pleasant person goes a long way. How do you stay on top of your game? I really admire other writers, and not necessarily just in our industry. I like to dissect what makes a person a good writer. What is it about the way they convey a message that is attractive to an audience? That’s so interesting to me because what makes good writers and writing is very subjective. What drew you to tech and what excites you about the industry? I had done some work for other tech companies, and I really liked the innovation and forward-thinking vibes. It really is the future and you can learn so much, especially if the company is a start-up.
What’s one thing — either industry-related or not — that you’ve learned in the last month? I’ve been really into reading tarot cards. It sounds hippy-dippy but I really enjoy it because it makes you look inwards and have self-reflection. Plus, it’s kind of fun to make people think you might have “The Gift”. If you could swap places with anyone at String and Key, who would it be and why? I would love to swap places with another member of the creative team. They do brilliant work and are so talented, so it would be cool to step into their shoes. What unexpected subject could you give a one-hour presentation on with no advance prep? I’m well versed in good TV. There is so much good writing since critics say we’re in another golden age of television. So I can definitely give an informed talk about certain TV shows from the past and present. What keeps you busy outside of work? I’m lucky enough to have a big circle of friends and a big family, so I spend a lot of time with them. But since I also enjoy my alone time, I definitely watch a lot of TV, as mentioned above. I also love going down YouTube rabbit holes about any random topic. Finally, I love to cook and try new recipes (FYI, I’m always looking for taste testers). Can you list five hashtags that describe your personality? #Curious #nativeNewYorker #LoverOfWords #CatLover #Funny Lightning Round: Dawn or dusk? Dusk. Chocolate or vanilla? Chocolate. Facebook or Instagram? Instagram. Summer or winter? Summer. Music or audiobooks? Music. Interested in working at String and Key? Join us!
https://medium.com/string-and-key/the-string-and-key-crew-maddy-cuello-aadb5ee5585b
['Madelyn Cuello']
2021-06-15 21:57:54.860000+00:00
['Fintech', 'Creative Writing', 'Marketing', 'Technology', 'Copywriter']
74
TypeScript With Fewer Types?
Outdated API Type Definitions Let’s say you are on a front-end team called Team Alpha (because front-end teams are the alphas… kidding) and you are consuming an API created by Team Bravo. Your team is large, so you went with TypeScript and laid out some type definitions for a particular API endpoint. OK, cool. Not only do we have type safety, but it’s easy to understand the input and output of the API endpoint. Fast-forward one week, and Team Bravo made a change to the /user endpoint. They sent a message in the api-news channel on Slack. Some people on Team Alpha saw it, while others missed it. Suddenly, your front end is breaking and you don’t know why. The shape of the response had changed, and the types you wrote to protect you are now causing massive debugging headaches because they are out of date. After a brief amount of time banging your head against the wall, you either finally read the update from Team Bravo or you find the source of the problem yourself. Maybe the first time is no big deal. You update the types and move on with your life. But how many times are you going to have to debug, update, and refactor? Probably forever. Ideal Solution If possible, a great “hands-free” solution is to use a type generator for your API schema. If you use GraphQL, you are in luck because there are solid options for type generators (GraphQL Code Generator, Apollo codegen, etc.). This way, when the API changes, you can run a command to download the changes in request and response types. If you had the time, you could automate it to trigger downloads every time the API repo is updated, for example. That way, you are always up to date. If you are using TypeScript on both the client and server, then I think it would be smart to leverage that fact and share types downstream. It doesn’t matter how, just as long as you aren’t writing request and response types on the client.
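The article's embedded code snippets are not reproduced above. As a hedged sketch of the situation it describes, here is roughly what the hand-written types and the breakage might look like. All names here (`UserResponse`, the `name` to `fullName` rename) are illustrative assumptions, not the author's actual code:

```typescript
// Hypothetical hand-written types Team Alpha might keep for GET /user.
// (Illustrative only; the article's original snippet is not shown here.)
interface UserResponse {
  id: string;
  name: string;
  email: string;
}

// A runtime type guard: one way to surface drift between a stale
// compile-time type and what the server actually returns.
function isUserResponse(value: unknown): value is UserResponse {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === "string" &&
    typeof v.name === "string" &&
    typeof v.email === "string"
  );
}

// Suppose Team Bravo renames `name` to `fullName` on the server.
// The compiler stays silent, because the payload only arrives at runtime,
// but the guard flags the mismatch instead of a mystery crash later.
const payloadAfterChange = { id: "42", fullName: "Ada Lovelace", email: "ada@example.com" };
console.log(isUserResponse(payloadAfterChange)); // false: the hand-written type is out of date
```

A generated type (via GraphQL Code Generator, Apollo codegen, or types shared from a TypeScript server) removes the need to hand-maintain `UserResponse` at all, which is the "hands-free" solution the article recommends.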
https://medium.com/better-programming/typescript-with-fewer-types-f7fa636476db
['Kris Guzman']
2020-03-18 18:25:07.683000+00:00
['JavaScript', 'Web Development', 'Typescript', 'Technology', 'Programming']
75
Super Pumped
Travis Cordell Kalanick Born on Aug 6, 1976. Parents: Bonnie and Donald. Donald worked for the most part at the City of LA. Bonnie worked at the LA Daily [local newspaper]. First startup: Scour.net. Google-like search engine that gave users the ability to “scour” millions of files and then download them, like Napster. Scour didn’t have a business model. Scour competed directly against Napster for file-sharing dominance, though Scour’s edge was the ability to search for files other than music. Silicon Valley maxim: Growth was paramount. Path to profit could come later. To have a relationship with Kalanick, you had to work alongside him. He had few personal relationships. First brush with VC world Ron Burkle & Michael Ovitz. Burkle, known for philanthropy and private equity & venture firm The Yucaipa Companies. Ovitz, legend in the LA entertainment industry, talent agent and co-founder of CAA [Creative Artists Agency], one of the world’s highest profile sports & entertainment agencies. He was previously president of The Walt Disney Company, where he got unceremoniously pushed out by then chief executive Michael Eisner. No Shop Clause Term sheet offered by Burkle & Ovitz; detailed charter of investment terms stipulating % of company in exchange for their money. It contained a No Shop clause wherein Scour couldn’t solicit other investments for money.
Ovitz sues Scour for breaking no-shop clause VC acquires more than half the company for $4M, wrestling control away from founders Hollywood fights back with RIAA [Recording Industry Association of America] suing Napster for $20B RIAA, Motion Picture Association of America & others sue Scour for $250B Ovitz uses “backchannel media connections” to distance from Scour Company that was to grow into a “global destination for media” got sold for parts in bankruptcy court. 90s Dotcom Bubble 4700+ companies went public from 1990 to mid 2000s Highlights of 90s bubble: Amazon, eBay, Priceline, Adobe: startups founded in 90s that outlived the dotcom bust Why Bubble? Low federal interest rates aka wide investor access to cheap capital; Cash injection in the form of companies; Wall St. pumping tech stocks; Avg investors advised to sink savings in favor of internet startups shopworn San Francisco adage: It’s better to sell shovels during a gold rush than to actually prospect for gold. Website dedicated to chronicling startup death: Fucked Company Post bubble burst: empty offices, copies of Wall Street Journal piling in front of the door, Fedex missed-delivery stickers stuck to windows Travis’ Revenge Business Red Swoosh, peer-to-peer file sharing. However, unlike Scour, this time files wouldn’t be illegal downloads; media companies would supply files themselves. Proposition: What is the fastest, simplest way to transfer something from one place to another? Red Swoosh seen as the ghost of Akamai Technologies Akamai, a network software firm, pre dotcom bust $50B; post bust: $160M Cashflow for Red Swoosh was a month-to-month adventure. Sold to Akamai for $20M after 6 years. Travis’ personal net gain $2M. Sub-prime mortgage crisis 2007 Sub-prime house loans doled out by banks Affordable adjustable-rate mortgages packaged into derivatives for investors Sky-high monthly payments, defaults and a full-blown crisis.
Sept 7, 2008: Bush administration seized control of Fannie Mae and Freddie Mac, the US’ largest mortgage financing bodies. Henry Paulson, secretary of the treasury, promised bailouts for the world’s largest financial institutions: AIG, JPMorgan, Wells Fargo. Positives from the bust Separated poseurs from the actual valuable companies Lowering of entry barrier for startups due to cloud [aka AWS] Mobile phones Amazon started AWS in 2002 as an engineering side project; it grew into one of the most successful innovations of Amazon. iPhone Steve Jobs, already accomplished for the Macintosh, founding Pixar [the beloved animation studio], the iPod & iTunes Store [revolutionizing the way the world listened to music]. Jobs’ legacy was already cemented thrice over. Jobs noted the iPhone didn’t have the “fucking ugly buttons” that characterized the BlackBerry. iPhone was sleek, glossy, gorgeous. Just like iTunes for the iPod, Jobs planned to do the same for the iPhone via “apps”. iPhone radically reimagined what a smartphone was supposed to be. iPhone took the luxuries of an enterprise-level business device and opened mobile computing to the masses. iPhone democratized email, media & internet access. John Doerr Intel engineer turned VC; an industry titan; partner at Kleiner Perkins Caufield & Byers, the storied Menlo Park venture firm. Doerr made an early investment in Netscape, the world’s first consumer internet browser. Doerr invested $12M in 1999 in Google, then a search engine co. run by a couple of engineers in a garage. AppStore
https://medium.com/read-with-chai/super-pumped-5be8b0a17dea
['Chaitanya Prakash Bapat']
2020-12-24 15:50:12.360000+00:00
['Technology', 'Books', 'Biography', 'Uber', 'Book Review']
76
Platform Liquidity: Why Economic Incentives Matter
The economic incentives of a platform determine its liquidity barriers — but they also create long-term trade-offs with the effectiveness of developer marketing programs. Sameer Singh · Oct 5, 2020. Network effects can only take hold when a product has reached a minimum threshold or critical mass of users (also called liquidity) — this is true for marketplaces, interaction networks, and data networks. Platforms, on the other hand, are unique because they are always built on top of another product with existing adoption. So, as we saw with SaaS-enabled marketplaces, it is natural to assume that platforms can leverage these existing customers to attract a critical mass of developers. Wouldn’t they have liquidity right from the get-go? Not always. Platforms are a combination of four elements — an underlying product, a development framework, a storefront to “match” users with apps, and an economic benefit for developers. Thanks to the underlying product (and existing customers), fledgling platforms already have a critical mass of demand. As a result, liquidity is purely a function of supply, i.e. developer adoption. This is driven by their economic incentive which varies based on the type of platform in question. I previously identified two types of platforms, each of which creates different economic incentives for developers, leading to different liquidity dynamics: Type A: Focused platforms with integrations (weak defensibility & scalability) Type B: Multi-purpose platforms with native apps (strong defensibility & scalability) Type A: Focused Platforms with Integrations On Type A platforms, developer programs are created to complement the underlying product. The use cases of this product are already well defined and this lowers the barrier for developer adoption — developers merely need to cater to existing behavior, not discover new needs.
As a result, the platform is initially adopted by professional developers of complementary apps, who often have significant user overlap with the platform owner. Take Slack as an example. Slack began its journey as a simple collaboration and messaging tool for teams and businesses. After growing rapidly, it announced a developer platform and app directory in 2015. At launch, Slack’s app directory had 150 apps including Dropbox and Airtable. Both Dropbox and Airtable were used by teams to enhance collaboration — they addressed needs that were complementary to Slack. As a result, they had a significant number of shared users. This created an opportunity for companies like Dropbox and Airtable — they could improve engagement and retention by making it easier to use their products on Slack. This was their economic incentive. This economic incentive makes it very easy to reach liquidity. Since professional developers simply want to cater to existing product users in new contexts, the case for adopting the platform becomes obvious. As a result, Type A platforms like Slack, Zoom, and even Figma are liquidity inclined. This is true as long as the user base of the underlying product is large enough and has a meaningful overlap with products from third-party developers. Type B: Multi-Purpose Platforms with Native Apps On the other hand, Type B platforms are much more complex to create because there is no blueprint for developers to build apps. The role of developers is to experiment and discover new use cases for the platform. Professional developers will only create new apps from scratch if they have confidence that their investments will pay off. And without a track record of direct monetization, new platforms are a major financial risk. So unlike Type A platforms, Type B platforms face a variation of the “cold start problem” — they cannot attract professional developers without traction, but they cannot build traction without developers. 
This is why Type B platforms always begin by reaching out to “hobbyists” first — per Slashdata, this includes student developers or those who code as a hobby. Shopify is a great example of this. Its underlying product helped retailers and other sellers create online stores. As it grew, it launched its app store in 2009 to help customers address needs that Shopify could not cater to itself. However, there were hardly any popular apps in this space. As a result, Shopify needed developers to create new apps for its platform — a challenging task without a track record. So Shopify first attracted hobbyist developers like Mapify (order tracking & store locator) and Fetch (digital product delivery). Initially, these developers had no overarching economic incentive (beyond status) — they were just experimenting with the capabilities of Shopify’s platform. But as they experienced success and developed revenue models, they became professionals. Shopify enabled them to generate revenue and build new businesses. This economic incentive then attracted other professional developers like Yotpo, setting the flywheel in motion. The economic incentive here is less straightforward for new platforms to capitalize on — in other words, Type B platforms are liquidity challenged. As a result, they are forced to take the “scenic route” to build liquidity — attract hobbyist developers first to prove traction and then go after professional developers. Smartphone platforms like iOS and Android are also good examples here — with game developers like Rovio playing the role of hobbyists turned pros. The Platform Matrix: Economic Incentives and Developer Marketing The dominant economic incentive has a significant impact on key KPIs and the way platforms market themselves to developers — even after liquidity is achieved. Direct monetization is not an important consideration for Type A platforms. 
Instead, platforms like Slack, Zoom, and Figma highlight interoperability and UX advantages for developers to better cater to their own users. This makes it difficult to establish clear KPIs to promote the platform. As a result, their developer marketing efforts tend to be nebulous, relying on the traction of the underlying product and developer adoption. So even though it is easier for Type A platforms to achieve liquidity, they can be challenging to promote beyond a point. On the other hand, developer revenue is a critical KPI for Type B platforms. This is the metric developers use to evaluate the potential payoff from creating new apps for a platform. As a result, platforms like Shopify, Salesforce, iOS, and Android (+Google Play) report developer earnings publicly and advertise new businesses they have enabled. The propensity to report developer earnings can be considered to be one of the strongest indicators of a successful Type B platform. Also, clear success metrics make their developer marketing initiatives much more effective. So even though Type B platforms may be liquidity challenged in their early days, they are much easier to promote once that barrier has been cleared. To summarize, the economic incentive innate to a platform is the primary factor that determines liquidity barriers. Startups building platforms with known use cases (Type A) can easily reach liquidity by targeting developers who already share users with them. On the other hand, startups building truly multi-purpose platforms (Type B) have a more difficult path to liquidity — they need to attract hobbyists first to show some traction before they can appeal to professional developers. While this can be challenging, there is a payoff at the end — the monetization metrics that help you reach liquidity also help power the flywheel to further growth.
https://breadcrumb.vc/platform-liquidity-impact-of-economic-incentives-72ac4ed6e9dc
['Sameer Singh']
2021-03-01 10:08:00.334000+00:00
['Platform', 'Venture Capital', 'Technology', 'Startup', 'Business']
77
Let’s talk about why Bitcoin and mining is so unique and revolutionary.
Bitcoin is a new form of money that’s completely digital. It can be used by anyone, anywhere in the world. There are no dollars, euros, pesos, or yen — it’s a universal currency. Unlike traditional forms of money, there are no physical bitcoins: no dollar bills, no metal coins, no plastic cards — it’s 100% digital! Everything is done from phones and computers, which allows for fast and cheap transactions around the world and around the clock. About CryptoMining CryptoMining is a platform you can use today whether you’re a beginner, an enthusiastic home miner, or a large-scale investor. This cloud mining provider makes your work as a miner easier and ensures that you mine digital assets securely. With an expert management, technical, and operations team, constantly expanding facilities powered by eco-friendly energy sources, ongoing investment in its ecosystem, and extensive research into technology, the company pursues collective development as well as individual and social benefits. Mining Rewards CryptoMining strives to be the world’s most rewarding cloud crypto mining experience — it’s the least it can do for its valued users. CryptoMining publishes every mining reward it gives out, live 24/7. Check out the global mining reward tables to be inspired by the achievements of your fellow miners and see how you tally up in comparison. Instant Withdrawals CryptoMining never wants you to be stuck waiting for your funds. Waiting is frustrating, so CryptoMining guarantees instant, free withdrawals. Just head to your Account page and enjoy the seamless withdrawal process. Anytime Contract Closing Likewise, CryptoMining doesn’t want you to feel locked in by your cloud crypto mining contracts. You are free to close out and cash out your mining contracts anytime you like, and the closing fees are extremely competitive. 
Crypto Mining is an officially registered, UK-based cloud crypto mining initiative that prides itself on the security of its users. CRYPTO MINING LTD, Company No. 12472037; registered at 18 St. Cross Street, Holborn, London, United Kingdom, EC1N 8UN. How to Create an Account in Cryptomining? https://crypto-mining.biz/signup Opening a new Crypto Mining account is easy. Just follow the directions to sign up, enter your details, and create an account with your chosen email and password. Once you have agreed to Crypto Mining’s terms and conditions, you will receive a confirmation email asking you to verify your email address. If you do not receive a confirmation email, please check your spam folder and adjust your email settings so you don’t miss out on future opportunities reserved for newsletter recipients. How do I update my email address? To update your email address, you first need to complete Crypto Mining’s identification protocol, which protects your personal information and assets. Simply submit a ticket via the web contact form below, selecting the appropriate topic — and be sure to specify whether or not you still have access to your existing linked email. How can I recover my account’s password? If you forget your password, click the Forgot Password? option while logging in. You will have to enter some details into a form, and you’ll receive a new password at your email address. Crypto Mining contracts are a simple one-time payment. Certain contract variations have a maintenance fee that is deducted from your mining outputs; this will be stated clearly before you agree to any contract. Crypto Mining accepts payment methods including credit/debit card (Visa, Mastercard) and cryptocurrency (Bitcoin, Bitcoin Cash, Zcash, Litecoin, Dogecoin). 
Crypto Mining contracts feature maintenance fees that partially cover operational costs on our end, including cooling, electricity, physical maintenance, hosting services, and other essential operational elements. The amount will be stated clearly before you agree to any contract, and will be deducted in your chosen coin’s equivalent. How do I get started mining? Simply select the coin you’d like to mine, purchase a contract, and you’re on your way. Purchase your hashpower following the instructions in your Crypto Mining account area https://crypto-mining.biz/pricing — choose your preferred payment method (cryptocurrency or debit/credit card), select your hashrate, then review your order and approve the terms and conditions. If you have a promo code, you can insert it before you confirm your order. Time to confirm your order! Referral link: https://crypto-mining.biz/?ref=takeshibtc Wallet address: 0x78B946814C6B62C0867063E4C940cdCA94Faf49B
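To make the maintenance-fee arithmetic described above concrete, here is a toy calculation. All numbers are invented for illustration — Crypto Mining’s actual rates are stated per contract — and the conversion-from-USD approach is an assumption, not the provider’s documented method:

```python
# Illustrative arithmetic only (invented numbers): a daily maintenance
# fee quoted in USD is converted into the mined coin at the current
# price and deducted from that day's gross mining output.
def net_daily_output(gross_coins, fee_usd_per_day, coin_price_usd):
    """Return coins credited after deducting the maintenance fee."""
    fee_in_coins = fee_usd_per_day / coin_price_usd
    return max(gross_coins - fee_in_coins, 0.0)

# e.g. 0.001 BTC mined, a $2/day fee, BTC at $10,000 -> fee is 0.0002 BTC
print(net_daily_output(0.001, 2.0, 10_000))
```

If the fee ever exceeded the day’s output, the payout bottoms out at zero rather than going negative.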
https://medium.com/@takeshibtc/lets-talk-about-why-bitcoin-and-mining-is-so-unique-and-revolutionary-ef6b10fbfe86
[]
2020-07-01 12:26:32.289000+00:00
['Cryptocurrency', 'Bitcoin', 'Ethereum', 'Technology', 'Mining']
78
Why Salesforce acquired Slack
Why Salesforce acquired Slack Unlocking the future of work Earlier this week, Salesforce acquired Slack for $27.7 billion. It’s a seismic move in the workplace collaboration space, which has surged with the rise of remote work and virtual communication. So, why did Salesforce acquire Slack — and more importantly, why did Slack decide to sell to Salesforce? It’s clear this was not about ‘bailing out’ a company; this was a strategic partnership to build the operating system for work. In this post, I’ll dig into the three areas of opportunity that a Salesforce + Slack acquisition unlocks: The system of record for work Distribution to the enterprise Streamlined external communication The system of record for work Traditionally, CRM (customer relationship management) software has been about, well, managing one’s customers. Indeed, CRM’s very definition is: The process of a company managing interactions with its existing, past, and potential customers. The reason CRM software (like Salesforce) was necessary in the first place was because a company’s interactions with its customers were spread across multiple channels: emails, phone calls, in-person meetings, conferences / tradeshows, etc. Not to mention all the artifacts that these interactions were comprised of — documents, spreadsheets, presentations, etc. Now, that’s customers. But what about a company’s own employees? Well, traditionally, companies did all their internal communication over one channel — email. (And many still do!) So everything was always in one place. That meant email was being used for: file sharing, to-do’s, collaboration, project management, and tons more. Far from ideal for one product to do! But then Slack happened. We saw the advent of a unified communications platform, via channel-based messaging. And it was no longer just messaging; it was file sharing, collaboration apps, HR tools, metrics & reporting… and so on. 
No longer was ‘work’, and everything it comprised — decisions, questions, action items — isolated in individual email inboxes; it was spread across Google Drive, Confluence, Asana, etc. The way we worked had shifted from writing emails to writing posts, threads, comments, and reactions — spread across multiple channels. And Slack was what brought all those interactions together. It’s why they position themselves as “where work happens.” But… because it now resided in multiple places, work had become harder to organize, harder to search, and harder to share. And with that came the existential questions knowledge workers face: “Who’s working on this?” “What happened with that?” “Why did they do that?” This is where Salesforce + Slack comes in. Slack will be the system of record for work. The output of a company’s employees is the work they produce. So inevitably, companies need a way to keep track of all that work! And ensure it’s organized, searchable, and easy to share. I call this concept Employee Relationship Management (ERM); a company’s CRM for its own employees. Earlier this year, Slack acquired Rimeto, a corporate directory tool that “gives employees context and understanding to work together effectively.” That jumpstarted this goal, and now with Salesforce, they’ll be full steam ahead. Imagine being an employee who has just joined a new company. In your onboarding, you could head to this re-imagined corporate directory to see: Who your co-workers are (background, interests, etc. from Workday) What teams they’re on, and their roles (org. charts from Confluence) What projects they’re working on, and the status of them (Asana / Jira) The upcoming product roadmap (Github) The relevant conversations you’ve had with their teammates (Slack) That’s the power of centralized internal knowledge and relationships. Now, how about external communication? 
Well, the value proposition is the exact same; the interactions are just with individuals outside of the organization. Imagine: in one unified interface, a Sales rep could see: The emails they’d sent to a prospect (Gmail) The notes from their last call with them (Salesforce) The implementation questions they’d asked the Product team (Slack) The tasks to complete the customer’s onboarding (Asana) The Support tickets that the customer had (Zendesk) The customer’s usage of the product (Looker) The invoice / record-keeping from Finance (Stripe) It’s a 360-degree view of the end-to-end customer journey. Combined, Salesforce and Slack will be the all-in-one source of truth for all of a company’s interactions: both external ones (customers) and internal ones (employees). Imagine the clarity that would bring to an organization and its employees. And the productivity gains as a result. Distribution to the enterprise The top 1% of Slack’s customer base (by contract value) accounts for 50% of Slack’s total revenue; that’s ~$450 million alone (source). Needless to say, the enterprise is where the big bucks are to be made. Margins are bigger, switching costs are higher, and there are upsell opportunities galore (training, expansion, upgrades, etc). Today, 61 of the Fortune 100 companies use Slack — which is impressive — but consider that 90% of the Fortune 500 are Salesforce customers. That might not sound like a big difference, but each of those contracts is worth millions of dollars. By joining Salesforce, Slack has unlocked an inroad to the Fortune 500 and beyond, through cross-selling and bundling. Aaron Levie, the Founder and CEO of Box, said in an interview: “The reality with the enterprise is that you can have the best product, but that’s not good enough. You need distribution. And that’s what Salesforce has — they have the procurement officers, they have the finance people. 
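The unified view described above is, at its core, a merge of per-system activity feeds into one time-ordered list. Here is a minimal sketch of that idea — the field names, sources, and sample events are invented for illustration, not any vendor’s real schema:

```python
from datetime import datetime

# Toy sketch of a "360-degree view": events about one customer, pulled
# from different systems, merged into a single time-ordered feed.
def unified_timeline(*sources):
    events = [e for source in sources for e in source]
    return sorted(events, key=lambda e: e["at"])

emails  = [{"at": datetime(2020, 11, 2), "source": "gmail", "note": "intro email"}]
calls   = [{"at": datetime(2020, 11, 5), "source": "salesforce", "note": "demo call"}]
tickets = [{"at": datetime(2020, 11, 3), "source": "zendesk", "note": "trial question"}]

for e in unified_timeline(emails, calls, tickets):
    print(e["at"].date(), e["source"], e["note"])
```

The real engineering work, of course, is in the integrations that produce those feeds; the merge itself is the easy part.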
They have all of the apparatus you need to interact with to sell software, and they have it for the top 100,000 corporations around the world.” “The only advantage Microsoft has is distribution, and so now [Slack] has neutralized that advantage that Microsoft had. All of a sudden, they can actually fulfill the ultimate promise of the opportunity, because they have 10 times the amount of salespeople that can go distribute this thing into corporations around the world.” This is where Slack has struggled in the war against Microsoft Teams, despite having a superior product; Microsoft had a huge leg up in distribution due to the prevalence of Office 365 in the enterprise. It’s why Microsoft has been so successful at getting their existing customer base to adopt Teams; they were able to bundle it into O365 subscriptions, for no extra charge. The graph below tells the story of why distribution > product: Salesforce, meanwhile, is already a household name in the enterprise, and has thousands of sales reps in their arsenal. This means they can easily: Cross-sell Slack to Salesforce’s existing customer base (150k+ companies) Bundle Slack when selling to new Salesforce customers (free / discounted) This is mutually beneficial for Salesforce, because it’ll increase retention and average contract value / ARPA for these customers who opt to buy Slack. Streamlined external communication Earlier this year, Slack launched Slack Connect, which enabled multiple organizations to communicate with one another, all from one Slack channel. It was a game changer for organizations of all types; their internal and external communication could now occur from the same place. Conversations between organizations usually take place over email, which is pretty inefficient for everyone involved. Think about complex business transactions that involve multiple vendors, partners, and clients. These often result in a game of back-and-forth over email, for weeks or months at a time. 
Slack Connect eliminates the runaround of those endless email chains, but it also makes the conversations discoverable to anyone else in the organization. Imagine being a manager who isn’t directly involved in the transaction, but wants oversight of the deal their employees are working on. Bringing those conversations into Slack means these deals can now move faster — or close at a higher rate due to the added clarity. And as Slack’s CEO Stewart Butterfield explains, that’s an easy business case to sell: “The reason why I like this use case is because that’s an argument to buy Slack to get revenue, which is different than buying Slack to get productivity. The latter is always a harder sale to make, whereas buying our product to get revenue is a lot easier.” Now, think about who spends their entire days communicating with external organizations… Sales teams. And what tool do they use all day? Salesforce. By integrating Slack Connect into Salesforce’s CRM products, Salesforce will be able to provide Sales teams with a streamlined way to communicate with their leads, prospects, and customers — from a single interface. That external communication will shift from email to Slack Connect, the interactions will be captured in Salesforce, and the customer’s activity throughout the lifecycle will be tracked through Slack’s integrations. Sales teams will instantly become more efficient, allowing them to close more deals, faster. In addition, this unlocks an incredibly viral distribution model for Slack: If a prospect is using Slack Connect to communicate with a company’s sales rep, they’ll be more incentivized to buy Slack, because they’ll have already realized the value of Slack — before even becoming a customer. And when that prospect turns into a customer and onboards onto Slack, they’ll reach activation quicker than ever before. As Butterfield explains:
https://samirjaver.medium.com/why-salesforce-acquired-slack-f89143caeebb
['Samir Javer']
2020-12-05 07:47:52.580000+00:00
['Sales', 'Productivity', 'Customer Experience', 'Technology', 'Tech']
79
‘Zero-Click’ iPhone Attack Exploits Flaw in Apple’s iOS Mail App
The attack can be triggered without any interaction from the user, according to cybersecurity firm ZecOps. The hacker simply needs to send a specially crafted email to the Mail app that’ll consume your iPhone’s RAM. By Michael Kan Hackers may have been using a previously unknown iOS vulnerability to infiltrate people’s iPhones with a specially crafted email. The flaw deals with the official Mail app in iOS. You can actually trigger the software to run rogue computer code when it handles an email that consumes a large amount of RAM, according to cybersecurity firm ZecOps. What makes the flaw particularly scary is how the specially crafted email can trigger it without any interaction from the user. Instead, the attack will occur in the background as the Mail app loads the email. Hence, ZecOps is dubbing the vulnerability a remote “zero-click” attack. The company has detected hackers abusing the flaw to target enterprise users, company executives, and other IT cybersecurity firms as far back as January 2018. That said, ZecOps couldn’t definitively confirm the attacks; the emails used to trigger the vulnerability were deleted from victim devices. Instead, the company relied on digital clues left over during the intrusions to reconstruct the attack vector. (In doing so, ZecOps also uncovered another iOS “out-of-bounds write” vulnerability, which the hackers likely triggered by accident.) ZecOps information bar about the attack. According to ZecOps, the zero-click attack affects iOS versions going back to iOS 6. However, the vulnerability does have some key limitations: It only works against Apple’s default Mail app, and not against other email apps, like Gmail or Outlook. The vulnerability also can’t take over your iPhone. Instead, it has to be coupled with another attack to have any chance of hijacking the operating system. Otherwise, the vulnerability alone can do nothing but modify, delete, or leak emails from the Mail app. 
The other limitation involves the specially crafted email. ZecOps said it developed a demo of the attack using an email containing several gigabytes of plaintext — a huge size that exceeds the cap on popular email services such as Gmail and Yahoo Mail. Nevertheless, ZecOps said a hacker can use other tricks to trigger the vulnerability without sending a large email; your iPhone doesn’t need to download the malicious message in its entirety. According to Motherboard, Apple plans on patching the flaw in an upcoming iOS release. Currently, the fix is only available in the iOS 13.4.5 beta release. “If you cannot patch to this version, make sure to not use the Mail application — and instead to temporarily use Outlook or Gmail,” ZecOps added. Whoever is behind the attacks remains unknown, but ZecOps speculates government-sponsored hackers may have bought details of the zero-click vulnerability from a third party. “We are aware that at least one ‘hackers-for-hire’ organization is selling exploits using vulnerabilities that leverage email addresses as a main identifier,” the company said.
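To see why a multi-gigabyte plaintext email can’t simply transit mainstream providers, here is a toy size check — emphatically not the exploit itself. The 25 MB figure is an approximation of typical provider limits, not an exact quota:

```python
from email.message import EmailMessage

# Toy illustration: mainstream providers cap message size, which is why
# ZecOps' multi-gigabyte demo email exceeded the limits of services
# like Gmail and Yahoo Mail. The 25 MB cap below is approximate.
PROVIDER_CAP = 25 * 1024 * 1024

def exceeds_cap(msg, cap=PROVIDER_CAP):
    """True if the serialized message is larger than the provider cap."""
    return len(msg.as_bytes()) > cap

msg = EmailMessage()
msg["Subject"] = "hello"
msg.set_content("x" * 1024)                 # a small, ordinary message
print(exceeds_cap(msg))                     # False: well under the cap

big = EmailMessage()
big.set_content("x" * (30 * 1024 * 1024))   # a 30 MB plaintext body
print(exceeds_cap(big))                     # True: rejected in transit
```

This is also why the article notes attackers would need “other tricks” to trigger the bug without an oversized message.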
https://medium.com/pcmag-access/zero-click-iphone-attack-exploits-flaw-in-apple-s-ios-mail-app-3fd93bca46f3
[]
2020-04-23 18:31:00.965000+00:00
['Apple', 'Hacking', 'iOS', 'Malware', 'Technology']
80
Work Week | The Niche Itch
Work Week | The Niche Itch ::: The accelerating pace of tech innovation during the pandemic is astonishing. Software is squeezing into every niche, as the examples below demonstrate: enterprise management of email signatures, and a tool to help private, venture-backed startups manage how and what they pay employees. Pretty nichey, but very important to those in the niches. ::: People Operations Pave raises millions to bring transparency to startup compensation | Natasha Mascarenhas reports on a Series A funding round at a valuation of $74 million, with participation from Andreessen Horowitz, Jeff Bezos, and others: Compensation within private venture-backed startups can be a confusing minefield that, if unsuccessfully navigated, can lead to inconsistent salaries and the kind of ambiguity that breeds an unhappy workforce. Pave, a San Francisco-based startup that recently graduated from Y Combinator, is aiming to end the pay and equity gap with a software tool it developed to make it easier for companies to track, measure, and communicate how and what they pay their employees. Pave was formerly known as Trove, and supports integrations with Workday, Carta and Greenhouse. One challenge will be getting companies to share the anonymized data back to Pave, which is part of their benchmarking value. Launching today, Pave has teamed up with the portfolio companies of a16z, Bessemer Venture Partners, NEA, Redpoint Ventures and YC to gather compensation data. The data, which is opt-in, will allow Pave to release a compensation benchmark survey to show how companies pay their employees. The survey will be public but will aggregate all company responses, so there is no way to see which company is doing better than others. Other platforms, such as Glassdoor and AngelList, have tried to measure pay across roles. Schulman says that “companies don’t trust that data” because it’s crowdsourced and therefore has a survey bias. 
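The aggregation idea above — publish a public benchmark without exposing any single company’s pay — can be sketched in a few lines. Everything here (the minimum-contributor threshold, the field layout, the use of a median) is an invented illustration of the concept, not Pave’s actual method:

```python
from statistics import median

# Hypothetical sketch: report a salary benchmark per role only when at
# least k companies contribute data, so no individual company's
# compensation can be inferred from the published number.
def benchmark(submissions, k=3):
    """submissions: list of (company, role, salary) tuples."""
    by_role = {}
    for company, role, salary in submissions:
        by_role.setdefault(role, {})[company] = salary
    return {
        role: median(pay.values())
        for role, pay in by_role.items()
        if len(pay) >= k  # suppress roles with too few contributors
    }

data = [
    ("a", "engineer", 150), ("b", "engineer", 170), ("c", "engineer", 160),
    ("a", "designer", 120), ("b", "designer", 130),
]
print(benchmark(data))  # {'engineer': 160} — 'designer' suppressed (only 2 companies)
```

The suppression threshold is what distinguishes opt-in, aggregated data from the crowdsourced, survey-biased numbers the article says companies don’t trust.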
::: Email I admit that I hadn’t heard of email signature management tools until I read about Exclaimer’s newest round of funding in Exclaimer raises $133 million to help companies manage email signatures by Paul Sawyers: Email signature management platform Exclaimer has raised £100 million ($133 million) in a round of funding led by Insight Partners. Despite the hullabaloo around modern enterprise communications tools like Slack, soon to be a $27.7 billion Salesforce subsidiary, a reported 80% of businesses still use email as their primary communication tool. That’s not to say companies aren’t also using Slack, Microsoft Teams, or Zoom, but email’s asynchronicity makes it difficult to fully replace anytime soon, particularly when it comes to external communications. This is one reason Exclaimer has now managed to raise a hefty chunk of change from a venture capital and private equity firm that has backed a host of companies across the consumer and enterprise spheres, including Twitter, Shopify, Pipedrive, and Qualtrics. […] Exclaimer’s core selling point is that it allows businesses to centrally design and disseminate email signatures to everyone in the company, with consistent footers automatically inserted on all company emails across devices. The signatures can also be tailored for specific teams and individuals, with admins able to control everything from a centralized dashboard. Makes sense. ::: Virtual Events I am researching virtual events tools for an upcoming report, and I saw that Wonder raises $11 million to make large virtual events more sociable, again by Paul Sawyer: Virtual event platforms have gained significant traction in a year marked by social distancing. Berlin-based Wonder is part of a new crop of contenders and promises to make large online gatherings more sociable. 
The startup, which was founded as Yotribe in April, today announced that it has raised a substantial $11 million seed round of funding led by EQT Ventures, with participation from BlueYard Capital. Much as large in-person meetups can segue into more intimate bubbles as people begin to network, Wonder aims to enable smaller, more organic video-based interactions within a larger virtual conference setup. This is reminiscent of features in numerous other events platforms, many of which have breakout spaces. But Wonder’s maplike interface allows guests to see at a glance who is speaking to whom and move their avatars around to join conversations in a virtual room. So they are providing bottom-up, person-to-person communication as a sidebar to the top-down, conference-to-participants communication. Interesting. More broadly, a number of events platforms have raised significant investments over the past few months, including Hopin, which last month secured $125 million at a $2.1 billion valuation, and Bizzabo, which attracted a mammoth $138 million after being forced to add virtual events to its existing offline events platform. Similar companies that have secured sizable investments this year include Hubilo, Welcome, Run The World, and Airmeet. More to follow on that front. ::: NoLo SAP latest enterprise software giant to offer low-code workflow | Ron Miller reports SAP announcements, including SAP Ruum, SAP Intelligent Robotic Process Automation, and SAP Cloud Platform Workflow Manager: Low-code workflow has become all the rage among enterprise tech giants and SAP joined the group of companies offering simplified workflow creation today when it announced SAP Cloud Platform Workflow Management, but it didn’t stop there. It also announced SAP Ruum, a new departmental workflow tool and SAP Intelligent Robotic Process Automation, its entry into the RPA space. 
The company made the announcements at SAP TechEd, its annual educational conference that has gone virtual this year due to the pandemic. Let’s start with the Cloud Platform Workflow Management tool. It enables people with little or no coding skills to build operational workflows. It includes predefined workflows like employee onboarding and can be used in combination with Qualtrics, the company it bought for $8 billion in 2018, to include experience data. Trying hard to keep up with Salesforce Einstein and Google Workflows launches. ::: Elsewhere ITProPortal quotes me in a prediction piece 2021 tech trends: The pandemic has accelerated the adoption of technologies that were popular before, but which are now essential. One example has been the combination of work chat tools and video conferencing, as typified by Microsoft Teams and Slack. Microsoft has seen a dramatic uptick in usage, and the release of Google’s new take on the former G Suite, now known as Google Workspace, which also integrates work chat and video conferencing, represents another challenge for Slack. As the two leaders in what we might think of as ‘business operating systems,’ Google and Microsoft present a difficult challenge for Slack, since companies will not want to pay extra for functionality they already have access to in their communications and file storage platforms. A few days after I wrote that, Salesforce announced the acquisition of Slack (which I wrote about here). Tom Tualli included me in a round-up of experts on the Slack acquisition: Regarding Salesforce, it can’t stop with Slack. Salesforce will need to build out (or buy) all the pieces of a business operating system: email, file sharing (they have a start with Salesforce Files), document sharing (they have Quip), low code/no code app builder, work management (Salesforce Anywhere is a start, but is too CRM-oriented), and video conferencing. 
This is not the end of their reconfiguring Salesforce into a platform more general than the sales force. … Prospectus | Work Management Today’s work management landscape is undergoing radical change.
https://medium.com/gigaom/work-week-the-niche-itch-bec06b1e8bf0
['Stowe Boyd']
2020-12-10 15:57:28.464000+00:00
['Virtual Events', 'People Operations', 'Work Week', 'Work Technology', 'Email']
81
3 Ways To Enhance Your Customer Experience Using Augmented Reality
Source: Hubspot We can all agree that tech has become an integral part of our lives, especially with the COVID-19 reality. Many brands have been forced to rethink their strategy and infuse tech into almost every step of the way. When we hear about new tech developments, it’s easy to get excited about what’s coming next; often, though, these inventions have little impact on our lives. Most turn out to be fun party tricks, while only a few are real game-changers. Augmented Reality (AR) falls into that latter category — AR has a wide range of real-life applications, and one area where AR is already being applied is customer experience. Innovative brands are using AR tools to create a seamless and more delightful customer experience. In this blog post, I’ll be sharing 3 ways your brand can enhance customer experience using Augmented Reality. To ensure you get the most out of this blog, I’ll begin by explaining how AR works. AR technology supplements reality with added digital elements. In other words, computer-generated content is overlaid onto the real world, and the actual reality becomes augmented. Complicated? Keep reading, it gets simpler 🙃 Chances are you’ve already used or heard of some form of AR technology. For example, AR is frequently used by TV commentators when discussing live sporting events, just like in the example below. Source: VR Sport Now that you know how AR works, here are three ways AR can help you improve your CX. 1) AR Can Help To Remove Buying Objections During The Pre-Sales Phase Whenever a prospect encounters your product online, they often have buying objections, especially if they are not familiar with your brand. The furniture and interior industries are great examples where prospective buyers need to experience the product before making a buying decision. This is where AR comes in — AR has helped and is still helping shoppers from all over the world transform their homes into virtual showrooms. 
Wayfair View in Room 3D and IKEA Place are examples of two augmented reality customer experience apps that allow customers to visualize furniture in their own homes ahead of purchase using their smartphone. They help customers answer questions like: Does the armchair match the rug? Will the sofa overwhelm the space? Will the refrigerator fit in the kitchen properly? These apps remove the uncertainty as they project the furniture or décor in 3D at full scale, clearly showing the customer whether the item is suitable for their particular space, complete with accurate measuring abilities. 2) AR Apps Ensure That Customers Are Fully Engaged During The Point Of Purchase Research has shown that enhancing sales through visual engagement reduces the number of abandoned carts, delivers higher consumer conversion rates, and ultimately garners a greater level of success for your business. The reason is not far-fetched, as the point of purchase is more of an experience than an exchange of cash. The beauty industry has in recent times utilized this technology. Some examples are: L’Oreal’s Style My Hair AR app, which allows users to get a 3D virtual makeover and try out different hairstyles or hair colours right on their phones, before submitting to the stylist’s scissors or hair dye. Sephora Virtual Artist AR app, which allows consumers to virtually “try on” anything from lipstick to eyelashes and facial cosmetics. The app utilizes a smartphone’s camera to precisely map the shape of the customer’s facial features, and overlays the beauty products on their face so they can see what colours and brands look best before they buy. Sephora Virtual Artist App The list goes on and on…. 3) AR Can Be Used To Aid Customer Retention: A couple of brands are now stepping up their post-sales game by making AR-based self-service features available. Using a smartphone, customers can access the product knowledge base with FAQs, manuals, and training material displayed in an AR overlay. 
This has also helped to eliminate long hours spent in the back and forth between customers and customer service officers. Visual support allows customers to hold a smartphone up to the product, and all parts will be identified in real time by computer vision technology. Also, we all know how tiring bulky manuals can be; to shorten the time spent on understanding the product, brands like Hyundai have developed an AR-based digital owner’s manual that shows drivers how to maintain and repair their cars. Source: Hyundai I’m sure we can all agree that AR is a real game-changer and it is changing how people communicate and interact. Its impact is spread across various industries and can certainly be applied in your industry too. It’s time to start the process of creatively figuring out how you can give your customers the most amazing customer journey and experience using AR. My team at Futuresoft and I are glad to help — leave us a comment or send us an email at projects@futuresoft-ng.com.
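The “will it fit” questions the furniture apps answer earlier in this post reduce, at their simplest, to comparing measured dimensions. Here is a deliberately simplified sketch of that check — real AR apps do full 3D scene understanding, not a box comparison, and the numbers below are made up:

```python
# Simplified sketch of the "will it fit" check behind apps like
# IKEA Place: compare an item's real-world dimensions against the
# measured space, with optional clearance around the item.
def fits(item_dims, space_dims, clearance=0.0):
    """item_dims / space_dims: (width, depth, height) in metres."""
    return all(i + clearance <= s for i, s in zip(item_dims, space_dims))

sofa = (2.1, 0.9, 0.8)
alcove = (2.5, 1.0, 2.4)
print(fits(sofa, alcove))        # True: the sofa fits the alcove
print(fits(sofa, alcove, 0.5))   # False: not with 0.5 m clearance
```

The value the AR apps add is doing this measurement automatically and visually, at full scale, instead of asking the shopper for a tape measure.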
https://medium.com/inside-futuresoft/3-ways-to-enhance-your-customer-experience-using-augmented-reality-caaa04d4b7cc
['Miracle Ewim']
2020-09-25 11:27:01.287000+00:00
['Technology', 'Covid 19', 'Digital Marketing', 'Augmented Reality', 'Customer Experience']
82
2021 Best 6 Blu-Ray Rippers for Windows/Mac (Test & Reviewed)
Blu-ray discs are used to store high-definition videos, graphics, and special features, and they are becoming more and more popular. To play Blu-ray videos and movies more conveniently, you need to rip them. Here in this post, I’d like to share with you the top 6 best Blu-ray ripper software in the market. Whether you are using Windows or Mac, you can find what you need. Both free and paid programs are introduced. If you want to back up your DVD collection and don’t want to be held back by time limitations or slow processing, you can try WinX DVD Ripper Platinum, which is the best DVD ripper overall we’ve tried. It comes with a free DVD ripper, which is basically the free trial version of the Platinum edition, and once the trial period expires, it’s totally unusable. However, the Platinum edition definitely provides value: among all the products in the lineup, it boasts almost all the standard features we need to rip a Blu-ray. Above all, it earns a spot in our review for its high-speed conversion. Platform: Windows & Mac. For a detailed user guide of WinX DVD Ripper, read my previous blog: DVDFab is a great choice if you’re looking for premium functions. It isn’t cheap, but it’s a great pick if you want to escape the headaches and extra work often associated with free software, and it’s definitely worth it. DVDFab Blu-ray Ripper can rip 2D/3D Blu-rays to popular 2D/3D videos of all formats with much ease. If you are ripping standard Blu-rays, you can even add in their Enlarger AI to upscale the output video 300% to 4K level. Platform: Windows & Mac. WonderFox DVD Ripper Pro is believed to be one of the best alternatives to WinX DVD Ripper on Windows. In my tests, it is rather quick at ripping protected Blu-ray DVDs to common video formats, especially when compared with the free HandBrake introduced below. 
WonderFox DVD Ripper Pro is highly recommended because it offers abundant functions for Blu-ray DVD ripping, and it's updated very frequently to support the latest copy protections, so I consider it very reliable. The only drawback of this DVD ripper is that it only offers a version for Windows. If you are a Mac user, you can check the others in this post that support Macs. Platform: Windows only. With Tipard Blu-ray Converter, you can rip Blu-ray/DVD and convert a Blu-ray/DVD disc, folder, or ISO image file to MKV, MPG, MP4, AVI, MOV, WMV, VOB, WebM, and more than 500 output formats. Tipard Blu-ray Converter is a program specially designed around the concept of being easy to use and user-friendly. It adopts top-level accelerating technology, Blu-Hyper, which combines graphics-card image processing with CPU-level video decoding/encoding, delivering 30X faster ripping and converting speed than before. Platform: Windows & Mac. AnyMP4 Blu-ray Ripper can help us rip a home-made Blu-ray disc/folder/ISO image file to 4K UHD/1080p HD video and 3D movies with the original quality. It supports over 500 output digital formats like MP4, MOV, WMV, AVI, FLV, MKV, MP3, etc., and rips Blu-ray to lossless MKV files while keeping the original subtitles, audio tracks, and more. It can extract all the 4K UHD and 1080p HD movies from a Blu-ray disc or folder, so that you can play the Blu-ray movies on a UHD player with ease. In addition, the software can convert 3D Blu-ray files into 3D MP4, MKV, MOV, and AVI with red-blue or left-right support. With its advanced acceleration technology, AnyMP4 Blu-ray Ripper offers you 30X faster speed than others. Platform: Windows & Mac. HandBrake is a fantastic free Blu-ray ripping app that also works with DVDs. It provides a bunch of output options, quick settings for specific formats and devices, and it's completely free.
However, it cannot handle encryption, which means that it doesn’t work with most commercial Blu-ray discs. You can use HandBrake to convert the video format you’ve ripped with another program, but HandBrake itself can only rip unencrypted Blu-rays. Platform: Windows, Mac, Linux.
https://medium.com/@vincent-cero1985/the-best-blu-ray-rippers-for-windows-and-mac-ff76cf86a818
['Vincent Cero']
2021-04-15 06:35:24.379000+00:00
['Technology', 'Dvd', 'Blu Ray', 'Efficiency', 'Blu Ray Reviews']
83
Understanding Kubernetes Multi-Container Pod Patterns
Ambassador The ambassador pattern derives its name from an ambassador, an envoy a country chooses to represent it and connect with the rest of the world. Similarly, from the Kubernetes perspective, an Ambassador pattern implements a proxy to the external world. Let me give you an example: if you build an application that needs to connect with a database server, the server configuration, etc., changes with the environment. Now, the official recommendation for handling this is to use ConfigMaps, but what if you have legacy code that already uses another way of connecting to the database? Maybe a properties file, or even worse, a hardcoded set of values. What if you want the application to simply talk to localhost and leave the rest to the admin? You can use the Ambassador pattern for these kinds of scenarios. So, what we can do is create another container that acts as a TCP proxy to the database, and the application can connect to the proxy via localhost. The sysadmin can then use ConfigMaps and Secrets with the proxy container to inject the correct connection and auth information. For the demo, we will use a simple NGINX config that acts as a TCP proxy to example.com. That should also work for databases and other back ends. If you look carefully at the manifest YAML, you will find there are three containers. The app-container-poller continuously calls http://localhost:81 and sends the content to /usr/share/nginx/html/index.html . The app-container-server runs nginx and listens on port 80 to handle external requests. Both these containers share a common mountPath . That is similar to the sidecar approach. There is an ambassador-container running within the pod that listens on localhost:81 and proxies the connection to example.com , so when we curl the app-container-server endpoint on port 80, we get a response from example.com . Let's have a look:
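Since the original manifest is not reproduced here, the following is a minimal sketch of what it could look like. The container names, ports, and shared path come from the description above; the images and the ConfigMap wiring for the proxy are assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ambassador-demo
spec:
  volumes:
    - name: shared-html
      emptyDir: {}              # shared between the poller and the server
  containers:
    - name: app-container-poller
      image: curlimages/curl    # assumed image
      command: ["/bin/sh", "-c"]
      args:
        - while true; do curl -s http://localhost:81 > /usr/share/nginx/html/index.html; sleep 5; done
      volumeMounts:
        - name: shared-html
          mountPath: /usr/share/nginx/html
    - name: app-container-server
      image: nginx              # serves the polled content on port 80
      ports:
        - containerPort: 80
      volumeMounts:
        - name: shared-html
          mountPath: /usr/share/nginx/html
    - name: ambassador-container
      image: nginx
      # An NGINX config (injected here via a hypothetical ConfigMap) would make
      # this container listen on localhost:81 and proxy traffic to example.com:80.
```

Because all three containers share the pod's network namespace, the poller reaches the ambassador simply via localhost:81, and only the ambassador needs to know the real upstream address.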
https://medium.com/better-programming/understanding-kubernetes-multi-container-pod-patterns-577f74690aee
['Gaurav Agarwal']
2020-11-20 10:32:17.522000+00:00
['Technology', 'Kubernetes', 'DevOps', 'Containers', 'Programming']
84
The Risks & Rewards of Emerging Technologies within Public Services
By Brandie Nonnecke, Director, CITRIS Policy Lab & Camille Crittenden, Executive Director, CITRIS and the Banatao Institute Investments in digital infrastructure in the public sector have lagged for years. The COVID-19 pandemic has torn back the curtain to reveal a dilapidated IT framework that undergirds many of the services that millions rely on for education, food, and public safety. Within the first three months of the pandemic, over 44 million Americans filed for unemployment, overwhelming current government software systems and public service workers. Now is the time to remediate patchy systems and strengthen the tools and platforms needed to meet the demand for public services likely to continue well into the future. The pandemic only highlights a long-standing need to improve public sector processes. With decades of rising workload demands, worker shortages, and budget constraints, many public sector institutions have been ramping up deployment of emerging technologies to support productivity. Machine learning-powered tools are increasingly used to support decision-making in classrooms and child welfare offices, chatbots can field common questions from the public and offer appropriate resources in law enforcement and food assistance programs, and robotic process automation (RPA) bots assist to streamline social service applications. While emerging technologies such as natural language processing, machine learning, and RPA promise to make the public sector more efficient, effective, and equitable, they also pose ethical challenges in implementation. Emerging technologies, especially AI-enabled tools, can present risks to the public by reinforcing biases, making costly errors, and creating privacy and security vulnerabilities from data collection and collation. For public sector workers, implementation of inefficient, inaccurate, or ineffective technologies can overburden and undermine their efforts. 
The public sector is at a pivotal moment in its digital transition. While the pandemic has acted as a catalyst, jumpstarting the rollout of emerging technologies in services, full integration into the sector is still in the early stages. The appropriate modernization of the sector requires proactive and thoughtful consideration of the benefits and risks of deployments and careful analysis of the effects of these early applications to inform appropriate technology and policy strategies. Doing so will better ensure that future applications maximize benefits and mitigate harms to the public sector workforce and the beneficiaries of its programs. The CITRIS Policy Lab, with support from Microsoft, has released a report investigating the effects of emerging technologies within three public service sectors: K-12 education, social welfare services, and law enforcement. The report explores implications of emerging technologies on efficiency, effectiveness, and equity in each sector and provides specific technology and policy recommendations for each. These analyses are used to formulate broad recommendations to guide adoption of emerging technologies in ways that mitigate harms and maximize benefits for workers and the public. Among the recommendations: implement frequent reviews to ensure technology deployments are adequately meeting the needs of the workforce and public; develop appropriate training mechanisms to equip workers with the technical skills necessary to use and evaluate the effects of new technology; and adjust procurement processes to confirm that gains in efficiency and effectiveness from implementation do not outweigh equity concerns. The public sector is rife with antiquated IT infrastructure in dire need of being updated. The COVID pandemic and related economic disaster have accelerated the need to implement better technology-powered solutions. 
Fortunately, innovative tools incorporating machine learning, virtual reality, and robotics are ready to be put into service in the sector. With appropriate consideration of their effect on the workforce and the public, emerging technologies can be leveraged to provide more efficient, effective, and equitable outcomes for public service professionals and the democracy they serve.
https://medium.com/citrispolicylab/the-risks-rewards-of-emerging-technologies-within-public-services-e56bcc72b845
['Citris Policy Lab']
2020-09-09 17:22:18.508000+00:00
['Artificial Intelligence', 'Emerging Technology', 'Future Of Work', 'Public Sector', 'Covid 19']
85
How Mobile Apps will Reinvent Healthcare Industry
With the advancements in the mobile app development sector, mobile apps are revamping all spheres of human life. They're changing the way we consume information, meet people, exchange ideas, and consume goods and services. The potential for maximising excellence in human health is still largely unexplored. Considering that there are more than 2 million apps for Android and iOS users, the genesis of mobile apps in healthcare is surely a big leap towards a healthier world. Wearable Technology: The boost in wearable technology and healthcare mobile apps over recent years has helped doctors base their treatment on more accurate diagnoses, rather than a confused rant of symptoms from an anxious patient. It allows doctors to connect better with their patients and helps patients play a more engaged role in their health. Enhanced Efficiency in Medical Procedures: Apps like Isabel Symptom Checker empower patients to gain more insight into their disease or symptoms, find potential causes of their disease, and speed up the consultation process. Other apps emerging with advancing innovation can help patients book appointments, connect to doctors in case of emergencies, set reminders for their pills, track their diagnosis, and coordinate follow-up measures to speed up their recovery process. This article was originally published here.
https://medium.com/appdexa/how-mobile-apps-will-reinvent-healthcare-industry-648adb4736e9
[]
2017-09-29 13:09:23.513000+00:00
['Mobile Application', 'Health Technology', 'Mobile', 'App Development', 'Healthcare']
86
Wyze Cam Login Help | +1 805–791–2114 | Wyze Cam Phone Number
By Lilyvictoria, Aug 23. Are you looking for any help regarding the Wyze Cam login? Because of their affordability and excellent capabilities, Wyze security cameras are among the best cameras on the market and are loved by their users. Wyze has security cameras available for both indoor and outdoor use. If you are a new Wyze user, you have to complete the Wyze camera setup to benefit from all of its features. In this guide, we are going to discuss everything about the Wyze login and its setup. If you need any more help, dial the Wyze Phone Number now. Points To Remember For Wyze Cam Login It is very important to complete the Wyze setup before you can do the Wyze Cam login. Let's go over some points that are necessary to know to complete the setup of your Wyze security camera. A good-speed internet connection. Install the camera at a spot from which it will capture a clear and maximum view. Don't aim the camera through transparent objects. Don't position the camera far from the router. Download the Wyze app to install and set up the camera successfully. If you have any questions, dial the Phone Number for Wyze Cam now. Steps To Do the Wyze Camera Setup In order to complete the process for the Wyze Cam login, install the camera by following the steps mentioned below. With proper safety and care, unpack the Wyze camera and remove all the accessories. It would be better to install the camera near the power source. Take the power adapter, connect one end to the power outlet, and the other to the security camera. As we have mentioned above, download the Wyze camera app on your tablet, smartphone, or even computer. If you have an Android device, download the app from Google's Play Store; if you are an Apple user, download it from the App Store. When you open the app, it will ask you to do the Wyze Cam login. Enter your credentials if you already have an account.
Otherwise, create one now using your email ID. Now, open the Wyze app and select the 'New Device' option. You will find this option on the home screen. From the options shown on the screen, choose your Wyze camera. Press the 'Setup' button at the bottom of the Wyze camera. When you press the button, you will hear 'Ready to connect'. Once you hear this sound, it means you have done the setup in the right manner. Is the Wyze Camera App Also Available For PC? Many Wyze users think that the Wyze app is only available for Android and iOS devices, but you can also download and install it on your PC. A number of sources are available from which you can download the app, but we suggest you download it from a trusted source only. Let us tell you one of the simplest ways to do it: first, download the app from the Google Play Store. Then, with the help of BlueStacks, you can easily access the app on your PC. Note: BlueStacks is one of the best pieces of software for accessing Android apps on a PC. Conclusion In order to complete the Wyze Cam login process, it is necessary to set up the camera first. In this guide, we have covered all the steps to complete the Wyze camera setup in a few easy and simple steps. If you still need any more help, dial the Wyze Customer Service Phone Number now. Our experts are waiting for your call. They will help you complete the login in a few easy steps.
https://medium.com/tech-gadgets-tips/wyze-cam-login-help-1-805-791-2114-wyze-cam-phone-number-fec704d25085
[]
2021-08-23 06:51:02.166000+00:00
['Technews', 'Services', 'Technology', 'Security Camera', 'Tech']
87
PySpark — An Effective ETL Tool?. We examine ETL and look at how PySpark…
What is ETL? ETL (which stands for Extract, Transform, and Load) is the generic process of extracting data from one or more systems and loading it into a data warehouse or database after performing some intermediate transformations. There are many ETL tools on the market that can carry out this process. A standard ETL tool like PySpark supports all basic data transformation features, such as sorting, mapping, and joins. PySpark's ability to rapidly process massive amounts of data is a key advantage. Some tools perform a complete ETL implementation, some help us create a custom ETL process from scratch, and a few fall somewhere in between. Before going into the details of PySpark, let's first understand some important features that an ETL tool should have.
https://medium.com/swlh/pyspark-an-effective-etl-tool-b41a9108d0eb
['Dlt Labs']
2020-09-23 21:05:30.305000+00:00
['Business', 'Programming', 'Data Science', 'Technology', 'Dltlabs']
88
Why You Shouldn’t Upgrade Your Camera
This was the last camera I shot weddings with, a Canon 5D Mark II. It must be really hard to be a camera company these days. Once or twice a year, each camera company must scramble to find some way to get its users to upgrade to a newer camera. And damn! Are cameras getting expensive! You're looking at spending $2,000 — $4,000 for a new DSLR, and that doesn't include a lens. You are being upsold on features like increased dynamic range, more autofocus points, and more megapixels. But you know what? Those features won't make you a better photographer, and you know it deep down. Unfortunately, the camera companies KNOW what makes you tick: the need for a new camera or gadget that FEELS like it will make all the difference in your art. Something that will finally allow you to do your best work. And that camera they sold you on last year? Garbage. It's holding you back. It's an insane game, but the only game the camera companies know how to play. Let me lay down some truths for you. 90% of the most important photos created to date were made with cameras far less advanced than the cheapest DSLR you could buy today. In fact, many of the best pictures ever made were shot on film, and often with cameras that had little to no electronics, not to mention autofocus. You are suffering from G.A.S. (gear acquisition syndrome), and you know it. I KNOW you know it. But it's just too easy to focus on gear rather than the more difficult work you need to do internally to make your best work. So, What Should You Do to Be a Better Photographer? Do the work. You have mental and emotional work to do, such as finding and refining the WHY behind your photography. WHY do you shoot what you shoot? WHY does it matter? When you are on your deathbed, could you tell me WHY you spent your precious time shooting what you did? You also have other skills to build, such as empathizing and connecting with strangers and models, so you can get a genuine expression and shoot what is under the surface.
This work often includes getting over your own insecurities and building your confidence in yourself as a legitimate artist. I believe your best photos come from synthesizing your knowledge of the world and pursuing subject matter that moves you in some way. Often these subjects are personal — something that you do, or something you wish to learn more about because it resonates with you. And you know what? This internal reflection and drive CAN’T be included with a camera, which leaves camera companies with selling you features you don’t need for new cameras to replace old cameras that were more than adequate to capture your best work. It’s a sleight of hand that’s been happening for ages, and deep down we all know this, even as we joke about G.A.S. on forums and Facebook posts. Instead of buying new gear, take that money and invest it in anything BUT a new camera, such as: Travel. Expand your horizons, step out of your daily loop, see with fresh eyes, pursue a passion project away from your hometown. You may have to travel far and wide to meet and photograph the subjects in your project. Education. Take a course or workshop with a photographer whose vision and approach inspires you. There is no way that you will not return a better photographer, as well as have new life experiences to draw from for future work. Set-building/wardrobe/prop-styling. Some of my favorite photographers spend 80% of their time building the scene for their subject and only 20% of their time shooting. Why? Because great ideas and visual concepts take a lot of personal investment. It’s not the camera that made the photo great; it’s what happens four inches behind the camera, in the frontal cortex of the photographer. And all the preparation that comes before that split second when you trip the shutter. Save your money. Yes! Crazy, right? 
But instead of spending $2,000+ in gear every year, put that money into an aggressive growth mutual fund, so that you are giving yourself the ultimate reward: the peace of mind and security to live life proactively rather than reactively. If you have financial security, you can take bigger risks in other parts of your life. Trust me. Collect art and photo books. Your best work comes from synthesizing information from your life experiences as well as the art you consume to feed your creative soul. I believe that if you don’t study what has already been done, you are doomed to repeat work that already exists at a very mediocre level. Your art will only ever rise to your level of taste, so study the greats that have come before you. Vincent Van Gogh copied famous impressionist paintings, stroke for stroke, to understand how his contemporaries painted. Once he understood impressionism, he could break the rules and find his voice. It’s no different for photographers. Looking purely at the number of photos that have mattered in human history proves this. There may be some groundbreaking innovation in camera design that makes your art better, but I’m not holding my breath. So this year, instead of upgrading your camera body yet again and feeling like you’re still not the artist you envisioned, try spending your money on one of the things I’ve listed above. If you’ve opted out of upgrading your camera body in place of something else, please leave a comment on how it changed your photography. If you think I’m full of it and don’t understand how vital increased dynamic range is for making a photo that matters, please comment too :)
https://medium.com/photo-dojo/why-you-shouldnt-upgrade-your-camera-de296679446e
['Kirk Mastin']
2019-03-26 20:32:23.328000+00:00
['Photography', 'Creativity', 'Photos', 'Cameras', 'Technology']
89
The 7 Biggest Technology Trends In 2020
Everyone Must Get Ready For Now We are amidst the 4th Industrial Revolution, and technology is evolving faster than ever. Companies and individuals that don't keep up with some of the major tech trends run the risk of being left behind. Understanding the key trends will allow people and businesses to prepare and grasp the opportunities. As a business and technology futurist, it is my job to look ahead and identify the most important trends. In this article, I share with you the seven most imminent trends everyone should get ready for in 2020. AI-as-a-service Artificial Intelligence (AI) is one of the most transformative tech evolutions of our times. As I highlighted in my book 'Artificial Intelligence in Practice', most companies have started to explore how they can use AI to improve the customer experience and to streamline their business operations. This will continue in 2020, and while people will increasingly become used to working alongside AIs, designing and deploying our own AI-based systems will remain an expensive proposition for most businesses. For this reason, many AI applications will continue to be delivered through providers of as-a-service platforms, which allow us to simply feed in our own data and pay for the algorithms or compute resources as we use them.
Currently, these platforms, provided by the likes of Amazon, Google, and Microsoft, tend to be somewhat broad in scope, with (often expensive) custom-engineering required to apply them to the specific tasks an organization may require.
During 2020, we will see wider adoption and a growing pool of providers that are likely to start offering more tailored applications and services for specific or specialized tasks. This will mean no company will have any excuses left not to use AI. 5G data networks The 5th generation of mobile internet connectivity is going to give us super-fast download and upload speeds as well as more stable connections. While 5G mobile data networks became available for the first time in 2019, they were mostly still expensive and limited to functioning in confined areas or major cities. 2020 is likely to be the year when 5G really starts to fly, with more affordable data plans as well as greatly improved coverage, meaning that everyone can join in the fun. Super-fast data networks will not only give us the ability to stream movies and music at higher quality when we're on the move. The greatly increased speeds mean that mobile networks will become more usable even than the wired networks running into our homes and businesses. Companies must consider the business implications of having super-fast and stable internet access anywhere. The increased bandwidth will enable machines, robots, and autonomous vehicles to collect and transfer more data than ever, leading to advances in the area of the Internet of Things (IoT) and smart machinery. Autonomous Driving While we still aren't at the stage where we can expect to routinely travel in, or even see, autonomous vehicles in 2020, they will undoubtedly continue to generate a significant amount of excitement.
Tesla chief Elon Musk has said he expects his company to create a truly “complete” autonomous vehicle by this year, and the number of vehicles capable of operating with a lesser degree of autonomy — such as automated braking and lane-changing — will become an increasingly common sight. In addition to this, other in-car systems not directly connected to driving, such as security and entertainment functions — will become increasingly automated and reliant on data capture and analytics. Google’s sister-company Waymo has just completed a trial of autonomous taxis in California, where it transported more than 6200 people in the first month. It won’t just be cars, of course — trucking and shipping are becoming more autonomous, and breakthroughs in this space are likely to continue to hit the headlines throughout 2020. With the maturing of autonomous driving technology, we will also increasingly hear about the measures that will be taken by regulators, legislators, and authorities. Changes to laws, existing infrastructure, and social attitudes are all likely to be required before autonomous driving becomes a practical reality for most of us. During 2020, it’s likely we will start to see the debate around autonomous driving spread outside of the tech world, as more and more people come round to the idea that the question is not “if,” but “when,” it will become a reality. Personalized and predictive medicine Technology is currently transforming healthcare at an unprecedented rate. Our ability to capture data from wearable devices such as smartwatches will give us the ability to increasingly predict and treat health issues in people even before they experience any symptoms. When it comes to treatment, we will see much more personalized approaches. This is also referred to as precision medicine which allows doctors to more precisely prescribe medicines and apply treatments, thanks to a data-driven understanding of how effective they are likely to be for a specific patient. 
Although not a new idea, thanks to recent breakthroughs in technology, especially in the fields of genomics and AI, it is giving us a greater understanding of how different people’s bodies are better or worse equipped to fight off specific diseases, as well as how they are likely to react to different types of medication or treatment. Throughout 2020 we will see new applications of predictive healthcare and the introduction of more personalized and effective treatments to ensure better outcomes for individual patients. Computer Vision In computer terms, “vision” involves systems that are able to identify items, places, objects or people from visual images — those collected by a camera or sensor. It’s this technology that allows your smartphone camera to recognize which part of the image it’s capturing is a face, and powers technology such as Google Image Search. As we move through 2020, we’re going to see computer vision equipped tools and technology rolled out for an ever-increasing number of uses. It’s fundamental to the way autonomous cars will “see” and navigate their way around danger. Production lines will employ computer vision cameras to watch for defective products or equipment failures, and security cameras will be able to alert us to anything out of the ordinary, without requiring 24/7 monitoring. Computer vision is also enabling face recognition, which we will hear a lot about in 2020. We have already seen how useful the technology is in controlling access to our smartphones in the case of Apple’s FaceID and how Dubai airport uses it to provide a smoother customer journey. However, as the use cases will grow in 2020, we will also have more debates about limiting the use of this technology because of its potential to erode privacy and enable ‘Big Brother’-like state control. Extended Reality Extended Reality (XR) is a catch-all term that covers several new and emerging technologies being used to create more immersive digital experiences. 
More specifically, it refers to virtual, augmented, and mixed reality. Virtual reality (VR) provides a fully digitally immersive experience where you enter a computer-generated world using headsets that blend out the real world. Augmented reality (AR) overlays digital objects onto the real world via smartphone screens or displays (think Snapchat filters). Mixed reality (MR) is an extension of AR, meaning users can interact with digital objects placed in the real world (think playing a holographic piano that you have placed into your room via an AR headset). These technologies have been around for a few years now but have largely been confined to the world of entertainment — with Oculus Rift and Vive headsets providing the current state-of-the-art in videogames, and smartphone features such as camera filters and Pokemon Go-style games providing the most visible examples of AR. From 2020 expect all of that to change, as businesses get to grips with the wealth of exciting possibilities offered by both current forms of XR. Virtual and augmented reality will become increasingly prevalent for training and simulation, as well as offering new ways to interact with customers. Blockchain Technology Blockchain is a technology trend that I have covered extensively this year, and yet you're still likely to get blank looks if you mention it in non-tech-savvy company. 2020 could finally be the year when that changes, though. Blockchain is essentially a digital ledger used to record transactions but secured due to its encrypted and decentralized nature. During 2019 some commentators began to argue that the technology was over-hyped and perhaps not as useful as first thought. However, continued investment by the likes of FedEx, IBM, Walmart and Mastercard during 2019 is likely to start to show real-world results, and if they manage to prove its case, could quickly lead to an increase in adoption by smaller players.
And if things go to plan, 2020 will also see the launch of Facebook’s own blockchain-based cryptocurrency, Libra, which is going to create quite a stir. If you would like to keep track of these technologies, simply follow me on YouTube, Twitter, LinkedIn, and Instagram, or head to my website for many more in-depth articles on these topics.
https://medium.com/@smangarawangi40/the-7-biggest-technology-trends-in-2020-fa4f0e0eac8c
[]
2020-12-19 20:35:10.084000+00:00
['Technology', 'Genius', 'People', 'Robots']
90
Committed to Success: Hai Hoang, Tech Lead at Planworth
Hai Hoang is a Commit engineer who joined Planworth, an early-stage wealth planning SaaS platform in 2019 as a technical lead. We sat down with him to hear about his journey. Tell us a bit about your background before joining Commit? I spent the first two-and-a-half years of my career at a large tech company, then did my own startup for around two years. It was an on-demand personal assistant app. We launched our product and got to market, but market conditions were bad at the time. Investments dried up and we ended up having to shut down. I went back to the first tech company I worked for, to figure out what I wanted to do next. Then I worked for a few startups, but nothing was really going anywhere. That’s when Commit came into the picture. I was one of the first engineers, part of the founding team. “You don’t really know how you get along with people until you work with them, right? To me, Commit offers a really good opportunity to get a feel for that before making a long term commitment” What drew you to Commit? The people. I had worked with some members of the Commit team at other startups. The people are really fun and they had senior engineers on board. I was attracted to that, because I knew I could learn a ton from them. It was a very good environment for me to start new projects all the time and learn from some of the best. It was clear that Commit’s goal was to minimize risk for engineers. We wanted to offer engineers the opportunity to meet with startup founders and assess what their product was and what their business strategy was before fully jumping in — that really appealed to me, because I had been with failed startups before. How did you get connected with Planworth? I actually had no intention of leaving Commit. I was there from the beginning, I was helping build the company — I thought there was no reason for me to leave. But then Planworth came around, and the product and the team got me very interested. 
I could see potential that I hadn’t seen with other projects. Plus, it’s in the fintech field, which I’ve always had an interest in but have never been able to dip my toes into. What attracted you to Planworth? I liked the fact that they recognized their product-market fit. That’s something many startups don’t have from the beginning. Most startups have an idea, then they build a product, then they go out and validate it with the market. Planworth built a rough proof-of-concept and got immediate validation. So by the time I joined, it was clear they had found a market, figured out a business model, and had a plan for earning revenue. It was amazing to see. Also, it was great that I had a three-month period working with the founders and the team, before I formally joined, so I really got to know them and their product. What has it been like so far? It’s been very, very good. They’ve given me the autonomy and authority to implement my vision of what I want a team to look like, which I’ve never had the opportunity to do. They gave me that trust. And not just from the management perspective — even the engineering team trusts me to make decisions. It’s a great career opportunity because previously I’ve been a team lead, where you’re running projects, but at Planworth I’m also managing people. How has your time with Commit helped you in this new role? I’ve definitely been able to transfer things I learned at Commit onto my work at Planworth. At Commit I learned about project management, especially figuring out ways to deliver work in a shorter time frame. A lot of the technical skill sets I picked up at Commit also set me up for success at Planworth, like devops tricks. I’m still learning a lot at Planworth. Maybe not as much on the engineering side, but on the management side and team lead side. I have Tarsem and James [Planworth’s co-founders] teaching me and coaching me. So I’m learning every day. How has it been working with two non-technical founders? 
It’s been good. Engineering is very different than what they’re used to, but they’re very open-minded and understanding about the process. That’s one of the characteristics I like about them. They care about scalability, and they care about having clean code and tests and stuff like that. Not all founders do. What would you tell or say to other engineers considering joining Commit? Go for it. And keep an open mind, because every single project is very different. You end up working with different teams and different people. You don’t really know how you get along with people until you work with them, right? To me, Commit offers a really good opportunity to get a feel for that. I think it’s a fantastic model. As a person who comes from the startup world, working on multiple failed startups, it really does mitigate the risk.
https://medium.com/commit-engineering/committed-to-success-hai-hoang-tech-lead-at-planworth-38f8442cebe9
['Beier Cai']
2020-06-14 21:48:45.193000+00:00
['Technology', 'Careers', 'Software Development', 'Startup', 'Entrepreneurship']
91
Mark Wiens is travelling safely with NordVPN — now you can too for a good price
Nathanel Nicols Sep 4, 2019 Some people aren’t capable of working a 9–5 office job. These are people with curious minds, who are dying of thirst if they’re not seeing the new places this world has to offer. One of these people is Mark Wiens, a person who has been travelling ever since he was a child. He has his own travel blog, where he documents his experiences in various exotic countries and tastes his way through local cuisines. In his travels, Mark uses NordVPN to protect his private data and avoid getting hacked on public Wi-Fi networks, and he is offering a big discount — now you can be secure too. How to get a NordVPN discount from Mark Wiens? Just click on the link below: Mark Wiens NordVPN 3-year subscription 70% discount After clicking on the link and entering your payment details, at the checkout you will notice that the discount code has been applied to your purchase; all that is required from you is to choose a payment method: What do you get from NordVPN? With Mark Wiens’ NordVPN 70% discount on a 3-year subscription, you become free and secure online. All Internet providers log their customers’ activities on the Internet. That is, the Internet provider knows which sites you visited. This is necessary so that, in the event of requests from the police, the provider can hand over all information about the violator, as well as relieve itself of all legal responsibility for the actions of the user. There are many situations where the user needs to protect their personal data on the Internet and gain freedom of communication. That’s where NordVPN comes in — once you’re connected to the VPN server, your public IP address changes and your internet traffic becomes encrypted.
If you’re travelling a lot and encounter geographical restrictions (for example, you’re in China and wish to visit Facebook or watch Netflix) — you can connect to one of the 5000+ servers that NordVPN has to offer and enjoy streaming your favorite shows. Mark Wiens isn’t the only YouTuber using NordVPN: Kilian Experience and Defunctland use NordVPN too. Who is Mark Wiens? Mark Wiens is a travel blogger who is very open about himself online; you can find his whole story on his official website. Originally from Arizona (United States), Mark was raised in Central Africa, spending his childhood moving from France to Congo to Kenya with his Christian missionary parents. He briefly returned to the US to get a degree in Global Studies. With the thirst to follow his passion for travel and food, he embarked on a one-way trip to Southeast Asia. He taught English for quite a few years in Thailand, after which he decided to indulge in travel writing and blogging full-time. He is well known for his round-the-world trips and exotic food reviews. His YouTube channel has over 1 million subscribers and his videos have amassed over 400 million views altogether. His mother is Chinese-American and his grandfather was a Chinese chef, to which he attributes his affinity for Asian foods. What else can you get from NordVPN? A few more reasons why Mark Wiens chose NordVPN:
https://medium.com/@nathanelnicols/mark-wiens-nordvpn-discount-offer-2f595bd7bb34
['Nathanel Nicols']
2019-09-30 12:58:31.569000+00:00
['Cybersecurity', 'YouTube', 'Privacy', 'Technology', 'VPN']
92
How We Can Help Stabilize The Cryptocurrency Market, With Eiland Glover, CEO of Kowala
I had the pleasure of interviewing Eiland Glover, CEO of Kowala, a blockchain company which has created a cryptocurrency robotically pegged to the U.S. dollar, maintaining its value to that of the USD. Improving upon “asset-backed” stablecoin companies, Kowala utilizes a mathematical algorithm to peg its coin — ensuring truthful (and truly decentralized) stability. Thank you so much for doing this with us! Can you tell us the story of how you got involved with the Regtech or Crypto markets? I’m an entrepreneur who has always been interested in how the intersection between financial technology and education can help individuals prosper in market economies. When my friend and long-term collaborator, software engineer John Reitano, created one of the first privacy-oriented cryptocurrencies in 2013, I argued that he was focusing on the wrong problem. Our fervent discussions about the significance of cryptocurrency led us to an important question: “Why is no one actually using their bitcoin?” Our main answer was “volatility”. We ultimately decided to create a cryptocurrency that retained the decentralization and trustlessness of Bitcoin, while remaining price stable. That’s a relatively simple concept, but a very intricate technical problem. Two and a half years later, we’re approaching the release of our solution to the marketplace. Can you share 5 ways that Regulation and Regtech can help stabilize the Crypto Economy? The crypto economy remains largely unregulated and, to date, has attracted plenty of gamblers and sometimes even degenerates. Our industry has not yet delivered many impactful products and services for average consumers — in essence, successful use-cases have so far been very limited to exchanges and ICOs. We want to see this change; we want to help foster mass-adoption of cryptocurrency. 
To get there, the only way to jumpstart the crypto economy is to build the key stabilization infrastructure, while providing people with a compelling reason to get started with crypto in the first place. What are the top concerns that crypto firms should be considering in order to have a competitive edge? How can a crypto firm have a competitive edge? Well, to date, the winning strategy has been to publish a white paper, build up community excitement about the project’s team, advisors, and partnerships, and then launch a token sale. Until recently, investors in these sales have not been overly concerned with logistics — mostly because their tokens are liquid and the market has been hot. Many have ridden wave after wave of positive sentiment and cashed out of positions, netting handsome profits at the end of the day. Of course, this is not sustainable — it’s a speculative bubble, and those are always painful when they pop. It will be interesting to see how many of the projects financed within the last 18 months actually come to fruition. At Kowala, we have chosen to focus on building an extremely intricate financial system that has properties and characteristics with massive real-world effects. It’s hard to overestimate the potential impact of our work. It’s a decentralized replacement for central banks, for goodness sake — a robotic network that can be owned by anyone and controlled by no one, one that can issue currency and control money supply completely in lieu of central oversight. And this robot is already largely built and coming to life. Soon, we will get to witness its birth and see what we’ve wrought — it’s simultaneously exciting and terrifying. Will our team’s focus on delivering world-changing tech give our company a competitive business edge? We certainly hope so, but I think it depends largely on the focus of the marketplace, as we go through the next phase of our development.
So far, in the volatile world of crypto mania, our focus hasn’t always resonated with the dominant sentiment. We hope that the mood of the industry will continue to shift our way — towards projects with a focus on delivering consequential technology to the market. Can you share examples of measures you take to prevent internal data breaches? Regarding preventing internal data breaches: that’s exactly the type of information we should not share publicly due to the increased likelihood of a breach, if we do spill our cybersecurity secrets. I can say that we’ve fended off a number of attacks already, and that we employ the highest tier of expert security advisors, as well as taking plentiful precautions. In general, I think social engineering hacks represent the greatest threat. So it’s wise to create systems with mechanisms that rely on more than individuals’ trustworthiness and good judgement. Can you share a story of a time when things went south for you? What kept you going and helped you to overcome those times? Startups, crypto markets, regulatory uncertainty, developing complex software systems — each of these circumstances comes with its own moments of things going south. What keeps us motivated through trying circumstances is our commitment to creating a new kind of money. We have already invented a number of important technologies that we feel confident will make a difference in our industry and world, and these have begun to be recognized by experts beyond our industry. Our team’s experiences overcoming obstacles provide us with the confidence that we can overcome the many more that are probably on the horizon. We believe we can create an elegant technology that will greatly expand human prosperity and freedom. Our goal — to create the next dominant form of money technology — is audacious and sometimes ridiculous. Even so, we are well on our way. What are some things that you do on a daily basis regardless of how busy you are? Exercise.
Hands down, the most important part of running a company is making sure you are also taking care of yourself. Exercise, eat well, hydrate, sleep — although the latter is something I frequently don’t get enough of. What are the top 3 upcoming conferences you are attending and are excited about? I will be speaking at a few upcoming conferences this fall, which I am very excited about. One I spoke at last month, Health Further, was very exciting. We discussed the future of health care (hence the name), and the impact of blockchain on this space. There were a lot of interesting ideas which came out of that conference and I am thrilled to be a part of this movement. At the Entrepreneurship Summit at W&L on September 21, I spoke on two panels, including “The FinTech Revolution”, discussing how FinTech is changing the landscape of the financial industry, and “Demystifying Blockchain”, simply explaining what blockchain is and how it will change so many parts of our lives. I will also be speaking at GoBlockCon in October 2018, joined by hundreds of blockchain enthusiasts. I am very excited to both discuss and hear about all of the new technologies being released. Can you please give us your favorite “Life Lesson Quote”? Can you share how that was relevant to you in your life? “We want to do what we feel called to do, to always do our best, and to always do what’s right — despite the many obstacles and struggles.” How can our readers follow you on social media? We have a multitude of social platforms, but Telegram is probably the best way to stay updated on everything Kowala. Following that, we post to Facebook, Twitter, Instagram, LinkedIn, and YouTube quite frequently! Thank you so much for this. This was very enlightening!
https://medium.com/authority-magazine/how-we-can-help-stabilize-the-cryptocurrency-market-with-eiland-glover-ceo-of-kowala-7e46cd91b752
['Yitzi Weiner']
2018-10-08 17:25:56.818000+00:00
['Finance', 'Technology', 'Bitcoin', 'Cryptocurrency']
93
Wondering how to save the planet? Start here.
The truth is, we have work to do. When evaluating the current condition of the world, it’s clear that we need to change our ways. The fact of the matter is, the global material footprint rose nearly 18 percent from 73 billion metric tons in 2010 to 85.9 billion metric tons in 2017, according to the UN. Certainly, there are global needs that require continuous fulfillment; food needs to be grown, products need to be made, transportation is necessary and much more. With that said, differences can be made in our approach to fulfilling these needs. As our ecological footprint continues to grow as a byproduct of the current methodologies in place, it not only offers an opportunity to improve these practices, but reveals a dire need to immediately pivot for the health of our planet. So, what can we do? As a part of the 2021 Call for Code Global Challenge, developers and problem solvers from around the world are invited to participate in the largest tech-for-good initiative of its kind, and tasked with building solutions to our world’s most pressing issues. Climate change is a serious and imminent problem that is only fueled by the mistreatment we, as people, inflict. Simultaneously, as people, we have a chance to commit to reversing the impacts of this mistreatment through technology. As it pertains to responsible production and green consumption, technology can help in many ways, from providing recommendations on energy efficiency to highlighting the carbon footprint of online purchases. Through Call for Code, you have the power to improve the future of what our world looks like. You have the chance to build a solution on leading, open source-powered technologies. The winning solution is awarded $200,000 and support from IBM and the Linux Foundation to further develop and deploy your solution. So, are you ready to build the future? Check out the responsible production and green consumption starter kit and get going.
If you liked the story, be sure to give it a clap and follow Call for Code Digest for more tech-for-good stories! Also, receive monthly updates on the Call for Code challenges, coding resources, meetups, and more, straight to your inbox!
https://medium.com/callforcode/wondering-how-to-save-the-planet-start-here-e391d289a6d1
['Call For Code']
2021-04-09 18:05:14.627000+00:00
['Code', 'Green Energy', 'Tech For Good', 'Technology', 'Developer']
94
TIFF (2021) Livestream: Tokyo Midtown Hibiya, Tokyo, Japan
❂ Artist Event : Tokyo International Film Festival ❂ Venue : Tokyo Midtown Hibiya, Tokyo, Japan ❂ Live Streaming Tokyo International Film Festival 2021 Conversation Series at Asia Lounge The Japan Foundation Asia Center & Tokyo International Film Festival Marking its second installment since 2020, this year’s Conversation Series will again be advised by the committee members led by filmmaker Kore-eda Hirokazu. Directors and actors from various countries and regions including Asia will gather at the Asia Lounge to engage in discussion with their Japanese counterparts. This year’s theme will be “Crossing Borders”. Guests will share their thoughts and sentiments about film and filmmaking in terms of efforts and attempts to transcend borders. The festival will strive to invite as many international guests as possible to Japan so that they can engage in physical conversation and interaction at the Asia Lounge. The sessions will be broadcast live from the festival venue in Tokyo Midtown Hibiya every day for eight days from October 31st to November 7th. Stay tuned!
https://medium.com/@b.i.m.sa.la.bi.mp.r.ok/tiff-2021-livestream-tokyo-midtown-hibiya-tokyo-japan-7fbd541c6d31
[]
2021-10-30 14:11:34.976000+00:00
['Festivals', 'Technology']
95
Best Wireless Phone Charger 2019 / 2020
Among all the wireless chargers we’ve reviewed and seen in the market, the Google Pixel Stand looks the cleanest and most robust. It charges other phones such as the iPhone at their top speed and comes equipped with an 18W USB PD power adapter. It’s made of polycarbonate and soft silicone with a base that grips any surface so that it won’t slide around on the nightstand. If you’re a fan of monochrome, then you’ll like how the white finish blends into any environment. You can prop your phone up on this stand in landscape or portrait view and it will charge, because there are two coils inside. For owners of a Pixel 3 or Pixel 3 XL, the Pixel Stand offers exclusive features. There’s a microprocessor inside that recognizes your Pixel phone, and you can configure the stand through your phone’s settings, ensuring it automatically goes into Do Not Disturb mode or acts as a digital photo frame when you place it on the stand. If you play music, the music controls and album art will appear on the screen (you can also use Google Assistant), and there’s a special Sunrise Alarm that uses light from your Pixel phone’s screen to wake you more gently than a traditional alarm. Cost: Charging speed: 10W Input port: USB-C Included in the box: Charging stand, 5-foot USB-C to USB-C cable, 18W USB PD Recently, Apple started selling two multi-device wireless chargers from Mophie. This comes after Apple cancelled its own multi-device wireless charger, the AirPower, earlier this year, citing difficulty in meeting its own high standards. The 3-in-1 wireless charger is set up to charge an iPhone, AirPods, and the Apple Watch simultaneously and provides 7.5W of power. It can support Qi-enabled devices but seems optimized for Apple products. The Dual Wireless Charging Pad, however, is different in that you can charge two devices on it, and they don’t have to be Apple products.
Given that Apple has only recently released this product, it is too soon to tell how it will be received by the public. Cost: Charging speed: 7.5W Input port: Micro-USB Included in the box: Charging pad, wall adapter Additionally, there are a number of other high-ranking wireless chargers to consider. You can click the links below to see their specs directly on Amazon. Good luck! Liked this article? Let us know your thoughts in the comments below! For more articles like this, check out our website, The Top Essentials
https://medium.com/@thetopessentials/best-wireless-phone-charger-2019-2020-b05c9b481ee8
['The Top Essentials']
2019-09-19 07:11:22.752000+00:00
['Wireless Charging', 'Technology', 'Charger', 'Wireless Charger', 'Phone Chargers']
96
The Privileged Have Entered Their Escape Pods
Now, pandemics don’t necessarily bring out our best instincts either. No matter how many mutual aid networks, school committees, food pantries, race protests, or fundraising efforts in which we participate, I feel as if many of those privileged enough to do so are still making a less public, internal calculation: How much are we allowed to use our wealth and our technologies to insulate ourselves and our families from the rest of the world? And, like a devil on our shoulder, our technology is telling us to go it alone. After all, it’s an iPad, not an usPad. The more advanced the tech, the more cocooned insularity it affords. “I finally caved and got the Oculus,” one of my best friends messaged me on Signal the other night. “Considering how little is available to do out in the real world, this is gonna be a game-changer.” Indeed, his hermetically sealed, Covid-19-inspired techno-paradise was now complete. Between VR, Amazon, FreshDirect, Netflix, and a sustainable income doing crypto trading, he was going to ride out the pandemic in style. Yet while VRporn.com is certainly a safer sexual strategy in the age of Covid-19 than meeting up with partners through Tinder, every choice to isolate and insulate has its correspondingly negative impact on others. The pool for my daughter wouldn’t have gotten here were it not for legions of Amazon workers behind the scenes, getting infected in warehouses or risking their health driving delivery trucks all summer. As with FreshDirect or Instacart, the externalized harm to people and places is kept out of sight. These apps are designed to be addictively fast and self-contained — push-button access to stuff that can be left at the front door without any human contact. The delivery people don’t even ring the bell; a photo of the package on the stoop automagically arrives in the inbox. Like with Thomas Jefferson’s ingenious dumbwaiter, there are no signs of the human labor that brought it. 
Many of us once swore off Amazon after learning of the way it evades taxes, engages in anti-competitive practices, or abuses labor. But here we are, reluctantly re-upping our Prime delivery memberships to get the cables, webcams, and Bluetooth headsets we need to attend the Zoom meetings that now constitute our own work. Others are reactivating their long-forgotten Facebook accounts to connect with friends, all sharing highly curated depictions of their newfound appreciation for nature, sunsets, and family. And as we do, many of us are lulled further into digital isolation — being rewarded the more we accept the logic of the fully wired home, cut off from the rest of the world. And so the New York Times is busy running photo spreads of wealthy families “retreating” to their summer homes — second residences worth well more than most of our primary ones — and stories about their successes working remotely from the beach or retrofitting extra bedrooms as offices. “It’s been great here,” one venture fund founder explained. “If I didn’t know there was absolute chaos in the world … I could do this forever.” But what if we don’t have to know about the chaos in the world? That’s the real promise of digital technology. We can choose which cable news, Twitter feeds, and YouTube channels to stream — the ones that acknowledge the virus and its impacts or the ones that don’t. We can choose to continue wrestling with the civic challenges of the moment, such as whether to send kids back to school full-time, hybrid, or remotely. Or — like some of the wealthiest people in my own town — we can form private “pods,” hire tutors, and offer our kids the kind of customized, elite education we could never justify otherwise. “Yes, we are in a pandemic,” one pod education provider explained to the Times. “But when it comes to education, we also feel some good may even come out of this.” I get it. 
And if I had younger children and could afford these things, I might even be tempted to avail myself of them. But all of these “solutions” favor those who have already accepted the promise of digital technology to provide what the real world has failed to do. Day traders, for instance, had already discovered the power of the internet to let them earn incomes safely from home using nothing but a laptop and some capital. Under the pandemic, more people are opening up online trading accounts than ever, hoping to participate in the video game version of the marketplace. Meanwhile, some of the world’s most successful social media posses are moving into luxurious “hype houses” in Los Angeles and Hawaii, where they can livestream their lifestyles, exercise routines, and sex advice — as well as the products of their sponsors — to their millions of followers. And maybe it’s these young social media enthusiasts, thriving more than ever under pandemic conditions, who most explicitly embody the original promise of digital technology to provide for our every need. I remember back around 1990, when psychedelics philosopher Timothy Leary first read Stewart Brand’s book The Media Lab, about the new digital technology center MIT had created in its architecture department. Leary devoured the book cover to cover over the course of one long day. Around sunset, just as he was finishing, he threw it across the living room in disgust. “Look at the index,” he said, “of all the names, less than 3% are women. That’ll tell you something.” He went on to explain his core problem with the Media Lab and the digital universe these technology pioneers were envisioning: “They want to recreate the womb.” As Leary the psychologist saw it, the boys building our digital future were developing technology to simulate the ideal woman — the one their mothers could never be.
Unlike their human mothers, a predictive algorithm could anticipate their every need in advance and deliver it directly, removing every trace of friction and longing. These guys would be able to float in their virtual bubbles — what the Media Lab called “artificial ecology” — and never have to face the messy, harsh reality demanded of people living in a real world with women and people of color and even those with differing views. For there’s the real rub with digital isolation — the problem those billionaires identified when we were gaming out their bunker strategies. The people and things we’d be leaving behind are still out there. And the more we ask them to service our bubbles, the more oppressed and angry they’re going to get. No, no matter how far Ray Kurzweil gets with his artificial intelligence project at Google, we cannot simply rise from the chrysalis of matter as pure consciousness. There’s no Dropbox plan that will let us upload body and soul to the cloud. We are still here on the ground, with the same people and on the same planet we are being encouraged to leave behind. There’s no escape from the others. Not that people aren’t trying. The ultimate digital escape fantasy would require some seriously perverse enforcement of privilege. Anything to prevent the unwashed masses — the folks working in the meat processing plants, Amazon warehouses, UPS trucks, or not at all — from violating the sacred bounds of our virtual amnionic sacs. Sure, we can replace the factory workers with robots and the delivery people with drones, but then they’ll have even less at stake in maintaining our digital retreats. I can’t help but see the dismantling of the Post Office as a last-ditch attempt to keep the majority from piercing the bubbles of digital privilege through something as simple as voting.
Climb to safety and then pull the ladder up after ourselves. No more voting, no more subsidized delivery of alternative journalism (that was the original constitutional purpose for a fully funded post office). So much the better for the algorithms streaming us the picture of the world we want to see, uncorrupted by imagery of what’s really happening out there. (And if it does come through, just swipe left, and the algorithms will know never to interrupt your dream state with such real news again.) No, of course we’ll never get there. Climate, poverty, disease, and famine don’t respect the “guardian boundary” play space defined by the Oculus VR’s user preferences. Just as the billionaires can never, ever truly leave humanity behind, none of us can climb back into the womb. When times are hard, sure, take what peace and comfort you can afford. Use whatever tech you can get your hands on to make your kid’s online education work a bit better. Enjoy the glut of streaming media left over from the heyday of the Netflix-Amazon-HBO wars. But don’t let this passing — yes, passing — crisis fool you into buying technology’s false promise of escaping from humanity to play video games alone in perpetuity. Our Covid-19 isolation is giving us a rare opportunity to see where this road takes us and to choose to use our technologies to take a very different one.
https://onezero.medium.com/the-privileged-have-entered-their-escape-pods-4706b4893af7
['Douglas Rushkoff']
2020-09-03 00:18:18.428000+00:00
['Society', 'Privilege', 'Digital', 'Technology', 'Future']
97
Learn AI with Free TPU Power — the ELI5 way
In this article, you’ll learn how to use Google Colab for training a CNN on the MNIST dataset using Google’s TPUs. Hold up, what’s a CNN? In a regular Neural Network, you recognize patterns from labelled data (“supervised learning”), with a structure made of inputs, outputs, and neurons in between. Some of these are connected, but deciding manually which ones are connected doesn’t work well for things like images, since the net doesn’t understand how the pixels are related. A CNN, or Convolutional Neural Network, connects some of the neurons to pixels that are close together, so it starts out with some knowledge of how the pixels are related. This is a very high level overview, but if you want to dig into the architecture, check out this guide. What about MNIST? MNIST is a dataset of handwritten digits, with a training set of 60,000 examples and a testing set of 10,000 examples. It’s often used in beginner problems. And what is Google Colab? Google Colaboratory started as an internal research tool for data science and was released to the public to help disseminate AI research and education. It offers free GPU and TPU usage (with limits, of course ;)). Lastly, what is a TPU? TPU stands for “Tensor Processing Unit”, an alternative to CPUs (Central Processing Units) and GPUs (Graphics Processing Units) that’s specially designed for calculating tensors. A tensor is essentially a multidimensional array (like those in NumPy). The relationships in a neural network can be easily described and processed as tensors, so a TPU is very fast for this kind of work. Training and Testing a Model on GPUs and TPUs Sign up for Google Colaboratory. Open up this pre-made GPU vs TPU notebook (credits). When you open it up, the TPU backend should be enabled (if not, check Runtime -> Change runtime type -> Hardware Accelerator -> TPU). Run a “cell” using Shift-Enter / Command-Enter.
Run all the cells in order, starting from the cell named “Download MNIST”. If a cell runs successfully, its empty [] brackets turn into [1], [2], [3], and so on. That’s it! You may run into challenges if you’re doing this long after I wrote the article, and cell [4] (training and testing) will take some time to run its 10 epochs. Conclusion Much of the barrier to entry to AI and data science used to be in the infrastructure, for instance getting the necessary compute to train large models. Nowadays, with tools like Google Colab, it really is as simple as opening and running a notebook in your browser, and not much different from using a Google Doc or spreadsheet. What Should I Do With This Information? Now you at least know how to run an AI model easily. If you want to practice on real-world challenges, head on over to bitgrit’s competition platform, with new competitions regularly added. This will train your skills and act as a means to build up your portfolio. This article was written by Frederik Bussler, CEO at bitgrit. Join our data scientist community or our Telegram for insights and opportunities in data science.
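To make the “neurons connected to nearby pixels” idea concrete, here is a tiny convolution written from scratch in plain Python. This is only a sketch: the 4x4 “image” and the edge-detecting kernel below are made up for illustration, and real CNN layers (e.g. in Keras) do this for you across many learned kernels at once.

```python
def conv2d(image, kernel):
    """Slide the kernel over the image, computing one weighted sum per
    position: each output value only "sees" a small patch of nearby pixels."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# Toy 4x4 "image": dark on the left half, bright on the right half.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
# A 2x2 vertical-edge detector: responds where brightness jumps left-to-right.
kernel = [
    [-1, 1],
    [-1, 1],
]
response = conv2d(image, kernel)
print(response)  # every output row is [0.0, 2.0, 0.0]
```

The middle column of the output fires strongly because that is exactly where the dark-to-bright edge sits; a CNN learns kernels like this one during training instead of having them hand-written.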
https://medium.com/bitgrit-data-science-publication/learn-ai-with-free-tpu-power-the-eli5-way-4e5484ea0d08
[]
2019-03-16 11:29:59.619000+00:00
['Machine Learning', 'Artificial Intelligence', 'Technology', 'Google', 'Data Science']
98
How I went from complete beginner to software developer — and how you can too
Two years ago, I was right where you are today. I wanted to become a professional programmer. But I had no idea how to make it happen. I had no college degree, no previous coding experience, and I sucked at math. And there was the nagging doubt: Can someone like me become a developer? Well, I made it happen. I have my dream job. I’m a software developer. I often get asked how I did it. Here are the three vital actions I took that helped me go from a complete beginner to a software developer. 1. Build your Roadmap The biggest mistake aspiring developers make is that they have no plan. No roadmap. When you have no plan, you feel lost. You take coding tutorials, maybe build a project or two. Then months pass. You think, will I ever become a developer? This is all so confusing. You have no idea what path to take. The solution? Build a roadmap - right now. Create a plan for exactly how you’ll become a developer. Your first step: Decide if you’re going to do a coding bootcamp or take online courses. For me, I decided not to attend a bootcamp. I created my curriculum and taught myself…everything. Because I was homeschooled growing up, I was comfortable learning on my own, so I decided to teach myself to code using various online courses ranging from freeCodeCamp to Udacity. This approach cost far less than a bootcamp, but it had a downside: I had no coding mentors or coding curriculum to follow. Learning from online resources means you pay nothing or very little, but as I discovered, you don’t have much support. And you will struggle on your own as I did. People are drawn to learning to code from online resources, as I was, but it is not always the best way. The low cost is a big benefit, but make sure you are able to learn well on your own and can hold yourself accountable - without a lot of mentorship or support. Bootcamps are expensive but they often come with much more support and accountability. Carefully decide which path is best for you.
If you do learn to code without a bootcamp, I suggest picking an affordable online program that has at least some mentorship and a curriculum to follow. Doing so will ensure you struggle less and get the feedback you need. Udacity’s nanodegrees and Treehouse’s techdegrees offer some mentorship and code reviews. If you decide to learn to code for free, freeCodeCamp’s curriculum is fantastic, and if you get involved in their community, you will excel. Once you’ve chosen your path, complete your roadmap by answering these questions: Do I want to become a full-stack, frontend or backend developer? Decide what you’ll focus on learning. Know what language and libraries you’ll need to learn. How many hours per week will I study, and when? Carve out the times of the week you’ll practice coding and never miss those study times. What date will I start applying for jobs? Set a deadline for when you’ll apply. What will I give up? It’s awesome to picture yourself working as a developer, but the road to get there means early mornings, weekends and late nights of hard work. Be realistic: Look at what you spend time on each week, and give one thing up. For me, I was not willing to give up time with my family, but I decided to give up hanging out with friends. On most Saturdays, instead of spending time with friends as I usually did, I stayed home and programmed. When building your roadmap, keep in mind: contrary to a lot of the marketing hype you’ve seen, there is no magical coding course, no magical program, no magical bootcamp that will ‘make you’ a developer. A lot of people ask me what online course I used to learn to code as if there’s one “golden ticket” that will turn you into a developer. There’s not. Only you can make yourself a developer.
Your grit and determination will get you there. But I also used a game-changing method to learn to code. What was it? 2. Train your focus. There are a million free coding courses available to everyone. If it’s so easy to access free coding courses, why is it so dang hard to learn how to code? Why is it so hard to become a developer? Because many of us do not have the vital skill needed to learn and master programming languages. This skill is called Deep Work, a term popularized by computer scientist Cal Newport. TL;DR: In order to learn hard things, you must focus intensely for long periods. That’s deep work. But most of us are actively killing off our ability to focus, and few people do deep work. Think about the last time you stood in a line. How much time elapsed before you felt compelled to grab your phone and check notifications? Or what about this article itself - have you switched to a new tab while reading? Checked your Twitter account? 😄 Today, it’s the norm to have the attention span of a goldfish. And this is why it is so hard for us to learn complex things like coding. Once I figured this out, I realized that if I committed to doing deep work, I could learn the hard things I needed to know to become a developer. When you sit down to code, set a timer for 90 minutes. For that entire time, focus on the app you’re building or the coding problem you’re trying to solve. Do not check your notifications. Do not open a new tab. When you find yourself daydreaming, quickly bring your attention back to coding. Train your focus like your future career depends on it - because it does. Without practicing deep work, I would not be a developer today. 3. Chase your curiosity. When most people set out to learn to code, they start a curriculum of things they are “supposed” to know. Then they get bored.
Just like in school, when you’re learning new things only because you’re supposed to learn them, but you don’t know why you need to learn them or why you even care. Losing interest is easy. To learn to code, find one thing about programming that’s fascinating to you. Find the thing that makes you curious enough to learn about it on a Saturday night - because you’ll need to do that at times. There’s a line from Alice In Wonderland that’s stuck with me: She had never before seen a rabbit with either a waistcoat-pocket, or a watch to take out of it, and, burning with curiosity, she ran after it. As I’ve worked with more senior developers in my career, I’ve realized: the best programmers don’t have to force themselves always to be learning more. They are always learning because, like Alice, they are burning with curiosity. Some try coding in one language and hate it, then pick up another language and love it. Make sure you try different programming languages and learn about different fields within programming to discover what fires up your curiosity. If you’ve tried learning to code several times from different angles and you still feel like you’re forcing yourself, then coding may not be for you. Contrary to the marketing material of most bootcamps, learning to code in three months and landing a $100K job offer right after is not the reality for most. Coding is not a get rich quick scheme. Don’t learn to code if you are bored by it, because you’ll miss out on finding what your real curiosity in life is. However, if you are interested in tech but not coding, there are many other incredible and in-demand skills you can learn: design, data analytics, and more. If you have a curiosity about programming, chase it. The more you go after your curiosity, the more of it you have. And while you’re chasing your curiosity, don’t worry about where you are coming from. Don’t worry about your lack of a CS degree or what’s behind you.
Regardless of your age, lack of a degree or previous experience, if you love to code, practice deep work and make learning a priority in your life you can become a professional developer. Even if you’re a complete beginner. Start now. If you enjoyed this story, please hold down the 👏 button! To keep in touch with me, sign up for my newsletter where I share tips on learning how to code.
https://medium.com/free-code-camp/how-i-went-from-complete-beginner-to-software-developer-and-how-you-can-too-dd36ed08e11b
['Madison Kanna']
2019-06-08 23:31:20.809000+00:00
['Technology', 'Women In Tech', 'Programming', 'Computer Science', 'Coding']
99
CI/CD for Android and iOS Apps on AWS
Mobile apps have taken center stage at Foxintelligence. After implementing CI/CD workflows for Dockerized Microservices, Serverless Functions and Machine Learning models, we needed to automate the release process of our mobile application — Cleanfox — to continuously deliver the features we are working on and ensure a high-quality app. While the CI/CD concepts remain the same, the practicalities are somewhat different. In this post, I will walk you through how we achieved that, including the lessons we learned along the way, to drastically boost your Android and iOS application development. Cleanfox — Clean up your inbox in an easy manner The Jenkins cluster (figure below) consists of a dedicated Jenkins master with a couple of slave nodes inside an autoscaling group. However, iOS apps can be built only on a macOS machine. We use an otherwise idle Mac mini located in the office, devoted to these tasks. We have configured the Mac mini to establish a VPN connection (at system startup) to the OpenVPN server deployed on the target VPC. We set up an SSH tunnel to the Mac node using dynamic port forwarding. Once the tunnel is active, you can add the machine to the Jenkins set of worker nodes: This guide assumes you have a fresh install of the latest stable version of Xcode along with Fastlane. Once we had a good part of this done, we used Fastlane to automate the deployment process. This tool offers a set of scripts written in Ruby to handle tedious tasks such as code signing, managing certificates and releasing the IPA to the App Store for end users. Also, we created a Jenkinsfile, which defines a set of steps (each step calls a certain action — a lane — defined in the Fastfile) that will be executed on Jenkins based on the branch name (GitFlow model): The pipeline is divided into 5 stages: Checkout: clone the GitHub repository.
Quality & Unit Tests: check whether our code is well formatted and follows Swift best practices, and run unit tests. Build: build and sign the app. Push: store the deployment package (.ipa file) in an S3 bucket. UI Test: launch UI tests on Firebase Test Lab across a wide variety of devices and device configurations. If a build on the CI passes, a Slack notification will be sent (a broken build will notify developers to investigate immediately). Note the usage of the git commit ID as the name for the deployment package, giving each release a meaningful and significant name and making it possible to roll back to a specific commit if things go wrong. Once the pipeline is triggered, a new build should be created as follows: At the end, Jenkins will launch UI tests based on the XCTest framework on Firebase Test Lab across multiple virtual and physical devices and different screen sizes. We gave AWS Device Farm a try, but we had two requirements at the same time: receiving test results after only a short wait, without paying too much. Test Lab exercises your app on devices installed and running in a Google data center. After your tests finish, you can see the results, including logs, videos and screenshots, in the Firebase console. You can enhance the workflow to automate taking screenshots through the fastlane snapshot command and save hours of valuable time you’d otherwise burn taking screenshots by hand. To upload the screenshots, metadata and the IPA file to iTunes Connect, you can use the deliver command, which is already installed and initialized as part of fastlane. The Android CI/CD workflow is quite straightforward, as it needs only the JDK environment with the Android SDK preinstalled; we run the CI on a Jenkins slave deployed on an EC2 Spot instance.
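Returning to the commit-ID naming used in the Push stage: a minimal sketch of the idea is below. The key layout, app name and branch values are assumptions for illustration, not the exact scheme from our Jenkinsfile, and the actual upload would go through the AWS CLI or an S3 plugin rather than this snippet.

```python
import subprocess

def short_commit():
    """Ask git for the abbreviated hash of HEAD (must run inside the repo)."""
    return subprocess.check_output(
        ["git", "rev-parse", "--short", "HEAD"], text=True
    ).strip()

def artifact_key(app, branch, commit):
    """S3 key for the deployment package, named after the commit that built
    it, so any release can be traced (and rolled back) to an exact commit."""
    return f"{app}/{branch}/{commit}.ipa"

# Hypothetical example values:
print(artifact_key("cleanfox", "release", "3fa9c21"))  # cleanfox/release/3fa9c21.ipa
```

In the real pipeline, `short_commit()` would supply the last argument, so every artifact in the bucket maps one-to-one to a commit in the repository.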
The pipeline could be drawn up as the following steps: Check out the working branch from a remote repository. Run the code through lint to find poorly structured code that might impact reliability and efficiency, and make the code harder to maintain. The linter produces XML files which will be parsed by the Android Lint Plugin. Launch Unit Tests. The JUnit plugin provides a publisher that consumes the generated XML test reports and provides some graphical visualization of the historical test results as well as a web UI for viewing test reports, tracking failures, and so on. Build a debug or release APK based on the current Git branch name. Upload the artifact to an S3 bucket. Similarly, after the instrumentation tests have finished running, the Firebase web UI will then display the results of each test — in addition to information such as a video recording of the test run, the full Logcat, and screenshots taken: To bring down testing time (and reduce the cost), we are testing Flank to split the test suite into multiple parts and execute them in parallel across multiple devices. Our Continuous Integration workflow is sailing now. So far we’ve found that this process strikes the right balance. It automates the repetitive aspects and provides protection but is still lightweight and flexible. The last piece we want is the ability to ship at any time. We have an additional stage to upload the iOS artifact to TestFlight for distribution to our awesome beta testers. Like what you’re reading? Check out my book and learn how to build, secure, deploy and manage production-ready Serverless applications in Golang with AWS Lambda. Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy. We’re not sharing this just to make noise. We’re sharing this because we’re looking for people who want to help us solve some of these problems.
There’s only so much insight we can fit into a job advert, so we hope this has given a bit more and whetted your appetite. If you’re keeping an open mind about a new role or just want a chat — get in touch or apply — we’d love to hear from you!
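As a footnote on the Flank sharding mentioned above, the general technique can be sketched in a few lines. This round-robin split is an illustration of the idea, not Flank's actual algorithm (Flank can also balance shards by historical test duration), and the test class names are made up.

```python
def shard_tests(tests, num_shards):
    """Split a test suite round-robin into groups that can run in parallel,
    one group per device, cutting wall-clock testing time."""
    shards = [[] for _ in range(num_shards)]
    for i, test in enumerate(tests):
        shards[i % num_shards].append(test)
    return shards

# Hypothetical instrumentation test classes, spread across 3 devices:
suite = [f"com.cleanfox.SuiteTest{i}" for i in range(7)]
for device, shard in enumerate(shard_tests(suite, 3)):
    print(f"device {device}: {shard}")
```

With 7 tests on 3 devices, no device runs more than 3 tests, so the suite finishes in roughly a third of the serial time (ignoring per-device startup overhead).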
https://medium.com/foxintelligence-inside/ci-cd-for-android-and-ios-apps-on-aws-79695520fde4
['Mohamed Labouardy']
2019-04-16 08:07:43.025000+00:00
['Android', 'AWS', 'Technology', 'DevOps', 'iOS']