Microsoft beefs up Word, Excel, and Outlook with machine learning | VentureBeat
https://venturebeat.com/2019/11/04/microsoft-office-365-productivity-ignite-2019
Microsoft’s Ignite conference starts today in Orlando, Florida, where the company is expected to announce updates across its product portfolio. More than a few were revealed this morning on the Microsoft 365 side of the business, which encompasses not only Office 365 products like Word, PowerPoint, Excel, and Outlook but also Yammer, OneNote, and OneDrive.

Word and Excel

Word

A preview of Ideas in Word for the web is rolling out for Office 365 commercial users. It’s an AI-powered proofreader that taps natural language processing and machine learning to deliver intelligent, contextually aware suggestions that could improve a document’s readability. For instance, Ideas in Word will recommend ways to make phrases more concise, clear, and inclusive.
And when Ideas in Word comes across a particularly tricky snippet, it will put forward synonyms and alternative phrasings, like “society” as a substitute for “society as a whole.” With each suggestion, Ideas in Word will provide justifications and explanations, such as why “then” should be used in place of “than” in a specific context. It’ll also offer estimated reading times and extract and highlight key points in paragraphs, as well as underlining potentially sensitive geopolitical references and decoding acronyms.

Excel

Office scripting, a new process automation feature that allows actions to be recorded inside a workbook and saved to a script, will arrive in Excel by the end of the year. Users will be able to integrate saved scripts with Microsoft’s Power Automate and schedule them to run automatically as part of a larger flow.

Additionally, Excel users who’ve opted into the Office Insiders program will soon begin to see a new natural language query function that will allow them to ask questions of data and get answers without having to write formulas. Once they enter a question in the query box at the top of the Excel pane, the function will provide answers supported by formulas, charts, or pivot tables. A separate Sheet View component, which is generally available, allows users to sort and filter data before saving the results as a separate view. Saved views can then be sorted, filtered, and perused by other users in the document in real time on Excel for the web.

Above: Performing natural language queries in Excel.

Lastly, a new XLOOKUP search function launches today. It’s the successor to Excel’s VLOOKUP and HLOOKUP functions, and it overcomes the limitations of both by introducing the ability to look up values to the left and handle column insertions and deletions.
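VLOOKUP can only search the leftmost column of a range and return values to its right; XLOOKUP drops that restriction by taking the lookup column and the return column as separate arrays. A minimal Python sketch of that semantics (the data here is illustrative, not Microsoft’s implementation):

```python
def xlookup(lookup_value, lookup_array, return_array, if_not_found=None):
    """Return the element of return_array at the position where
    lookup_value first appears in lookup_array (exact match)."""
    for i, value in enumerate(lookup_array):
        if value == lookup_value:
            return return_array[i]
    return if_not_found

ids = ["A-101", "A-102", "A-103"]
names = ["Widget", "Gadget", "Sprocket"]

# Because the two columns are passed separately, the returned column
# may sit to the *left* of the searched one, which VLOOKUP cannot do,
# and inserting or deleting columns doesn't silently shift the result.
print(xlookup("Gadget", names, ids))  # -> A-102
```

In Excel itself the equivalent call is =XLOOKUP(lookup_value, lookup_array, return_array, [if_not_found], …), with the same positional logic.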
Email

Outlook

Starting today on iOS, and on Android in spring 2020, Outlook users can “play back” emails using the new Cortana-powered Play My Emails feature, which intelligently reads out email and shares agenda changes on demand (in the U.S.). Voice commands via a Bluetooth audio device enable message replies and basic navigation.

On both iOS and Android, a new email notification experience encrypts data on a handset’s lock screen until that handset is unlocked. It’s joined by Up Next, a feature that surfaces the next events in attached calendars at the top of inboxes. There’s also Meeting Insights, which suggests files and emails potentially relevant to upcoming meetings, and inferred locations, which recommends locations for recurring calendar events. Those are in addition to a revamped search experience that displays multiple panes on tablets and other devices with increased screen real estate.

Above: The Cortana-powered Play My Emails feature.

By 2020, users of Outlook on the web will be able to add personal calendars and respond to emails using AI-generated suggested replies. And on Windows, a reimagined Coming Soon option will allow them to explore updates and preview major changes, such as an enhanced search tool that shows suggested emails as a query is typed.

PowerShell for Exchange and transport

A new set of cmdlets, lightweight PowerShell commands that each perform a single function, will soon be available for public preview in PowerShell for Exchange. They’re built on REST APIs, and Microsoft claims they’re significantly faster than their predecessors. In a related development, a number of Office 365 email transport improvements will begin rolling out in general availability starting this week. Message Recall will enable the recalling of emails already sent, while proxy address sending will let users specify mail to be sent displaying an additional alias.
Lastly, IT admins will get the ability to customize receipt limits at a user level.

Office mobile

A new mobile Office experience, Microsoft Office Beta, launches in public preview today for Android and iOS. It unifies Word, Excel, and PowerPoint into a single app, with an Actions pane that contains shortcuts to common tasks like converting images to text, creating and signing PDFs, and sharing files among devices. The new app also lets users create quick notes using Sticky Notes, and it facilitates the sharing of files between phones and PCs via QR code and otherwise. Microsoft Office Beta also builds in PowerPoint presentation creation tools, and Microsoft says it will add its AI-powered Designer feature, which offers up suggested design templates, in coming releases. On the Excel side, a new tile view shows individual columns in a single, editable tile.

Above: Microsoft Office Mobile Beta.

“We all want to be able to work on the go from mobile devices, and we’re always looking to simplify and improve the experience. Now you no longer need to download each app separately and will have everything you need to be productive on the go,” wrote corporate vice president for Microsoft 365 Jared Spataro in a blog post. “You can snap a picture of a document and turn it into an editable word file, for instance, or transform tables from a printed page into an Excel spreadsheet.”

SharePoint and OneDrive

As early as 2020, Microsoft will roll out a web-based self-service capability for moving content from third-party cloud storage providers into SharePoint and OneDrive. In addition, the company anticipates the launch of a new SharePoint Migration orchestration center with a unified admin console in early 2020. In other SharePoint and OneDrive news, both services will soon (in December) support file sizes larger than 50GB, coinciding with an increase in the SharePoint site collection limit from 500,000 site collections per tenant to 2 million.
Those changes will arrive alongside the differential sync capability previewed earlier in the year, which will allow users to sync only the updated parts of large files instead of the entire file. Last but not least, a forthcoming Request Files feature will make it easier to collect files from multiple people without showing them each other’s submissions.

On the subject of SharePoint lists, admins will soon be able to create card layouts containing content, links, and images using a no-code inline card designer. A handy native SharePoint form configuration will spotlight data by adjusting which information appears in a SharePoint list. And SharePoint Home Sites, which are rolling out now, will let users design branded landing sites that pull in news, events, content, conversations, and videos (optionally in multiple languages).

Yammer

Yammer has been redesigned from the ground up with the Fluent Design System, Microsoft says, with “dozens” of new capabilities. This latest release is designed to work with a range of devices and to integrate deeply with Microsoft Teams, SharePoint, and Outlook. To this end, Yammer lets Outlook users read, like, and reply to conversations from their inboxes. For community members, there’s added modern styling, conversation filters, new editing experiences, and the ability to favorite a community, plus a new event page experience to make it easy to discover and engage with live events. And the mobile apps for iOS and Android have been rebuilt with new styling, sizing, and iconography. The new Yammer features a personalized conversation feed designed to connect people with conversations across their organization. Each community can be given a unique identity with branding and cover photos, and moderators can highlight conversations with pinned posts and close discussions to prevent replies.
Furthermore, leaders can broadcast live and on-demand events with a production option using webcams and desktop sharing, or share experiences and messages with short videos posted directly from the Yammer mobile app.

Stream

Microsoft is supercharging Stream, its corporate video-sharing service, with new editing and AI-driven translation features. Starting in preview this week on the web and the Stream mobile app, users can customize recordings with emojis, images, and text and view the finished product in apps created with Microsoft’s Power Platform. Plus, they’re able to take advantage of the six new languages supported in Stream for automatic transcription: Chinese (simplified), French, German, Italian, Japanese, and Portuguese (Brazilian). Early 2020 will see the rollout of a new Voice Enhance capability, which will leverage machine learning to eliminate background noise and sharpen the speaker’s voice in noisy environments.

Whiteboard and Visio

Microsoft’s cross-platform digital canvas app — Whiteboard — is now generally available for the web and has gained five templates in public preview that target scenarios like brainstorming, sprint planning, and product planning. They’re selectable from the templates gallery, which can be accessed by tapping the Insert button in the toolbar. As for Visio, it now supports co-editing on the web, enabling users to collaborate across Teams and Microsoft 365 in real time. The new Unified Modeling Language 2.5 diagrams, which are also generally available for Visio for the web, let users begin with blank templates or modify starter diagrams.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
© 2023 VentureBeat. All rights reserved.
ProBeat: The rise and inevitable fall of Microsoft Teams and Slack | VentureBeat
https://venturebeat.com/2020/03/20/probeat-the-rise-and-inevitable-fall-of-microsoft-teams-and-slack
Microsoft yesterday revealed new daily active user (DAU) figures for Microsoft Teams. First, the company said that Microsoft Teams had passed 32 million DAUs, a 60% jump in four months. That was comparable to the last milestone: When Microsoft Teams passed 20 million DAUs, that was an almost 54% jump in four months. But then Microsoft added that a week later, the figure had surged to 44 million DAUs, meaning the roughly four-month increase was closer to 110%. Microsoft directly attributed this surge to global demand for remote work due to COVID-19. Yesterday we reached out to Slack to see if the company had an update on its DAU figure. After all, the last number was from October (12 million DAUs), and the service was likely seeing a similar spike.
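The growth figures above are straightforward to check. A quick sketch of the arithmetic; note that against the 20 million milestone, the week-later figure of 44 million works out to about 120%, slightly above the article’s “closer to 110%” (the exact base Microsoft used isn’t stated):

```python
def growth_pct(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

print(growth_pct(20_000_000, 32_000_000))  # 60% jump over four months
print(growth_pct(20_000_000, 44_000_000))  # the week-later surge, roughly 120%
```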
Slack declined to talk about DAUs, but a spokesperson pointed us to an SEC filing from the same day. Slack had added approximately 7,000 new paid customers from February 1 to March 18, 2020. That represented “a ~40% increase over each of our previous two fiscal quarters, when we added approximately 5,000 new paid customers per quarter.” If the pacing keeps up, Slack could add 180% more paid customers this quarter than it usually does. The filing didn’t mention COVID-19, but the message was clear: The same surge in global demand was responsible.

None of this should be surprising. Executives are frantically trying to adjust their businesses to weather the storm. IT departments are rolling out new tools and educating employees about existing tools. The COVID-19 pandemic has forced people to rethink what work to prioritize, not just how they should be executing it. Products and services that facilitate connecting people between companies, not just within a company, are inevitably seeing a boom in usage. The question on everyone’s mind: Where will this surge settle?

Slack won’t say, Microsoft is optimistic

That’s exactly what we asked Slack and Microsoft. Slack did not respond to a request for comment by publishing time, but it’s easy to see what the company is hoping. Every SaaS company on the planet tracks its unsubscribe rate. Slack will be working incredibly hard not just to woo more users and sign new business customers, but to ensure the unsubscribe rate doesn’t jump once the current COVID-19 pandemic has concluded. More paid customers with the same unsubscribe rate means net new customers in the long run.

Microsoft is thinking the same. “This moment in history is a turning point for how we will work and learn,” Jared Spataro, Microsoft 365 corporate vice president, told VentureBeat. “Teams lets you do more, all in one place.
And that remains true whether you’re working from home or your office. Once people start using Teams, it tends to become part of how they work.”

I’m not so sure. Don’t get me wrong — I’m a big believer in the notion that the future of work is remote. The tools are only getting better, and I expect that one day remote work will be the norm. But this is a sugar high — an artificial stimulus.

The only options

Last week, I shared a few work-from-home tips I have picked up over the years. My colleague Seth Colaner pointed out that I didn’t list any specific tools. That was by design — I told him I didn’t want to open up that can of worms. But the more I thought about it, the more I realized that in many cases, workers don’t have control over the tools they use. They are told which email client, collaboration app, and video conferencing service to use.

The important thing to remember here is that these new users and new customers are showing up because they have to. They aren’t showing up because they want to. Their government, their employer, or simply their own good judgment is telling them not to go into the office. The remote working tool experience is forced. Even for those who aren’t working remotely, the person they normally would have met in person probably is. Again, that’s more usage for Slack, Microsoft Teams, Facebook’s Workplace, Google Hangouts, Zoom, Cisco Webex, GoToMeeting, BlueJeans, and all the rest.

There’s plenty to like about each of these solutions, but the surge SaaS companies are seeing can ultimately be attributed to a new problem. If the problem goes away, there will be a period of adjustment. Executives, IT departments, and employees will once again adjust what work to prioritize, and what the best options are to execute that work. Some will continue to use these tools after the COVID-19 pandemic. Others won’t. They’re on the rise now, but a fall is inevitable. It’s merely a question of how steep that fall will be.
Will Microsoft Teams go back to seeing ~60% growth in DAUs every four months? Will Slack go back to adding 5,000 paid customers every quarter? When will that be? There’s a big risk of saturation here. When this is over, everyone who could have tried Microsoft Teams and Slack may already have been forced to adopt them. An unprecedented rise means an unprecedented fall. All these remote tools would be wise to set up some cushions. What the cushions look like will be the other half of the battle — we have no idea what the workforce will look like when this is over, nor do we know when it will be over.

ProBeat is a column in which Emil rants about whatever crosses him that week.
ProBeat: How coronavirus will change tech events forever | VentureBeat
https://venturebeat.com/2020/03/27/probeat-how-coronavirus-will-change-tech-events-forever
I could have written this column a month ago, on February 27, when Facebook canceled its F8 developers conference scheduled for May. While I was writing, O’Reilly got out of the event business permanently. I could have also waited another month, and I’d have even more material. We have no idea how long the coronavirus pandemic will last, and even less of a gauge on how long we’ll be feeling its impacts thereafter. Here at VentureBeat, we host at least two major events per year. We’ve shifted GamesBeat Summit 2020 (April 28 and April 29) to completely digital, and we’ll be clarifying our plans for our AI conference Transform 2020 (July 15 and July 16) shortly. It would be foolish for me to tell you that COVID-19 will only impact this year’s events.
It would be equally foolish for me to tell you what each event will look like in 2021. But here is what you — the tech event host, the business executive, the startup founder, the developer, or simply the casual event attendee — should think about. Going forward, tech events will never be the same. Some will cease to exist, some will change drastically, and, I think, some will stay as they were, but better.

Should I host a tech event?

Companies put on tech events for a variety of reasons. For some, it’s purely a marketing play. For others, it’s their whole business model. And of course, there’s a bunch in between. Those that are simply promoting themselves, their products, and their services will naturally compare the ROI of their events for 2020 versus other years. Whether they cancel their event or put on an online-only version, they will judge that against putting on a physical event. That’s a very hard comparison to make, given that one can’t discount market conditions. The coronavirus isn’t only resulting in health-based decisions, but economic-based ones too. It would not surprise me if some companies choose not to host tech events anymore, or to permanently shift online. But each company will decide what makes sense for it — we won’t see all companies that put on tech events about themselves conclude the same thing.

For those that put on tech events for others to mingle, do business, and learn, it’s going to be an even harder equation. Because events are the main piece of the pie, these companies will be hit hardest. They will compare the ROI for 2020 versus other years as well, and again, they’ll have to account for market conditions, but it will be more difficult to determine which way to move forward. The issues, holes, and limits of online-only events will be exposed, for better and for worse. Some will choose online-only, while others will deem 2020 an off-year and never look back.
I think most interesting will be those that experiment with mixing online and in-person components.

Should I attend a tech event?

Just as tech event hosts are reevaluating their options, attendees will be reevaluating whether they should attend at all. Based on 2020, some will make blanket decisions on whether they simply attend everything they usually do or skip events altogether. Most attendees, however, will take it on a case-by-case basis. This is assuming, of course, that the decision isn’t made for them. The next biggest factor is what type of attendee you are. If some portion of marketing your business comes from attending tech events, you’ll likely look at 2020 and think through what worked and what didn’t. You’ll also consider that the 2020 market conditions were an anomaly. It can be expensive to attend an event. Can you still get the deal done if you don’t show up in person? Will you get fewer deals done? This year, mid-tier and small-tier companies will be most impacted, because in-person events are often how they get attention. They will also be most willing to try new ways to stand out, which may include online-only events or other forms of marketing entirely. Broadly speaking, any offline activity that can be done online will be done online. If that activity can be done more successfully online, where success is measured by ROI, many will consider not doing it offline anymore.

Then there’s the question of networking. Executives often seal deals at events, but there’s also plenty of relationship-building that doesn’t immediately result in a signed contract. Those interactions are a lot harder to quantify. Some may realize that they can skip events, or at least certain events. Others will conclude that they cannot skip any. But the biggest learning will be whether online-only works for them or not. All of this will result in the type of attendee changing for many events.
Age, time, and location

It’s possible, for example, that younger attendees will be more willing to embrace online-only tech events, since they grew up on the internet. Overall, though, I’m doubtful the desire to meet in person is going to disappear. Does the value of face-to-face meetings vary with age? Are such meetings necessary to build trust, or are they not everything they’re cracked up to be? Like the other factors above, I strongly believe this will vary from person to person, event to event.

One more thing is worth highlighting in all this disruption. When it is deemed safe to attend tech events again, there will be a conflicting sense of caution and urgency. Just as the various bans were rolled out gradually, the changes won’t happen overnight. There won’t be a specific date to point to and say, “This is the day when health recommendations, travel restrictions, and attendee sentiment across the world turned green again.” A significant amount of logistics has to work in tandem for people to show up at an event, especially if travel is involved. On the flip side, there will be pent-up demand to attend events, especially from those who have been missing them. Add the fact that more events are packed into a smaller timeframe and you have a pipeline issue. Yes, many events were canceled or shifted to online-only, but others were delayed to the second half of the year. Event organizers and attendees will have to balance these two forces pulling in opposite directions.

Here at VentureBeat, we’ll be doing what we always do on the site and at our events: keeping you informed.
Candor: 267 companies have frozen hiring, 44 had layoffs, 36 rescinded offers, 111 are hiring | VentureBeat
https://venturebeat.com/2020/03/28/candor-267-companies-have-frozen-hiring-44-had-layoffs-36-rescinded-offers-111-are-hiring
COVID-19 has taken a toll on the workforce, and you can now see a list of who’s hiring, freezing hires, laying people off, and rescinding job offers, according to crowdsourced data from Candor, a company that helps tech workers negotiate salaries. In a matter of hours, the list of more than 400 companies posted in the past day went viral, with hundreds of new tips submitted from job seekers, employees, recruiters, and VCs. So far, the company has received reports from about 400 companies, and 267, or 60%, said they were implementing hiring freezes. Candidates reported having job offers rescinded in 36 cases (18%), while layoffs occurred at 44 companies (13%).
[Update: The list now includes more than 1,000 companies.]

The good news: 111 companies are hiring. But 10% of those still hiring have implemented some kind of hiring freeze, laid off people, or had offers rescinded. That’s likely because hiring only continues for essential roles. It’s no surprise that Zoom, the video conferencing company, is hiring. Twitch, Twitter, Twilio, Roblox, TikTok, Qualcomm, PUBG, PayPal, Palantir, Nvidia, Northrop Grumman, Medtronic, Lockheed Martin, eBay, DoorDash, DocuSign, and Amazon are among those hiring.

Candor is crowdsourcing this list directly from employees at the businesses in question, and wherever possible confirming reports or adjusting them after talking with company representatives. The firm is reviewing submissions via Intercom, Airtable, and email, prompted by contacts via its live chat. In some ways, this is more accurate, as many companies aren’t announcing their moves. A lot of this data is sourced directly from clients who are experiencing the situation live on the ground. Candor cofounder and chief technology officer David Chouinard put the list together to help bring transparency to the state of the job market.

Business software is the largest category of submissions (81 submissions), with 41 hiring freezes, representing 52% of companies in this industry. The travel, hospitality, and transportation segment was particularly hard hit, with 95% of companies freezing hiring. Only two companies, Bolt and Cruise, report they are still hiring. All booking platforms — like Kayak, Expedia, and Booking.com — have suspended hiring. Uber and Lyft have a headcount freeze but continue to backfill already open positions. And 12 companies — like Bird, Expedia, Sonder, Mondee, and Knotel — have confirmed layoffs.

Above: A real-time snapshot of the impact the coronavirus is having on hiring/firing (updated 3/29/20 at 1:32 pm).
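A minimal sketch of how crowdsourced status reports like Candor’s can be tallied into the percentages quoted above, assuming one latest-reported status per company (all company/status pairs below are illustrative, not Candor’s actual data):

```python
from collections import Counter

# Hypothetical latest-reported status per company.
reports = {
    "Zoom": "hiring",
    "Twitch": "hiring",
    "Kayak": "freeze",
    "PwC": "freeze",
    "Expedia": "layoffs",
    "Bird": "layoffs",
}

# Tally statuses and report each as a share of all submissions.
counts = Counter(reports.values())
total = len(reports)
for status, n in counts.most_common():
    print(f"{status}: {n} ({n / total:.0%})")
```

Keeping a single dict entry per company mirrors the "confirm or adjust after talking with representatives" workflow: a newer report simply overwrites the older status.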
All job search companies have hiring on hold — including LinkedIn, Indeed, and Glassdoor. Consulting companies seem to be strongly affected — PwC, Oliver Wyman, Deloitte, and EY have reported hiring freezes. In automotive, all six reporting companies, including Tesla and Cox Automotive, have reports of a hiring freeze. In media and entertainment, seven companies, including Netflix, Hulu, and Disney, are putting roles on hold. Only 46% of companies continue to hire. And in food and dining, 25% of 12 companies are still hiring. DoorDash and Instacart report strong hiring. Meanwhile, ZeroCater, Deliveroo, and Snackpass have frozen hiring. The business software industry has 34 companies (41%) still hiring, the largest category. Financial services and consumer software were the next largest industries, with 22 and 14 companies still hiring, respectively.

As for the big tech companies, Facebook announced in a recent internal Q&A that the company plans to continue hiring. However, it is canceling interviews for internships. Some Apple employees are starting to report hiring freezes. Amazon is on a hiring spree, as its business model is more resilient in the current climate. Amazon’s physical store teams are not currently hiring. Google is continuing to hire across all of its divisions.

[Updated 3/28/20 3 p.m. Pacific: Added new info on more than 500 companies. Updated again 3/29/20 at 10:48 a.m. with 1,000 companies.]

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,471
2,020
"ProBeat: What we've learned so far from Zoom's big boom | VentureBeat"
"https://venturebeat.com/2020/04/03/probeat-what-weve-learned-so-far-from-zooms-big-boom"
"Opinion ProBeat: What we’ve learned so far from Zoom’s big boom

Not a day goes by in the age of the coronavirus (COVID-19) without a mention of Zoom. The video conferencing tool is being used by political parties, corporate offices, school districts, small businesses, and individuals needing to connect as they work and learn from home. We now finally have some numbers to quantify the jump: Zoom went from 10 million daily active users to 200 million daily active users in three months. Let’s put those numbers into context. Skype’s daily active users grew by 70% month over month. Microsoft Teams’ daily active users jumped 110% in four months. Zoom’s daily active users exploded 1,900% in three months. Such a drastic increase in usage is a huge boon for any company, let alone one like Zoom that hasn’t been public for even a year.
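Those comparisons are simple percentage arithmetic; a quick sketch (the user counts come from the figures above):

```python
def growth_pct(before: float, after: float) -> float:
    """Percentage growth from `before` to `after`."""
    return (after - before) / before * 100

# Zoom: 10 million -> 200 million daily active users in three months,
# i.e. a 20x jump, or 1,900% growth.
zoom_growth = growth_pct(10_000_000, 200_000_000)
```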
It also brings plenty of technical obstacles, and even more questions. This is common in tech; the more popular a service gets, the more problems it has, and the more scrutiny it receives. The problems compound if the rise is especially sharp.

Scale, settings, and security

There are three categories of learnings. The first is scale. Any company that grows 20X in three months would struggle. Case studies will be written on how Zoom was able to adapt its infrastructure to the astronomical demand in weeks, and often days. The learning here is that Zoom made the right investments early on and was able to do a phenomenal job increasing its capacity.

The second is settings. There has been a ton of confusion about why Zoom works in certain ways, how to avoid Zoombombing trolls, and what you can do to protect your privacy in general. For its K-12 program, Zoom has changed the settings so virtual waiting rooms are on by default, and so that teachers are the only ones who can share content in class by default. If you’re not in that program, you may want to change your Zoom settings as well — the EFF has a handy guide. The learning here is to never underestimate the importance of defaults.

The third is security. This should not be conflated with settings. While Zoom wasn’t designed for consumer use, security researchers have identified plenty of issues that can’t be changed with a setting. Zoom has responded swiftly, but plenty of damage has already been done. The learning here is that you can never start taking security seriously too early. Zoom’s whole business is about making video calling easy to use. It’s easy to make software easy to use. It’s hard to make secure software easy to use.

Security scrutiny

Over the past few weeks, Zoom has been the subject of too many security headlines to list.
Just this week alone, we’ve seen:

- New York’s attorney general sent a letter to Zoom over security vulnerabilities and privacy concerns.
- While Zoom claimed to use end-to-end encryption, its video and audio content from meetings is not end-to-end encrypted.
- Zoom’s macOS installer was leveraging malware tricks.
- Zoom’s Windows client was letting attackers use UNC links to steal Windows login credentials.
- Per a California lawsuit, Zoom is being sued for sharing users’ personal data with Facebook via its iOS app, even when those users don’t have Facebook accounts.
- Zoom was letting users covertly access data from some people’s LinkedIn profiles.
- Zoom is using encryption keys issued by servers in Beijing, even for calls outside of China.

That’s not an exhaustive list for the week, but hopefully it explains why SpaceX banned Zoom and called it a day. For its part, Zoom has apologized for the slew of failures and froze development of new features to focus on security and privacy. The company has done a lot to address some of the issues, including:

- Removed the Facebook SDK in its iOS client and prevented it from collecting unnecessary device information from users.
- Permanently removed the attendee attention tracker feature.
- Released fixes for two Mac-related issues.
- Released a fix for the Windows UNC link issue.
- Permanently removed the LinkedIn Sales Navigator app after identifying unnecessary data disclosure.

It’s great that Zoom didn’t waste time discounting the claims and instead acted quickly. The company didn’t really have a choice if it wants to keep its 20X larger user base. But it’s concerning that so many issues existed in the first place. And the hits are going to keep coming. Zoom is playing Whac-A-Mole. Just yesterday, well-known security journalist Brian Krebs highlighted an automated Zoom conference meeting finder “zWarDial” that discovers some 100 meetings per hour that aren’t protected by passwords.
It can then extract a Zoom meeting’s link, date, time, organizer, and topic. Today, we learned a simple web search can unearth thousands of Zoom calls recorded in people’s homes. Zoom has fundamental flaws that no amount of band-aids can fix.

Long-term solutions

Zoom CEO Eric Yuan today shared that soon, all meetings will require password protection. Depending on the execution, that might disrupt the ease of joining a Zoom meeting. Already, the process has an additional step compared to browser-based video conferencing tools in that Zoom requires installation. Yuan also says Zoom will double down on its bug bounty program. This is a smart move — bug bounty programs motivate individuals and hacker groups to not only find flaws your security team didn’t, but to disclose them properly. Otherwise they are more likely to use them maliciously or sell them to parties who will. Rewarding security researchers with bounties costs peanuts compared to paying for security snafus.

Most importantly, however, Yuan said that if he can’t turn Zoom into the “most secure platform in the world” in the next several years, he’ll consider open-sourcing Zoom’s code. That’s a big deal and would be a huge vote of confidence for the platform. It’s a lot easier to find security holes if you have all the code right in front of you. So why not just open-source it now? Based on everything we’ve seen, my bet is that Zoom’s code isn’t ready. It’s 20X harder to add security and privacy after the fact than to build it in from the start.

ProBeat is a column in which Emil rants about whatever crosses him that week.
"
16,472
2,020
"Researchers' AI recommends lockdown strategies to curb coronavirus | VentureBeat"
"https://venturebeat.com/2020/04/06/microsoft-ai-lockdown-policies-curb-spread-of-coronavirus"
"Researchers’ AI recommends lockdown strategies to curb coronavirus

A preprint paper coauthored by researchers at Microsoft, the Indian Institute of Technology, and TCS Research (the R&D division of Tata Consultancy Services) describes an AI framework designed to help cities and regions make policy decisions about lockdowns, closures, and physical distancing in response to pandemics like COVID-19. They claim that because it learns policies automatically as a function of disease parameters like infectiousness, gestation period, duration of symptoms, probability of death, population density, and movement propensity, it’s superior to the modeling tools that have so far been used. If the peer-review process bears out the researchers’ claims, the framework could be useful to organizations and governments in the nearly 200 countries with cases of the coronavirus.
Asian nations including Singapore and Taiwan have demonstrated that containment strategies like contact tracing — the process of identifying people who may have come into contact with an infected person — can effectively mitigate the spread of disease.

The coauthors first generated a graph network — a model containing objects in which some pairs are related, where the objects correspond to vertices and each related pair of vertices forms an edge — with 100 nodes and 1,000 individuals. Each node stood in for a city or a region containing a certain number of individuals, and the strength of the connection between a pair of nodes was directly proportional to the product of the two nodes’ populations and inversely proportional to the square root of the distance between them.

Next, the researchers modeled the disease using the best parameter estimates available for COVID-19: an incubation period of 5-10 days, an infected period of 7-14 days, an 80% likelihood of showing visible symptoms, a 2% death rate, and a 100% transmission probability for infected persons who come into contact with susceptible persons. Multiple simulations were run to obtain reliable statistics. Throughout the study, the researchers assumed that an open node allowed people to travel to and from other open nodes in the network. People showing symptoms weren’t allowed to travel to other nodes, but asymptomatic and exposed people could do so. (When a node was locked down, all travel to and from the node was blocked.) Additionally, they accounted for the fact that while symptomatic people were quarantined within nodes, a small number of people broke quarantine and circulated within them. The researchers also established several baseline lockdown policies, in which they assumed each node had the option to be locked down or opened once per week.
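A minimal sketch of two pieces of this setup: the population/distance edge weighting, and the cost-based reward the study later uses to train its lockdown policy. The node populations and coordinates below are made-up placeholders, not values from the paper:

```python
import itertools
import math
import random

random.seed(0)

NUM_NODES = 100

# Hypothetical node attributes: a population count and a 2D position.
nodes = [
    {"pop": random.randint(5, 20), "xy": (random.random(), random.random())}
    for _ in range(NUM_NODES)
]

def edge_weight(a, b):
    """Connection strength: proportional to the product of the two nodes'
    populations, inversely proportional to the square root of their distance."""
    dist = math.hypot(a["xy"][0] - b["xy"][0], a["xy"][1] - b["xy"][1])
    return a["pop"] * b["pop"] / math.sqrt(dist)

# Weighted, undirected graph over all node pairs.
graph = {
    (i, j): edge_weight(nodes[i], nodes[j])
    for i, j in itertools.combinations(range(NUM_NODES), 2)
}

def reward(lockdown_days, infections, deaths):
    """Negative total cost: weight 1.0 per lockdown-day and per infection,
    2.5 per death, so a higher reward corresponds to a lower cost."""
    return -(1.0 * lockdown_days + 1.0 * infections + 2.5 * deaths)
```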
They then defined a set of policies that locked down any given node if the fraction of symptomatic people in that node crossed a predefined threshold of 5%, 10%, 20%, 50%, or over 100%. Lastly, the team trained a Deep Q Network reinforcement learning algorithm (an algorithm that steers software agents via rewards) that made a per-node binary decision each week — “open” or “lockdown” — by running a number of simulations of the spread of the disease. To have the algorithm identify the optimal policy for lockdowns, they quantified the cost of each outcome of the simulation: a weight of 1.0 was placed on each day of lockdown and each person infected; a weight of 2.5 was placed on each death; and the reward was defined as the negative of those costs, so that higher rewards corresponded to lower costs.

In experiments, over the course of 75 simulations, each lasting 52 weeks (364 days), the researchers determined that policies with 5% to 10% lockdown thresholds experienced a lower peak of infections. Predictably, the policy was wary of decisions contributing to an increase in the fraction of symptomatic people within the same node and the population overall, and so it locked down larger nodes earlier once the infection started spreading, as well as nodes where the potential for outside infection was higher as soon as infection began spreading within the node.

The coauthors caution that none of the authors are experts on communicable diseases, that the AI model in the study doesn’t account for population size and geography, and that they didn’t use real data for the network model. But they say that a deeper analysis is in progress and that they’ll continue to add more detailed descriptions and literature review in stages. Beyond this study, various teams are developing AI systems to track the spread of COVID-19.
Carnegie Mellon researchers are in the process of retraining an algorithm to predict the seasonal flu, while the Robert Koch Institute in Berlin used a model that takes into account containment measures by governments, such as lockdowns, quarantines, and social distancing prescriptions, to show that containment measures can be successful in reducing the spread. Elsewhere, startup Metabiota offers an epidemic tracker and a near-term forecasting model of disease spread, which it uses to make predictions. "
16,473
2,020
"ProBeat: Microsoft Teams video calls and the ethics of invisible AI | VentureBeat"
"https://venturebeat.com/2020/04/10/probeat-microsoft-teams-video-calls-and-the-ethics-of-invisible-ai"
"Opinion ProBeat: Microsoft Teams video calls and the ethics of invisible AI

Microsoft Teams is getting a bunch of new features this year. Robert Aichner, Microsoft Teams group program manager, recently told us how Microsoft is building the most interesting one: real-time noise suppression, which uses machine learning to filter out background noise during calls. Because the feature isn’t final, we also had a secondary discussion about the user experience. It’s one thing to filter out someone typing, a barking dog, a door being shut, a rustling chip bag, or a vacuum cleaner running in the background. It’s another to filter out something that call participants may want to hear (in fact, the team has decided not to filter out certain noises, including musical instruments, laughter, and singing). Governments and companies alike have been abusing AI in a variety of ways.
But even AI features clearly designed to help humans do something as basic as communicate clearly come with ethical questions for the companies and developers that build them. The discussion Aichner and I had gave me a brief glimpse into what businesses and developers wrestle with as they embed AI features deep into their products. I hope you find it as interesting as I did — a lightly edited transcript follows.

VentureBeat: How does it actually work in practice? Once the feature is rolled out, will you have to turn it on, or is it enabled by default for every call and every speaker?

Aichner: Right now, we believe that, probably, it’s not a good experience for the user to be able, or that the user needs to turn it on. I think the challenge in terms of user experience is, let’s say I had a switch where I can toggle it on or off. For me, it wouldn’t sound any different. Let’s say I have somebody in the background making a lot of noise. If I turn it on, I still hear that person. The benefit is for you, because you will not hear that person. So it’s hard to really implement the toggle on my side. And since we also believe that this will work better than the current noise suppression, in all cases, we also don’t see a reason or a need for turning that off. So for now, I think the plan would be to just have that on and improve that experience.

VentureBeat: So it’s going to be on by default for recipients?

Aichner: It works on the send side. It works on the person who has the client. If I have this new client, then whoever I call will not get noise from me.

VentureBeat: Will there be any sort of indicator to the sender that there’s extra noise that is being filtered out or to the recipient that something is being filtered out? Any sort of visual indication that noise suppression is activated or anything like that?
Aichner: I think that’s some detail we haven’t really decided on, so I guess I can’t really comment on whether we would indicate that to the user. I assume you’re going down that road of ‘Hey, if you have this cool capability, why not indicate to the user that there’s a lot of noise but you can’t hear it?’

VentureBeat: There’s that, and there’s also wanting to demo something to someone. Say I call my dog over and tell him or her to speak, and then the recipient doesn’t hear it because of the noise suppression. Or, say an audio company wants to demo something to a client. I’m sure we can think of better examples, but what happens when noise suppression gets in the way and it isn’t working as expected? You’re telling me you can’t turn the feature off, but you’re also telling me that the person that should be hearing the noise will not even know that noise suppression is modifying the call.

Aichner: That’s good feedback. We have discussed this topic. And as I said, we haven’t really made a final decision on that yet. The noise types we have … For example, typing noise, it’s not likely that you want to transmit typing noise. We are careful that certain things where we feel ‘OK, that might not actually be perceived as noise,’ then maybe we shouldn’t filter it out. But, yeah, that’s good feedback. We have thought about that, we just haven’t made the decision on that yet.

VentureBeat: Did I understand correctly that there are certain noises that you have deemed to be OK that aren’t speech, and they’re in your machine learning model?

Aichner: We looked at whether certain noises really are something which we want to filter out. I think one example, which you can probably think of, is music. Do you want to filter out music or do you not want to filter out music? Because there are cases where you could say, ‘Hey, I want to show you how I play the violin.’ And then we are filtering out the violin. That’s probably not what you want.
So we are trying to see what are the cases where something would be considered as a desired signal you want to transmit.

VentureBeat: Those noise types that you want to keep, those are still up in the air?

Aichner: We have looked at certain noise types and basically said that for certain noise types we are not optimizing for those now to remove them. We would try to keep those.

VentureBeat: Right now, if I’m on a group call and someone accidentally has background noise, I can mute them. I can also unmute them, they can mute themselves, and so on, depending on the type of call. But this, because it’s automatic, and there’s no indication, that could be problematic. I think recipients should see an indication that background noise is being filtered out, and there should be a toggle that lets you turn the filter off. I think it makes sense to filter it out by default, especially if your team is confident that you get it right 99% of the time or whatever. But the user should have an override. Otherwise, you’re going to get into situations where users realize what’s going on, ditch Teams, and go use something else where they can hear the person unfiltered. Apply it to video. You wouldn’t want certain parts of someone’s camera feed filtered out without your knowledge.

Aichner: I get your point. I think the question is, what are you suppressing? The current noise suppression, we usually don’t give you a way to turn it off because it’s really more stationary noise and we believe that most people don’t want to hear that. But even there, there are special cases. In Skype we have a special mode where you can turn off noise suppression and you can turn off automatic gain control because you have professional audio setup and you don’t want the client to mess with your audio signal at all. I think there are cases where that’s justified. And then, I get your feedback. We have thought about that, and we haven’t made a final decision yet.
We are looking into how confident we are that by default we would do the right thing. But then, as you said, it might make complete sense that you have some way to override it. Whether that’s a prominent button or whether that’s somewhere in the settings, that’s all something we would need to decide.

VentureBeat: I think there are two things here. There’s the debate of whether the user should be made aware that noise suppression is activated. You should have that debate, and it seems like you are. And then if you do that, there should be a secondary debate: Should there be an option to turn it off right then and there, or do you have to go into the settings? Can you turn it off per call, or is it a global switch for all of Teams? Those are UX decisions but, I think, also ethical decisions.

Aichner: Yep, fully agree.

ProBeat is a column in which Emil rants about whatever crosses him that week. "
16,474
2,020
"MIT's AI suggests that social distancing works | VentureBeat"
"https://venturebeat.com/2020/04/16/mits-ai-suggests-that-social-distancing-works"
"MIT’s AI suggests that social distancing works

A computer image of the type of virus linked to COVID-19.

In a preprint academic paper published in early April, MIT researchers describe a model that quantifies the impact of quarantine measures on the spread of COVID-19, the novel coronavirus. Unlike most of the models that have so far been proposed, this one doesn’t rely on data from studies about previous outbreaks, like SARS or MERS. Instead, it taps an AI algorithm trained to capture the number of infected individuals under quarantine using the SEIR model, which groups people into classes like “susceptible,” “exposed,” “infected,” and “recovered.” This approach potentially achieves accuracy higher than or comparable to previous work, which could help to better inform governments, health systems, and nonprofits as they make treatment and policy decisions about social distancing.
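For intuition, here is a toy SEIR simulation with a quarantine-strength term added. It illustrates the dynamics only; the parameter values are assumptions, not the MIT team's trained model:

```python
def simulate(beta, sigma, gamma, q, days, n=1_000_000, i0=500):
    """Euler-stepped SEIR model; q is the quarantine strength (per day),
    which removes infected people from circulation on top of recovery."""
    s, e, i, r = n - i0, 0.0, float(i0), 0.0
    peak = i
    for _ in range(days):
        s_to_e = beta * s * i / n   # new exposures
        e_to_i = sigma * e          # exposed becoming infectious
        i_out = (gamma + q) * i     # recovery plus quarantine removal
        s -= s_to_e
        e += s_to_e - e_to_i
        i += e_to_i - i_out
        r += i_out
        peak = max(peak, i)
    return peak

# Early on, the effective reproduction number is roughly beta / (gamma + q);
# quarantine pushes it below 1 and the curve flattens.
peak_open = simulate(beta=0.3, sigma=0.2, gamma=0.1, q=0.0, days=300)
peak_quarantine = simulate(beta=0.3, sigma=0.2, gamma=0.1, q=0.25, days=300)
```

With q = 0 the toy epidemic grows large; with q = 0.25 the effective reproduction number drops below 1 and infections never exceed the initial seed.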
For instance, the model found that in places like South Korea, where there was immediate government intervention, the virus spread plateaued more quickly. “Our model shows that quarantine restrictions are successful in getting the effective reproduction number from larger than one to smaller than one. The [model] is learning what we are calling the ‘quarantine control strength function,’” said George Barbastathis, MIT professor of mechanical engineering, who developed the model over the course of several weeks with civil and environmental engineering Ph.D. candidate Raj Dandekar as a part of a final class project. “That corresponds to the point where we can flatten the curve and start seeing fewer infections.” The MIT model was trained on data collected from Wuhan (China), Italy, South Korea, and the U.S. after the 500th case was recorded in each region (January 24, February 27, and February 22 for Wuhan, Italy, and South Korea, respectively) until April 1. After 500 iterations, it learned to predict patterns in the infection spread, drawing a correlation between quarantine measures and a reduction in the virus’ effective reproduction number.

Above: Data from the MIT disease model.

As the number of cases in a particular country decreases, the forecasting model transitions from an exponential regime (indicating that the virus is spreading exponentially) to a linear one (indicating that infections are plateauing). Italy began entering this linear regime in early April, and the model anticipates that the U.S. will make a similar transition somewhere between April 15 and 20 — similar to other projections like that of the Institute for Health Metrics and Evaluation. The model also predicts that the infection count in the U.S. will reach 600,000 before the rate of infection begins to stagnate. “This is a really crucial moment of time.
If we relax quarantine measures, it could lead to disaster,” said Barbastathis. “If the U.S. were to follow the same policy of relaxing quarantine measures too soon, we have predicted that the consequences would be far more catastrophic.”

The MIT model agrees with one published earlier this year by researchers at Microsoft, the Indian Institute of Technology, and TCS Research (the R&D division of Tata Consultancy Services), which learned policies automatically as a function of disease parameters like infectiousness, gestation period, duration of symptoms, probability of death, population density, and movement propensity. Over the course of 75 simulations, each lasting 52 weeks (364 days), it showed that governments that locked down 5% to 10% of communities experienced a lower peak of COVID-19 infections.

Separately, an international team of researchers used human mobility data supplied by Baidu to study COVID-19 transmission in Chinese cities. They found that, following the implementation of control and containment measures, the correlation between the geographic distribution of COVID-19 cases and mobility dropped and growth rates became negative in most locations, indicating that the measures mitigated the spread of COVID-19.

That said, it’s important to keep in mind that even the best AI models — like those developed by HealthMap, Metabiota, and BlueDot, which were among the first to accurately identify the spread of COVID-19 — only learn patterns from historical data. As the Brookings Institution noted in a recent report, while some epidemiological models employ AI, epidemiologists largely work with statistical models that incorporate subject-matter expertise. “[A]ccuracy alone does not indicate enough to evaluate the quality of predictions,” wrote the report’s author. “If not carefully managed, an AI algorithm will go to extraordinary lengths to find patterns in data that are associated with the outcome it is trying to predict.
However, these patterns may be totally nonsensical and only appear to work during development.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
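The quarantine-augmented SIR dynamics described in the article above can be sketched in a few lines. This is a minimal illustration with assumed round-number parameters (beta, gamma, quarantine strength q), not the values the MIT model actually learned:

```python
# Minimal SIR model with a time-varying quarantine strength term, loosely
# mirroring the "quarantine control strength" idea described above.
# All parameters here are illustrative assumptions.

def simulate(days, beta=0.3, gamma=0.1, q_max=0.0, q_start=20):
    S, I, R = 0.999, 0.001, 0.0  # susceptible/infected/recovered fractions
    history = []
    for t in range(days):
        q = q_max if t >= q_start else 0.0  # quarantine switches on at q_start
        new_inf = (1.0 - q) * beta * S * I  # quarantine damps transmission
        new_rec = gamma * I
        S -= new_inf
        I += new_inf - new_rec
        R += new_rec
        history.append(I)
    return history

# Effective reproduction number: R_eff = (1 - q) * beta / gamma.
# With beta=0.3, gamma=0.1: no quarantine gives R_eff = 3 (exponential growth);
# q = 0.8 gives R_eff = 0.6 < 1, so infections peak and then decay.
no_q = simulate(200)
with_q = simulate(200, q_max=0.8)
print(max(no_q), max(with_q))  # quarantine sharply lowers the peak
```

With these assumed numbers, R_eff drops from 3 to 0.6 once quarantine turns on, which is the "larger than one to smaller than one" transition Barbastathis describes.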
16,475
2,020
"ProBeat: Apple and Google's contact detection API will fail, but they should build it anyway | VentureBeat"
"https://venturebeat.com/2020/04/17/probeat-apple-google-contact-tracing-tech-fail"
"Opinion ProBeat: Apple and Google’s contact detection API will fail, but they should build it anyway Apple and Google shocked the world last Friday with their COVID-19 announcement to collaborate on an opt-in Bluetooth-based proximity contact detection API for iOS and Android. Contact tracing is the identification and follow-up of people who may have come into contact with an infected person. We’ve been learning more about the API this week, including that it will work on iOS 13 and Android 6+ devices (via Google Play Services), and that only health authorities will be able to access it. This is a good opportunity to remind businesses: Technology is a tool, not a solution. New tools take a lot of iteration to get right. When it comes to health care, this is doubly true. Apple and Google’s tech will fail. 
Privacy concerns aside, there are too many obstacles for the contact tracing tool to be effective. Let us count the ways. First, the API will only be available in mid-May. Second, you need a compatible mobile phone (not a given for many, depending on age, country, income, race, and so on). Third, you will need to update your smartphone. Fourth, health authorities will have to release apps that use the API. Fifth, you will have to download and install such an app. Sixth, you will have to leave Bluetooth on when you’re out. Seventh, the tech will have to correctly identify that you have come within 6 feet of other people. Eighth, everyone else you came into contact with has to have done all of the above too. Ninth, if the app lets you know you might have COVID-19, you have to get tested (not a given, depending on location). Tenth, if you tested positive, you will have to opt in to notify everyone who may have come into contact with you. Time, adoption, and technology This will not happen for the large majority of people. It will especially not happen for those most susceptible to getting COVID-19. The above can be boiled down to three big obstacles: time, adoption, and technology. (Read Khari Johnson’s take on what privacy-preserving coronavirus tracing apps need to succeed.) For time, there’s nothing we can do about the fact the API isn’t ready today. I’m sure Apple and Google are working very hard to ship it ASAP and that it will work on day one. For adoption, there’s nothing we can do about the fact people don’t like to be told to install updates or apps, and generally to opt in to anything the government tells them to. (For example, about 12% of Singapore’s population downloaded TraceTogether, the government’s contact-tracing app that also relies on Bluetooth.) Adoption might be improved a bit by eliminating the app requirement. 
Apple and Google plan to build the Bluetooth-based contact tracing platform directly into iOS and Android “in the coming months.” That’s right, more time. And finally, this is all based on Bluetooth. The technology that has a history of being unreliable when all you want to do is pair two devices. Even if it was reliable, Bluetooth wasn’t designed for contact tracing or anything remotely related to determining if two devices were a certain distance apart for a certain time. (Apple and Google are specifically using Bluetooth Low Energy, which has a range of about 30 feet on a typical phone — the theoretical maximum is “less than” 330 feet.) It doesn’t matter how many of the world’s smartest software engineers Apple and Google have, their combined market cap, or their mobile market share duopoly. Even if they can make Bluetooth do what they want, they can’t solve the problems of time and adoption. False positives But wait, doesn’t every little bit help? Let’s say a fraction of a fraction of iOS and Android users end up using this contact tracing tool. Let’s say it “works” for that small group of people. Isn’t that a good thing? Well, yes and no. Tech’s mantra is move fast and break things. That’s fine for building a fun mobile game. It’s not fine for a health care app that is supposed to help track the spread of a global pandemic. (For his part, Google CEO Sundar Pichai rightly said this week that tech companies should not get carried away with their role in combating COVID-19.) I’m not talking about false positives where the system flags that you were in contact with someone who is infected but you didn’t become infected. I’m talking about false positives solely due to Bluetooth: You’re sitting in your apartment (or in any building) and your device comes into close proximity with someone’s device through a wall, above you through the ceiling, or below you through the floor. Two devices could come into close proximity in cars side-by-side at a red light. 
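These scenarios all stem from the same limitation: a phone can only infer distance from received signal strength (RSSI), which conflates how far away a device is with what is in between. A rough sketch of the standard log-distance path-loss estimate, with illustrative constants rather than whatever calibration Apple and Google end up using:

```python
import math

# Rough log-distance path-loss estimate of how far away another phone is,
# based on received Bluetooth signal strength (RSSI). The constants here
# (calibrated TX power at 1 m, path-loss exponent) are illustrative
# assumptions; real devices vary widely, which is exactly the problem.

def estimated_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Invert rssi = tx_power - 10 * n * log10(d) for distance d in meters."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# Two physically different situations can yield the same RSSI, and therefore
# the same distance estimate: a phone ~2 m away behind a wall that absorbs
# ~8 dB, and a phone ~5 m away in open air.
open_air_5m = -59 - 20 * math.log10(5)          # ≈ -73 dBm
through_wall_2m = -59 - 20 * math.log10(2) - 8  # ≈ -73 dBm

print(round(estimated_distance_m(open_air_5m), 1))     # both decode to ~5 m
print(round(estimated_distance_m(through_wall_2m), 1))
```

In this sketch, the neighbor 2 meters away through a wall and the stranger 5 meters away outdoors produce indistinguishable readings, so the app cannot tell the two apart.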
Or you could have your phone on you and pass a phone simply not on another person. A bug in an app that is supposed to warn you that you may have a deadly disease can be serious. False positives in a contact tracing app have consequences. There is a mental toll to learning you might be infected. What happens if you get tested and you find out all is well? There’s an even bigger mental toll if you’re repeatedly told you might be infected. What happens then? The few people that opted in decide to opt out. Future-proofing Contact tracing is not a new idea. It’s widely used in public health: Identify people who may have come into contact with an infected person, collect information about these contacts, test them for infection, treat the infected, and trace their contacts in turn. Rinse and repeat to reduce infections in the population. It’s hard-core, manual detective work. Human beings are good at contact tracing. We have no evidence that phones are. So given all the above problems, why should Apple and Google build it anyway? It’s simple: The novel coronavirus wasn’t the first pandemic and it certainly won’t be the last. This will happen again. We should still invest in technology that could one day help investigators with contact tracing. It may sound crass, but think of Apple and Google’s COVID-19 contact tracing as a beta program. When the next virus comes along, the technology will already exist and will have been tested. There will still be problems of adoption and questions about efficacy. But Apple and Google, assuming the Android-iOS duopoly holds until the next virus, will be able to issue updates. And even if Bluetooth doesn’t exist anymore or we’ve all ditched our smartphones for smart glasses, many of us will remember COVID-19 and all the efforts to flatten the curve. We will have learned what worked, what didn’t, and what had potential. That’s the beautiful thing about technology — it can be adapted, reused, and improved. 
Most importantly, we won’t have to wait for a global pandemic to be declared. A contact-detecting API will be one of the many tools in humanity’s toolbox. ProBeat is a column in which Emil rants about whatever crosses him that week. "
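For what it's worth, the privacy-preserving design the announcement describes (anonymous identifiers, matching on the device rather than on a server) can be sketched as a rotating-identifier scheme. Every concrete detail below (SHA-256 derivation, 16-byte keys, 144 intervals per day) is an assumption for illustration, not the actual protocol:

```python
import hashlib, os

# Sketch of a rotating-identifier, match-on-device scheme in the spirit of
# the announced API. Key sizes, derivation, and rotation interval are
# illustrative stand-ins, not the real specification.

def rolling_ids(daily_key: bytes, intervals: int = 144):
    """Derive short-lived broadcast IDs from a per-day secret key."""
    return [hashlib.sha256(daily_key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(intervals)]

# Phone A broadcasts its rolling IDs over Bluetooth; phone B records what it hears.
key_a = os.urandom(16)
heard_by_b = set(rolling_ids(key_a)[10:20])  # B was near A for 10 intervals

# If A later tests positive, A's daily keys are published. B re-derives A's IDs
# locally and checks for overlap; no central server learns who met whom.
matches = heard_by_b.intersection(rolling_ids(key_a))
print(len(matches))  # 10 matching intervals
```

Because the broadcast IDs rotate and only the infected user's keys are ever published, bystanders cannot track a phone across the day, which is the privacy property that makes the opt-in model plausible at all.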
16,476
2,020
"ProBeat: As Google Shopping challenges Amazon with free listings, fraud is inevitable | VentureBeat"
"https://venturebeat.com/2020/04/24/probeat-google-shopping-free-listings-amazon-fraud"
"Opinion ProBeat: As Google Shopping challenges Amazon with free listings, fraud is inevitable Google this week announced that Google Shopping will be free for merchants next week in the U.S. and by end of year globally. Google Shopping results will soon “consist primarily of free listings.” That’s right; until now, Google Shopping showed only paid listings instead of serving, oh, I don’t know, the best listings Google found on the web. The change is a huge win for any business owner that sells products online, and anyone who buys online, and is thus a direct attack on Amazon. It’s also bound to bring the same problems that Amazon experiences. Froogle was born in December 2002, rebranded to Google Product Search in April 2007, and then again renamed to Google Shopping in May 2012. 
That last change was bigger than just a new coat of paint — soon after, Google adopted a pay-to-play model. At the time, small businesses argued they would not be able to compete with larger companies’ advertising budgets. Eight years later, when there are “hundreds of millions of shopping searches on Google each day,” the company is reversing course. The reason? “We know that many retailers have the items people need in stock and ready to ship, but are less discoverable online.” Those small businesses do matter, after all. Google switching away from pay-to-play is massive news for ecommerce. Free listings are coming at a time when millions of stores have been forced to shut down. But the company says it’s simply accelerating existing plans — this isn’t a pandemic-driven limited-time offer. Google is permanently removing a big barrier to entry for smaller players. And yet, free listings are a double-edged sword. Anyone can start selling via Google Shopping. Also, anyone can start selling via Google Shopping. That won’t just translate to more consumer choice, but consumers overwhelmed with choice. Google will naturally pitch ads to small businesses looking to stand out. But ads won’t stop Google Shopping’s bigger inevitable problem: fraud. All about Amazon Every ecommerce conversation either directly or indirectly references Amazon. You simply cannot talk about online shopping and not mention the online retail giant. Everyone else in the space frankly fears the company, whether they admit to it or not. Removing the ad requirement now is a smart move for Google. The pandemic has led to ad rates plummeting and online shopping taking off. Amazon’s stock is at an all-time high, while Google’s has taken a hit. (Both companies report earnings next week, so we’ll have a better idea of their financials then.) Google argues free listings means retailers will gain free exposure to millions of people, and shoppers will get more products from more stores. 
It’s easy to see how Google thinks this will play out. Opening Google Shopping to everyone means more products available to consumers. More products available means more product searches and higher usage. Higher usage means more competition and more pull against Amazon. More pull against Amazon means more value to advertisers. Remember: The majority of Google’s revenue comes from ads. Google is betting on scale, as it often does. Even though the company will no longer require ads, the long-term bet is to make more money from shopping ads, not less. Fraud Like many tech giants, Amazon has big problems. They include the usual antitrust and ethics concerns, as well as atrocious working conditions. But Amazon is adored for its shopping experience, which doesn’t really have issues. Except one. More than half of Amazon’s sales come from third-party sellers. Embracing small businesses has been a huge boon for its own business, but it’s also incredibly difficult to manage. That’s why there are so many horror stories of fraud on Amazon. I’m not talking about purchasing a product on Amazon and not receiving it. I’m talking about consumers receiving a counterfeit version of a product and merchants finding their legitimate products resold at a markup. If you’re buying something important from Amazon, such as children’s toys or health care products, it’s important to check that you’re getting it from Amazon directly and not a third-party seller that ships who-knows-what in an Amazon box. Meanwhile, small businesses often spend enormous resources policing and reporting their own products being resold on Amazon by those trying to make a quick buck. Except for a handful of its own phones, tablets, and laptops sold in the Google Store, all of Google Shopping’s sales come from third-party sellers. There is no easy way to ensure that a product isn’t screwing the consumer and/or the merchant. Well, at least not anymore. That’s what paid listings were helping accomplish. 
It becomes very expensive, very quickly to sell scam products if you have to pay every single time. I’m sure Google will work harder than Amazon to keep fraud off Google Shopping. The company has to, after all, as it hasn’t spent its whole existence building its own store and products. Overall, Google Shopping going free is a good thing, both for consumers and for merchants. But consumers and merchants will likely have to be more vigilant, while small businesses will have to pay up to stand out. Just like on Amazon. ProBeat is a column in which Emil rants about whatever crosses him that week. "
16,477
2,020
"ProBeat: Why surveillance drones may become a staple of daily life | VentureBeat"
"https://venturebeat.com/2020/05/01/probeat-why-surveillance-drones-may-become-a-staple-of-daily-life"
"Opinion ProBeat: Why surveillance drones may become a staple of daily life We recently profiled Canadian drone maker Draganfly and how it quickly spun up its “pandemic drone.” In short, the company is running pilots in the U.S. to offer social distancing and health monitoring services using machine vision and AI tech licensed from the University of South Australia. We spoke with Draganfly CEO Cameron Chell both before and after the first pilot, in Westport, Connecticut, ended abruptly. The coronavirus pandemic has forced the public and private sectors to consider drones for everything from tracking the spread of COVID-19 to gauging when to lift restrictions. Long after this epidemic is over, drones could play a critical role not just for delivery, but also in detecting and tracking similar outbreaks, safeguarding both public health and business operations. 
Draganfly’s original timeline was to test at other sites once phase two in Westport was complete — but phase two never happened. Chell had a lot to say before and after Westport made its final decision. Below are a few excerpts from our interviews at the respective times. Before Westport pulled out VentureBeat: Have you seen any indication that monitoring crowds for their temperature and whether they’re coughing and sneezing can actually be useful, or is it too early in the tests to determine? Chell: I think it’s a bit early. From a public safety standpoint, I don’t know that it will be all that useful. I don’t think this is going to be like a routine thing where you see drones flying in the sky doing health measurements. From a public safety perspective, it’s a bit more like, the CDC or World Health say “Hey, we got an issue happening here, and we need to amp up our vigilance. We see some hotspots emerging, here’s what’s happening.” And then in that case, I think local authorities can take a proactive approach and start doing some sampling. And they do it while they’re flying drone missions for other things. However, on the industrial side, for workplace safety, I absolutely see this type of technology being implemented. Because workplaces want to know: What’s the health of their workforce, and do they need to take steps in order to protect them? And certainly on the industrial side from consumer safety, a theme park or an airline, consumers are going to want to know what are the health measurements. In those types of scenarios where you’ve got industry making decisions based on bottom-line type of metrics, I think you’ll see this get implemented very, very quickly. VentureBeat: Have other towns shown interest? Chell: The demand to test the technology has been insatiable. We need a pragmatic approach to help manage our resources. 
But also to take the learnings and continue to grow so that if there’s a resurgence in the current pandemic, or if there’s a new epidemic or something that starts to emerge, this type of tool can be implemented, and implemented on a scale because there’s been proper policy and procedure that’s been thought through. VentureBeat: Was this “insatiable demand” simply from law enforcement and police departments across the country? Chell: That’s certainly a portion of it, police and law enforcement are certainly an important part of that. But no, it’s also come from the health care industry. For medical facilities, it can help triage incoming patients during a surge — it can measure employee health coming in as a facility for the sake of the employees, but also for the sake of patients. We’ve had significant commercial interest from the airline industry, the tourism industry. Things like theme parks, cruise lines. VentureBeat: What are those industries interested in? Chell: Their position is a bit more around “What are we going to do to protect our consumers and help give them some assurance to attract them back to our business?” We see this notion of there being health measurement reports available, kind of similar to how you would see weather reports. Do I want to take my family to Disneyland this weekend? Do I want to travel on this particular airline to this particular location, based on the health measurement that’s happening out there? Those are the types of things that are really being actively talked about in industry. In a medical profession, it’s a whole bunch more about employee safety and patient safety. In the public safety law enforcement spaces, it’s all about protecting the public and how do we how do we address it if we have to practice social distancing again or if we can release social distancing. And do we have the data if it’s working or not working. 
VentureBeat: Do you think this technology will be used more for enforcing social distancing and seeing if it’s actually working, or more for making a determination on whether to open up a city back up? Chell: Yeah, I think both. I think in those times of transition or concern, that’s when you’ll see the technology being used more. If there is a spike in another part of the world or if hospitals are seeing higher indications of flu in a particular season, or something like that, then you may see this type of technology being used. We will get through this whole pandemic scenario. And then next year we get into flu season and we see a spike in flu rates that’s maybe 3% or 4% higher than a normal typical year. And panic sets in. And we start implementing social distancing. And we start contemplating shutting businesses down. Mark my words, that will happen next year. We don’t have a way to say, “OK, let’s take some real data” other than how many people are going to hospital, how many people think they’re sick. So I think this type of data collected at ports of entry, this type of data collected in municipalities that might be in riskier zones or might see these types of spikes — that’s when you’ll see the technology being used again. As we come out of this and we start to open up, there’s a rush to try to implement this technology now so that we can justify decisions around should we open up, shouldn’t we open up, can we do better than guesswork. So in terms of public safety, I see it as a bit more circumstantial as opposed to pervasive, at least in the short term. I could envision all of this data is collected by industry, that I think will be put into place. Once that’s pervasive enough, over the globe over the next couple of years, that data being anonymized and collected so that we effectively have an early warning system. 
So if we see a whole bunch of this anonymized data showing that Southern California has a higher incidence of a potential infectious disease that’s happening over the course of this last weekend, I think it would give us faster data than we’re collecting right now. And I think that’s a few years out. After Westport pulled out VentureBeat: Why should governments invest in drones versus, say, smart thermometers? Yes, you run a drone company, but I’m curious about your thoughts on those two, or in general other technologies that might not be seen as potentially problematic with regards to privacy. Chell: Whether it’s a drone or a camera or whatever the measuring instrument is, each measuring instrument has potentially the same privacy issues. If you’re measuring population health on a broader basis, that’s why it’s anonymized. If you’re measuring with a smart thermometer, you need permission from the person to do it. If you’re measuring on an individual basis, a worker coming into a workplace and they stand in front of a kiosk and they’re using our technology in front of a camera, that employee has given permission. However you measure, whatever the technology, it is all subject to the same policy, operational, and regulatory requirements. VentureBeat: What happened in Westport? Chell: On the public side, the community itself had a bit of an outcry. They were worried about Big Brother. And, fair enough. The software doesn’t identify people in a public safety environment, but that’s fine. So there’s some pushback, and that’s just going to take some time from a policy perspective in that specific jurisdiction. On the social distancing aspect of it, we have been, quite frankly, inundated with requests from other jurisdictions that want to move forward with pilots or at least understand the pilot. 
We’ve had a great opportunity to have discussions with them about — their first question is, does this invade privacy, how does it work, and once they’ve done that due diligence, as Westport did, they clearly understand that it doesn’t. VentureBeat: Why do you think there is still demand given the first pilot ended so quickly? Chell: They’re interested to move forward with pilots because the underlying issue here is there is greater liability for public officials. As they authorize their towns to reopen, if the town gets sick again and they haven’t taken proactive steps [such as measuring] what social distancing was happening, or where hotspots emerged, that is their bigger issue in terms of liability. VentureBeat: So you’re arguing that Westport may have had a privacy outcry, but they’ve still got a problem on their hands? Chell: I would suggest they and every other jurisdiction that we’ve talked to understand clearly that the bigger issue is the liability if they don’t do something. We talked with dozens and dozens more jurisdictions since then. A few of which we’re going to move forward with on pilots to look at, in particular, and most importantly social distancing, and then secondary on the health measurement platform. That’s on the public side. On the private side, it’s even more prolific. There probably isn’t — I’m sure there is, but it doesn’t feel like it to us anyway — an airport or a convention center or casino operator or a hospitality group or a theme park that hasn’t had some level of inquiry, or an IT services group or security services group. We’re all worried and concerned about how do we reopen? What’s the world look like post COVID-19? How do I attract people back to my theme park and what are my liabilities now that they’re in my theme park, and what data do I have to report? FAA and Transport Canada are telling airports, “You have to have the best practices policy now for social distancing. 
Because there’s going to be times when we’re going to call for social distancing, and then there’s going to be times where you can relax a little bit, but we need records of how you’re doing it, we need to see proof of distribution of people.” So they’re looking at the system that we’ve got to measure, social distancing and health measurement, as proof of best practice. VentureBeat: Going forward, do you think the public sector is only going to be interested in social distancing, while the private sector will be interested in health monitoring? Chell: I think in general, out of the gate, that’s very likely the case, yes. I think there will need to be some more policy and operationalization of the health measuring data for public safety to effectively know how to use it. That said, the people that we have talked to, they really want the [anonymized health monitoring] data. They are not interested in picking somebody out and seeing a video feed of each person, what their health condition is. But they do want the data because it is important to understand how it meshes with social distancing, and how you reopen or how you shut down economies. On an anonymized basis, I do think that we’ll see public safety using this data. On the private side, like for workplace safety, I’ll give you an example. Las Vegas, they’re concerned about attracting people back. But they are also very concerned about employees that are coming in, and if those employees are the source of the hotspot or the infection, then what’s the liability they face? When an employee swipes a card to get into a building, they have terms and services that they have to agree to. They have to conduct themselves a certain way, they wear a uniform, whatever it is. We see this now at Amazon and a number of different places, where they need to ensure that those people coming into the facility aren’t coming in with an infection or respiratory conditions. 
So to the extent that you’ve got a large convention crowd coming in, you may have that same type of liability consideration as well. So in those cases, I can see video feeds being used. And I think they need to be used. ProBeat is a column in which Emil rants about whatever crosses him that week. "
16,478
2,020
"ProBeat: Alphabet's Sidewalk Labs won't be the last to play the coronavirus card | VentureBeat"
"https://venturebeat.com/2020/05/08/probeat-alphabets-sidewalk-labs-wont-be-the-last-to-play-the-coronavirus-card"
"Opinion ProBeat: Alphabet’s Sidewalk Labs won’t be the last to play the coronavirus card Yesterday, Alphabet’s Sidewalk Labs killed its Toronto smart city project, meaning raincoats designed for buildings, heated pavement, and object-classifying cameras will not be traded for unprecedented data collection. Privacy advocates celebrated — Big Brother would not be gaining even more invasive power to surveil residents. But this story is far from over. Whether one hoped for a smart city or feared it, the reality is this project did not live or die on its merits. Nor did it get axed because of “unprecedented economic uncertainty,” as Sidewalk CEO Daniel Doctoroff suggested. The pandemic was just the scapegoat. The rest of 2020, and possibly beyond, is going to be filled with stories about companies pulling back due to the economy. 
Look out for them, because they will show which bets were the riskiest in the first place. If you run a business, it might be time to rip off the Band-Aid yourself. Pandemic or not, it is always instructive to follow the money. Sidewalk Labs is a Google sister company under the Alphabet umbrella. Other Alphabet companies include Calico, CapitalG, DeepMind, GV, Google Fiber, Jigsaw, Loon, Makani, Verily, Waymo, Wing, and X. These moonshots are not broken out in Alphabet earnings because, frankly, none are profitable. Instead, they are lumped together under a line item called “Other Bets.” Last week, Alphabet reported its Q1 2020 earnings, during which Other Bets revenue was down 21% to $135 million, while losses were up 29% to $1.1 billion. Yes, Other Bets burned eight times more cash than it generated. Q1 2020 was a special quarter for Alphabet. Not because it was the worst quarter for Other Bets — there have been worse ones, if you can believe it. Not because it overlapped with the pandemic — Alphabet seems to be handling the downturn, so far. Q1 2020 was special because it was the first full quarter in which Sundar Pichai oversaw Alphabet. In December, when Google’s CEO also became Alphabet’s CEO, I explained we knew what to expect: Alphabet companies will either become more focused or get folded into Google. Maybe Pichai has made a decision about Sidewalk Labs. Maybe he hasn’t. Either way, the Sidewalk Labs project was low-hanging fruit — the ROI for a Google smart city was never there. Don’t get me wrong. I was critical of the Sidewalk Labs approach and have generally argued that tech company expansions need to be less arrogant and more transparent. It’s one thing for a company to be able to halt development of an app overnight. It’s completely another to walk away from building a smart city overnight. 
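The “eight times” claim follows directly from the two figures Alphabet reported for Other Bets; a quick sanity check of the arithmetic:

```python
# Alphabet Q1 2020 "Other Bets" figures cited above, in millions of USD.
revenue_m = 135    # revenue, down 21% year over year
losses_m = 1_100   # losses of $1.1 billion, up 29%

# Cash burned relative to cash generated.
burn_ratio = losses_m / revenue_m
print(f"Other Bets burned {burn_ratio:.1f}x the cash it generated")
```

This prints a ratio of about 8.1, consistent with the article's "eight times more cash than it generated."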
Imagine if the smart city already had residents living in it and Sidewalk Labs decided to pull the plug. Is that really the type of control we want to hand over to tech giants? And yet, Sidewalk Labs’ withdrawal from Toronto is not democracy thwarting surveillance capitalism. Soon after the news broke, the government-backed agency Waterfront Toronto stated “this is not the outcome we had hoped for.” This case had more to do with the chickens coming home to roost at Alphabet; the pandemic was just the excuse. Keep an eye out for other companies using “unprecedented economic uncertainty” as cover to cancel projects, leave markets, and/or pivot — regardless of whether you are cheering for them or not. ProBeat is a column in which Emil rants about whatever crosses him that week. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,479
2,020
"ProBeat: Beware and be weary of permanent work from home | VentureBeat"
"https://venturebeat.com/2020/05/15/probeat-beware-and-be-weary-of-permanent-work-from-home"
"Opinion ProBeat: Beware and be weary of permanent work from home Twitter CEO Jack Dorsey emailed employees this week to say they could work from home permanently after the coronavirus pandemic lockdown passes. Twitter was one of the first major tech companies to order its employees to work from home in response to the pandemic. Last week, Facebook and Google told their respective employees they can work from home through the end of the year. But Twitter went further and said they don’t ever have to come back. Some see Twitter’s remote work forever as a bellwether for workplace strategies that start in tech and proliferate everywhere. So, we’re all going to take meetings from bed, wear sweatpants, and skip commuting forever? Be careful what you wish for. Some companies will undoubtedly follow in Twitter’s footsteps. 
The temptation to offload commercial real estate costs will be too great. But there will also be companies that go back to work just as before, with some health measures, of course. And don’t expect that group to be only businesses that cannot function without face-to-face interactions. Broadly speaking, for most white-collar jobs, WFH will become more accepted — after everyone at your company has experienced the pros and cons of working from home, more of the workforce doing so a few times a week will no longer raise eyebrows. In fact, I think it is premature to be projecting commercial real estate collapses, workers leaving cities in droves, and permanent office closures across the globe. Yes, there is currently “a new normal,” but this is a temporary shift — there will be a new normal post-pandemic, and we cannot predict what that will be. Instead, we can observe that millions are relearning how to work, which is bound to bring change. Still, let us not paint change with a broad brush. Be wary First, I think it’s important to acknowledge that WFH is a luxury. Most workers don’t have the privilege, let alone the means, to work from home. Most jobs cannot be performed over the internet. Would I want to live in a world where everyone has the option to work from home? Sure. But the key word there is option. Let’s make sure the pendulum doesn’t swing too far. Imagine a world where the default is that most work can only be done from home. The office is the luxury. Employee in-person interactions don’t exist. Let’s go even further. Do we want a world where if you don’t have internet, employment is not available to you? We have not yet seen a company transition from offering WFH as an option to offering an office as an option. Doing that at Twitter scale (some 4,600 employees) is not going to be easy. The company does, however, have one huge advantage: It was already working on such a transition long before the coronavirus. Was your company? 
Decentralization and remote work was a top priority for us pre-COVID. We’ll continue to learn and improve to make the experience even better, but it starts with empowering people to work where they feel most creative, comfortable, and safe. https://t.co/SnWdJYlrn9 — Leslie Berland (@leslieberland) May 12, 2020 I’ve been working from home for 12 years ( here are my tips ). To this day, I struggle with separating work life and personal life. It’s not easy to set those boundaries, and it’s not going to suddenly get easier when millions more have to do it. The upside is that I have the advantage of experience. The pandemic has had zero impact on my work life. While many have celebrated and complained about all the adjustments they have had to make, some WFH people have not skipped a beat. But many have. It was fun for the first week and maybe even enjoyable for two. Doing it all year round presents plenty of challenges, including everything from mental health to team morale. Furthermore, many employees enjoy going into work. Many argue that in-person communication is irreplaceable. If you are in that camp and your company switches to permanent WFH, going into the office is never going to be the same again. You simply won’t know who is going to be there. And what happens when your company takes the office away completely? Be weary Finally, I think what many people are missing is which part of Twitter’s move is radical. The option for workers to permanently work from home is not radical. Many companies offer that. The fact that some employees will never visit the office is not radical. Again, many companies have that. The radical part is that Twitter is removing the expectation of employees to visit the office. Twitter chief HR officer Jennifer Christie said, “Opening offices will be our decision; when and if our employees come back will be theirs.” As I’ve argued before, the future of work is remote. 
Companies like GitLab and WordPress have successfully built decentralized companies. But being distributed from the start is not the same thing as trying to move existing offices to employees’ homes. Other companies will try to follow Twitter and may even succeed, but I suspect most that do will quickly backpedal. It’s not easy to build company culture. It’s even harder to rebuild a culture after one has already taken form. In a year or two, we might even see reports of permanent WFH being rolled back. Look out for the first rumblings of such realizations on Twitter. ProBeat is a column in which Emil rants about whatever crosses him that week. "
16,480
2,020
"Oculus Quest expands safety, hand tracking, and remote work options | VentureBeat"
"https://venturebeat.com/2020/05/18/oculus-quest-expands-safety-hand-tracking-and-remote-work-options"
"Oculus Quest expands safety, hand tracking, and remote work options Elixir from Magnopus is one of the first several Oculus Quest titles to use hand tracking rather than requiring a wireless controller for input. One year after the Oculus Quest first hit stores, Facebook’s all-in-one virtual reality headset is widely believed to be one of the industry’s most successful and innovative products, benefiting from frequent drips of new OS features and soft sales statistics. Today, the company is celebrating Quest’s first anniversary with a collection of updates, notably including improvements to the headset’s integrated and third-party software capabilities, along with some interesting details on app sales. 
Perhaps the most visible improvements are coming to Quest’s safety system, Guardian , which is adding a Playspace Scan feature to automatically identify real-world objects that may trigger Guardian warnings. This allows users to manually remove them before fully immersing in VR. The Guardian wireframe border will also expand beyond its current single color (red), letting users choose blue, purple, or yellow options to flag physical contact danger zones. After five months of beta testing , Quest’s hand tracking feature is finally becoming generally available to users, and Facebook will officially open the floodgates to third-party developers on May 28. Beyond Quest’s OS-level use of the feature, several hand-controlled titles will be available for purchase in the Oculus Store this month, including Magnopus’s Elixir , Aldin Dynamics’ Waltz of the Wizard , and Fast Travel’s Curious Tale of the Stolen Pets. Users interested in exploring hand tracking within VR films may prefer the Cinematic Narratives Set, a two-movie bundle with a Colin Farrell-voiced romantic story called Gloomy Eyes, and a “world of miniatures” film called The Line. Noting the coronavirus pandemic’s impact on working from home, Facebook says the Oculus Store will increase its focus on productivity and collaborative apps over the coming months, adding the multi-user, cross-platform XR meeting app Spatial, as well as a distraction-free VR workspace app called Immersed this summer. The latter app promises to provide Quest users with access to multiple computer screens at once, with the option to work by yourself or share a space and/or screens with coworkers. Last, but not least, Facebook is offering more small data points to the big picture of Quest sales. 
The company says users have spent more than $100 million on Quest content over the past year, with over 10 titles individually generating $2 million in revenue, including VR adventure Moss and action game Pistol Whip. Those aren’t huge numbers; Facebook said last month that 20 titles had individually surpassed $1 million in revenue, which works out to around 50,000 purchases of a $20 app, roughly the median price of Quest titles. To celebrate the anniversaries of both Quest and Rift S , the Oculus Store will offer special sales for each platform starting on May 21, hopefully with discounts capable of creating even larger uptake for both VR titles and their developers. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. "
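The per-title math in the Quest sales figures above is easy to verify; a minimal check using only the numbers cited in the article (gross revenue, ignoring any store cut):

```python
# Figures cited above: a Quest title surpassing $1 million in revenue,
# at roughly the $20 median price of Quest apps.
revenue_usd = 1_000_000
median_price_usd = 20

# Implied number of purchases at the median price.
purchases = revenue_usd // median_price_usd
print(purchases)  # → 50000, i.e. "around 50,000 purchases of a $20 app"
```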
16,481
2,020
"Magic Leap's consumer retreat is good news for the AR/XR industry | VentureBeat"
"https://venturebeat.com/2020/04/23/magic-leaps-consumer-retreat-is-good-news-for-the-ar-xr-industry"
"Opinion Magic Leap’s consumer retreat is good news for the AR/XR industry Magic Leap 1 is a tool for 3D visualization. Although I cover the VR and AR industries with genuine enthusiasm, writing about Magic Leap has never been fun. The company radically overpromised what it would deliver as a user experience, created a product too expensive for consumers to buy, and barely found third-party developers willing to support its platform with apps. On the (very rare) occasions I’ve actually seen Magic Leap headsets in public, they’ve proved fidgety to wear, with small “augmented” viewing areas and only modestly interesting software. Yesterday, we reported that Magic Leap was laying off an unspecified number of employees and sharpening its focus on enterprise opportunities, but its blog post didn’t do justice to what was happening. 
As affected employees took to social media, it became clear that the company was shuttering its consumer business and eliminating half of its workforce — around 1,000 people. “Pivot” is way too soft of a word to describe the carnage, and my heart broke for all the people whose jobs were lost yesterday. It would be easy to describe the news as a sign of some broader disinterest in AR or mixed reality (XR) technologies, but that’s clearly not true. The coronavirus pandemic has made clear that XR devices will be highly useful for helping people work at home and participate in social gatherings without being physically present. Over the next few years, a CEO’s ability to lead a meeting holographically may be as plausible as Emperor Palpatine directing his minions from afar — science fact following science fiction, and possibly even becoming a symbol of tech savvy and status. As much as I respect some of the talented people who work (or worked ) at Magic Leap, it hasn’t ever struck me as a viable consumer AR device maker. I’ll give the company bravery points for daring to dream so big on augmented reality that it needed to create not only the glasses but also its own operating system, computing device , ecosystem, and “ Magicverse ” to power them. It also wins a boldness award for actually securing the funding, people, technologies, and manufacturing necessary to bring its vision to life. But throwing all the money in the world at Magic Leap was not going to result in a consumer product. For the larger mixed reality industry, the problem was that much of the actually available XR investment money in the world had already been thrown at Magic Leap. 
XR startups routinely (and sometimes loudly) complained that they couldn’t get much or any funding for their projects because Magic Leap’s grand vision had sucked all the cash and energy out of the investing community. So there was a zero-sum game here to some extent; by exiting the consumer space, Magic Leap is simultaneously making its former consumer talent available to other XR companies and enabling investors interested in consumer AR to place their bets elsewhere — whenever they’re ready to spend cash again. Magic Leap had a lot of hurdles to overcome as a consumer business , but most of them come down to the financial consequences of a tech startup trying to own the “whole stack” rather than just a piece of the pie. Instead of just getting into the AR glasses business, which has cost even the trillion-dollar Apple years of time and untold cash, Magic Leap built (and then had to convince people to buy) the computer and OS to power them. Average consumers might be willing to spend $500 on AR glasses, but Magic Leap set its initial price for developers at $2,300, then offered a $3,000 package in an effort to cover ongoing service costs. For consumers, those numbers screamed “no way” from the start and never improved. If Magic Leap had relied on an existing mobile platform — iOS or Android — it could have focused on just getting the glasses right. That’s what Nreal is doing with its Light consumer AR glasses , which presume users will have a Qualcomm Snapdragon-powered phone running Android, enabling the glasses to hit a $500 price point. Developers who want to support Light can just build Android apps with Nreal-specific features, which makes a lot of sense, particularly for XR companies with limited coding resources. Apart from Nreal, many companies in the XR space could benefit from an outflux of Magic Leap resources. 
Niantic — a proven success in smartphone-based AR — is apparently building its own hardware and software solution with support from Qualcomm, and it could pick up some of Magic Leap’s gaming talent. Lots of smaller developers could pick up individuals who have spent years of their lives working to solve big and little problems in the AR space ahead of broad consumer adoption. And don’t shed a tear for Magic Leap. It has had billions of dollars at its disposal to tackle virtually every conceivable aspect of how augmented reality will impact daily life — a subject Apple , Facebook , and other well-funded tech companies continue to pursue in earnest, guaranteeing that AR will indeed become a big deal soon enough. Even after the layoffs, the company still has plenty of money, a thousand people, and major corporate backers (such as Google and AT&T ) to help it focus on enterprise AR, which is more than most of its glasses-making rivals can say. Magic Leap’s troubles aside, my gut feeling is that everything’s going to work out just fine for the XR industry as a whole. It’s clear at this point that the Magicverse won’t be at the center of everything. But once the dust settles, expect a big bang-class multiverse of competing and perhaps more viable visions, backed by more practical and affordable AR hardware. "
16,482
2,019
"Skillprint studies how you play games to unlock your hidden skills | VentureBeat"
"https://venturebeat.com/2019/09/19/skillprint-studies-how-you-plays-games-to-unlock-your-hidden-skills"
"Skillprint studies how you play games to unlock your hidden skills Skillprint's Chethan Ramachandran and Davin Miyoshi. Play games, unlock your inner superpowers, and win at life. That’s the idea behind Skillprint , a startup that uses advanced analytics to study how you play games and uses that information to discern your personality and what kind of jobs might be right for you. Skillprint plans to use entertaining mobile games to help people discover their strengths in the real world. The more people play, the more insights they learn, and the more opportunities they unlock. Over time, as people build a profile in the Skillprint app (in closed beta), it will become more predictive and suggest ways to unlock their “inner superpowers” to help them get ahead in life. 
This work is the brainchild of Chethan Ramachandran and Davin Miyoshi, cofounders of Oakland, California-based Skillprint. Ramachandran was the founder of Playnomics and the winner of our best game startup contest in 2010. He sold that company to Unity Technologies in 2014 and helped it develop its analytics business. Unity Analytics is now processing 1.5 billion events in games per month. Ramachandran met with Miyoshi, one of the creators of the social casino game business that ultimately built GSN Network into a big game company, and they started their company. “We’re not building a games company, per se, but we are going to use games for a greater purpose,” Ramachandran said in an interview with GamesBeat. “And that purpose is essentially helping people figure out their strengths, and understand what they’re meant to do in the real world. And a lot of this came out of both of our respective backgrounds.” We spoke for more than an hour, and I found it fascinating. You are what you play Above: Skillprint can discern your personality by the way you play games. They believe that the future of work and learning is upon us, and it will require continuous upskilling. A Pew Research Center survey found that 54% of adults in the labor force say that it will be essential for them to get training and develop new skills throughout their work life in order to keep up with changes in the workplace. Skillprint wants to be everyone’s personal coach in this future, driving the right suggestion for learning and/or work to the right person at the right time. At the core of Skillprint’s technology is a massive predictive pattern matching engine. 
This, coupled with neuroscience research around how game mechanics translate into cognitive and personality traits, gives the company the ability to read anyone’s play patterns and make inferences about who that person is in the real world: how bold they are, how analytical, how strategic or creative, for example. The company is looking to partner with third-party hyper-casual and casual mobile game makers to join its audience network. As people play, they will start to see unique behavioral traits about how they think and act derived from their gameplay. For game makers, Skillprint represents a unique way to increase engagement, grow traffic, find high-quality users, and ultimately monetize. Game makers will be able to join the Skillprint cross-promotion network by integrating a simple software development kit (SDK) which will route users between the games and the Skillprint app. The Skillprint team brings a significant amount of domain expertise to this opportunity. The rest of the team includes some of the world’s leading experts in large scale data processing, personality assessment, and game design and production. Getting back in the game Above: Skillprint wants to unlock your inner superpowers by analyzing how you play. “At Playnomics, we saw a ton of data,” he said. “Obviously, the game publishers were interested in using that data for monetization predictions, stuff like that. But along the way, you know, I saw people’s personality come alive in games, data around their aptitude and their skills. And so I kind of became obsessed with that even when I was running Playnomics.” “After I left Unity, I spent a couple of years investing in advising around this theme about whether you really could understand how the mind works by using digital technology, and I went to all the neuroscience labs you can imagine,” Ramachandran said. “And what became clear after a while is all of them were using games. 
And they were all using games to predict different things about people's mindsets. The cognitive skills, how they plan, how they execute, how they make decisions, whether you're bold, whether you're a risk-taker, the personality, you know, are you extroverted or introverted? Are you open to new experiences? Are you agreeable or not? And where you are on the spectrum? And then things like their skills and aptitude?” Ramachandran felt compelled to get back into the startup world. “I had to kind of come back in. And so I think what Davin and I were really interested in, and we've been talking about this for years, you know, is this world where the future of work looks to be really liquid and kind of fluid, between learning, and picking up some skills and getting a gig and switching jobs and trying to figure out what career to have,” he said. “No one is really building an agent for people that sits in their pocket to get to understand their core strengths, and tells them, this is what you're really good at.” Ramachandran wants to be able to definitively say who you should be working with and in what kind of job. “We think that you could build an agent, by having people play a series of actually existing games. So part of what we think we can do is plug into anywhere people play, kind of read how consumers are playing, and then give consumers a sense of what their core strengths and skills, their inner superpowers, are.” At Mesmo, Miyoshi launched an early role-playing game on Facebook targeted at women. His team was acquired by GSN and moved into social casino games. By the time he left in 2013, GSN had 75 million users around the world. He started an online lending company for small businesses in Colombia. Starting with mobile games Above: Green Panda Games' stable is all about hyper-casual mobile titles. For now, the company is focusing on interpreting how people play simple mobile games, such as hyper-casual games. But over time, it wants to add more sophisticated games into the mix as well. “I really think play is core to human nature,” Ramachandran said. “There are 25 years of neuroscience on hyper-casual games and the mechanics they use to indicate things about people in the real world. For instance, there's a lot of research around match-3 games, about how good someone is at scanning in the real world and visually finding things. The research is just super clear.” You can find out if someone is a risk-taker or timid. “What they found is that the correlation between how someone plays a match-3 game, and someone who's really strong at scanning the real world, is super high,” he said. “You can actually predict how good someone is at things. I don't know how you feel about the public perception of games these days. It feels like this dirty, awful thing that's bad for society. I think it's exactly the opposite. I think they're really useful. And they can really tell you something about people. And if we can help them become real and useful in the real world, that's going to be pretty awesome.” What games say about you Above: Candy Crush Soda Saga Right now, the company is looking at commercial games to see what they can teach it about gamers. Even silly games can be instructive. “We had the silliest billiards game you could ever imagine. And we saw clusters of people who are highly social, highly competitive, people who are collectors and explorers,” Ramachandran said. “And then we put little incentives in front of those groups. And against the control group, the personality group had a 30 times response to the personality incentive. And the reason is, you were just talking to people at their core level, how they approach things. And so I think that's a little bit more about how we're going to do it. Take very simple games, where most anybody can handle the control schemes, and then look at the differences in how people make decisions and choices, either by measuring against science or by pattern matching.” The company has five employees, and they're bootstrapping right now. It's moving into its private beta test soon. Over time, the data will get better, and come from a variety of game sources. Skillprint can build up an ontology, like a tree of related things, that ultimately tells you something about people. “Our ontology is game mechanics, game types, play data, behavioral traits, and ultimately, skills and real-world opportunities,” he said. “And as we start to build that, they'll get more and more predictive, because we'll start to see edge cases and say, oh, Davin is actually really, really bold in these situations.” Privacy issues? Above: Apple CEO Tim Cook delivers a keynote on privacy during the European Union's privacy conference. Some of this might sound creepy when it comes to privacy. “We're very cognizant of privacy. I think that's something that we can get ahead of. The way to think about this data is that it is not personally identifiable. It's actually really privacy compliant. And you're also putting the data in the hands of the consumer, and letting them have controls as to how that data is being used, and whether they want to connect to opportunities, and what amount of data they want to keep and show. And opportunities will have to be very compliant with GDPR [European privacy law],” Ramachandran said. The value of games Some people recognize the value of games. McAfee found in a report that gamers can make good cybersecurity researchers. At the same time, roughly two-thirds of Americans are unhappy or disengaged in their jobs. “Twenty-five percent of all 25-year-olds want to switch their careers no matter what,” he said. “And so it's just staggering, you know, and 36% of the U.S. population is working in the gig economy. You're looking at a world where there is no safety net. Everybody's kind of their own free agent, everybody's building their own portfolio of skills. How do you take all that and be someone's ally, so that they can naturally use what's happening in the world, but use it to their advantage? And that's the ultimate vision of where we want to go.” “I don't know if you've read any of Jane McGonigal's research, but she makes a point that, if you tell people you're playing something for a purpose, then all of a sudden their perspective on what they're playing changes dramatically. And it becomes actually really, really meaningful to them,” he said. “If you tell them that what they're playing is a waste of time, it becomes a waste of time, right? So it's almost like you have to define it ahead of time and say, there is value here. And then your whole mindset shifts in how you view what you're doing on both sides, the parent and the child.” He added, “And we'd be more than happy to beat the drum of saying, you know, these games you think are kind of throwaway time-wasters are actually pretty valuable, and can be really revealing.” GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles!
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,483
2,019
"HP unveils $599 Reverb VR headset with super pixel density and 114-degree FOV | VentureBeat"
"https://venturebeat.com/2019/03/19/hp-unveils-599-reverb-vr-headset-with-super-pixel-density-and-114-degree-fov"
"HP unveils $599 Reverb VR headset with super pixel density and 114-degree FOV HP hasn't yet made a huge mark on the consumer virtual reality market, but its newest product could change that. Reverb is a Windows Mixed Reality headset with 2,160 by 2,160-pixel screens and a 114-degree field of view, enabling users to experience VR with sharper graphics than before — at a $599 starting price. The displays are really Reverb's key selling point. By contrast with HTC's $800 Vive Pro, which has 1,440 by 1,600 resolution per eye, HP's headset delivers considerably more pixels, which can further reduce the “screen door effect,” where pixels and the gaps between them are visible to users. Reverb actually has twice as many pixels per eye — over 4.66 million versus Vive Pro's 2.3 million — which is very impressive, assuming a computer's video card has enough power to fill the screens 75 times per second. HP is also using improved lenses that enable the screens to deliver a wider field of view than its 105-degree-FOV predecessor. Made from a combination of plastic and fabric, Reverb is designed to be lightweight — only 1.1 pounds, slightly more than the original Oculus Rift — while packing integrated cameras for inside-out tracking. Windows Mixed Reality mode incorporates a view of the real world without the user having to take off the goggles. Reverb also supports Steam VR's large collection of apps and games. In addition to the $599 consumer model, a $649 “Professional” enterprise version will ship with a replaceable fabric face mask and a separate cable so multiple users can share the headset. Each version will include the same dual Bluetooth controllers HP previously released for its lower-end VR headset. Both versions of Reverb will ship in April. HP will continue to sell the lower-end model for $449. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
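The per-eye pixel counts quoted above are easy to verify; a quick sketch in plain Python, assuming nothing beyond the resolutions the article cites:

```python
# Per-eye pixel counts from the resolutions cited in the article.
reverb_pixels = 2160 * 2160    # HP Reverb: "over 4.66 million"
vive_pro_pixels = 1440 * 1600  # HTC Vive Pro: "2.3 million"

print(f"{reverb_pixels:,}")             # 4,665,600
print(f"{vive_pro_pixels:,}")           # 2,304,000
print(reverb_pixels / vive_pro_pixels)  # 2.025 -> roughly "twice as many pixels per eye"
```

The ratio of 2.025 matches the article's "twice as many pixels per eye" claim.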
16,484
2,014
"Google's mobile search results get 'mobile-friendly' label to show you which sites work best on your phone | VentureBeat"
"https://venturebeat.com/2014/11/18/googles-mobile-search-results-get-mobile-friendly-label-to-show-you-which-sites-work-best-on-your-phone"
"Google's mobile search results get 'mobile-friendly' label to show you which sites work best on your phone Google today announced the debut of a new “mobile-friendly” label on its mobile search results page. The company says the change will be rolling out globally over the next few weeks, so don't fret if you don't see it yet. The new indicator is meant to show you which sites are optimized for your phone. You will only see the new label if you're using a phone: it shouldn't appear on the desktop version of Google Search, even if sites on the results page are indeed optimized for mobile. Google says webpages are eligible for the “mobile-friendly” label if they meet the following criteria: they avoid software that is not common on mobile devices, like Flash; use text that is readable without zooming; size content to the screen so users don't have to scroll horizontally or zoom; and place links far enough apart so that the correct one can be easily tapped. These four requirements are detected by Googlebot. If you want to see whether your site is eligible for the label, you can check your pages with Google's Mobile-Friendly Test by dropping in URLs and hitting the blue Analyze button. If your website fails, you'll want to read Google's updated Webmasters Mobile Guide documentation and check out the Mobile usability report in Google Webmaster Tools. The company also offers a how-to guide for third-party software like WordPress or Joomla if your website is hosted on a content management system (CMS) and you'd like to try a mobile-friendly template. The above links are currently only available in English, but Google says translations are coming “within the next few weeks.” This isn't the first time Google has experimented with such a label, but it is the first time the company has decided to roll it out to everyone globally. For a long time now, the search giant has been experimenting with ranking search results based on whether sites are optimized for mobile. Google didn't share any more details today on if and when the mobile-friendly criteria would one day become a ranking signal. "
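The four criteria above lend themselves to simple static checks. Below is a minimal, illustrative sketch (not Google's actual Googlebot logic) that screens raw HTML for two of them: no Flash embeds, and a responsive viewport meta tag as a common proxy for "sizes content to the screen." The function name and heuristics are our own:

```python
import re

def looks_mobile_friendly(html: str) -> bool:
    """Toy heuristic approximating two of the 'mobile-friendly' criteria."""
    # Criterion 1: avoids software not common on mobile, like Flash.
    has_flash = re.search(r'application/x-shockwave-flash|\.swf', html, re.I)
    # Proxy for criterion 3: a responsive viewport meta tag.
    has_viewport = re.search(r'<meta[^>]+name=["\']viewport["\']', html, re.I)
    return not has_flash and bool(has_viewport)

print(looks_mobile_friendly(
    '<head><meta name="viewport" content="width=device-width"></head>'))  # True
print(looks_mobile_friendly(
    '<embed type="application/x-shockwave-flash" src="a.swf">'))          # False
```

A real check would also need to render the page to measure text size and tap-target spacing, which is why Google's own Mobile-Friendly Test fetches and renders the URL rather than pattern-matching the markup.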
16,485
2,015
"Google starts using indexed apps and its 'mobile-friendly' label as ranking factors for search results | VentureBeat"
"https://venturebeat.com/2015/02/26/google-starts-using-indexed-apps-and-its-mobile-friendly-label-as-ranking-factors-for-search-results"
"Google starts using indexed apps and its 'mobile-friendly' label as ranking factors for search results Google today announced two changes to how it ranks search results. Google search is starting to take indexed apps and sites it labels as “mobile-friendly” into consideration. Google has been experimenting with various levels of app indexing for years, with features showing up as early as December 2013. Now it will use information from indexed apps as a ranking factor, but only for signed-in users who have said apps installed. This means Google search may surface content from indexed apps more prominently, showing you content from inside an app installed on your mobile device. Back in November, Google started labeling sites as “mobile-friendly” to denote those that are optimized for phones. Now, the company is looking at using this as a ranking factor as well, but only in mobile searches. Still, this will affect all languages worldwide, and the company expects it “to have a significant impact in our search results.” Google says the app indexing change goes into effect today, though it didn't reveal when it would affect users who aren't signed in. The mobile-friendly change will meanwhile begin rolling out on April 21. Chances are you won't see the effects yourself for a few months; however, mileage will vary depending on what you search for and how often. If you're an app developer, you can find out more about app indexing on the Google Developers page. There's also a step-by-step guide if you want to add support right away. If you're a webmaster, you can make sure you have a mobile-friendly site by checking out the developer guide, testing your own pages using the Mobile-Friendly Test, and grabbing a full list of mobile usability issues across your site using the Mobile Usability Report. Even if you don't care about SEO, you'll probably want to do this, as it will improve your site's performance. In short, Google is pulling apps, mobile, and the web closer together with this ranking change. At the same time, the company is helping developers and webmasters improve their offerings, and getting even more data about where and how users are accessing information. "
16,486
2,015
"Google's Mobilegeddon: Everything you need to know | VentureBeat"
"https://venturebeat.com/2015/04/20/googles-mobilegeddon-everything-you-need-to-know"
"Google's Mobilegeddon: Everything you need to know Mobilegeddon has arrived. Google today updated its algorithm to rank sites it labels as “mobile-friendly” higher on mobile search results. Because the company's search engine drives so much traffic, many are worrying, panicking, and even calling the update “Mobilegeddon” (a not-so-clever play on the terms mobile and Armageddon). On February 26, Google first announced plans to roll out mobile ranking changes on April 21 (an unprecedented move — the company almost never announces algorithm changes in advance). Yet this adjustment has been in the works for longer than that: The company started labeling sites as “mobile-friendly” on November 18 to denote those that are optimized for phones. A webpage is eligible for the “mobile-friendly” label if it meets the following criteria, as detected in real time by Googlebot: it avoids software that is not common on mobile devices, like Flash; uses text that is readable without zooming; sizes content to the screen so users don't have to scroll horizontally or zoom; and places links far enough apart so that the correct one can be easily tapped. Google is now using this label as a ranking factor across all languages worldwide; the update applies to individual pages, not entire websites. As a result, searchers get “high-quality and relevant results where text is readable without tapping or zooming, tap targets are spaced appropriately, and the page avoids unplayable content or horizontal scrolling.” The company itself has said it expects the update to “have a significant impact in our search results.” Still, the scope is only mobile searches on phones; desktop and tablet searches will not be affected. If you run a website, whether it's a personal one or for a business, you may see an impact on your traffic, specifically any you receive from Google's mobile search site. In theory, it will go up if your site is optimized for mobile and down if it isn't. Mobile search traffic could of course also remain completely unchanged — it will take a few days or even weeks for the data to start pouring in, and then we'll know for sure which sites were affected and by how much. Many online services have been scrambling to improve their mobile sites as a result, but of course not everyone has reacted quickly enough. Thankfully, Google provided practical pointers in February, so if you're behind, you can catch up fairly easily: check if your site is mobile-friendly by running the Mobile-Friendly Test on your webpages; get a full list of mobile usability issues across your site using the Mobile Usability Report; and refer to the developer guide for more information and tips. If, for whatever reason, you're not interested in search engine optimization (SEO), these are still steps you should make an effort to go through. Even if you don't rely on mobile search traffic from Google, improving your mobile site's performance definitely won't hurt. It's possible that Mobilegeddon is an overreaction from the SEO community — nobody outside of Google really knows just how much of an impact the ranking change will have on the company's search engine algorithm. We do know, however, that the mobile-friendly label is just one of many signals Google uses to rank search results. In other words, Google isn't going to start promoting sites just because they're mobile-friendly — they still have to be relevant and useful. "
16,487
2,015
"Google Chrome now has over 1 billion users | VentureBeat"
"https://venturebeat.com/2015/05/28/google-chrome-now-has-over-1-billion-users"
"Google Chrome now has over 1 billion users At the I/O 2015 developer conference today, Sundar Pichai, Google's senior vice president of product, announced that Chrome has passed 1 billion active users. Less than a year ago, Google revealed Android has over 1 billion active users. These are indeed Google's biggest ecosystems. Google also shared that Google Search, YouTube, and Google Maps all have over 1 billion users as well. Gmail will reach the milestone next; it has 900 million users. Chrome is arguably more than a browser: It's a major platform that Web developers have to consider. In fact, with regular additions and changes, developers have to keep up to ensure they are taking advantage of everything available. "
16,488
2,016
"Google will start ranking 'mobile-friendly' sites even higher in May | VentureBeat"
"https://venturebeat.com/2016/03/16/google-will-start-ranking-mobile-friendly-sites-even-higher-in-may"
"Google will start ranking 'mobile-friendly' sites even higher in May Google today announced it is rolling out an update to mobile search results in May that “increases the effect” of its mobile-friendly ranking signal. The goal is to “help our users find even more pages that are relevant and mobile-friendly,” though the company didn't share exactly how much of an impact it expects the change to have. In November 2014, Google started labeling sites as “mobile-friendly” to denote which pages are optimized for phones. In February 2015, Google announced plans to roll out mobile ranking changes on April 21 (an unprecedented move — the company almost never announces algorithm changes in advance). A webpage is eligible for the “mobile-friendly” label if it meets the following criteria, as detected in real time by Googlebot: it avoids software that is not common on mobile devices, like Flash; uses text that is readable without zooming; sizes content to the screen so users don't have to scroll horizontally or zoom; and places links far enough apart so that the correct one can be easily tapped. As promised, Google started using this label as a ranking factor across all languages worldwide in April. Dubbed “Mobilegeddon” (does that make this Mobilegeddon 2?), the change applied to individual pages, not entire websites. In short, if you still haven't updated your site, you have even more reason to do so now, before May. To check if Google deems your site mobile-friendly, use the Mobile-Friendly Test. Google also offers a Webmaster Mobile Guide with more details for web developers. If, for whatever reason, you're not interested in search engine optimization (SEO), these are still steps you should make an effort to take. Even if your site doesn't rely on mobile search traffic from Google, improving your mobile site's performance can't hurt — your users will definitely appreciate it. And of course, if your site is already mobile-friendly, Google promises you will not be affected by this update. "
16,489
2,017
"Google Search removes 'mobile-friendly' label, will start negatively ranking mobile interstitials in 2017 | VentureBeat"
"https://venturebeat.com/2016/08/23/google-search-removes-mobile-friendly-label-will-start-negatively-ranking-mobile-interstitials-in-2017"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google Search removes ‘mobile-friendly’ label, will start negatively ranking mobile interstitials in 2017 Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Google today announced two updates to mobile search results: an aesthetic one rolling out now and an algorithmic one coming next year. The former consists of removing the “mobile-friendly” label in search results and the latter will punish mobile sites that use interstitials. The goal is to “make finding content easier for users,” though as always, the company didn’t share exactly how much of an impact users and webmasters can expect. In November 2014, Google started labeling sites as “mobile-friendly” to denote which pages are optimized for phones. In February 2015 , Google announced plans to roll out mobile ranking changes in April 2015 , and then in March 2016 , it promised to start ranking “mobile-friendly” sites even higher in May. 
Now that the ranking changes have been in place for months, and the team feels like it has made an impact, the label is no longer needed. Indeed, Google says 85 percent “of all pages in the mobile search results now meet this criteria and show the mobile-friendly label.” So the label is going away “to keep search results uncluttered.” Again, this is strictly an aesthetic change: The mobile-friendly criteria will continue to be a ranking signal. If your site is in the 15 percent group, here’s a quick recap. A webpage is considered “mobile-friendly” if it meets the following criteria, as detected in real time by Googlebot: Avoids software that is not common on mobile devices, like Flash Uses text that is readable without zooming Sizes content to the screen so users don’t have to scroll horizontally or zoom Places links far enough apart so that the correct one can be easily tapped But Google isn’t done just yet. The company wants to keep “improving” mobile sites by ranking them lower and higher in its search results based on its own criteria. The company now wants to tackle “intrusive interstitials” as they “provide a poorer experience to users than other pages where content is immediately accessible.” After January 10, 2017, pages where content is not easily accessible (present on the page and even available for indexing by Google Search, but visually obscured by an interstitial) when coming from mobile search results “may not rank as highly.” To be clear, Google isn’t planning to punish all types of interstitials. Interstitials that Google doesn’t like include showing a popup that covers the main content (immediately or delayed), displaying a standalone interstitial that the user has to dismiss before accessing the main content, and using a layout where the above-the-fold portion is similar to a standalone interstitial but the original content is inlined underneath. 
Interstitials that Google deems OK include legal obligations (cookie usage or age verification), login dialogs on sites where content is not publicly indexable, and banners that use a reasonable amount of screen space and are easily dismissible. Google emphasizes that this new signal “is just one of hundreds of signals that are used in ranking.” But if you can avoid user-hostile interstitials, you’ll help both your users and your Google traffic. "
16490
2018
"Google Search will start ranking faster mobile pages higher in July | VentureBeat"
"https://venturebeat.com/2018/01/17/google-search-will-start-ranking-faster-mobile-pages-higher-in-july"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google Search will start ranking faster mobile pages higher in July Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Google today announced a new project to improve its mobile search results: factoring page speed into its search ranking. As the company notes, page speed “has been used in ranking for some time” but that was largely for desktop searches. Starting in July 2018, page speed will be a ranking factor for mobile searches on Google as well. In November 2014, Google started labeling sites as “mobile-friendly” to denote pages optimized for phones. The company then spent the next few years experimenting with using the label as a ranking factor , ultimately pushing those changes in April 2015 and increasing the effect in May 2016. The label was removed in August 2016 as the company noted that most pages had become “mobile-friendly.” Google now plans to wield that power again to make mobile pages load faster. 
Here is how the company explains it:

The “Speed Update,” as we’re calling it, will only affect pages that deliver the slowest experience to users and will only affect a small percentage of queries. It applies the same standard to all pages, regardless of the technology used to build the page. The intent of the search query is still a very strong signal, so a slow page may still rank highly if it has great, relevant content.

The move is part of a bigger push at Google to speed up the mobile web. Earlier this month, the company started rolling out its new Search Console to website owners globally. The tool lets web developers analyze their site’s indexing on Google Search, view analytics, peruse inbound links, submit and remove content for crawling, monitor malware, and so on. Google will not be offering a tool that directly indicates whether a page will be affected by this new mobile ranking factor starting in July. Instead, the company points to three of its own resources that developers can use to evaluate their mobile page’s performance: Chrome User Experience Report, Lighthouse, and PageSpeed Insights.

Interestingly, the announcement doesn’t mention Google’s Accelerated Mobile Pages (AMP) project. At its I/O developers conference last year, the company shared that AMP pages now load twice as fast from Google Search, and just last week the team announced that AMP URLs will be getting a makeover. It doesn’t look like implementing AMP is enough to get a boost from this upcoming Speed Update — Google wants developers to improve their mobile site performance across the board. 
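Of the three resources, PageSpeed Insights is the easiest to script against, since it also exposes a public REST endpoint. The sketch below builds a request URL for it; the v5 path and the strategy parameter reflect Google's current published API docs rather than anything stated in this article, so treat the specifics as an assumption.

```python
# Build a PageSpeed Insights request URL for a page's mobile performance.
# The v5 endpoint path and the "strategy" parameter are taken from Google's
# API documentation (an assumption here, not from the article); actually
# fetching the report requires network access and is not shown.
from urllib.parse import urlencode

def psi_request_url(page: str, strategy: str = "mobile") -> str:
    base = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
    return f"{base}?{urlencode({'url': page, 'strategy': strategy})}"

print(psi_request_url("https://example.com"))
```

Fetching that URL returns a JSON report combining lab data (Lighthouse) and field data (Chrome UX Report) for the page.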
"
16491
2019
"Android passes 2.5 billion monthly active devices | VentureBeat"
"https://venturebeat.com/2019/05/07/android-passes-2-5-billion-monthly-active-devices"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Android passes 2.5 billion monthly active devices Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Google revealed a handful of milestones during its I/O 2019 developer conference in Mountain View today, but the biggest was undoubtedly that Android now powers 2.5 billion active devices. “We’re here to talk about Android version, 10, and we get to celebrate a milestone together,” Android product manager Stephanie Cuthbertson said onstage. “Today there are over 2.5 billion active Android devices.” The last time Google reported on this figure was also at I/O, in May 2017, when Android passed 2 billion monthly active devices. Google thus managed to add some 500 million devices in 24 months, or about 20 million devices per month. In September 2015, Android had 1.4 billion active users. People can of course use multiple devices and multiple accounts, so the number of users is likely lower than 2.5 billion. 
Nonetheless, the growth is impressive. “10 years and now over 2.5 billion active devices. Thanks for joining us on this journey. #io19 pic.twitter.com/wC2VcVgEBS” — Android (@Android) May 7, 2019

Let’s put that into context. That’s more than the 1.5 billion PCs that Microsoft estimates are running Windows worldwide, a figure that hasn’t been updated in years. It’s also more than Facebook’s 2.38 billion monthly active users, as of last month. Android was the company’s first platform to reach the 2 billion mark. For a while now, Google has pointed out that it has eight products with more than a billion users: Android, Chrome, Google Play, Gmail, Google Drive, Google Maps, Google Search, and YouTube. But Android remains way ahead of the pack. "
16492
2020
"Alphabet reports $41.2 billion in Q1 2020 revenue: Google Cloud up 52%, YouTube up 33%, and Other Bets down 21% | VentureBeat"
"https://venturebeat.com/2020/04/28/alphabet-earnings-q1-2020"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Alphabet reports $41.2 billion in Q1 2020 revenue: Google Cloud up 52%, YouTube up 33%, and Other Bets down 21% Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Google parent company Alphabet today reported earnings for its first fiscal quarter of 2020, including revenue of $41.2 billion, net income of $6.8 billion, and earnings per share of $9.87 (compared to revenue of $36.3 billion, net income of $8.3 billion, and earnings per share of $11.90 in Q1 2019). At $33.8 billion, Google advertising made up 82% of Alphabet’s total revenue for the quarter. Given the global pandemic’s impact on advertising, many are poring over Alphabet’s numbers to see how bad the damage is — but as this quarter ended in March, it only shows a glimpse at what’s to come. Analysts had expected Alphabet to earn $40.3 billion in revenue and report earnings per share of $10.38. The company thus beat on revenues but missed on earnings per share. 
Its stock was down 3% in regular trading and up 7% in after-hours trading. This is some good news for the first full quarter with Sundar Pichai leading both Alphabet and Google as CEO. “Given the depth of the challenges so many are facing, it’s a huge privilege to be able to help at this time,” Pichai said in a statement. “People are relying on Google’s services more than ever, and we’ve marshaled our resources and product development in this urgent moment.”

Slowdown in Q1, more to come in Q2

While Alphabet’s Q1 2020 revenues were up 13% versus last year, CFO Ruth Porat warned the quarter ended on a low note. “Performance was strong during the first two months of the quarter, but then in March we experienced a significant slowdown in ad revenues,” she said. Traffic acquisition costs were up 8.6% to $7.45 billion. On the earnings call, Pichai said the “significant and sudden” hit to advertising in March “correlated to the locations and sectors impacted by the virus and related shutdown orders.” Porat said the company anticipates that the second quarter “will be a difficult one” for its advertising business, but she noted that it would be “premature to comment on timing, given all the variables here.” Alphabet also grew its headcount by 19% to 123,048 employees in Q1 2020. The company is slowing down hiring, however, so that number will be another one to watch this year.

Google Cloud

Google’s cloud division is facing an uphill battle against market leaders Amazon Web Services (AWS) and Microsoft Azure. The division includes revenue from Google Cloud Platform as well as G Suite, making comparisons with other public cloud providers difficult. Google has consistently said that GCP growth tends to be higher than the cloud division overall, meaning G Suite’s growth is lower. 
But there is some good news for G Suite — Google Meet, the company’s Zoom competitor, is growing quickly. “Last week, we surpassed a significant milestone,” Pichai said on the earnings call. “We are now adding roughly 3 million new users each day and have seen a 30-fold increase in usage since January. There are now over 100 million daily Meet meeting participants.” Google Cloud revenues in Q1 2020 hit $2.78 billion, up 52% from $1.83 billion in Q1 2019. That’s a big jump, but we don’t know how it compares to previous quarters, as Alphabet only began breaking out Google Cloud in the previous quarter. We thus have only one other data point: Google Cloud revenue was up 53% in Q4 2019.

YouTube

Alphabet also only started breaking out YouTube as a separate line item in its earnings last quarter. YouTube ads brought in $4.04 billion in Q1 2020, up 33% from $3.03 billion in Q1 2019. It’s worth noting that Alphabet also counts other, non-advertising revenue for YouTube, which isn’t included in this figure. The company hides that revenue in the “Google other” line item, which this quarter was $4.4 billion. That segment includes hardware sales for devices such as Chromebooks, Pixel phones, and Nest products (like smart speakers).

Other Bets

Unlike the way it handles Google Cloud and YouTube, Alphabet has been breaking out its Other Bets for years. The losses always outweigh the gains because, well, these are moonshots after all. Other Bets did worse in Q1 2020 than in Q1 2019: Revenue was down 21% to $135 million, while losses were up 29% to $1.1 billion. Since becoming the head of both Alphabet and Google, Pichai was expected to come under a lot of pressure to clean up Other Bets. Given the current economic climate, that pressure will likely only increase this year. Other Bets encompasses the other companies under the Alphabet umbrella, including Calico, CapitalG, DeepMind, GV, Google Fiber, Jigsaw, Loon, Makani, Sidewalk Labs, Verily, Waymo, Wing, and X. 
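The year-over-year percentages in this article all come from simple ratios of the segment figures quoted above; re-deriving them confirms the rounding:

```python
# Re-derive Alphabet's quoted YoY growth figures from the segment revenue
# numbers in the article (all dollar amounts in billions).
def yoy_growth_pct(current: float, prior: float) -> float:
    """Year-over-year growth, as a percentage."""
    return (current / prior - 1) * 100

cloud = yoy_growth_pct(2.78, 1.83)    # Google Cloud: ~52%
youtube = yoy_growth_pct(4.04, 3.03)  # YouTube ads: ~33%
ads_share = 33.8 / 41.2 * 100         # advertising share of total revenue: ~82%

print(f"Cloud +{cloud:.0f}%, YouTube +{youtube:.0f}%, ads {ads_share:.0f}% of revenue")
```

The same helper applied to Other Bets ($135 million versus the prior year) yields the quoted 21% decline.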
As always, the line item leaves us with more questions than answers. Other Bets shows how much Alphabet is investing in its crazy research projects, but we still have no idea how much the individual projects (self-driving cars, internet balloons, anti-aging labs) cost to run, or whether any single one of them is profitable (unlikely). "
16493
2020
"Google announces Web Vitals, user experience and performance metrics for websites | VentureBeat"
"https://venturebeat.com/2020/05/05/google-announces-web-vitals-user-experience-and-performance-metrics-for-websites"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google announces Web Vitals, user experience and performance metrics for websites Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Google today announced Web Vitals , an initiative to provide web developers and website owners with a unified set of metrics for building websites with user experience and performance in mind. Web Vitals is the set of quality signals that Google believes are “essential to delivering a great user experience on the web.” Over the years, Google has offered a slew of tools to help business owners, marketers, and developers improve user experiences. The company is now admitting that the sheer number created “its own set of prioritization, clarity, and consistency challenges for many.” In short, Google has realized the information overload was contradictory and confusing. This is an attempt at a reset. Born online, Google’s revenues are directly tied to the web. 
The company has a vested interest in improving the web’s user experience. Given Google’s reach, including over 1 billion Chrome users and over 2.5 billion monthly active Android devices, not to mention Google Search, anyone with a website needs to track what Google prioritizes. If you’ve been paying attention to Google’s vision for the web and Chrome releases, some of these metrics will be familiar. Google wants site owners to gather their own real-user measurement analytics. The company is also launching an open source web-vitals JavaScript library and a developer preview of a Core Web Vitals extension. Other browsers have shipped the current Core Web Vitals draft specifications.

Annual updates for Core Web Vitals

Measuring the quality of user experience is often site- and context-specific. Core Web Vitals attempt to spell out what Google considers critical for all web experiences. This year’s Core Web Vitals cover the loading experience, interactivity, and visual stability of page content. Google says these metrics capture important user-centric outcomes, are field measurable, and have lab diagnostic metric equivalents. Specifically:

- Largest Contentful Paint measures perceived load speed and marks the point in the page load timeline when the page’s main content has likely loaded.
- First Input Delay measures responsiveness and quantifies the experience users feel when trying to first interact with the page.
- Cumulative Layout Shift measures visual stability and quantifies the amount of unexpected layout shift of visible page content.

Next, Google wants to make Core Web Vitals easy to access and measure across its own tools. Chrome UX Report already lets site owners see how real-world Chrome users experience their site. The BigQuery dataset already surfaces publicly accessible histograms for all of the Core Web Vitals. 
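For the three metrics above, Google publishes "good" and "poor" thresholds on web.dev (roughly 2.5 s and 4 s for LCP, 100 ms and 300 ms for FID, 0.1 and 0.25 for CLS). Those numbers come from that guidance, not from this article, so treat them as an assumption of this minimal bucketing sketch:

```python
# Bucket Core Web Vitals field measurements into Google's three ratings.
# Threshold values are an assumption taken from Google's web.dev guidance,
# not from the article itself.
THRESHOLDS = {
    "lcp": (2.5, 4.0),   # Largest Contentful Paint, seconds
    "fid": (100, 300),   # First Input Delay, milliseconds
    "cls": (0.1, 0.25),  # Cumulative Layout Shift, unitless score
}

def rate(metric: str, value: float) -> str:
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"

print(rate("lcp", 1.9), rate("fid", 180), rate("cls", 0.31))
```

In practice, the open source web-vitals library hands these measured values to a callback in the browser, so a bucketing function like this would typically be applied to real-user data before aggregating it.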
Over the coming months, Google plans to update Lighthouse, Chrome DevTools, PageSpeed Insights, and Search Console’s Speed Report, and also release a new REST API. Google plans to update Core Web Vitals annually, and the company is promising “regular updates on the future candidates, motivation, and implementation status.” You’ll want to watch web.dev for those. For 2021, Google is promising “building better understanding and ability to measure page speed, and other critical user experience characteristics.” The company shared a few examples: extending the ability to measure input latency across all interactions, not just the first; new metrics to measure and quantify smoothness; and primitives that will enable delivery of instant and privacy-preserving experiences. "
16494
2020
"Google launches Android Studio 3.6 with Google Maps in the Android Emulator | VentureBeat"
"https://venturebeat.com/2020/02/24/google-launches-android-studio-3-6-with-google-maps-in-the-android-emulator"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google launches Android Studio 3.6 with Google Maps in the Android Emulator Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Google today launched Android Studio 3.6, the latest version of its integrated development environment (IDE), with a specific focus on “addressing quality in primarily code editing and debugging use cases.” This release is the first release since Project Marble, a fancy name for an initiative Google announced late last year to improve Android Studio’s fundamental features. Android Studio 3.6 introduces a small set of new features, polishes existing features, and addresses the usual bugs and performance improvements. The new release comes less than a week after Google launched Android 11 Developer Preview 1. While developers can use other IDEs to build on Android, the latest features arrive first in Android Studio. 
Version 3.6 includes a new way to quickly design, develop, and preview app layouts using XML. Google Maps is now integrated right into the Android Emulator extended control panel, so developers no longer have to manually type in GPS coordinates to test location features in their app. It’s also now easier to optimize your app and find bugs, with automatic memory leak detection for Fragments and Activities. You can now download Android Studio 3.6 for Windows, Mac, and Linux directly from developer.android.com/studio. If you are already using Android Studio, you can get the latest version in the navigation menu (Help => Check for Update on Windows/Linux and Android Studio => Check for Updates on OS X). Google released Android Studio 3.5 in August. The version number 3.6 suggests this isn’t a significant release, but if you build for Android, there might be features of note in the list below.

Android Studio 3.6 features

Here’s the rundown of what version 3.6 brings to the table:

- Split view in design editors: Design editors, such as the Layout Editor and Navigation Editor, now provide a Split view that enables you to see both the Design and Code views of your UI at the same time. Split view replaces the Preview window and can be configured on a file-by-file basis to preserve context information like zoom factor and design view options, so you can choose the view that works best for each use case. To enable split view, click the Split icon in the top-right corner of the editor window.
- Color picker resource tab: It’s now easier to apply colors you have defined as color resources. The color picker now populates the color resources in your app for you to choose and replace color resource values. The color picker is accessible in the design tools as well as in the XML editor.
- View binding: Allows you to more easily write code that interacts with views by providing compile-time safety when referencing views in your code. 
When enabled, view binding generates a binding class for each XML layout file present in that module. In most cases, view binding replaces findViewById. You can reference all views that have an ID with no risk of null pointer or class cast exceptions. These differences mean that incompatibilities between your layout and your code will result in your build failing at compile time rather than at runtime.
- Android NDK updates: Previously supported in Java, these features are now also supported in Kotlin. You can navigate from a JNI declaration to the corresponding implementation function in C/C++ (view this mapping by hovering over the C or C++ item marker near the line number in the managed source code file). You can also automatically create a stub implementation function for a JNI declaration (define the JNI declaration first, then type “jni” or the method name in the C/C++ file to activate).
- IntelliJ Platform update: The IntelliJ 2019.2 platform release includes many improvements, from a new services tool window to much improved startup times.
- Add classes with Apply Changes: You can now add a class and then deploy that code change to your running app by clicking either Apply Code Changes or Apply Changes and Restart Activity.
- Android Gradle Plugin (AGP) updates: Support for the Maven Publish Gradle plugin, which allows you to publish build artifacts to an Apache Maven repository. The Android Gradle plugin creates a component for each build variant artifact in your app or library module that you can use to customize a publication to a Maven repository. Additionally, the Android Gradle plugin has made significant performance improvements for annotation processing/KAPT in large projects — AGP now generates R class bytecode directly, instead of .java files.
- New packaging tool: The default packaging tool has been changed to zipflinger for debug builds. 
You should see an improvement in build speed, but you can also revert to using the old packaging tool by setting android.useNewApkCreator=false in your gradle.properties file.
- Android Emulator – Google Maps UI: Android Emulator 29.2.12 includes a new way for app developers to interface with the emulated device location. The Google Maps user interface is embedded in the extended controls menu to make it easier to specify locations and to construct routes from pairs of locations. Individual points can be saved and re-sent to the device as the virtual location, while routes can be generated by typing in addresses or clicking two points. These routes can be replayed in real time as locations along the route are sent to the guest OS.
- Multi-display support: Emulator 29.1.10 includes preliminary support for multiple virtual displays. Users can configure multiple displays through the settings menu (Extended Controls > Settings).
- Resumable SDK downloads: When downloading Android SDK components and tools using the Android Studio SDK Manager, Android Studio now allows you to resume downloads that were interrupted instead of restarting from the beginning.
- In-place updates for imported APKs: Android Studio now automatically detects changes made in the imported APK file and gives you an option to re-import it in place. Previously, when changes to those APKs were made, you would have to manually import them again and reattach symbols and sources.
- Attach Kotlin sources to imported APKs: Support for attaching Kotlin source files to imported APKs.
- Leak detection in Memory Profiler: The Memory Profiler can now detect Activity and Fragment instances that may have leaked. To get started, capture or import a heap dump file in the Memory Profiler and check the Activity/Fragment Leaks checkbox to generate the results. 
- Deobfuscate class and method bytecode in APK Analyzer: When using the APK Analyzer to inspect DEX files, you can now deobfuscate class and method bytecode. While in the DEX file viewer, load the ProGuard mappings file for the APK you’re analyzing. Once it’s loaded, you can right-click on the class or method you want to inspect and select Show bytecode.

Android Studio 3.6 also includes the usual performance improvements and bug fixes on top of the new features (full release notes). Google didn’t share its plans for the next version, but we’re likely to hear more at its I/O 2020 developers conference in May. "
16495
2020
"Google launches Android 11 Developer Preview 4, delays beta schedule due to coronavirus | VentureBeat"
"https://venturebeat.com/2020/05/06/google-launches-android-11-developer-preview-4-delays-beta-schedule-due-to-coronavirus"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google launches Android 11 Developer Preview 4, delays beta schedule due to coronavirus Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Google today launched the fourth Android 11 developer preview with app compatibility and performance improvements. This is significant — an Android 11 Developer Preview 4 wasn’t supposed to exist. Android 11 Beta 1 was supposed to arrive in May. Hence the other piece of news today: There’s a new release schedule that pushes Beta 1 to June 3, Beta 2 to July, and Beta 3 to August. Because Google’s I/O developer conference, where the first Android beta typically debuts, is canceled, Google is hosting an online developer event to kick off the betas instead. #Android11: the Beta Launch Show starts at 8 a.m. Pacific (11 a.m. Eastern) on June 3 with a keynote featuring executives Dave Burke and Stephanie Cuthbertson. 
The final Android 11 release is still slated for Q3 2020, but don’t expect it before late August, if not September. We should have seen this coming. Google was forced to delay Chrome 81 , skip Chrome 82 altogether, and move Chrome 83 up a few weeks. It makes sense that if the coronavirus impacted the Google developers building Chrome, it would also impact the Google developers building Android. “When we started planning Android 11, we didn’t expect the kinds of changes that would find their way to all of us, across nearly every region in the world,” Google VP of engineering Burke wrote today. “These have challenged us to stay flexible and find new ways to work together, especially with our developer community.” The poorly named #Android11: the Beta Launch Show will span topics that were originally planned for I/O 2020. You can expect Jetpack Compose , Android Studio , and Google Play to all get proper airtime. Burke also promised a post-show live Q&A. Android 11 DP4: No new features You can download Android 11 DP4 now from developer.android.com — if you have the previous preview, Google will also be pushing an over-the-air (OTA) update. The release includes a preview SDK with system images for the Pixel 2, Pixel 2 XL, Pixel 3, Pixel 3 XL, Pixel 3a, Pixel 3a XL, Pixel 4, and Pixel 4 XL, as well as the official Android Emulator. Those eight Pixel phones are a tiny slice of the over 2.5 billion monthly active Android devices — the main reason developers are eager to see what’s new in the first place. Google was expected to release Android 11 to more phones with the first beta, but it looks like everyone will have to wait another month for that. 
Android 11 DP1 brought 5G experiences, people and conversations improvements, Neural Networks API 1.3, privacy and security features, Google Play System updates, app compatibility, connectivity, image and camera improvements, and low latency tweaks. DP2 built on those with foldable, call screening, and Neural Networks API improvements. DP3 added app exit reasons updates, GWP-ASan heap analysis, Android Debug Bridge (ADB) Incremental, wireless debugging, and data access auditing. DP4 is really just a stopgap measure. Burke says it gives developers “some extra time for you to test your app for compatibility and identify any work you’ll need to do.” Google recommends releasing a compatible app update by June 3 so you can get feedback from the larger group of beta users that will be eager to finally try Android 11. New Android 11 schedule Google launched Android 11 DP1 in February (the earliest it has ever released an Android developer preview), Android 11 DP2 in March, and Android 11 DP3 in April. Unlike last year, Google did not make the previews available via the Android Beta Program , which lets you get early Android builds via over-the-air updates on select devices. You need to manually flash your device — Android 11 is not ready for early adopters to try, just developers. Last year, there were six betas. This year, there were supposed to be three developer previews and three betas. Now we are on track for four developer previews and three betas. Here’s the new Android 11 schedule: February: Developer Preview 1 (Early baseline build focused on developer feedback, with new features, APIs, and behavior changes.) March: Developer Preview 2 (Incremental update with additional features, APIs, and behavior changes.) April: Developer Preview 3 (Incremental update for stability and performance.) May: Developer Preview 4 (App compatibility and performance improvements.) June 3: Beta 1 (Final SDK and NDK APIs; Google Play publishing open for apps targeting Android 11.) 
July: Beta 2 (Platform Stability milestone. Final APIs and behaviors.) August: Beta 3 (Release candidate build.) Q3: Final release (Android 11 release to AOSP and ecosystem.) If you haven’t started testing yet, now is the time. After you’ve flashed Android 11 onto your device or fired up the Android Emulator, update your Android Studio environment with the Android 11 Preview SDK ( setup guide ). Then install your current production app and test the user flows. For a complete rundown on what’s new, check the API overview , API reference , and behavior changes. To help you test, Google made many of the targetSdk changes toggleable , so you can force-enable or disable them individually from Developer options or ADB. The greylists of restricted non-SDK interfaces can also be enabled/disabled. To help Google get to beta, give feedback and report bugs here. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
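The per-change toggles mentioned above are exposed on Android 11 through the `adb shell am compat` interface. Below is a minimal sketch of building those commands; the change name `DEFAULT_SCOPED_STORAGE` and the package `com.example.myapp` are stand-in examples, not tied to this article.

```python
# Sketch of driving Android 11's compatibility-framework toggles over ADB.
# `am compat` lets you force-enable or force-disable individual behavior
# changes for one app while testing, then reset them to the default.

def compat_command(action, change, package):
    """Return the adb argv that toggles one compat change for one app."""
    if action not in ("enable", "disable", "reset"):
        raise ValueError("unknown action: " + action)
    return ["adb", "shell", "am", "compat", action, change, package]

# Force-enable one change for testing, then restore the default behavior.
print(" ".join(compat_command("enable", "DEFAULT_SCOPED_STORAGE", "com.example.myapp")))
print(" ".join(compat_command("reset", "DEFAULT_SCOPED_STORAGE", "com.example.myapp")))
```

The same toggles are reachable from Developer options on the device, so scripting them is only a convenience for repeated compatibility runs.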
16,496
2,017
"SEO is not enough in the age of voice | VentureBeat"
"https://venturebeat.com/2017/12/10/seo-is-not-enough-in-the-age-of-voice"
"SEO is not enough in the age of voice With technology transforming virtually every aspect of our lives, it should be noted that many of the innovations we’re now seeing have existed for decades in science fiction novels and television. The capacity to talk to a computer (and have it talk back) was a staple of Gene Roddenberry’s Star Trek , where the Starfleet computer was voiced by Roddenberry’s wife, Majel. The 1970 movie Colossus: The Forbin Project featured a supercomputer that was intended to prevent war and proclaimed itself “the voice of World Control.” And before Google’s self-driving cars, the 1980s brought us KITT, an advanced artificially intelligent, self-aware, and nearly indestructible car from the TV show Knight Rider. 
Today’s voice applications may not have quite this level of panache and power, but there is no doubt they are infiltrating our daily lives and making their way toward mainstream adoption. According to Location World , 40 percent of adults now use voice search once per day, and 60 percent of these people started using voice search in the last year. Comscore predicts that 50 percent of all searches will be voice searches by 2020. And a survey from Stone Temple Consulting found that over 60 percent of people use voice search at home and 58 percent use voice search to look something up on their smartphone. A major driver of this growth is the fact that Google Home, Alexa, Siri, and Cortana have demonstrated around 92 percent accuracy in understanding the human voice. Voice search has become a convenient, user-friendly experience, and consumers are embracing it. In 2016, Amazon’s Echo became the company’s most popular product during the holidays. Voice search is one of the most important technology/computing trends of this year and next. And we are finally getting into semantic search capabilities. This will have profound implications for brands, who need to adapt beyond SEO to participate in (and reap the benefits of) this emerging voice-powered landscape. Seamless, fast, and convenient The rules around SEO were built on a user typing a query into a browser-based search engine. Voice search is a different animal. Let’s say someone’s been locked out of their apartment and uses voice search to find a locksmith. If the voice query is, “I need a locksmith to get into my house,” a brand like HomeAdvisor could present two highly rated locksmiths within two miles of their house. 
The most helpful response is one that knows the location of the request, recommends a service provider rated highly by a neighbor, then goes even further by automatically offering to call the business and even setting up an appointment. The Stone Temple survey found that the top three rationales behind voice query usage were “It’s fast,” “The answer is read out loud back to me,” and “I don’t have to type.” Furthermore, 60 percent of voice search users want more answers and fewer search result options. When typing something into Google, it may not be a problem if dozens of options show up in the results. However, with voice search, people don’t want multiple results — they want one or two quick answers. Understanding these use cases and behavior preferences is key to succeeding at voice search optimization. Conversational long tail keywords Another factor brands need to consider is the nature of the search queries themselves. Voice search usually involves long tail search terms of five words or more, instead of one, two, or three typed words. Voice search users conduct semantic searches in full sentences, which makes it possible to create more specialized content. Marketers need to consider that apps, landing pages, and strategic content should be more conversational and should anticipate what type of sentences people may ask within the context of a topic. It’s important to understand the natural, colloquial language people use around a keyword or search term and optimize for those words. For example, someone may say, “I want to order kung pao chicken delivery.” There is a full story in that request, including where the person is located, the specific menu item, the ratings of nearby restaurants, whether the user has ordered from a specific restaurant before, and whether the restaurant in question offers delivery. As another example, if someone says: “Tell me how much a Tesla S costs,” the search response is simply a price. 
But if the user’s next question is, “What colors are available?” or “Where can I buy one near me?” the voice search browser should understand that the subject is still a Tesla. Adapting beyond SEO means accounting for this back-and-forth question and answer process. Marketers need to understand the entire conversation around a keyword or search term so they can determine the best search results messaging. Brands can get a sense for conversational long tail keywords through focus groups in which they ask participants how they would inquire about their brand or product. The more contextual information gained from these focus groups, the better the brand can optimize the search results. Personalized messaging Because voice search provides so much rich context, brands have the opportunity to create various tailored advertising tactics. One method is to create specific landing pages for a search result, which is much easier to do today through data formats like JSON, which also allow for data-driven personalization of the landing page. Another tactic, requiring the brand to have a voice query customer’s IP address, is to use past purchasing or browsing history to create personalized search result messaging. Accuracy is important. AI can seem like a sentient being, and it can be unsettling if it doesn’t come back with the right response. But voice search is no longer just a far-off concept dreamt up by science fiction writers. In today’s highly competitive market, brands need to set themselves up for success by understanding changing consumer behaviors around search and the unique aspects of voice-enabled technology. SEO alone is not enough. Good marketers understand the climate, environmental analysis, demographics, and primary, secondary, and tertiary targets and evolve their advertising methods to make maximum impact with their campaigns. 
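The Tesla follow-up example above boils down to carrying the last-mentioned subject across conversational turns. A toy sketch of that idea, purely illustrative (the subject catalog and function names are made up; real assistants use far richer dialogue-state tracking):

```python
# Toy follow-up-query handling: if a query names no subject of its own,
# reuse the subject of the previous query in the session.

KNOWN_SUBJECTS = {"tesla s", "kung pao chicken"}  # hypothetical catalog

def resolve(query, context):
    """Return the subject of a query, falling back to session context."""
    q = query.lower()
    for subject in KNOWN_SUBJECTS:
        if subject in q:
            context["subject"] = subject  # remember for follow-ups
            return subject
    return context.get("subject")  # no subject named: reuse the last one

ctx = {}
resolve("Tell me how much a Tesla S costs", ctx)
followup = resolve("What colors are available?", ctx)  # still the Tesla
```

The point of the sketch is only that context must persist between turns; without the session dictionary, the second query is unanswerable.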
John Francis is the director of digital strategy of Hawthorne, an award-winning, technology-based advertising agency specializing in analytics and accountable brand campaigns for over 30 years. "
16,497
2,019
"TikTok owner Bytedance unveils Snapchat clone Duoshan | VentureBeat"
"https://venturebeat.com/2019/01/15/tiktok-owner-bytedance-unveils-snapchat-clone-duoshan"
"TikTok owner Bytedance unveils Snapchat clone Duoshan On the heels of teen-focused social app TikTok’s skyrocketing popularity, parent company Bytedance today announced Duoshan , a standalone video-messaging app that borrows a page from Snapchat’s playbook. Duoshan, which is Chinese for “flashes,” allows users to record and send each other short videos that disappear after 72 hours. The company says it will initially focus on finding loyal users for the app in its home market of China. Duoshan is available on Android and iOS platforms in China, though at the time of writing, the iOS app was still in beta and was no longer accepting new test users. A spokesperson for Bytedance, which is the world’s most valuable startup, declined to comment on Duoshan’s roadmap beyond the Chinese market. 
In contrast to most other social messaging apps, Duoshan does not publicly display likes and comments. Instead, the company says it will share likes and comments to users privately. In the app, users are offered just two sections: one for posting new videos and the other to discover popular videos. “We are seeing more and more Douyin (Chinese for TikTok) users share their videos through other social media platforms and channels. With the launch of Duoshan, we are creating our first video-based social messaging app to allow users to share their creativity and interact directly with their family and friends,” Douyin president Zhang Nan said in a press statement. First impressions of DuoShan: Yet another Snapchat clone. Built on the pretty much nonexistent TikTok social graph, with no iOS version. You can imagine the sighs of relief at Tencent head office. pic.twitter.com/ocpIy4qdha — Matthew Brennan (@mbrennanchina) January 15, 2019 Bytedance also announced that TikTok has reached 250 million daily active users, up from the 500 million monthly active users across 150 countries it had reported in June last year. The app, which was merged with Musical.ly in 2017, surpassed Facebook, YouTube, and Snapchat in monthly installs in the U.S. last September, according to marketing research firm Sensor Tower. The arrival of Duoshan will further fuel Bytedance’s quest to become the most popular social messaging platform in China, where it is challenging the dominance of Tencent’s WeChat app. WeChat, which has more than 1.1 billion monthly active users, is already blocking download links to the Duoshan app on its platform. While WeChat remains ubiquitous in China, in recent years people in several markets have shown an interest in video apps. To accelerate its growth in the U.S. market, Bytedance has inked marketing deals with several celebrities, including Khloe Kardashian, Nick Jonas, and Nina Dobrev. 
“Communicating through videos will provide our users a completely new experience that cannot be delivered through text, voice, and images. We believe Duoshan will play a key role in setting the new trend for visual communication,” said Xu Luran, head of product for Duoshan. "
16,498
2,019
"Disney+ launches with massive video library, laggy apps, and surprises | VentureBeat"
"https://venturebeat.com/2019/11/12/disney-launches-with-massive-video-library-laggy-apps-and-surprises"
"Disney+ launches with massive video library, laggy apps, and surprises Disney’s streaming video service Disney+ was destined to be one of the year’s biggest releases the moment it was announced for a November 12, 2019 release. Now the service is live in the United States, and though it’s facing the sort of early speedbumps encountered by a restaurant packed with people on opening night, it’s already putting to shame Apple’s launch of Apple TV+ at the beginning of this month. For $7 per month, Disney+ offers users access to hundreds of movies and TV shows spread across five categories: Disney, Pixar, Marvel, Star Wars, and National Geographic. While the Disney collection looks somewhat like the cable Disney Channel, all of the content is available on demand, including multiple seasons of past and current TV shows, plus decades of popular Disney films. 
Pixar’s collection of computer-generated films and shorts, virtually every Star Wars film and TV show, and most of the Marvel cinematic universe can be streamed, as well. Early adopters are being treated to some surprises. On a positive note, if you’re using a 4K display on a high-bandwidth connection, you can experience all of the Star Wars films remastered in 4K resolution with cinema-quality audio. Moreover, the day one scope of the service is staggering: Disney+ has launched with the aforementioned catalog content, plus Fox movies ( Avatar ) and TV shows ( The Simpsons ), and its own original programs. Episode one of the Star Wars spinoff The Mandalorian is available now, as are The World According to Jeff Goldblum , Pixar in Real Life , and Marvel Hero Project , plus movies such as The Imagineering Story and Noelle. Although Disney recently released a video spotlighting the service’s day one content, it used a deliberately overwhelming format, such that viewers are just now discovering some of the library’s gems. Obscure cartoons such as 2011’s Tron Uprising and 1979’s Spider-Woman are joined by Pixar’s library of brief “shorts,” plus National Geographic documentaries and animal shows, including Dr. Oakley: Yukon Vet and The Incredible Dr. Pol. For the time being, new Marvel shows such as Loki , WandaVision , and Falcon and the Winter Soldier are nowhere to be found on the app. Each of these programs is slated to launch in the months to come, with new episodes appearing once per week rather than in binge-ready dumps of content. Disney has already announced a huge collection of new series to keep the streaming service relevant, relying heavily on Marvel and Star Wars characters to win subscribers. 
Less positively, the service’s client apps — which to Disney’s credit launched simultaneously across numerous popular platforms, including Android, iOS, game consoles, and smart TVs — are intermittently throwing up “Unable to connect to Disney+” error messages and denying users access to sections of the content. As of press time, error messages initially came up when attempting to access certain parts of the iOS and Roku apps, though they appeared to be clearing up, a situation that may remain in flux as first-time users log into their accounts. Coming a year and a half after Disney began working on the service , today’s launch of Disney+ offers a marked contrast with the November 1 Apple TV+ launch. Apple released its subscription service for $5 per month with only a small amount of content — a handful of original shows, plus one movie — and no catalog of past videos , relying on stars such as Oprah, Jennifer Aniston, and Jason Momoa to attract audiences. While Apple TV+ arrived in 100 countries at once, reviews of the new shows were largely mediocre, and the entire library could be consumed in its opening weekend. Apple has given away one-year subscriptions to buyers of its Apple TV, iPad, iPhone, iPod touch, and Mac devices. Disney is offering a seven-day free trial of Disney+ to new users before the $7 monthly charge starts, with the option of an annual $70 bundle to save a little money. To encourage further subscriptions, the company temporarily offered a three-year commitment discount to its D23 members and has partnered with Verizon to offer one year of free service to some wireless customers. So far, Disney’s content — new and old — has received rave reviews, suggesting that the service will have long legs as a streaming video option alongside established players such as Netflix. As of today, the service is available in the United States, Canada, and the Netherlands, and it will expand to further countries in the coming weeks. 
"
16,499
2,020
"Aclima will map the air quality on every block in the Bay Area | VentureBeat"
"https://venturebeat.com/2020/01/14/aclima-will-map-the-air-quality-on-every-block-in-the-bay-area"
"Aclima will map the air quality on every block in the Bay Area Aclima is announcing that it plans to map air pollutants and greenhouse gases block-by-block throughout the San Francisco Bay Area region, which is home to 7.6 million people. It is part of an effort to bring “hyper-local” air quality to the region, first by measuring local air quality data and then sharing that information publicly so action can be taken to reduce pollution. The company is announcing today it is sending its fleet into Santa Clara County, at an event in San Francisco with various Bay Area officials who represent the nine-county region. In mapping every block, Aclima is following in the footsteps of Google StreetView, where cars took pictures of every street by driving through every neighborhood. This kind of undertaking is just as big as StreetView if it were to cover the world, which is one of Aclima’s ambitions. 
Air pollution and climate-changing emissions are causing widespread damage to human health, to our shared environment, and to our economies, Aclima CEO Davida Herzl wrote in a blog post. “In our neighborhoods, levels of pollutants can be five to eight times higher from one end of a block to another. In order to diagnose and act on these hyperlocal air quality issues, we need to better understand what’s happening in the air around us,” she wrote. Above: Aclima is hiring full-time drivers to map pollution. The Bay Area Air Quality Management District exists to assure that Bay Area residents breathe clean air, and it has partnered with Aclima to bring unprecedented visibility into air pollution and climate change emissions. Aclima said it will deliver an in-depth, comprehensive picture of hyperlocal air quality in all nine counties of the Bay Area — covering more than 5,000 square miles. This is the first time a mobile environmental sensor network will be deployed across an entire metropolitan region, and Aclima will use this as a precedent for launching regional scale mapping around the world in the years to come. Google has chosen Aclima as their global scaling partner, and Aclima will use Google StreetView cars for that project. But this program with the Bay Area Air Quality Management District is separate, and Aclima will use its own fleet of sensor-enabled cars. The Air District has one of the most extensive air quality monitoring networks in the United States, with more than 30 monitoring stations throughout the region. This new layer of Aclima hyperlocal air quality data will complement regulatory stations by accurately measuring and analyzing air quality at block-by-block resolution, in the neighborhoods around and between these stations throughout the Bay Area. 
Over the next several years, the mobile sensing network will continuously map air pollutants and climate-changing emissions across the region, measuring and regularly refreshing baseline averages of block-by-block air quality. Aclima is systematically gathering multiple samples day and night, weekdays and weekends, to create a rich, dense dataset from which to generate a clear picture of persistent levels of pollution and emissions sources. On every public street in the Bay Area over the course of a year, Aclima’s cars will be measuring air pollutants including PM 2.5, ozone, and nitrogen dioxide, as well as greenhouse gases including carbon dioxide and methane. This program will bring an unprecedented level of access and visibility to air quality data at the neighborhood level across the entire Bay Area region, Jack Broadbent, executive director for the Air District, said in a statement. This will inform lawmakers in making decisions that protect the health of Bay Area residents, he said. To support a wide range of efforts to reduce emissions and protect public health, the Air District has subscribed to Aclima Pro, a web-based application that makes the measurement data from Aclima’s environmental sensor network accessible for scientific analysis and decision support. This powerful new tool identifies and diagnoses pollution hotspots, informs action, and measures the effectiveness of policies and interventions over time. To support public awareness, Aclima also makes address-based insights available to the public online. Above: Davida Herzl is CEO of Aclima. Aclima has been around for 10 years mapping air quality and analyzing it. During that time, Aclima has designed, built, and validated the hardware, software, and methodologies to deliver accurate, precise, and scientifically rigorous data. 
The resulting environmental intelligence informs action by governments, companies, researchers, and the public to reduce emissions and protect public health, at both the local and global level. The company is expanding its fleet of drivers, hiring full-time drivers and driver coordinators throughout the Bay Area this year. The company is also hiring across the board, including for engineering, design, data science, business, and community engagement roles. Aclima has 50 employees. Aclima has already begun mapping in Alameda, Contra Costa, San Francisco, and San Mateo counties, starting with the Richmond-San Pablo area, which will have access to maps of block-by-block air quality early this year. After today’s launch in Santa Clara County, Aclima will expand into Marin, Napa, Solano, and Sonoma counties. This initiative broadens the reach and impact of previous efforts across California in West Oakland, San Francisco, and San Diego. Most recently, through Aclima’s work with West Oakland Environmental Indicators Project and the Rising Sun Center for Opportunity, the company generated and analyzed nearly 10 million data points on PM 2.5, ozone, carbon monoxide, carbon dioxide, nitric oxide, and nitrogen dioxide levels. Insights from Aclima are already helping support and assess the impact of the recently adopted West Oakland Community Action Plan, which defines a new model for community-centered emission reduction plans. “To protect both human and planetary health, the time has come to take deep and systemic climate action, to design innovative policies, and to invent entirely new ways to understand and improve our environment,” Herzl wrote. “Through collaboration between local government, industry and community leaders, we can protect one of the most precious resources for life on Earth: the air we breathe.” 
Discover our Briefings. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved.
2019
"Uber creates AI to generate data for training other AI models | VentureBeat"
"https://venturebeat.com/2019/12/18/uber-creates-ai-to-generate-data-for-training-other-ai-models"
Uber creates AI to generate data for training other AI models

Generative adversarial networks (GANs) — two-part AI systems consisting of generators that create samples and discriminators that attempt to distinguish between the generated samples and real-world samples — have countless uses, and one of them is producing synthetic data. Researchers at Uber recently leveraged this in a paper titled “Accelerating Neural Architecture Search by Learning,” which proposes a tailored GAN — dubbed Generative Teaching Network (GTN) — that generates data or training environments from which a model learns before it gets tested on a target task.
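In outline, a GTN wraps a standard training loop: a generator proposes synthetic examples, a fresh learner trains on them, and the learner’s accuracy on real data becomes the signal for improving the generator. The toy below captures that outer/inner structure with a nearest-centroid learner and a hill-climbing update. All of it is illustrative — Uber’s GTNs are neural generators trained by backpropagating through the inner loop; here that is replaced by random-perturbation search to keep the sketch dependency-light.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny "real" task: two Gaussian blobs in 2-D.
X_real = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y_real = np.array([0] * 50 + [1] * 50)

def train_and_eval(synth_X, synth_y):
    """Inner loop: fit a nearest-centroid learner on the synthetic data,
    then score it on real data (the GTN meta-objective)."""
    centroids = np.array([synth_X[synth_y == c].mean(axis=0) for c in (0, 1)])
    preds = np.argmin(((X_real[:, None] - centroids) ** 2).sum(-1), axis=1)
    return (preds == y_real).mean()

# "Generator": directly learned synthetic samples, four per class.
synth_X = rng.normal(0, 0.1, (8, 2))
synth_y = np.array([0] * 4 + [1] * 4)

best = train_and_eval(synth_X, synth_y)
for _ in range(200):  # outer loop: hill-climb the synthetic data itself
    candidate = synth_X + rng.normal(0, 0.3, synth_X.shape)
    score = train_and_eval(candidate, synth_y)
    if score >= best:
        synth_X, best = candidate, score

print(f"real-data accuracy after meta-training: {best:.2f}")
```

The point of the sketch is the structure, not the numbers: the synthetic set is tiny compared with the real data, yet it is optimized specifically to teach, which is the property the paper exploits to evaluate candidate architectures quickly.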
The paper states that GTNs helped speed up searches by up to nine times compared with approaches that use real data alone, and that GTNs are competitive with state-of-the-art architectures that achieve top performance while using “orders of magnitude” less computation. As the contributing authors explain in a blog post , most model searches require “substantial” resources because they evaluate models by training them on a data set until their performance no longer improves. This process might be repeated for thousands or more model architectures in a single cycle, which is both expensive in terms of computation and incredibly time-consuming. Some algorithms avoid the cost by only training for a small amount of time and taking the resulting performance as an estimate of true performance, but this training can be further sped up by tapping machine learning — i.e., GTNs — to create training data. GTNs achieve success by creating unrealistic data that’s helpful in the course of learning. They’re able to combine information about many different types of an object together, or focus training mostly on the hardest examples, and to evaluate the model-in-training on real-world data. Furthermore, they use a learning curriculum — a set of training examples in a specific order — to improve performance over generators that produce unordered random distributions of examples. In experiments, the team says that models trained by GTNs achieved 98.9% accuracy on the popular open source MNIST data set in 32 steps (about 0.5 seconds) of training, over which they ingested 4,096 synthetic images once (less than 10% of the images in the MNIST training data set). Evaluated on another data set — CIFAR-10, which is designed to measure model search performance — the models learned up to four times faster than on real data for the same performance level, even when compared with an optimized real-data learning algorithm. 
Moreover, performance on GTN data turned out to be generally predictive of true performance — that is, achieving the same predictive power as achieved with only 128 steps on GTN-generated data would have required 1,200 steps on real data. “Because GTNs evaluate each architecture faster, they are able to evaluate more total architectures within a fixed compute budget. In every case we tried, using GTN-generated data proved to be faster and led to higher performance than using real data. That result held even when we gave the real-data control ten days of compute compared to two-thirds of a day for GTN,” wrote the coauthors. “Through our research, we showed that GTN-generated training data creates a fast … method that is competitive with state-of-the-art … algorithms, but via an entirely different approach. Having this extra tool of GTNs in our … toolbox can help Uber, all companies, and all scientists around the world improve the performance of deep learning in every domain in which it is being applied.”
2018
"Here's what people are really doing with their Alexa and Google Home assistants | VentureBeat"
"https://venturebeat.com/2018/11/17/heres-what-people-are-really-doing-with-their-alexa-and-google-home-assistants"
Guest

Here’s what people are really doing with their Alexa and Google Home assistants

I’m a strong believer in conversational interfaces — especially voice. Conversation is the natural way humans communicate, and it’s the future of human-computer interaction. If you remember the videos of two-year-olds swiping on iPhones and iPads, something similar is happening with devices like Alexa and Google Home: Kids already know how to interact with them. Last year, my team conducted a survey of Alexa and Google Home users to better understand their behavior and satisfaction with the devices. It showed that interest in voice apps was beginning to really take off, with all types of enterprises and brands entering the space — media, CPG, retail, food delivery, banking, and many more.
This year, we re-ran the survey to see if, or how, user behaviors and feelings towards the devices may have changed. We also dove deeper into some of the interests based on demographics. The survey, conducted by Dashbot using Survata, covered 1,019 Alexa and Google Home owners across the U.S. The key takeaways this year:

- Voice assistant devices are behavior-changing
- The core features tend to be the most frequently used
- Discovery of third-party voice apps is still an issue
- Users are quite likely to use the devices to make purchases
- Owners are satisfied with their devices and highly recommend them

Voice assistants continue to be behavior-changing

As we saw last year, voice assistant devices are changing behavior. People use them throughout the day for a variety of use cases. Nearly 75 percent of respondents use their voice devices at least once a day, with 57 percent using their device multiple times a day. These numbers are very similar to the results last year. If we look closer at male versus female usage, approximately 64 percent of men and 53 percent of women use their devices multiple times a day. Among people who use their devices the least (less than once a month), women tend to predominate at 7 percent compared to just 1.4 percent of men. More than 65 percent of respondents indicated the devices have changed their behaviors or daily routines. About a quarter felt the device has changed their behavior a lot, whereas 40.5 percent thought it has at least a little bit. Only 19 percent said the device has not changed their behavior. A number of respondents described in their own words how much they rely on the device, how integrated it is in their life, and how surprised they are by how much they use it.
As voice assistants become more ubiquitous and the technology is embedded into even more types of devices, I expect to see more significant changes in behavior. If you are a heavy Alexa or Google Home user, how often have you caught yourself about to talk to the device when away from home — at work or in a hotel room while traveling? Amazon and Google are working on this, though, through their business initiatives to provide devices in hotels and other locations. Men tend to report more behavior changes than women. Nearly 33 percent of men answered “yes, it has a lot” compared to 20 percent of women. As we saw with the frequency of usage, with women skewing more to infrequent usage, we also see a higher percent of women finding the device has not been behavior changing: 23.3 percent of women answered “no” compared to 13.7 percent of men. Interestingly, even the 19 percent of respondents who indicated the device has not changed their behavior still use the device fairly regularly. Of those indicating “no,” roughly 33 percent still use the device multiple times a day, and another 17 percent use the device at least once a day.

Core features are the most frequently used

We asked respondents what features they use most frequently. It turns out listening to music, checking weather, and asking for information are the most common use cases. They’re also core functionality of the devices. Using specific third-party skills is less common (more on that in a moment). Approximately 75 percent of respondents use the device to listen to music, 66 percent check the weather, and 63 percent ask for information. Approximately 58 percent of those who listen to music do so multiple times a day, whereas only 34 percent of those checking the weather do so multiple times a day. On the lower end of usage, only 23 percent of respondents use their devices for controlling home automation. However, those who do, do so quite frequently.
Nearly 63 percent of respondents who use the device for home automation do so multiple times a day, and another 22 percent do so at least once a day. If we look at the usage based on gender, interesting differences emerge. While the top three use cases are the same for both male and female respondents, women tend to have slightly higher usage for each — roughly 5-6 percent higher. For example, nearly 77 percent of women listen to music while 71 percent of men do. There are some features that men are significantly more likely to use than women. For example, nearly 42 percent of male respondents use the devices for sports scores compared to 18 percent of women. Other features include getting news (49 percent of men vs. 40 percent of women), shopping (36 percent of men to 26 percent of women), playing games (33 percent of men to 22 percent of women), and home automation (29 percent of men to 18 percent of women). Speaking of shopping, let’s take a closer look at this use case.

Users are willing to make purchases through their devices

Both Alexa and Google let users make purchases through their own e-commerce services and — with the addition of account linking — other retailers and services. Developers and brands can also monetize their voice apps through subscriptions and “in-app” purchases. We asked respondents whether they have ever made a purchase through their voice assistant. It turns out 43 percent of respondents have, including 58 percent of men and 32 percent of women. In regards to what respondents are purchasing, products from the providers’ own e-commerce services (Amazon or Google Shopping) are the most common at nearly 83 percent. Interestingly, food delivery is also fairly common at 53 percent. The “reorder” use case — i.e., the ability to reorder the same items as the previous order — works quite well through these interfaces, as it can be done in shorter, more concise statements than complex menu ordering.
We’ve also heard from many food delivery services that reordering is quite common — consumers tend to order the same thing each time. We also asked respondents how likely they are to make a purchase in the future. Approximately 41 percent said they are “very likely” to make a purchase in the future, with an additional 20 percent saying they are “likely” to do so. Interestingly, one of the biggest indicators of whether someone has made a purchase in the past, or is more likely to make a purchase in the future, is whether they have both an Alexa and a Google Home. Over 56 percent of respondents who own both devices have made a purchase in the past, compared to 43 percent who only have an Alexa and 39 percent who only have a Google Home. In terms of future purchases, similarly, 57 percent of respondents who own both are “very likely” to make a purchase in the future, compared to 41 percent of those who only have an Alexa and 35 percent who only have a Google Home. It may be that consumers who have both devices tend to be early adopters and more likely to try making a purchase through the device.

Discovery of third-party voice apps is still an issue

Voice interfaces are still a relatively new space. Between Alexa and Google Home, there are approximately 50 million devices in the U.S. Approximately 40,000 third-party skills exist for Alexa. We found in our last survey that many respondents did not even know the term for a third-party voice app is a “Skill” on Alexa and an “Action” on Google Home. The good news is, consumers are using third-party skills — they’re just not using very many of them. Based on the survey, 48 percent of respondents use between one and three voice apps, and an additional 26 percent use between four and six. Only about 15 percent of respondents said they do not use any. We asked respondents what their favorite voice apps are. The more common responses were the native features — listening to music, checking weather, and getting info.
The more common third-party apps named include Pandora, Spotify, Uber, and Jeopardy. For third-party app makers, both discovery and user acquisition are challenges. The most common ways users find out about Skills and Actions are through social media, friends, and the device app stores. We often hear from brands and developers that social media, either paid or organic, is one of the best channels for user acquisition for voice apps. According to the survey, over 43 percent of respondents found skills through social media. Viral video influencer campaigns are also recommended, as they serve two purposes — reach through the influencer, and instruction on how to interact with the voice app. Since it’s a new space and a new user interface, users may not know what they can say to, or do with, the particular voice app. With Alexa, users can ask the device for the latest Skills or recommendations, even within categories. The device will walk through a set of Skills, listing each by name and asking if the user wants to install or continue. In addition, Alexa supports a “can fulfill intent” that developers and brands can implement to help users discover their voice apps. For example, if an Alexa Skill can support ordering a pizza, the developer can list that as a “can fulfill” intent and potentially be recommended by the device when a user asks to order a pizza. Google Home does not yet appear to have a searchable directory via voice. Asking the device for the latest Actions, or recommended Actions, results in either the fallback “I don’t understand” type of response or attempts to provide some form of definition depending on the request — e.g. describing a “sports action” when asking for the latest “sports Actions.”

User satisfaction is high

Users tend to be quite satisfied with their voice devices and recommend them highly. We asked respondents how satisfied they are with the device’s ability to understand, the device response, and the overall experience.
The results are quite positive. In regards to the device’s ability to understand, nearly 44 percent of respondents were very satisfied, and an additional 34 percent were somewhat satisfied. Only about 13 percent were either somewhat, or very, unsatisfied. Similarly, in regards to the device responses, 44 percent of respondents were very satisfied, and an additional 35 percent were somewhat satisfied. Only about 12 percent were either somewhat, or very, unsatisfied. Based on the overall experience, 53 percent of respondents were very satisfied, and an additional 29 percent were at least somewhat satisfied. Only 10 percent were either somewhat, or very, unsatisfied. In addition, we asked respondents if there was anything about the device that surprised them, and the results also indicate a high level of satisfaction. Owners were surprised by how much the devices can do and how knowledgeable the devices are. A fairly common comment was how quickly the device updates itself — “every day something new” and “like Christmas everyday.” Owners are quite happy with their devices and would happily recommend them. When asked how they would rate the device overall on a one to five star scale, the respondents’ average rating was 4.4 stars. When asked to rate how likely they are to recommend the device to others on a one to five scale, the respondents rated 4.4 as well. If we look closer at the ratings based on the impact the device has had on behavior, we see overall positive results. Respondents who said the device has changed their behavior a lot rated the devices 4.9 stars and are very likely to recommend the device, with a 4.9 rating as well. Even users who said the device has not changed their behaviors rated their device nearly 4 stars and are still likely to recommend the devices with a 3.8. 
We asked respondents if anything surprised them about the devices, and the more common responses were:

- How much the device can do
- How smart the device is and the ability to answer a variety of questions
- The ease of use
- The ability to understand the user’s request
- The user’s dependence on the device and how life-changing the device is
- The speed of responses
- The quality of responses

While most of the comments were generally positive, there was a small number of complaints. The biggest complaint (still occurring rarely compared to all the positive responses) was frustration with the device’s ability to understand a user’s request.

Conclusions

Overall, owners of Alexa and Google Home devices are very happy with their devices. They are pleasantly surprised by all the things the devices can do, how smart the devices are, and how reliant they have become on the devices. While the voice assistant space is still relatively new, there is an opportunity for brands to monetize, as there is a strong indication of willingness to make purchases through the devices. As more brands develop voice apps, it will be interesting to see what use cases they support — how they take advantage of the voice interface and whether they implement monetization opportunities. As many of the respondents mentioned, the devices are continuously getting better — not just in terms of improved comprehension, but in all the functionality provided. I continue to be bullish on this space and look forward to seeing what the future holds.

Arte Merritt is the CEO and co-founder of Dashbot, a chatbot analytics platform for Alexa, Google Home, Facebook, Slack, Twitter, SMS, and other conversational interfaces.
2019
"Amazon wants smart home device setup to be a 'zero-touch' experience | VentureBeat"
"https://venturebeat.com/2019/07/05/amazon-wants-smart-home-device-setup-to-be-a-zero-touch-experience"
Amazon wants smart home device setup to be a ‘zero-touch’ experience

Above: The third-generation Echo Dot.

The smart home device market is poised for serious growth. Just ask analysts at IDC, which forecasts that shipments will experience a 26.9% year-over-year uptick in 2019 to 832.7 million units. By 2023, IDC expects nearly 1.6 billion devices will ship to customers’ homes worldwide. Amazon is counting on it. Of the 75% of respondents to a recent Dashbot survey who use voice assistants like Alexa at least once a day, 23% say they control smart home devices with their assistant. Of that group, 63% tap assistants for home automation multiple times a day. Perhaps it’s no wonder, then, that Alexa is becoming more proficient at controlling lightbulbs, garage door openers, smart locks, and other smart devices.
In October, a few months after Amazon launched an API that gives Alexa the ability to communicate with motion and door sensors, the Seattle company introduced developer tools to connect smart cameras and doorbells to Echo devices. On the newly launched Echo Show 5, a discovery panel highlights popular smart home device tasks. More recently, Amazon demonstrated Alexa Conversations for seamless multi-turn interactions, bringing easy-to-remember commands like “Alexa, start cleaning” to appliances like Roomba robots. To better understand Amazon’s work in the smart home ecosystem as it relates to Alexa, we spoke with Nathan Smith, who heads up the customer experience team creating new features for Alexa smart home customers. Here’s a lightly edited transcript of our discussion.

VentureBeat: I thought we could start with a high-level overview of Amazon’s approach to the “smart home” and voice interactions and then dive into some of the ideas you and your team are pursuing to make managing connected devices easier with Alexa. That sound good?

Nathan Smith: Sure. We think the smart home is in a period of mass adoption and expansion right now. Classically, it has comprised much more tech-forward earlier adopters, but we’re past that. There are now more than 60,000 products that work with Alexa from 7,400 different manufacturers, and a trend we’re seeing is that Alexa is democratizing control of these devices. One of the things I’m most excited about this year is a new feature that uses machine learning and artificial intelligence to help Alexa understand not just what you say, but what you actually mean, and then provide a simple user experience around that. The problem we’re solving came from customer feedback as we were onboarding people who didn’t necessarily have context concerning which smart devices were named what around their house.
We ran into this over and over again — people were having trouble remembering the names of devices, which was only exacerbated as they added more devices to their homes. What we’ve done is make Alexa a little bit more human-like. If you ask Alexa something like “Hey, Alexa, turn on the Sofa Lights” but the lights you’re trying to turn on are called Living Room Lights and Alexa is uncertain about which you mean, she’ll helpfully suggest “Oh, you know, did you mean Living Room Lights?” This technology, which allows people to speak more casually in their homes and go beyond the strict syntax that Alexa previously understood, helps in a lot of different real-world use cases. One is words that have similar transcriptions and another is mixed characters, like when people add emojis to their [own] or their devices’ names [in the Alexa smartphone app]. It can resolve words without being strict about the exact pronunciation, and it can even help in multilingual cases. If you’re using a mix of names across different languages, Alexa can learn from that. The context is that we’re trying to build toward a world where Alexa understands you in a much more natural way, rather than training people to talk in Alexa’s terms. If we have a pretty good idea of what you’re saying, we’ll simply perform the intended task, but what we’re evolving toward is a model where Alexa gets ground truths from customers. We don’t want to take the power of customers away without asking a clarifying question if we’re not 100% certain about something, but we also want Alexa to be helpful in ambiguous cases. We started rolling out this feature in the U.S. at the end of December and recently expanded it to Canada, Australia, the U.K., and India. In terms of early results, when Alexa prompts a customer with a suggestion, they’re accepting it 80-90% of the time, on average.

VentureBeat: Which other factors does Alexa take into account when determining how to respond to a command, misspoken or not?
Smith: Gathering ground truths and assimilating them into semantic and behavioral models that learn from you in a very human way — the way a child would ask questions about the world — underpins the machine learning side [of Alexa]. What our models really do is layer on signals in terms of device state and behavioral signals — like which devices are usually switched on at which times — in addition to environmental signals, like date and time. The models use all of these to generate suggestions. There’s a lot more work to do, and we think that we can expand the reach of this sort of helpfulness to other scenarios. We’re seeing more and more customers from different walks of life and different technology backgrounds using smart home devices with Alexa, and this is a first step to taking bleeding-edge technology and using it to help simplify the customer experience.

VentureBeat: AI and machine learning are obviously at the core of Alexa, from its language processing and understanding to the way it intelligently routes commands to the right Alexa skill. What are some of the other challenges you and your team are solving with AI? What has it enabled you to achieve?

Smith: At the feature level, there’s Hunches, where Alexa provides information based on what it knows from connected sensors or devices. It checks, when you say a command such as “Alexa, good night,” whether your garage lights are still on and whether they’re usually off at that time of day, which informs the response. Alexa will say something like “Good night. You know, by the way, I noticed that your garage lights are on. Would you like me to turn them off for you?” and give customers helpful feedback at certain stages of smart home routines without requiring them to dig into a bunch of app screens. These features use machine learning techniques enabled by Amazon Web Services.
We run these real-time capabilities at scale on the SageMaker platform, which has given us the ability to iterate a lot more quickly.

VentureBeat: It seems, as you said a moment ago, that smart home adoption is on the rise, perhaps driven in part by cheaper connected devices, like Philips’ recently announced Bluetooth-compatible Hue series. What are some of the other ways you’re making onboarding simpler for first-time buyers?

Smith: We’ve been working really hard on that for a while now, and one of the things we’re most excited about is this ability to have a zero-touch setup. Last year, we announced Wi-Fi Simple Setup, which lets you quickly configure Amazon Wi-Fi devices like the Amazon Smart Plug. Basically, you plug it in and then Alexa will say “Hey, I found your new device.” There’s no other setup necessary. We’re bringing that same experience to Bluetooth Low Energy light bulbs like the new Philips Hue products, and we’re really working to expand the usage of this technology broadly. As for configuration post-setup, once you get a device talking to Alexa, we released a couple of features at the end of last year that help you do some of the other setup and context-gaining by voice that you might need to have a fully natural interaction with Alexa. We want customers to be able to do things like put their devices in rooms so that when they refer to one device in a set of several, Alexa targets the right device. That’s why we rolled out a more contextually sensitive setup experience last year. If you say “Alexa, turn on the lights,” she can walk you through, by voice, setting up a room and putting lights in there. We’ve seen customers really take to this because it doesn’t get in the way of controlling the device for the first time.

VentureBeat: I’m sure you have to account for different Alexa device form factors, right? I’m talking about an Echo Dot versus an Echo Show.
Smith: We think of it as a mesh among the different modalities — among the app, voice, and screen — because each has different strengths. Voice is really great when you’re trying to do something hands-free, but not great when you’re trying to do something quietly. That’s where we lean on screen-based interactions. What we’re really excited about is ensuring that, as more diverse customers start to use Alexa, we’re keeping up with their needs and not looking backward and saying “OK, how do we teach these customers the sort of patterns of the past?” Instead, we’re using technology like machine learning to look forward and learn from them. The key is using the technique that’s right for the type of problem, whether it’s examining a behavioral pattern or trying to establish semantic similarity with ground truths, and then tuning a meta-model that takes those individual signals into account, producing a user experience that’s helpful instead of one that makes assumptions.
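The clarify-or-act behavior Smith describes — act when confident, ask a clarifying question when a spoken name only roughly matches a configured device — can be sketched with plain string similarity. This is an illustrative stand-in only: Alexa’s actual resolution combines semantic and behavioral models, and the device names and thresholds here are invented for the example.

```python
from difflib import SequenceMatcher, get_close_matches

DEVICES = ["living room lights", "garage lights", "bedroom fan"]

def resolve(spoken, act_cutoff=0.9, ask_cutoff=0.6):
    """Return ('act', name), ('ask', name), or ('fail', None).

    A high-similarity match is executed directly; a middling match
    triggers a clarifying question ("Did you mean ...?"); anything
    below the lower cutoff falls through to an error response.
    """
    match = get_close_matches(spoken.lower(), DEVICES, n=1, cutoff=ask_cutoff)
    if not match:
        return ("fail", None)
    # Re-score the best candidate to decide between acting and asking.
    ratio = SequenceMatcher(None, spoken.lower(), match[0]).ratio()
    return ("act", match[0]) if ratio >= act_cutoff else ("ask", match[0])

print(resolve("living room lights"))  # exact match → act immediately
print(resolve("sofa lights"))         # near miss → ask a clarifying question
```

The two-threshold design mirrors the tradeoff in the interview: below full confidence, the system surfaces its best guess as a question rather than silently acting on it.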
16,503
2019
"Google's Evolved Transformer achieves state-of-the-art performance in translation tasks | VentureBeat"
"https://venturebeat.com/2019/06/14/googles-evolved-transformer-achieves-state-of-the-art-performance-in-translation-tasks"
"Google’s Evolved Transformer achieves state-of-the-art performance in translation tasks The Transformer, a type of AI architecture introduced in a 2017 paper (“Attention Is All You Need”) coauthored by scientists at Google, excels at writing prose and product reviews, synthesizing voices, and crafting harmonies in the style of classical composers. But a team of Google researchers believed it could be taken a step further with AutoML, a technique in which a “controller” system identifies a “child” architecture that can then be tailored to a particular task. Remarkably, the result of their work — which they describe in a newly published paper and accompanying blog post — achieves both state-of-the-art translation results and improved performance on language modeling compared with the original Transformer.
They’ve released the new model — Evolved Transformer — as part of Tensor2Tensor, a library of open source AI models and data sets. Traditionally, AutoML approaches begin with a pool of random models that the controller trains and evaluates for quality. The process is repeated thousands of times, and each time results in new vetted machine learning architectures from which the controller learns. Eventually, the controller begins to assign high probability to model components that achieve better accuracy on validation data sets and low probability to poorly scoring areas. Discovering the Evolved Transformer with AutoML necessitated the development of two new techniques, the researchers say, because the task used to evaluate the performance of each architecture (WMT’14 English-German translation) was computationally expensive. The first — warm starting — seeded the initial model population with the Transformer architecture instead of random models, which helped ground the search. Meanwhile, the second — Progressive Dynamic Hurdles (PDH) — augmented the search to allocate more resources to the strongest candidates, enabling the controller to terminate the evaluation of “flagrantly bad” models early and award promising architectures more resources. Above: The Evolved Transformer architecture. So what’s so special about the Evolved Transformer? As with all deep neural networks, the Evolved Transformer contains neurons (functions) that transmit “signals” from input data and slowly adjust the synaptic strength — weights — of each connection, which is how the model extracts features and learns to make predictions. Furthermore, the Evolved Transformer has attention, such that every output element is connected to every input element and the weightings between them are calculated dynamically.
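As a rough illustration of the search procedure described above, here is a minimal Python sketch of warm starting plus progressive hurdles. The mutate operation, the fitness proxy, and the hurdle thresholds are hypothetical stand-ins for illustration only; Google's actual controller trains and evaluates real candidate architectures on WMT'14 translation.

```python
import random

random.seed(0)

def mutate(genome):
    """Flip one hypothetical architecture choice (e.g. a layer type)."""
    g = list(genome)
    i = random.randrange(len(g))
    g[i] = random.choice(["attention", "conv", "gated_linear"])
    return tuple(g)

def fitness(genome, steps):
    """Stand-in for 'train for `steps` and measure a validation score'."""
    score = sum(1.0 for layer in genome if layer == "attention")
    return score + 0.01 * steps + random.random() * 0.1

def evolve(seed_genome, generations=3, pop_size=8, hurdles=(1.0, 2.0)):
    # Warm start: seed the population with mutations of the Transformer
    # instead of purely random architectures.
    population = [mutate(seed_genome) for _ in range(pop_size)]
    steps = 10
    for _ in range(generations):
        survivors = population
        steps = 10
        # Progressive Dynamic Hurdles: candidates must clear each
        # threshold to earn more training steps; bad models die early.
        for hurdle in hurdles:
            scored = [(fitness(g, steps), g) for g in survivors]
            survivors = [g for s, g in scored if s >= hurdle]
            steps *= 2
        if not survivors:
            survivors = [max(population, key=lambda g: fitness(g, 10))]
        # Next generation: mutate the survivors.
        population = [mutate(random.choice(survivors)) for _ in range(pop_size)]
    return max(population, key=lambda g: fitness(g, steps))

best = evolve(("attention", "attention", "attention", "attention"))
print(best)
```

The fitness proxy here simply rewards attention layers and training steps; in the real search, each surviving candidate is a trained child model scored on validation data.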
Like most sequence-to-sequence models, the Evolved Transformer contains an encoder that encodes input data (sentences in translation tasks) into embeddings (mathematical representations) and a decoder that uses those embeddings to construct outputs (translations). But the team notes that it contains something rather unconventional, as well: convolutional layers at the bottom of both the encoder and decoder modules in a branching pattern, such that inputs run through two separate convolutional layers before being added together. Whereas the original Transformer relied solely on attention, then, the Evolved Transformer is a hybrid that leverages the strengths of both self-attention and wide convolution. Above: The Evolved Transformer’s performance compared with the Transformer. In tests, the team compared the Evolved Transformer with the original Transformer on the English-German translation task used during the model search, and found that the former achieved better performance on both BLEU (an algorithm for evaluating the quality of machine-translated text) and perplexity (a measurement of how well a probability distribution predicts a sample) at all sizes. At larger sizes, the Evolved Transformer reached state-of-the-art performance with a BLEU score of 29.8, and on experiments involving translation with different language pairs and language modeling, the researchers observed a performance improvement of nearly two perplexity points.
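The branching idea, two separate convolutions over the same input whose outputs are summed, can be sketched with NumPy. The kernel sizes and shapes below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def conv1d(x, kernel):
    """'Same'-padded 1D convolution over a (seq_len, d) input,
    applied identically across feature dimensions for simplicity."""
    pad = len(kernel) // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        window = xp[t:t + len(kernel)]
        out[t] = np.tensordot(kernel, window, axes=([0], [0]))
    return out

def branched_conv_block(x, k_left, k_right):
    # Inputs run through two separate convolutional branches,
    # then the branch outputs are added together.
    return conv1d(x, k_left) + conv1d(x, k_right)

x = np.random.randn(6, 4)            # (sequence length, model dim)
left = np.array([0.25, 0.5, 0.25])   # wider smoothing kernel
right = np.array([1.0])              # narrow 1x1-style branch
y = branched_conv_block(x, left, right)
print(y.shape)  # (6, 4)
```

In the actual architecture the two branches use learned, multi-channel convolutions of different widths; the point of the sketch is only the parallel-then-add wiring.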
"
16,504
2020
"Google's AI language model Reformer can process the entirety of novels | VentureBeat"
"https://venturebeat.com/2020/01/16/googles-ai-language-model-reformer-can-process-the-entirety-of-novels"
"Google’s AI language model Reformer can process the entirety of novels Whether it’s language, music, speech, or video, sequential data isn’t easy for AI and machine learning models to comprehend — particularly when it depends on extensive surrounding context. For instance, if a person or an object disappears from view in a video only to reappear much later, many algorithms will forget how it looked. Researchers at Google set out to solve this with Transformer, an architecture that extends context to thousands of words, dramatically improving performance in tasks like song composition, image synthesis, sentence-by-sentence text translation, and document summarization. But Transformer isn’t perfect by any stretch — extending it to larger contexts makes apparent its limitations.
Applications that use large windows have memory requirements ranging from gigabytes to terabytes in size, meaning models can only ingest a few paragraphs of text or generate short pieces of music. That’s why Google today introduced Reformer, an evolution of Transformer that’s designed to handle context windows of up to 1 million words. By leveraging techniques like locality-sensitive hashing (LSH) and reversible residual layers to use memory efficiently and reduce complexity over long sequences, it’s able to run on a single AI accelerator chip using only 16GB of memory. The code and several example applications are available in open source, ahead of the Reformer paper’s presentation at the 2020 International Conference on Learning Representations in Addis Ababa, Ethiopia in April. As with all deep neural networks, Transformers contain neurons (mathematical functions) arranged in interconnected layers that transmit signals from input data and slowly adjust the synaptic strength (weights) of each connection. That’s how all AI models extract features and learn to make predictions, but Transformer uniquely has attention such that every output element is connected to every input element. The weightings between them are calculated dynamically, in effect. As my colleague Khari Johnson notes, one of the biggest machine learning trends of 2019 was the continued growth and proliferation of natural language models based on this Transformer design. Google open-sourced BERT, a Transformer-based model, in 2018. And a number of the top-performing models released this year, according to the GLUE leaderboard — including Nvidia’s Megatron, Google’s XLNet, Microsoft’s MT-DNN, and Facebook’s RoBERTa — were based on Transformers. XLNet 2 is due out later this month, a company spokesperson recently told VentureBeat.
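The locality-sensitive hashing step works because similar vectors tend to receive the same hash, so attention only needs to look within a bucket. A generic random-hyperplane sketch illustrates the bucketing; Reformer's own scheme differs in detail (it uses random projections with an argmax rather than sign bits), so treat this as an assumption-laden illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_buckets(vectors, n_planes=4):
    """Hash each vector by the signs of its projections onto random
    hyperplanes; nearby vectors tend to land in the same bucket."""
    planes = rng.normal(size=(n_planes, vectors.shape[1]))
    signs = (vectors @ planes.T) > 0            # (n, n_planes) booleans
    # Pack the sign pattern into an integer bucket id.
    return (signs * (2 ** np.arange(n_planes))).sum(axis=1)

v = np.array([1.0, 0.0, 0.0])
vectors = np.stack([v, 3.0 * v, -v])  # a scaled copy and an opposite vector
buckets = lsh_buckets(vectors)
# The scaled copy shares a bucket with v; the opposite vector does not.
print(buckets)
```

Once sequence elements are bucketed this way, they can be sorted by bucket and chunked, and attention computed only within each chunk and its neighbors, which is the source of Reformer's savings over full all-pairs attention.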
Above: Top: Image fragments used as input to Reformer. Bottom: “Completed” full-frame images. Reformer, then, computes hash functions (functions used to map data of arbitrary size to fixed-size values) that match up similar vectors (algebraic constructions used to represent human-readable data in machine learning) instead of searching through all possible pairs of vectors. (For example, in a translation task, where each vector from the first layer of the network represents a word, vectors corresponding to the same words in different languages may get the same hash.) When the hashes are assigned, the sequence is rearranged to bring elements with the same hash together and divided into segments to enable parallel processing. Attention is then applied within these much shorter segments and their adjoining neighbors, greatly reducing the computational load. Reformer also recomputes the input of each layer on-demand rather than storing it in memory, thanks to the aforementioned reversible memory. Activations — functions that determine the output of the network, its accuracy, and its computational efficiency — from the last layer of the network are used to recover activations from any intermediate layer, using two sets of activations for each layer. One is progressively updated from one layer to the next, while the other captures only the changes to the first. “Since Reformer has such high efficiency, it can be applied directly to data with context windows much larger than virtually all current state-of-the-art text domain [data sets],” wrote contributing researchers Łukasz Kaiser, a Google staff research scientist, and Nikita Kitaev, a student at the University of California, Berkeley, in a blog post. 
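The two-stream activation scheme described above reduces to a pair of coupled updates whose inputs are exactly recoverable from their outputs, so no intermediate activations need to be stored. F and G below are arbitrary stand-ins for a layer's attention and feed-forward sub-blocks, not Reformer's actual sub-layers.

```python
import numpy as np

def F(x): return np.tanh(x)   # stand-in for a layer's attention block
def G(x): return 0.5 * x      # stand-in for its feed-forward block

def reversible_forward(x1, x2):
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def reversible_backward(y1, y2):
    # Recover the layer's inputs from its outputs, so the backward
    # pass never needs stored forward activations.
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = np.random.randn(4), np.random.randn(4)
y1, y2 = reversible_forward(x1, x2)
r1, r2 = reversible_backward(y1, y2)
print(np.allclose(r1, x1) and np.allclose(r2, x2))  # True
```

One stream is progressively updated layer to layer while the other carries only the changes, matching the description above.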
“Perhaps Reformer’s ability to deal with such large datasets will stimulate the community to create them.” The research team experimented with Reformer-based models on images and text, using them to generate missing details in images and process the entire novel Crime and Punishment (which contains 211,591 words). They show that Reformer can generate full-frame images pixel by pixel, and that it can take in novel-length text in a single round of training. The authors leave to future work applying the technique to even longer sequences and improving their handling of positional encodings. “We believe Reformer gives the basis for future use of Transformer models, both for long text and applications outside of natural language processing,” added Kaiser and Kitaev. In an interview late last year, Google AI chief Jeff Dean told VentureBeat that larger context would be a principal focus of Google’s work going forward. “We’d still like to be able to do much more contextual kinds of models,” he said. “Like right now BERT and other models work well on hundreds of words, but not 10,000 words as context. So that’s kind of [an] interesting direction.” Reformer would appear to be a promising first step in that direction. "
16,505
2018
"Amazon's Echo Look fashion assistant lacks critical context | VentureBeat"
"https://venturebeat.com/2018/08/03/amazons-echo-look-fashion-assistant-lacks-critical-context"
"Amazon’s Echo Look fashion assistant lacks critical context Amazon’s Echo Look is an Alexa fashion assistant that combines human and machine intelligence to tell you how you look in an outfit, keeps track of what’s in your wardrobe, and recommends clothes to buy from Amazon.com. Made generally available to the public in recent weeks, the Echo Look debuted in April 2017, but was available by invite only for more than a year — a first for Alexa-enabled devices. Over time, Amazon will team Echo Look with Prime Wardrobe, an Amazon program akin to modern fashion companies like Stitch Fix and Trunk Club that lets users try on clothes and send back what they don’t want to buy. All the while, Amazon’s facial recognition software Rekognition keeps making headlines for being used by U.S. law enforcement agencies and misidentifying more than two dozen members of Congress as criminals.
Let’s examine why it can be a lot of fun to use the Echo Look, why it took Amazon a year to make the device generally available, and why its fashion assistant’s AI is inherently biased. What Alexa’s fashion assistant has learned to do It took more than a year to roll out the Echo Look, Amazon director of Echo product management Linda Ranz told VentureBeat in an interview, because the Echo Look is Amazon’s first venture into computer vision for consumers, and the company wanted to get it right. Over the course of a year of testing, a number of features were added to the Echo Look smartphone app that accompanies the device, including collections, which gives users the ability to organize their outfits into categories. The Echo Look app can automatically create collections based on seasonality, your style, or the kinds of photos you upload each time you say “Alexa, take a picture.” The app also color-filters and recommends clothes to buy from the Amazon marketplace. Echo Look recommendations are currently fueled by photos uploaded to the app, but it’s really easy to imagine these recommendations also being informed by shopping history and user activity within Amazon’s store. Style Check is an original Echo Look feature that lets you compare two outfits to each other, and it’s the cornerstone of the Alexa fashion assistant experience. One feature added during the closed beta is Style Check Reasons, which offers an explanation for why the AI model chooses one outfit over another. The AI may tell you these colors look better together, or one pairing of shirt and pants matches better. The reasoning behind Reasons, Ranz said, is a combination of human and machine intelligence.
“It started out highly human and will become more and more machine, but as one might expect with fashion trends constantly changing, there will always be some human engagement in this keeping track of what styles are now in and what’s changing in fashion,” Ranz said. The human gaze Dozens of fashion specialists use fashion images to define Alexa’s fashion sense, a company spokesperson told VentureBeat. However, the people hired to give Alexa a sense of fashion appear to come from similar backgrounds. Skimming LinkedIn for a dozen of the fashion specialists training Alexa’s fashion assistant AI shows most live in the Seattle area, appear to be young women, and have past job experience at companies like Nordstrom, Zulily, and J.Crew. To broaden the definition of fashion for specialists training Alexa and to increase sensitivity to variations of body shape or ethnicity, Amazon fashion specialists periodically receive training, Ranz said. A company spokesperson declined to share details about the training process. “We want to make sure that we’re not narrowly focused in one area or another, so it’s something that we do … both orientation as we bring someone on board and training,” she said. “Because if you’re a fashion stylist for Nordstrom’s, you can go to a very specific target segment. Amazon by nature will appeal to a much broader set of customers, and so it’s an ongoing set of work that we do with that group.” Even if Amazon managed to hire an amazingly diverse group of individuals from a range of different backgrounds and developed great training, it would still be a challenge to create an AI fashion assistant that appeals to every person’s needs. Like Project Debater from IBM Research or machine learning meant to work alongside musicians or dancers, it may be easy to take a look at the results and say whether or not something can be called great or terrible, but it can be tough to quantify art and assign the numerical values an AI needs to deliver fashion recommendations.
Given how insecure people can be about how they look and present themselves to the world, the challenge of measuring for AI bias in a fashion assistant is fraught. Context is key One gigantic missing piece — perhaps the most important missing element of Echo Look recommendations today — is context. If you’re going to a work function, then you dress accordingly. The same can be said for a PTA meeting or a dive bar. You might want to get a bit more edgy on date night but tone things down the first time you meet your spouse’s parents, for example. The Echo Look app can quickly A/B test your style with Spark, an online community of humans who vote to pick the best outfit. When I ran tests on my own outfits, human votes on Spark were close in percentage to results from Alexa. But share a photo in Spark with context and you may receive a very different response. For example, a Style Check that just asked “Which one?” received 287 votes, with humans voting 15/85 for an outfit compared to 27/73 from Alexa. The vote doesn’t match up exactly, but it’s similar. But when I asked “Which one looks best for Friday night?” I got the opposite: a 29/71 for human votes versus 69/31 for Alexa. When toggling between Style Check results, I often received results that said one outfit received 70 or 75 percent out of 100. For people like myself who do not consider themselves to be fashionably inclined, Echo Look can be a lot of fun. It can help you do things like quickly choose the right outfit for the right occasion, but you should take the results with a grain of salt — and perhaps accept that humans who teach fashion to machines will always encounter certain challenges. The future of style The Echo Look will initially focus on fashion, and the lowest hanging fruit in that category is Prime Wardrobe.
Ranz declined to talk about future plans for Prime Wardrobe and Echo Look, but of course the end game for technology like an Alexa fashion assistant is the sale of clothes directly from Amazon. “What we’ve done with those recommendations on the app is we’ve taken our entire clothing catalog and narrowed it to those brands and styles that we think will be most interesting to this set of customers,” she said. It’s not tough to imagine potential next steps, such as letting Alexa offer you custom-fitted clothes or help you find a new style. I would describe my own fashion sense as fairly business casual (dress shirts and slacks) with a smattering of dope t-shirts. The Echo Look is designed to help you find clothes that match your preferred style — but what if you don’t want to dress like you normally do? What if you want to add some grunge or elegance or flair that will still fit in with the rest of your wardrobe? That’s not available today, but it seems like the sort of feature that could arrive one day. Taking this a step further, what if Amazon just made clothes for you? Researchers from Adobe and the University of California, San Diego designed an AI model to create new personalized clothes for you based on the outfits in your wardrobe. Amazon’s test balloon in your bedroom Cameras are one of the leading sensors for gathering data in the AI world, and Echo Look is a test balloon for placing hardware with computer vision in your bedroom. Besides a deeper integration with Prime Wardrobe, these same forms of AI could be incorporated into a mirror Amazon patented to dress you or the home robot reportedly due out from Amazon next year. Should it gain the ability to provide more exact measurements, the Echo Look could combine with, or supplement tech like, a Naked 3D body scanner to enter areas like personal health and fitness. Alexa isn’t alone in providing AI-driven fashion services.
Samsung’s Bixby Makeup helps you test things like shades of lipstick in AR, a major selling point of the Galaxy S9, and a Bixby smart speaker will reportedly make its debut in the coming weeks. Google Assistant’s and Pinterest’s computer vision features, both named Lens, offer visual search and style-matching features to help users find similar fashion. However Amazon chooses to deploy computer vision for consumers, it likely only begins with fashion. The Echo Look could be part of a long-term effort to put computer vision in consumers’ homes. "
16,506
2020
"Facebook details the AI behind its shopping experiences | VentureBeat"
"https://venturebeat.com/2020/05/19/facebook-details-the-ai-behind-its-shopping-experiences"
"Facebook details the AI behind its shopping experiences Facebook today announced improvements to the shopping experiences across its platform, including Facebook Shops, a new way for businesses to set up a single online store for customers to access on Facebook and Instagram. The company characterized the new products — all of which are powered by a family of new AI and machine learning systems — as a step toward its vision of an all-in-one AI assistant that can search and rank products, while at the same time personalizing its recommendations to individual tastes. Ecommerce businesses like Facebook Marketplace lean on AI to automate a host of behind-the-scenes tasks, from learning preferences and body types to understanding the factors that might influence purchase decisions.
McKinsey estimates that Amazon, which recently deployed AI to handle incoming shopper inquiries, generates 35% of all sales from its product recommendation engine. Beyond ranking, AI from startups like ModiFace, Vue.ai, Edited, Syte, and Adverity enables customers to try on shades of lipstick virtually, see model images in every size, and spot trends and sales over time. “We’re seeing a lot of small businesses that never had online presences get online for the first time [as a result of the pandemic],” said Facebook CEO Mark Zuckerberg during a livestream this afternoon. He revealed that over 160 million small businesses around the world use the platform’s services. “This isn’t going to make up for all of their lost business, but it can help. And for lots of small businesses during this period, this is the difference between staying afloat or going under … Facebook is uniquely positioned to be a champion for small businesses and what helps them grow and what keeps them healthy.” Facebook says its AI-powered shopping systems segment, detect, and classify images to know where products appear and deliver shopping suggestions. One of those systems — GrokNet — was trained on seven data sets containing images of products that millions of users post, buy, and sell in dozens of categories, ranging from SUVs to stiletto heels to side tables. Another creates 3D views from 2D videos of products, even those obscured by dim or overly bright lighting, while a third spotlights apparel like scarfs, ties, and more that might be partially obscured by their surroundings. GrokNet Facebook says that GrokNet, which can detect exact, similar (via related attributes), and co-occurring products across billions of photos, performs searches and filtering on Marketplace at least twice as accurately as the algorithm it replaced.
For instance, it’s able to identify 90% of home and garden listings compared with Facebook’s text-based attribution systems, which can only identify 33%. In addition to generating tags for colors and materials from images before Marketplace sellers list an item, as part of a limited test, it’s tagging products on Facebook Pages when Page admins upload a photo. In the course of pre-training GrokNet on 3.5 billion images and 17,000 hashtags and fine-tuning it on internal data sets across 96 Nvidia V100 graphics cards, Facebook says it used real-world seller photos with “challenging” angles along with catalog-style spreads. To make it as inclusive as possible for all countries, languages, ages, sizes, and cultures, it sampled examples of different body types, skin tones, locations, socioeconomic classes, ages, and poses. Rather than manually annotate each image with product identifiers, which would have taken ages — there are 3 million possible identifiers — Facebook developed a technique to automatically generate additional identifiers using GrokNet as a feedback loop. Leveraging an object detector, the approach identifies boxes in images surrounding likely products, after which it matches the boxes against a list of known products to keep matches within a similarity threshold. The resulting matches are added to the training set. Above: A graphic illustrating Facebook’s GrokNet architecture. Facebook also took advantage of the fact that each training data set has an inherent level of difficulty. Easier tasks don’t need that many images or annotations, while more difficult tasks require more. Company engineers improved GrokNet’s accuracy on tasks simultaneously by allocating most of the training to challenging sets and only a few images per batch to simpler ones. 
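The labeling feedback loop described above, detect candidate boxes, embed them, keep only matches within a similarity threshold, and fold them back into the training set, might be sketched as follows. The embedding vectors, catalog, and threshold here are hypothetical stand-ins; the real system matches learned GrokNet embeddings against millions of product identifiers.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def expand_training_set(detections, catalog, threshold=0.9):
    """detections: list of (box, embedding); catalog: product_id -> embedding.
    Returns new (box, product_id) training examples within the threshold."""
    new_examples = []
    for box, emb in detections:
        best_id, best_sim = None, -1.0
        for pid, ref in catalog.items():
            sim = cosine(emb, ref)
            if sim > best_sim:
                best_id, best_sim = pid, sim
        # Keep only confident matches to avoid polluting the labels.
        if best_sim >= threshold:
            new_examples.append((box, best_id))
    return new_examples

catalog = {"lamp": np.array([1.0, 0.0]), "sofa": np.array([0.0, 1.0])}
detections = [((0, 0, 10, 10), np.array([0.99, 0.05])),   # near-duplicate of lamp
              ((5, 5, 20, 20), np.array([0.7, 0.7]))]     # ambiguous, rejected
print(expand_training_set(detections, catalog))  # [((0, 0, 10, 10), 'lamp')]
```

The accepted matches become additional labeled examples for the next training round, which is what makes the loop a feedback loop.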
The productized GrokNet, which has 83 loss functions — i.e., functions that map predictions onto numbers representing the cost of prediction errors — can predict a range of properties for a given image, including its category, attributes, and likely search queries. Using just 256 bits to represent each product, it produces embeddings akin to fingerprints that can be used in tasks like product recognition, visual search, visually similar product recommendations, ranking, personalization, price suggestions, and canonicalization. In the future, Facebook says it will employ GrokNet to power storefronts on Marketplace so that customers can more easily find products, see how those products are being worn, and receive relevant accessory recommendations. “This universal model allows us to leverage many more sources of information, which increases our accuracy and outperforms our single vertical-focused models,” the company wrote. “Considering [all these] kinds of issues from the start ensures that our attribute models work well for everyone.” 3D views and AR try-on A complementary AI model powers Facebook’s 3D views feature, which is now available on Marketplace for iOS in a test. Building on the 3D Photos tool Facebook introduced in February, it takes a video shot with a smartphone and post-processes it to create an interactive, pseudo-3D representation that can be spun and moved up to 360 degrees. Facebook uses a method called simultaneous localization and mapping (SLAM) for the reconstruction, where a map of an unknown environment or object is created and updated while an agent’s (smartphone’s) location is simultaneously tracked. The smartphone’s poses are reconstructed in 3D space, and its paths are smoothed with a system that detects abnormal gaps and maps each pose into a coordinate space that corrects for discontinuities.
To maintain consistency, the smooth camera paths are mapped back into the original space, reintroducing discontinuities and ensuring that objects remain recognizable. Facebook’s SLAM technique also combines observations from frames to obtain a sparse point cloud, which consists of the most prominent features from any given captured scene. This cloud serves as guidance to the camera poses that correspond to viewpoints best representing objects in 3D; images are distorted in such a way that they look like they were taken from the viewpoints. A heuristic outlier detector finds key points that could introduce distortions and discards them, while similarity constraints make the featureless parts of the reconstructions more rigid and out-of-focus areas look more natural. Beyond 3D reconstructions, Facebook says that it will soon draw on its Spark AR platform checkout to allow customers to see how items look in various places. (Already, brands like Nyx, Nars, and Ray-Ban use it in Facebook Ads and Instagram to power augmented reality “try-on” experiences.) The company plans to support try-on for a wider variety of items — including home decor and furniture — across apps and services including Shops, Facebook’s feature that enables businesses to sell directly through the network. Segmentation To imbue services like Marketplace with the ability to automatically isolate clothing products within images, Facebook developed a segmentation technology it claims achieves state-of-the-art performance compared with several baselines. The tech — an “operator” called Instance Mask Projection — can spot items like wristbands, necklaces, skirts, and sweaters photographed in uneven lighting or partially obscured, or even shown in different poses and layered under other items like shirts and jackets. Instance Mask Projection detects a clothing product as a whole and roughly predicts its shape. 
This prediction serves as a guide to refine the estimate for each pixel, allowing global information from the detection to be incorporated. The predicted instance maps are projected into a feature map that’s used as input for semantic segmentation. According to Facebook, this design makes the operator suitable for clothing parsing (which involves complex layering, large deformations, and non-convex objects) as well as street-scene segmentation (overlapping instances and small objects). Above: A schematic of Facebook’s Instance Mask Projection system. Facebook says it’s training its product recognition systems with the operator across dozens of product categories, patterns, textures, styles, and occasions, including lighting and tableware. It’s also enhancing the tech to detect objects in 3D photos, and in a related effort, it’s developing a body-aware embedding to detect clothing that might be flattering for a person’s shape. “Today, we can understand that a person is wearing a suede-collared polka-dot dress, even if half of her is hidden behind her office desk. We can also understand whether that desk is made of wood or metal,” said Facebook in a statement. “As we work toward our long-term goal of teaching these systems to understand a person’s taste and style — and the context that matters when that person searches for a product — we need to push additional breakthroughs.” Toward an AI fashion assistant Facebook says its goal is to one day combine these disparate approaches into a system that can serve up product recommendations on the fly, matched to individual tastes and styles. It envisions an assistant that can learn preferences by analyzing images of what’s in a person’s wardrobe, for instance, and that allows the person to try favorites on self-replicas and sell apparel that others can preview. 
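Returning to the Instance Mask Projection operator described above: its core move — projecting rough per-instance predictions into a feature map that the pixelwise segmentation head consumes — can be sketched as follows. The grid size, class list, boxes, and scores are all illustrative, not Facebook's.

```python
H, W = 8, 8
CLASSES = ["background", "dress", "necklace"]

def project_instances(detections):
    """Rasterize detected instances into a (num_classes, H, W) feature map."""
    fmap = [[[0.0] * W for _ in range(H)] for _ in CLASSES]
    for det in detections:
        c = CLASSES.index(det["label"])
        x0, y0, x1, y1 = det["box"]
        for y in range(y0, y1):
            for x in range(x0, x1):
                # A real model projects the predicted soft instance mask; here
                # the detection score stands in for per-pixel evidence.
                fmap[c][y][x] = max(fmap[c][y][x], det["score"])
    return fmap

detections = [
    {"label": "dress", "box": (1, 2, 6, 8), "score": 0.9},     # x0, y0, x1, y1
    {"label": "necklace", "box": (2, 1, 5, 3), "score": 0.7},  # layered on top
]
fmap = project_instances(detections)
print(fmap[CLASSES.index("dress")][4][3])  # → 0.9 (inside the dress box)
```

The projected map is then fed alongside the backbone features, so the per-pixel classifier sees whole-instance evidence — which is what helps with layered and partially occluded clothing.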
To this end, Facebook says its researchers are prototyping an “intelligent digital closet” that provides not only outfit suggestions based on planned activities or weather, but also fashion inspiration informed by individual products and aesthetics. It’s like a hardware-free, ostensibly more sophisticated take on the Echo Look , Amazon’s discontinued AI-powered camera that told customers how their outfits looked and kept track of what was in their wardrobe while recommending clothes to buy from Amazon.com. Companies like Stitch Fix , too, use algorithms to help pick out clothes sent to customers, choose the clothes kept in inventory, and keep track of things customers found online that they love. Facebook anticipates that new systems will ultimately be required to adapt to changing trends and preferences, ideally systems that learn from feedback on images of potentially desirable products. It recently made progress with Fashion++ , which uses AI to suggest personalized style advice like adding a belt or half-tucking a shirt. But the company says advancements in language understanding, personalization, and “social-first” experiences must emerge before a truly predictive fashion assistant becomes a possibility. “We envision a future in which [a] system could … incorporate your friends’ recommendations on museums, restaurants, or the best ceramics class in the city — enabling you to more easily shop for those types of experiences,” said Facebook. “Our long-term vision is to build an all-in-one AI lifestyle assistant that can accurately search and rank billions of products, while personalizing to individual tastes. That same system would make online shopping just as social as shopping with friends in real life. Going one step further, it would advance visual search to make your real-world environment shoppable. 
If you see something you like (clothing, furniture, electronics, etc.), you could snap a photo of it and the system would find that exact item, as well as several similar ones to purchase right then and there.” Facebook’s renewed focus on ecommerce comes as the company contends with flattening ad sales resulting from the pandemic. Even as online sales skyrocketed over the past few months, Facebook declined to increase Marketplace’s commission — 5%, compared to Amazon and Walmart’s 15% — likely to maintain a competitive edge. Some analysts estimate Marketplace will become a $5 billion-plus annual revenue stream for Facebook in the long term, all else being equal. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,507
2,019
"Self-driving car startup Gatik emerges from stealth with $4.5 million in seed funding and Walmart partnership | VentureBeat"
"https://venturebeat.com/2019/06/06/self-driving-car-startup-gatik-emerges-from-stealth-with-4-5-million-in-seed-funding-and-walmart-partnership"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Self-driving car startup Gatik emerges from stealth with $4.5 million in seed funding and Walmart partnership Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Gatik , a two-year-old startup developing an autonomous vehicle stack for business-to-business short-haul logistics, today emerged from stealth with $4.5 million in seed funding led by Innovation Endeavors, with participation from Trucks Venture Capital, Dynamo Venture Capital, Fontinalis Partners, and AngelPad. Coinciding with the close of its funding round, the Toronto and Palo Alto, California-based company announced that it’s inked a deal with Walmart and additional commercial partners that’ll be revealed later in the year. 
“We are a strong believer in autonomous vehicle technology, and we look forward to learning more about how Gatik’s innovation can benefit our customers in the coming months,” said Walmart senior vice president of digital operations Tom Ward of the partnership. Gatik’s platform taps level 4 autonomous light commercial trucks and vans — i.e., trucks and vans capable of operating with limited human input and oversight in specific conditions and locations (as defined by the Society of Automotive Engineers) — to fulfill on-demand and scheduled deliveries up to a collective distance of 200 miles. Its fleet of retrofitted Ford Transit vans, which the company has been testing on public roads in California since the first quarter of 2018, together with its robust orchestration software ensures that goods are transported between locations in city environments at up to 50% lower cost, Gatik claims. “There is a huge gap between autonomous Class 8 big rig trucks, which can only operate on highways, and smaller automated vehicles such as sidewalk robots and Nuro vehicles, which are restricted by operation speed, capacity, distance, and the curb. Gatik fills the critical ‘middle mile’ part of logistics, which is only becoming more valuable as a layer in the $800 billion logistics ecosystem,” said Trucks Venture Capital founding general partner Reilly Brennan. “We’re inspired by Gatik’s vision and expertise to solve the untapped market of urban short-haul delivery logistics.” Gatik is the brainchild of Carnegie Mellon graduate and CEO Gautam Narang and CTO Arjun Narang, brothers and cofounders who have worked together in the field of robotics, AI, and machine learning for over 10 years.
Along with a third cofounder, chief engineer Apeksha Kumavat, they and the company’s roster hail from Carnegie Mellon, Ford, and Honda’s self-driving research units; from robotics firm OTSAW Digital; and from teams that competed in Defense Advanced Research Projects Agency challenges and the Google X Prize for moon rover technology. “Our team is made up of the very best minds from academia and industry,” said Kumavat. “Collectively, we have made critical contributions and technological breakthroughs in robotics and machine learning that has enabled us to build and launch an autonomous service that is filling a gap in the market where the vehicles are on the road (not on the sidewalk), operating on city roads, and traveling longer distances (typically up to 100 miles), and with a much higher payload.” Currently, Gatik is targeting third-party logistics providers (such as FedEx, UPS, and USPS), consumer goods distributors, food and beverage distributors, medical and pharmaceutical distributors, and auto parts distributors. Gatik isn’t Walmart’s first foray into driverless car technology, it’s worth noting. In January, the retailer collaborated with Udelv to pilot autonomous deliveries in select stores. And in November, Ford teamed up with Postmates to test self-driving grocery delivery in the Miami area. "
16,508
2,020
"Coronavirus fears halt autonomous vehicle testing for Uber, Cruise, Aurora, Argo AI, Waymo, and others | VentureBeat"
"https://venturebeat.com/2020/03/17/coronavirus-halts-autonomous-vehicle-testing-for-uber-cruise-aurora-argo-ai-waymo-others"
"Coronavirus fears halt autonomous vehicle testing for Uber, Cruise, Aurora, Argo AI, Waymo, and others Waymo, Uber, GM’s Cruise, Aurora, Argo AI, and Pony.ai are among the companies that have suspended driverless vehicle programs in the hopes of limiting contact between drivers and riders. It’s a direct response to the ongoing health crisis caused by COVID-19, which has sickened over 250,000 and killed more than 10,000 people around the world. On March 26, Lyft said it would cease all of its autonomous vehicle testing for the time being. “Our priority is the safety of our employees and passengers,” a spokesperson told VentureBeat via email. 
“We have temporarily paused the operation of our employee pilot as well as our … testing in Palo Alto, California.” Waymo announced that it would pause its Waymo One ride-hailing service in Phoenix, Arizona, and autonomous car testing on public California roads. Initially, the Alphabet subsidiary pledged to limit Waymo One rides to its fully driverless cars, which don’t require human operators behind the wheel. But on Thursday, Waymo said it would suspend all service until April 7 in the “interests of the health and safety of … riders, trained drivers, and [the] entire [Waymo] team.” “We’re temporarily suspending all Waymo service,” wrote the company in a statement, adding that it has committed to providing funds to enable its partners to compensate staff assigned to work their normal hours if they have symptoms of COVID-19 or are quarantined. “You’ll hear from us when you can ride again.” Only a subset of Waymo One’s over 1,500 monthly active riders were being matched with fully driverless cars, it’s worth pointing out. To become eligible to ride in one, customers had to join Waymo’s Early Rider pilot program, which has a waitlist. Uber halted autonomous vehicle operations on March 16, and the company told VentureBeat that its Advanced Technologies Group (ATG) team continues to execute on projects from home with offline virtual simulation tools like Autonomous Visualization System and VerCD. Uber had briefly resumed testing in San Francisco, starting March 10, over a month after it received a California Department of Motor Vehicles (DMV) license. And it was previously operating fleets manually in Dallas, Toronto, and Washington, D.C. “Our goal is to help flatten the curve of community spread,” said Uber ATG CEO Eric Meyhofer in a statement. 
“Following recent guidance from local and state officials in areas where we operate our self-driving vehicles, we are pausing all test track and on-road testing until further notice.” “The safety and well-being of our employees and our community is our top priority. Out of an abundance of caution, we have asked Cruisers across all our locations who can conduct their work remotely to do so until further notice.” – @ArdenMHoffman1 , Chief People Officer (1/2) — Cruise (@Cruise) March 9, 2020 Cruise’s chief people officer Arden Hoffman said the company has suspended operations and closed all San Francisco facilities for the time being, with a plan to reopen them in three weeks. (The company confirmed that it plans to pay autonomous vehicle operators during this period.) One of the programs affected is a ride-hailing pilot in San Francisco called Cruise Anywhere that allows Cruise employees to get around mapped areas using an app. Aurora VP of operations Greg Zanghi told VentureBeat that Aurora’s entire team — including its test drivers — is working from home and that everyone will continue to get paid. In lieu of on-the-road tests, the company will use digital systems like its Virtual Test Suite to fuel development and testing efforts. “We recognize that this is an entirely unprecedented situation with unique challenges and we all need to come together and support one another,” said Zanghi. “While we continue to strive for work excellence, families come first and we are encouraging everyone to do what is needed to take care of their families. Our top priority is keeping our community safe and healthy, while also keeping our teams feeling supported, motivated, and connected.” As for Argo AI, a spokesperson told VentureBeat that while it hasn’t experienced a “significant impact” from COVID-19, it has taken steps to allow employees to work from home, including pausing car testing operations at all of its locations. 
Argo was conducting tests in Pittsburgh, where it’s based, as well as in Austin, Texas; Miami, Florida; Palo Alto, California; Washington, D.C.; and Dearborn, Michigan. “Argo AI places the highest priority on ensuring our employees and contractors have a safe, secure, and healthy work environment,” said the spokesperson. “Our safety operators are going on paid leave, effective tomorrow, and we’ll continue to monitor the situation and adjust accordingly.” Pony.ai decided to suspend its public PonyPilot service for three weeks starting March 16, along with its autonomous vehicle commuter pilot for the Fremont, California local government. The company recently launched both programs following a multi-month robotaxi service in Irvine, California — dubbed BotRide — in partnership with Hyundai (which provided KONA Electric SUVs) and Via (which supplied the passenger booking and assignment logistics). Tech giant Baidu has also ceased all self-driving activities in California, following Santa Clara County guidelines. Self-driving company Zoox said it is halting testing in San Francisco and Las Vegas until April 7. All drivers will be paid during the shutdown, according to a spokesperson. “As always, the safety and health of our team and the community are paramount. Therefore, in accordance with the Public Health Order, we have suspended all vehicle operations in San Francisco and Las Vegas until April 7,” said the spokesperson. “Our drivers will continue to be paid during this time. Along with everyone else, we will continue to evaluate this challenging situation as it evolves.” Self-driving delivery vehicle startup Nuro also said it would suspend all operations in Texas and Arizona. Yandex says it is “prioritizing safety” throughout the situation. All employees, including the self-driving team, have been told to work remotely for now. The company’s offices remain open, but most of the self-driving team is now working from home. 
And though Yandex’s testing locations have not been significantly affected by COVID-19, the company says it’s following local guidelines in all areas it operates in, including taking specific precautions to properly disinfect the cars and facilities based on practices it uses for ride-hailing and car-sharing services. 1/5 In the interest of the health and safety of our riders and the entire Waymo community, we’re pausing our Waymo One service with trained drivers in Metro Phoenix for now as we continue to watch COVID-19 developments. — Waymo (@Waymo) March 17, 2020 Concern over the spread of COVID-19 was the chief motivator behind the industry-wide suspensions in autonomous vehicle testing. Waymo said it made its decision “in the interests of the health and safety of our riders and the entire Waymo community.” This follows at least one incident of a human safety driver in a Waymo vehicle refusing to pick up a passenger because a local case of COVID-19 had been reported. (Waymo continues to pick up passengers as part of its Waymo One program in Phoenix, with a small number of completely driverless vehicles.) In related news, Uber and Lyft today said they would stop allowing customers to order shared rides in order to help contain the virus. Uber also suggested that drivers roll down windows to “improve ventilation” and asked riders to wash their hands before and after entering cars. In the U.S. at the time of writing, the total number of coronavirus cases stood at 4,226, with 75 deaths, as reported by the Centers for Disease Control and Prevention. "
16,509
2,020
"Hyundai and Aptiv redeploy autonomous vehicles to deliver meals to vulnerable people in Las Vegas | VentureBeat"
"https://venturebeat.com/2020/05/11/hyundai-and-aptiv-redeploy-autonomous-vehicles-to-deliver-meals-to-vulnerable-people-in-las-vegas"
"Hyundai and Aptiv redeploy autonomous vehicles to deliver meals to vulnerable people in Las Vegas The Hyundai-Aptiv Autonomous Driving Joint Venture, a collaboration between Hyundai and Aptiv (formerly Delphi) to develop autonomous vehicle technologies, is using driverless vehicles in Las Vegas to deliver food to families in partnership with a nonprofit. In a conversation with VentureBeat last week, CEO Karl Iagnemma described the effort as “a way to try to give back to the community.” “We recently started doing meal delivery in Las Vegas about a week and a half ago,” he said. “It’s not about changing our business model. 
But with that said, looking toward the future, delivery is an area that we will devote attention to.” Three of the company’s driverless BMW 5 series cars — which are equipped with lidar sensors, radars, and RGB cameras — are supporting Delivering with Dignity, a program launched in March by Clark County officials that aims to provide meals to vulnerable families at risk of contracting coronavirus. Monday through Friday, safety drivers are making contactless deliveries from restaurants including Buddy V’s at the Venetian, Valencian Gold, and Honey Salt Restaurant, wearing personal protective equipment at all times to ensure their safety. Deliveries are being made throughout the Valley and are anticipated to continue as long as the pandemic continues. Iagnemma says there aren’t currently plans to expand, but that the Hyundai-Aptiv Autonomous Driving Joint Venture intends to work with Delivering with Dignity in the future. As of May 6, the organization’s over 690 volunteers had delivered over 26,500 meals. “The delivery use case is one that I think is an outstanding application of autonomous vehicles for obvious reasons, and it’s one that we can adapt our core technology to address,” added Iagnemma. “Today, we’ve been focused on moving people, but … we look forward to being able to apply our [systems] to a number of other use cases, potentially including [commercial] delivery.” The Hyundai-Aptiv Autonomous Driving Joint Venture is only the latest autonomous vehicle company to redeploy its cars for delivery amid the pandemic. Since mid-April, Cruise has been delivering food from San Francisco-Marin Food Bank and San Francisco New Deal to seniors in need, and Pony.ai’s cars are delivering groceries from ecommerce startup Yamibuy and working with the city of Fremont to distribute meals to a local emergency shelter program. 
Nuro’s latest R2 vehicles are transporting medical supplies to temporary coronavirus medical facilities in Sacramento and San Mateo County. And Beep, in partnership with the Jacksonville Transportation Authority and shuttle maker Navya, is continuing to transport coronavirus tests at the Mayo Clinic campus in Florida. Elsewhere, startup Neolix says its vans have delivered medical supplies and supplemented labor shortages in areas within China hit hardest by coronavirus, as well as delivering food to health workers who are caring for patients. Starship Technologies and KiwiBot autonomous delivery robots are delivering sanitary supplies, masks, antibacterial gels, and hygiene products in communities around the U.S. And self-driving truck company TuSimple is offering a free service for food banks in Texas and Arizona. The Hyundai-Aptiv Autonomous Driving Joint Venture Roughly two years ago, Lyft partnered with the Hyundai-Aptiv Autonomous Driving Joint Venture to launch a fleet of autonomous vehicles on the former’s network in Las Vegas. A product of Aptiv’s mobility and services group, the vehicles — which have since been grounded as a result of the pandemic — became available to the public beginning May 2018 on an opt-in basis. Prior to the shutdown, the cars gave Lyft customers over 100,000 rides from more than 3,400 destinations in the Las Vegas area, including restaurants, hotels, entertainment venues, and other high-traffic locations, like the Las Vegas Convention Center and McCarran International Airport. The Hyundai-Aptiv Autonomous Driving Joint Venture has historically highlighted its work with local governments and transit agencies — including Clark County, the City of Las Vegas, and the Regional Transportation Commission. 
It also noted that its Command Center — which furnishes its development team with data like vehicle health and diagnostics, vehicle ride status, and popular ride times and locations — enables it to keep vehicles on the road while serving passengers, complementing its 130,000-square-foot garage with full-calibration lab spaces and car chargers. Beyond its joint operation with Lyft and its Hyundai partnership, the Hyundai-Aptiv Autonomous Driving Joint Venture is piloting autonomous vehicles across Boston, Singapore, Las Vegas, and Pittsburgh. "
16,510
2,020
"Some autonomous vehicle companies resume public testing, others express caution | VentureBeat"
"https://venturebeat.com/2020/05/13/some-autonomous-vehicle-companies-resume-public-testing-others-express-caution"
"Some autonomous vehicle companies resume public testing, others express caution In March, the coronavirus pandemic forced companies testing autonomous vehicles in the U.S. to pause operations temporarily. Safety drivers, along with engineers and developers, were told to stay home until further notice, as shelter-in-place orders made public road testing impossible. In the interim, these companies — among them Waymo, Cruise, Uber, Lyft, and Aurora — leaned on simulation to continue development of their autonomous vehicle platforms. But Waymo resumed testing its cars this week — a tacit acknowledgment that simulation supplements but can’t replace real-world experience. Some — but not all — of its competitors intend to follow suit. 
The pressure to do so hinges on the idea that without real-world testing they can’t demonstrate the safety of their systems to regulators or the public, potentially putting the brakes on service rollouts long after the pandemic ends. Indeed, Ford has pushed the launch of its service from 2021 to 2022. Waymo CEO John Krafcik told the New York Times that the pandemic delayed work by at least two months. And analysts like Boston Consulting Group’s Brian Collie now believe broad commercialization of autonomous vehicles won’t happen before 2025 or 2026, at least three years later than originally anticipated. Back on the road Amazon-backed Aurora has resumed road testing “in accordance with county orders,” deploying cars in California’s Santa Clara and Alameda counties. The cars are driven by full-time, mask-wearing operators who adhere to “enhanced” cleaning measures and have explicitly volunteered for the role. Operators who choose to stay home are working with Aurora’s triage and labeling teams. “The health and safety of our vehicle operators and community members is our priority as we put our vehicles back on the road,” a spokesperson told VentureBeat. “We are taking a measured approach and have implemented the necessary precautions.” For its part, Zoox is planning to restart operations in Las Vegas through coordination with the “appropriate authorities.” In the coming weeks, the company will establish health and safety protocols while identifying rollout timelines, it says. Beijing-based Baidu says it resumed activities in California last week with six-foot distancing rules and daily vehicle cleaning and disinfection. The company has also committed to providing “essential” medical supplies to its staff, including masks. 
A waiting game But some companies are taking a more cautious approach, either delaying or developing testing plans in response to changing county and state regulations. Cruise, Lyft, Argo AI, and Uber said they had nothing to share with respect to the timeline of their testing plans. Ford partner Argo AI told VentureBeat it’s working with city and state officials in all six cities where it conducts testing ahead of vehicle redeployment. It plans to mandate social distancing and limit interactions between employees, institute cleaning procedures in alignment with Centers for Disease Control and Prevention (CDC) guidelines, supply personal protective equipment, install physical barriers to separate the left and right seats in its test vehicles, and enhance vehicles’ in-air filtration with a HEPA-certified cabin filter plus two five-stage recirculation devices. Lyft told VentureBeat it’s following guidance from the CDC and local officials for when it might resume both testing and its employee pilot in Palo Alto. “We’ll continue to monitor the situation and will follow local shelter-in-place orders as we look forward to getting our testing program back on the road when we resume our operations,” a spokesperson said via email. In the near term, Cruise, which GM has invested in, is devoting resources to a philanthropic effort with the San Francisco-Marin Food Bank and San Francisco New Deal to deliver meals to residents in need using self-driving vehicles. The company says safety drivers involved with the partnership are working on a voluntary basis and have been provided protective equipment such as masks. Pony.ai, which recently teamed up with Yamibuy to deliver goods to customers in Irvine and is delivering meals to “vulnerable communities” in Fremont, has no plans to resume robo-taxi services in May, including its partnerships with Fremont to transport city employees and with Hyundai and Via to shuttle around customers in Irvine. 
However, it says it’s following guidance from the CDC and the state of California to protect its operators who are in the field. These precautions include:
- Sanitizing vehicles before and after work
- Shifting working time to avoid contact between employees
- Mandating the use of masks and gloves
- Limiting operation facility capacities to 10 people or fewer and vehicles to a single operator
- Conducting body temperature checks on a daily basis at facilities
“We will keep monitoring the situation and carefully plan out how we go back to normalcy,” said a Pony.ai spokesperson. “We continuously improve our autonomous driving technology, along with the essential services we provide.” Yandex hasn’t resumed testing either, but it says it’s in touch with authorities in areas where it has vehicles so that it can “quickly respond” to changing guidelines and be ready to relaunch. For instance, in Michigan, where the company had started testing ahead of the 2020 North American International Auto Show, its drivers are on paid leave until at least May 28, when the state’s shelter-at-home order is set to expire. In Moscow and Israel, Yandex says it adopted precautions to continue its robo-taxi operations while minimizing the risk of viral spread. Only one safety driver is allowed in a vehicle per shift and cars are regularly disinfected. Medical checks are mandatory for employees, and Yandex’s Moscow facility has been rearranged in accordance with social distancing rules. “We hope to expand our on-road autonomous vehicle testing in continued coordination with regulations,” said a Yandex spokesperson. “We will implement the same stringent precautionary [steps] across all of our testing locations as we’re able to restart our global testing.” Preventing infection Autonomous vehicle companies’ new safety measures come as Uber and Lyft have announced they will require drivers and riders to adhere to a set of infection-preventing standards. 
Both must wear face coverings and confirm that they haven’t been recently diagnosed with COVID-19, and drivers must agree to keep their vehicles clean and sanitize their hands frequently. That’s short of the steps Baidu took after the relaunch of its robo-taxi service on March 7 in Changsha, China, where its cars are equipped with an infrared thermometer that takes passengers’ temperatures and where “ambulance-level” UVC (Ultraviolet-C) lamps carry out regular disinfection. UVC lamps, which are powerful enough to cause eye damage, have been shown in preliminary studies to destroy the coronavirus on surfaces within minutes. Beyond preventing infection, the measures could help ease the fears of safety drivers and riders alike. According to a Statista survey, 49% of respondents in the U.S. said they’d be much less likely to use a ride-hailing service if coronavirus were to spread in their community. Perhaps unsurprisingly, ride-sharing companies have reported steep declines in ridership, with Uber customers booking 70% fewer trips in cities hit hard by the coronavirus. In the process, the safety measures could help right the balance sheets of companies with enormous R&D spend; autonomous vehicle startups burn $1.6 million a month on average, according to Pitchbook. Waymo — which prior to the pandemic was reportedly only generating hundreds of thousands of dollars in annual revenue — yesterday raised $750 million in anticipation of harsher headwinds. Lyft, Zoox, and autonomous trucking startups Kodiak Robotics and Ike were recently forced to lay off employees. And Zoox hired Qatalyst Partners to explore a potential sale while it attempts to secure capital. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16511
2020
"Optimus Ride begins delivering food to families in need in Washington, D.C. | VentureBeat"
"https://venturebeat.com/2020/05/28/optimus-ride-begins-delivering-food-to-families-in-need-in-washington-d-c"
"Optimus Ride begins delivering food to families in need in Washington, D.C. Optimus Ride, a startup developing self-driving shuttles principally for transportation, today announced that three of its vehicles will begin making deliveries at the Yards, a waterfront development in Washington, D.C., beginning this week. From D.C.-area restaurant and brewery Bluejacket, they’ll ferry boxes containing a week’s worth of food and ingredients furnished by the Neighborhood Restaurant Group and the Arcadia Center for Sustainable Food and Agriculture to families in need. Recipients are identified by Pathways to Housing, a nonprofit organization dedicated to ending homelessness. 
The meal kits, which will also be delivered to the nearby Van Ness Elementary School, will contain a mix of products requiring prep (such as proteins, fresh produce, and grains) and those that are ready to eat (like chicken soup). The company’s driverless fleet will distribute them to the families on a weekly basis, serving a total of 5,000 meals. Autonomous vehicle companies such as Cruise, Pony.ai, Hyundai-Aptiv Autonomous Driving Joint Venture, and Nuro have deployed small fleets to transport prescriptions, food, and other essentials to those in need during the pandemic. Self-driving cars and robots still require disinfection, but they could minimize the risk of spreading the coronavirus in some cases. Optimus Ride says prior to the pandemic it designed, built, and began the operation of a self-driving system at the Yards that was supervised by on-site and remote staff. The company eventually plans to expand beyond meal box delivery to commercial services for residents, including transportation and restaurant, retail, and business delivery. “Our strategy has proven to be highly tractable,” wrote Optimus Ride CEO Ryan Chin in a recent blog post. “Optimus is on the verge of achieving this, given our world-class engineering team and our geofenced strategy model, which allows us to more safely and in a timely manner move to a fully driverless system operation with remote monitoring.” Chin, who formerly led the City Science Initiative at the MIT Media Lab, says Optimus Ride’s cars will someday be capable of level 4 autonomous driving, meaning they’ll operate with limited human input and oversight in specific conditions and locations (as defined by the Society of Automotive Engineers). They tap Nvidia’s Drive AGX Xavier platform, which Nvidia — an Optimus Ride investor — claims is capable of delivering 30 trillion operations per second. 
Optimus Ride has flown largely under the radar since October 2017, when its partnership with real estate developer LStar Ventures brought its self-driving car service to the 1,550-acre Union Point neighborhood in Weymouth, south of Boston. It’s an MIT spinout founded by a team of DARPA Urban Challenge competitors and became one of the first to secure a driverless vehicle permit from the Massachusetts Department of Transportation roughly four years ago, with tests of its 25-plus car fleet starting in Raymond L. Flynn Marine Park in Boston’s Seaport District. Optimus Ride piloted its suite of vehicle mapping, control, and orchestration software on the campus of the Perkins School for the Blind in Watertown, Massachusetts in late 2016. More recently, it kicked off a deployment within Paradise Valley Estates — a private 80-acre assisted living community located in Fairfield, California — and adapted the program to deliver 50-80 meals a day to residents during the pandemic. Optimus Ride operates much like May Mobility, a startup that develops an autonomous vehicle stack and works with manufacturers to install it in low-speed, compact fleets. It’s also similar to French company Navya, which has sold 67 driverless shuttles in 16 countries. Like May and Navya, Optimus Ride says it can integrate its white-label autonomous system into “any vehicle type” — for now, lightweight cars that fit a handful of passengers — and it sees cities, public transit systems, and ride-sharing services as potential customers. Optimus Ride’s operation at the Yards is its second collaboration with developer Brookfield Properties in the greater Washington, D.C. area. Last year, the car company deployed a self-driving fleet at Halley Rise, Brookfield Properties’ 3.5 million square foot mixed-use development in Reston, Virginia. The fleet has given over 41,000 rides to date. 
The Halley Rise deployment followed the launch of Optimus Ride’s self-driving shuttle and food delivery service at the Brooklyn Navy Yard, a 300-acre shipyard and industrial complex located in northwestern Brooklyn. "
16512
2018
"Wearables are about to blow up: Industry sales to hit $19 billion by 2018 | VentureBeat"
"https://venturebeat.com/2013/10/15/wearables-are-about-to-blow-up-industry-sales-to-hit-19-billion-by-2018"
"Wearables are about to blow up: Industry sales to hit $19 billion by 2018 All those wearable devices hitting the market are going to make a lot of people filthy rich. The wearable devices industry, which includes smart watches and glasses, will be worth $19 billion by 2018. That’s a big jump over the $1.4 billion the industry is expected to pull in this year, according to Juniper Research, which produced the numbers. Why such massive growth? Juniper points to two factors: consumer demand and the rise of subscription services. The latter is particularly key. While most wearables are being sold solely as devices right now, it won’t be long until every wearable maker also offers a separate service component to generate recurring revenue. Consider devices like the Filip, a kid-friendly smartwatch that can also make phone calls. 
By teaming up with AT&T, Filip is creating an extra revenue stream that goes beyond just device sales. Above: If Juniper is right, devices like the Fitbit Force are going to be big moneymakers. In other words, hardware + demand + services = $19 billion. Juniper’s research comes not long after fellow research company Berg Insight estimated that wearable device sales will climb to 64 million by 2017. “A perfect storm of innovation within low power wireless connectivity, sensor technology, big data, cloud services, voice user interfaces, and mobile computing power is coming together and paves the way for connected wearable technology,” Berg analyst Johan Svanberg wrote earlier this month. The two research firms, it seems, are very much on the same page. Svanberg, however, took Berg Insight’s observations further and argued that, in order for the wearable industry to see those big numbers, companies first have to create multipurpose devices that can also stand on their own. The point is a valid one. One of the biggest issues with Samsung’s Galaxy Gear (which we aren’t fond of) is that the device needs to be connected to a smartphone in order to be truly useful. But does the rise of multipurpose devices also mean that we’ll see fewer devices like the new Nike+ Fuelband SE, which is very good at fitness tracking but not much else? Probably not. "
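Juniper's figures imply a steep compound annual growth rate. The dollar amounts below come from the forecast quoted above; the CAGR calculation itself is a back-of-the-envelope sketch, not part of Juniper's report:

```python
# Implied compound annual growth rate (CAGR) behind Juniper's forecast:
# a $1.4 billion industry in 2013 reaching $19 billion by 2018.
start, end = 1.4, 19.0   # revenue in billions of dollars
years = 2018 - 2013      # five-year horizon

# CAGR = (end / start)^(1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 68% per year
```

In other words, hitting $19 billion by 2018 would require the industry to grow by roughly two-thirds every year for five years running.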
16513
2015
"How businesses are putting wearables to work | VentureBeat"
"https://venturebeat.com/2015/02/15/how-businesses-are-putting-wearables-to-work"
"Guest How businesses are putting wearables to work The next wave of wearables promises everything from smart rings and watches to connected headbands and headphones, but it’s actually the business implications for wearable devices that have many of us so excited. While some observers believe we are still several years away from seeing wearables roll out for business use, data shows that the move has already begun. A Forrester survey of 3,000 global technology decision-makers found that 68 percent of executives called wearables a “priority” for their companies. Several key trends are converging and driving an urgency to invest more time, resources, and money into wearable devices. Wearable computing presents a huge opportunity for companies looking to boost productivity and drive efficiency while connecting with customers, partners, and employees in new and meaningful ways. 
Above: Disney’s MagicBand Disney, for example, introduced the MagicBand, a radio frequency (RF) bracelet, in 2013 that Walt Disney World guests can use to enter the parks, unlock hotel rooms, and buy food and merchandise at the Disney Resort. From a business standpoint, Disney is able to collect general marketing data, such as shopping preferences or park visitation habits (no personal guest data is stored) and further personalize and improve overall customer experience while in its parks. Above: The Alpine Metrics app At GE, the CFO of Intelligent Platforms Business, Ken Bowles, uses the Alpine Metrics app, Intelligent Forecasting, to get a quick glance at key commercial performance indicators. The app gives users the freedom to get notifications and graphic indicators on a phone/tablet app or laptop when they need to drill down further. Because the app is able to distill massive amounts of data within the real estate of a smartwatch, the user is able to focus more time on the most impactful metrics, while simultaneously keeping tabs on how the business is progressing and performing. Wearables are also finding their way into a number of other industries, most notably healthcare and energy. In healthcare, they’ve assisted with hands-free review of patient records, dictation of medical notes, voice commands, and monitoring the health of patients. For example, Google Glass is being used by the likes of Dignity Health to access and input important information into patients’ records in real time without requiring physicians to be tethered to the computer. Thalmic’s Myo, a gesture-controlled armband, also works with Google Glass to give physicians superpowers by allowing them to control an X-ray view with the flick of a wrist. This is particularly useful in the sterile hospital work environment. 
Above: Thalmic Labs’ Myo Gesture Control Armband In the energy industry, we’re seeing early pilot programs where smart glasses are being used to manage compliance tasks and downstream events (e.g., a steam leak on an oil rig), enabling field service workers to document (hands free) any issues on site, log/activate a case, access a full repair history, and patch in an expert in real time to support expedited closure of the case. The possibilities are endless, and the companies taking early pilot steps will learn, iterate, and help contribute to the outcome of wearable technology as it evolves and becomes cheaper, faster, and more functional. Magnet for Investment Both startups and established technology leaders are seriously exploring the use cases for wearables. CB Insights says that investment in private wearable startups was on pace to hit $1 billion in 2014 – nearly as much as the previous four years combined. At Salesforce, we recently launched Salesforce Wear to focus on wearable computing in the workplace. Over the past six months, other tech giants have also introduced important initiatives in wearables, from Apple’s announcement of the Apple Watch to the recent unveiling by Microsoft of HoloLens. With leaders like Facebook, Google, Intel, and Samsung also pursuing wearables, it’s a good bet that there’s an exciting future to anticipate. Wearables will enable teams to be more connected to the digital world while being more present in the real world. Checking a mobile phone or opening a laptop during a meeting or while out in the field can be a distraction and a little time consuming. But by glancing at a connected smartwatch or peering through connected eyewear, a sales rep or field service technician could quickly and discreetly access critical information, all while being hands free. 
The key will be to assure that wearable technology, and enterprise wearable applications, leverage business data to highlight the right information at the right time to drive the right business action. The potential here is incredible. The time may soon come when wearable devices will be as integral to our lives as smartphones are today, sitting innocuously on our persons as unobtrusive as jewelry or clothing, yet infinitely more valuable to our businesses. William Gibson famously said “the future is already here, it’s just not evenly distributed.” That’s certainly the case with wearables, and it’s why you will see more solutions for the workplace this year as the technology – and applications – become increasingly vital to our professional lives. Lindsey Irvine is Global Director of Strategic Partnerships and Business Development at Salesforce. "
16514
2019
"Samsung leaks Galaxy Buds, Galaxy Fit, and Galaxy Watch Active wearables | VentureBeat"
"https://venturebeat.com/2019/02/15/samsung-leaks-galaxy-buds-galaxy-fit-and-galaxy-watch-active-wearables"
"Samsung leaks Galaxy Buds, Galaxy Fit, and Galaxy Watch Active wearables The standard collection of pre-event rumors unofficially outed all of Samsung’s new wearables ahead of its Galaxy Unpacked Event, and now Samsung has confirmed the new products with an official leak of its own. As discovered by SamCentralTech, an update to the “Pick your device” screen of the Android app Galaxy Wearable (formerly Samsung Gear) shows several new devices: Galaxy Buds earphones, Galaxy Fit and Fit e fitness trackers, and the Galaxy Watch Active smartwatch. Of the three, the least established is Galaxy Buds, a pair of truly wireless in-ear headphones that appear to nestle inside ear canals with rubber tips, leaving larger circles visible outside. 
While rumors have suggested that the Buds will come with a wireless charging case that can be refueled using inductive chargers on the backs of new Galaxy smartphones, today’s leak only officially confirms the name and buds themselves, as well as white and black color options. More detailed specs will need to wait for the event (or another leak). The Galaxy Fit and Fit e fitness trackers appear to be direct sequels to Samsung’s prior Gear Fit devices. Besides confirming the names and apparently imminent announcement, the leak shows a black metallic watch-like device with a long color screen that displays the time and a step counter, attached to a seemingly non-detachable black plastic band. The Fitbit competitors are expected to include GPS, heart rate monitors, and the ability to store music. Galaxy Watch Active’s inclusion on the list isn’t a real surprise at this point, as prior leaks appeared to reveal its name and all of its technical specifications. The new wearable was already expected to drop the rotating bezel of Samsung’s flagship wearable, 2018’s Galaxy Watch, for a sleeker design. But a reference to the watch in a 40mm size may explain one discrepancy in recent leaks, which have questioned whether the screen will be 1.3 inches or 1.1 inches in width. Just like the Galaxy Watch, it’s possible that Samsung will offer Active in two sizes with the same 360 x 360 screen resolution. We’ll cover the Galaxy Unpacked Event on February 20 as it happens in San Francisco. Based on other leaks, Samsung is also expected to reveal the Galaxy S10 smartphone family and three new retail stores in California, New York, and Texas. 
"
16515
2020
"Apple's Q1 2020 revenue hits record $91.8 billion, boosted by wearables | VentureBeat"
"https://venturebeat.com/2020/01/28/apples-q1-2020-revenue-hits-record-91-8-billion-boosted-by-wearables"
"Apple’s Q1 2020 revenue hits record $91.8 billion, boosted by wearables Above: Apple's September 10, 2019 introduction of the iPhone 11 Pro. Having trimmed the prices of most iPhone models just ahead of the 2019 holiday season, and launched multiple new services to goose its subscription numbers, Apple positioned itself for the strongest holiday season in its history. Today, the company announced its fiscal first quarter 2020 results, and overall, they blew away expectations: $91.8 billion in quarterly revenue, versus $84.3 billion in the year-ago quarter and $88.3 billion in the first quarter of 2018. 
The big number is particularly significant because Apple provided a wide estimate range last quarter, predicting revenues between $85.5 and $89.5 billion — all better than its famously troubled prior holiday performance, but potentially either falling short of or exceeding the 2018 holiday season. Given that the company has historically offered and exceeded optimistic quarterly growth estimates, the range appeared to signal uncertainty over the tariff impact of an ongoing U.S.-China trade dispute, currency exchange rates, and consumer demand for late-stage 4G phones. All of the issues were resolved in the company’s favor. “We are thrilled to report Apple’s highest quarterly revenue ever, fueled by strong demand for our iPhone 11 and iPhone 11 Pro models, and all-time records for Services and Wearables,” said Apple CEO Tim Cook. “During the holiday quarter our active installed base of devices grew in each of our geographic segments and has now reached over 1.5 billion.” This quarter, Apple says it sold $55.957 billion in iPhones, $7.160 billion in Macs, and $5.977 billion in iPads. Its combined “wearables, home and accessories” sales were $10.01 billion, while services hit $12.715 billion. One year ago, iPhones were at $51.98 billion, Macs at $7.416 billion, iPads at $6.729 billion, wearables and accessories at $7.308 billion, and services at $10.785 billion. In other words, the company experienced dips in both Mac and iPad sales, but growth across all of the other categories. While iPhones remain Apple’s strongest business segment, wearables and services continue to grow in importance. The Apple Watch Series 5 was a modest update from its predecessor, but overall sales were likely buoyed by aggressively discounted Series 3 models. Additionally, an early 2019 update to the popular wireless AirPods headphones was followed up by the late October release of AirPods Pro, a smaller in-ear model with noise cancellation. 
Despite selling at a higher price point, the Pro model was largely sold out throughout the holiday season. Apple also debuted a collection of new services during the second half of 2019, including the Apple Card credit card, Apple Arcade subscription game service, and Apple TV+ video streaming service. The company doesn’t break down revenues from individual services, and TV+ subscriptions are believed to be almost exclusively driven by free trials at this point, but reviews for the Card, Arcade, and prior Apple Music services have been generally positive. In October, Apple predicted that its gross margin would fall between 37.5% and 38.5%, with operating expenses in the $9.6 to $9.8 billion range, $200 million of other income, and a tax rate of 16.5%. Before today’s release, the average forecast of analysts was that the company would eke out $88.4 billion in sales, just enough to beat the company’s 2018 record, with $4.54 in earnings per share. Such a slim jump would have been similar to the very modest gains seen in prior 2019 quarters. Instead, Apple’s $91.8 billion in revenues represented a 9% jump over last year’s holiday quarter, with $4.99 in earnings per share, up 19%. The company also said international sales constituted 61% of the quarter’s revenue. Both year over year and quarter over quarter, Apple’s regional net sales were up across four of its five key territories. Quarterly numbers jumped from $14.946 billion to $23.273 billion in Europe, from $11.134 billion to $13.578 billion in Greater China, from $29.322 billion to $41.367 billion in the Americas, and to $7.378 billion from $3.656 billion in the Asia Pacific region. Japan, however, saw a quarterly jump to $6.223 billion from $4.982 billion, but a year over year decline from 2019’s $6.91 billion number. 
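The growth figures quoted above are easy to verify from the numbers stated in the story. As an illustrative arithmetic sketch (my math, not Apple's or any analyst's tooling):

```python
# Year-over-year revenue growth from the figures quoted above
# (billions of USD). The ~19% EPS gain can be checked the same way
# once a year-ago EPS figure is supplied.
revenue_q1_fy2020 = 91.8
revenue_q1_fy2019 = 84.3
growth_pct = (revenue_q1_fy2020 - revenue_q1_fy2019) / revenue_q1_fy2019 * 100
print(round(growth_pct, 1))  # 8.9, which rounds to the reported 9% jump
```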
For the second fiscal quarter of 2020, Apple is predicting revenue between $63 and $67 billion, gross margin between 38% and 39%, operating expenses between $9.6 and $9.7 billion, $250 million of other income, and a tax rate of 16.5%. Once again, the company is issuing a $0.77 per share cash dividend, payable on February 13 to shareholders of record as of February 10, 2020. "
16,516
2,020
"Facebook and Plessey pair on consumer AR glasses with microLED screens | VentureBeat"
"https://venturebeat.com/2020/03/30/facebook-and-plessey-pair-on-consumer-ar-glasses-with-microled-screens"
"Facebook and Plessey pair on consumer AR glasses with microLED screens While companies such as Nreal are already in the process of releasing consumer AR wearables, Facebook is continuing to work on lightweight AR glasses that can be used for a full day between recharges — a development process that may take years to complete. But one of the key components appears to have come into focus, as U.K.-based display maker Plessey announced today that it’s working with Facebook to create “new technologies for potential use in the AR/VR space,” notably including consumer AR displays. The deal between Plessey and Facebook isn’t an acquisition, but rather a dedication of Plessey’s LED manufacturing operations to Facebook’s use. 
Plessey notes that Facebook’s ongoing research efforts and demonstrated success with Oculus Quest make the social networking giant “one of the companies best-positioned to make consumer-ready AR glasses a reality,” and suggests that the manufacturer will focus on prototyping microLED displays to help Facebook develop its next-generation computing platform. Plessey previously supplied smart glasses maker Vuzix with wearable screens, and has most recently focused on microLED technology, one of several display technologies competing for viability in future AR glasses. Early AR displays have struggled to produce eye-filling, bright, and colorful visuals, instead “augmenting” only a postage stamp-like box within the user’s field of view, but Plessey says its CMOS-based RGB displays combine high pixel density and very high brightness, delivering low power consumption despite high frame rates. According to earlier reports, Facebook is working on both an all-day AR wearable as its long-term vision — potentially for release between 2023 and 2025 — and an interim alternative that will fill the gap until the complete solution is ready for release. The company is said to be working with glasses maker Luxottica on fashionable frames, and its top researchers expect AR technology to become one of the key transformational technologies of the next 50 years. "
16,517
2,020
"Samsung finally wins first approval for a Galaxy Watch blood pressure app | VentureBeat"
"https://venturebeat.com/2020/04/21/samsung-finally-wins-first-approval-for-a-galaxy-watch-blood-pressure-app"
"Samsung finally wins first approval for a Galaxy Watch blood pressure app Samsung’s long-gestating effort to bring blood pressure monitoring to its Galaxy Watch wearables took a step forward today with the company’s announcement that it has finally received clearance to make its blood pressure app available in one country. This is the first positive sign in a regulatory process that’s been underway for more than a year. But the potentially lifesaving feature still has some major hurdles to overcome before it sees widespread consumer use. On a positive note, Samsung’s Health Monitor app has been cleared by the South Korean government under “software as a medical device” guidelines, which enable existing devices to add new health features through apps. 
This will enable South Korean users of the Galaxy Watch Active2 — and “upcoming Galaxy Watch devices” — to do partially cuffless blood pressure monitoring, at least in that country. Regulatory approval is required on a country-by-country basis for new medical features. Blood pressure monitoring is critical in detecting hypertension, a potentially serious indicator of heart distress, and hypertensive crises, which could indicate that a stroke, heart attack, or other major organ failure is either imminent or in process. Since most people don’t wear blood pressure cuffs on their arms all day, and pneumatic cuffs noisily inflate on the bicep or wrist to take measurements, offering this functionality in a comfortable and persistently used wearable could easily save lives. But the challenge for Samsung and others has been to make a wearable’s readings medically accurate within a much smaller, non-pneumatic form factor, and Samsung’s Health Monitor app only gets part of the way there. Users must calibrate the app using a traditional blood pressure cuff every four weeks at a minimum, enabling the watch’s pulse wave analysis system to refine its findings. Each calibration requires taking three separate cuff readings, and it’s likely that users will do so at home, which will require buying a cuff for an additional $30 to $50. In other words, the Galaxy Watch won’t be able to entirely replace the need for a cuff, but it will fill in the gaps between conventional readings. Users will be asked to hit a “measure” button in the app to check their blood pressure, then wait roughly two minutes while sitting still without talking. Though it takes longer, it’s as simple to use as Apple’s ECG app for Apple Watches — a similar breakthrough that only years ago was nearly impossible to imagine on such small wearables. Now the challenge is to actually get Health Monitor into customers’ hands. 
Despite receiving South Korean approval, Samsung is only planning to start offering the app in the third quarter of 2020 and hasn’t yet named other countries where the functionality will be available. The company began testing a similar app, My BP Lab, back in February 2019 with the University of California, San Francisco, enabling some users in Australia, Canada, Germany, Singapore, the U.K., and the U.S. to trial blood pressure monitoring in Galaxy Watch Active and Active2 watches. It’s unclear at this point whether regulators in those countries will be on board for Health Monitor’s launch later this year. "
16,518
2,020
"Apple plans to reopen U.S. stores in April, following Trump guidance | VentureBeat"
"https://venturebeat.com/2020/03/24/apple-plans-to-reopen-u-s-stores-in-april-following-trump-guidance"
"Apple plans to reopen U.S. stores in April, following Trump guidance A look at Apple's new retail store in downtown San Francisco. On May 21, 2016, people began lining up around the block for the grand opening. Dovetailing with comments today from U.S. President Donald Trump that he will attempt to resume the country’s business operations by Easter, Apple has informed employees that it expects to begin reopening retail stores during the first half of April. The plan was disclosed in a memorandum from Apple retail chief Deirdre O’Brien. As an Apple senior VP of both retail and people, O’Brien is responsible for both the company’s stores and its workforce, which has been scattered from Apple’s Cupertino headquarters into home offices following the outbreak of the novel coronavirus disease COVID-19. 
O’Brien told employees that the company will be extending its work-from-home arrangements through at least April 5, and will re-evaluate those arrangements on a weekly basis based on workers’ locations. Apple currently plans to begin the process of reopening brick and mortar locations on a staggered basis, rather than bringing the entire U.S. chain back at once. While the company has closed stores across multiple territories, it has continued to offer online ordering with free home delivery throughout the coronavirus outbreak, and has even launched new products — updated iPad Pro and MacBook Air models — despite regional shelter-in-place requirements for citizens. The company has previously moved to reopen stores and contract manufacturing facilities in China that were affected by an earlier coronavirus outbreak in Wuhan. It also pulled from Chinese distribution a third-party app that was being used in China to share COVID-19 news despite government censorship, as well as a plague-related game. Health officials in the United States have warned that the premature resumption of business as normal will lead to a spike in COVID-19 infections that citizens and hospitals are unprepared to handle, potentially leading to hundreds of thousands if not millions of deaths. Following weeks of lockdown measures, President Trump suggested this week that closing down businesses to fight the outbreak was a cure more dangerous than the disease, which he trivialized as flu-like and unavoidable. After publicly floating the prospect of a reversal, Trump today formally pushed to get the U.S. “opened up” by roughly Easter, which will be observed this year on April 12. 
"
16,519
2,020
"The 5-point 2020 iPad Pro review: One small step for Apple, one potential leap for AR | VentureBeat"
"https://venturebeat.com/2020/03/26/the-5-point-2020-ipad-pro-review-one-small-step-for-apple-one-potential-leap-for-ar"
"Review The 5-point 2020 iPad Pro review: One small step for Apple, one potential leap for AR If you follow tablet computers, you know new iPad Pro releases tend to be important, since Apple always reserves its fastest A-series processors and latest top-of-line features for its premium-priced flagships. While some of these capabilities later trickled down to other models, iPad Pros were Apple’s first tablets with Pencil and Smart Connector support, four speakers, five microphones, TrueDepth cameras, and 11- and 12.9-inch screens. Given some of these milestones, it’s easy to forget that Apple follows a tick-tock pattern with iPad Pro releases, alternating between big updates and minor refreshes. 
Mid-2017’s largely forgotten second-generation iPad Pro looked just like its 2015 predecessor, switching from an A9X chip to an A10X Fusion, gaining a 120Hz ProMotion screen, and boosting camera performance. Then late 2018 models debuted all-new chassis designs, Face ID, USB-C connectors, and supercharged A12X Bionic processors. I was a huge fan of the 2018 iPad Pro and have used one for hours every day since its release. By comparison, Apple’s 2020 iPad Pro is destined to be one of the largely forgotten models. It’s almost exactly like the 2018 version, except for a substantially revised rear camera system and some small internal changes. Disappointingly, the “new” A12Z Bionic processor is only modestly faster than the old A12X. On the other hand, Apple has modestly tweaked this year’s storage capacities and pricing, perhaps to offset an expensive new trackpad-keyboard accessory that’s coming in May. My executive summary: The 2020 iPad Pro still leads the tablet pack, but it doesn’t feel anywhere near as future-proof as its predecessor did in 2018. Here are the five key things you need to know before purchasing or passing. 1. The new rear camera system and augmented reality Apple’s single biggest change to the 2020 iPad Pros is obvious from the outside: a redesigned rear camera system. 2018 iPad Pros had a single large lens akin to the iPhone XR’s — a 12-Megapixel, f/1.8 aperture “1X” camera with no optical zoom abilities — but the 2020 iPad Pro uses the square camera block design found in 2019’s iPhone 11s. Somewhat like the standard iPhone 11, Apple adds a “0.5X,” f/2.4 aperture ultrawide camera to the prior 1X option, but it’s not exactly the same configuration. The iPad’s second camera has only 10 megapixels and promises a 125-degree field of view, compared with the iPhone 11’s 0.5X specs of 12 megapixels and 120 degrees. 
One might assume this means that the iPad Pro’s ultrawide pictures are a little wider than the iPhone 11’s, with a slightly greater ability to “see” the totality of a room or other space with the 0.5X lens, a feature that matters for augmented reality. You might also expect 10MP photos to be a little lower in resolution than 12MP ones. But neither assumption would be correct. The iPhone 11’s ultrawide camera takes noticeably wider and sharper photos than the new iPad Pro’s. Rather than outputting the 10MP images natively, Apple upscales the pictures to 12MP so they’re a little less sharp on close inspection. And the iPad Pro doesn’t smoothly zoom between 0.5X and 1X when you hit the zoom out button in Apple’s Camera app. Another twist is that the iPad Pro’s 1X camera isn’t quite as wide as the iPhone 11’s. The difference here is less noticeable, but you can see it. Images from the devices’ 7MP front cameras are comparatively indistinguishable. For reasons that aren’t yet entirely clear, Apple is focusing a lot of attention on the 2020 iPad Pro’s potential as an augmented reality display device. As was predicted by supply chain analysts some time ago, the new iPad also adds 3D scanning capabilities in the form of a lidar (light detection and ranging) sensor that uses laser pulses to digitally map environments, objects, and people within a five-meter range. For most people, this sensor will have comparatively less value than the iPhone 11 Pro’s third (2X) camera. Instead of being able to optically zoom in whenever you’re shooting images, the lidar capability is there solely for AR apps. Right now, one of only two ways to test the lidar sensor’s abilities is to load Apple’s previously underwhelming Measure app, which used comparative photography to estimate the size and measurements of objects, and note that it now returns results that are more accurate than before. 
They’re not always perfect; in the shot below, Measure guesstimated a 7-inch length at 6.5 inches, though it was closer to spot-on in other readings, and faster besides. The other way is to load prior AR apps and note that they’re faster at acquiring positioning data and adjusting digital elements than before. But developers will need to use ARKit 3.5, literally just released this week, to really take advantage of the new lidar hardware. You can preview how it will work using the Safari browser’s AR feature, triggered by certain pages on Apple’s website. Here, a digital iPad Pro is occluded by a real-world foreground object. The occlusion is unrealistically blurry at this point, but with the prior-generation iPad Pro, the digital model appears to be overlapping everything rather than blending in at all. From where I stand, Apple’s pattern of adding new tentpole hardware features to its devices without adequate software support has become tiring and occasionally problematic. As just one example, 3D Touch in iPhones was supposed to be a big deal but went nowhere with developers and eventually disappeared from new models. But unlike 3D Touch, which was merely one of multiple new “major” iPhone features in its day, the lidar sensor is the signature differentiator between this iPad and its predecessor. It could be viewed as ahead of its time, but it’s also a feature no one was asking for in a tablet. Will lidar see much use from developers? Just like 3D Touch, that remains to be seen. Despite the hardware’s potential, it’s hard to call it a giant leap forward for AR without great software, and Apple largely dropped the ball on delivering anything compelling for the new iPad Pro’s launch. 
It’s reasonable to speculate that Apple rolled out lidar to help a small but growing number of companies bolster their still nascent AR retail efforts — making an online store’s digital sofas look like they’re really in your living room — or help them prototype lidar-based AR games that will eventually debut on iPhones and Apple glasses. Only the passage of time will determine whether these camera tweaks alone justified a new iPad Pro launch. 2. 2018 performance and accessories If there’s any single huge surprise with the 2020 iPad Pro, it’s what hasn’t changed from the 2018 model — overall performance. As disclosed by spec sheets, the new iPad has an A12Z series chip, the first time Apple has used a “Z” as a suffix for one of its custom A-series processors. The name was apparently meant to signal that the A12Z is a slight upgrade from 2018’s A12X, rather than the generational or process technology leap one might have expected from an A13X or A14X, given the iPad Pro’s upgrade history. Unfortunately, the A12Z doesn’t appear to be much different from the A12X in performance — a real bummer, given the 1.5-year spread between releases. On the CPU side, Geekbench 5’s single-core (1105) and multi-core (4692) numbers for the new iPad Pro are virtually identical to scores posted for the 2018 iPad Pro (averaging 1110/4600), while its GPU-focused Metal compute benchmark (9938) is between 7% and 13% better than 2018’s range of 8800-9304, thanks to one additional GPU core — up to eight from the A12X’s seven. (Note: There are small variations between prior 11-inch and 12.9-inch iPad Pro benchmark scores, based on RAM and screen resolution differences.) A comprehensive system-wide benchmark, Antutu, pegs the new iPad Pro at 742712 versus 2018’s 716358, a performance boost of only 3.7% between the A12X and A12Z. 
The higher number reflects improvements to both the GPU (383239 from 363953) and memory (94708 from 88457), with barely any changes in CPU or UX scores. Wireless performance is supposed to be a little better on the new iPad Pro, but I didn’t see it in my tests. Like the iPhone 11 series, Apple has upgraded the new iPad Pro to Wi-Fi 6 (802.11ax) — the latest wireless standard, which is only just beginning to see router support. I have a 1-Gigabit per second connection with Wi-Fi 5 (802.11ac) and saw download speeds ranging from 372-522Mbps on the new iPad, compared with 331-528Mbps on the prior model. Upload speeds hit the service’s sub-40Mbps cap. There are smaller under-the-hood hardware changes, as well. Apple has upgraded the quality of the 2018 model’s five-microphone array, lowering the noise floor and improving clarity. In addition to helping with direct-from-device audio recording, this may enable Siri to “hear” speech better during standard queries and dictation. It’s also been claimed, without documentation, that Apple added its semi-mysterious U1 ultra wideband location chip to the iPad Pro, having debuted it almost pointlessly in the iPhone 11. The chip is eventually supposed to enable more precise geographic location of any device it’s inside of, as well as faster data transfers between proximate devices. But for now, it’s largely a curiosity, supported only by Apple’s AirDrop feature. I saw no evidence in the iPad Pro’s current AirDrop interface that U1 is actually inside the new tablet. Apple continues to specify each model at 10 hours of battery life — a number the tablets will almost always meet or exceed outside of the most GPU-demanding applications, though run times can fall well below that in games, and batteries will begin to show decreased longevity after a year or a year and a half of daily use. 
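The percentage deltas quoted in the two paragraphs above follow directly from the raw scores. Here is a quick sketch using only the numbers stated in this review, not fresh benchmark runs:

```python
# Metal compute: A12Z score vs. the 2018 A12X range quoted above.
metal_a12z = 9938
metal_a12x_low, metal_a12x_high = 8800, 9304
gain_vs_high = (metal_a12z - metal_a12x_high) / metal_a12x_high * 100
gain_vs_low = (metal_a12z - metal_a12x_low) / metal_a12x_low * 100
print(round(gain_vs_high, 1), round(gain_vs_low, 1))  # 6.8 12.9, i.e. "7% to 13%"

# Antutu: overall system-wide delta between the A12Z and A12X scores.
antutu_a12z, antutu_a12x = 742712, 716358
print(round((antutu_a12z - antutu_a12x) / antutu_a12x * 100, 1))  # 3.7
```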
While this year’s 12.9-inch iPad Pro has the same battery capacity as its predecessor, the new 11-inch model has a slightly smaller battery to make space for the larger camera module, which means its run times are likely to be a little lower. Just as was the case in 2018, each iPad Pro comes with a one-meter USB-C to USB-C cable and an 18-watt USB-C wall charger — lower than its supported peak charging speed of 45 watts. So if you’re looking for a second or replacement charger, Aukey’s compact GaN charger Omnia Mix (above) offers laptop-class 65 watts of power and two ports for $50, while a smaller Minima model delivers 18 watts in a much smaller housing for only $22. 3. The upcoming Magic Keyboard and alternatives As I said during my otherwise positive review of the last iPad Pro, Apple’s hardware was legitimately awesome by late 2018 hardware standards but crippled by its inability to serve as a convertible replacement for a laptop — something Microsoft has unquestionably done better with its Surface tablets. This year, Apple is finally addressing the deficit, though there are at least three necessary steps in the process and they’re all at different stages of completion. The first step was enabling the iPad to work with not only external keyboards, as has been an option since 2010, but more specifically hybrid keyboard and trackpad accessories. Apple added preliminary trackpad and mouse support to iPadOS 13.0 last year, then surprised everyone by upgrading the feature to full system-wide support in iPadOS 13.4 this week. As of today, users can use Bluetooth 5.0 or a USB-C cable to connect the new iPad Pros (and many past iPads) to trackpads and mice, as well as future combined keyboard-trackpad solutions. 
There will be at least three types of combined accessories: Apple’s own iPad Pro Magic Keyboards (above), which will arrive in May for $299 (11-inch) or $349 (12.9-inch); Apple-overseen alternatives such as Logitech’s $150 Combo Touch, also arriving in May; and independently designed alternatives such as Brydge’s Pro Plus, a $200/$230 design scheduled for mid-April. Each design is substantially different from the others. Apple’s Magic Keyboard for iPad Pro evolves its vinyl and microfiber Smart Folio with a combined keyboard and trackpad that look like smaller versions of those parts on a MacBook laptop. For whatever reason, Apple omits all of the dedicated function keys found on its Mac keyboards but does include scissor switch key mechanisms and a multi-touch trackpad. Apart from the possibly glass trackpad and plastic keys, the keyboard housing appears to be metal, and the iPad is held above the typing surface with an adjustable cantilevered hinge. A pass-through USB-C port can be used to charge the iPad and Magic Keyboard at the same time. Logitech’s Combo Touch has only been announced for lower-end iPads, but it demonstrates that a functionally similar accessory can be had for half of Apple’s prices. Like Microsoft’s Surface keyboards, this design uses a fabric and plastic housing and relies on a kickstand to prop the iPad up, which may be less than ideal for use on a lap. However, the case provides full iPad protection, and the backlit keyboard includes a row of function keys, unlike Apple’s. Brydge’s Pro Plus (above, stylized Pro+) looks substantially like the bottom casing of a MacBook. Made from space gray machined aluminum, it has a full plastic backlit keyboard and multi-touch trackpad, its own battery (with three-month longevity), a metal hinge, and a protective frame to hold the iPad Pro. Like Logitech’s design and unlike Apple’s, it has a set of function keys beyond the basic five rows of inputs. 
It’s unclear at this point how any of these solutions will compare with a traditional laptop. But Apple’s certainly a lot closer than it was a year and a half ago. 4. Software Viewed from a 20,000-foot perspective, the iPad Pro’s software experience is pretty good today — assuming you can live with iPadOS-limited versions of apps, which are presently in a state of flux. Most are optimized to take up the full screen of an 11-inch or 12.9-inch display, regardless of whether it’s in landscape or portrait mode, which I continue to view as a suboptimal way of using such large screens. But semi-awkward split screen options are also available. For the past decade, the general trend has been toward bringing iPhone apps ever closer to their desktop equivalents, which will continue with future keyboard/trackpad updates and Apple’s macOS Catalyst initiative. But there’s no doubt that even professional-specific iPad apps, such as Adobe’s latest release of Photoshop, aren’t up to par with macOS/Windows versions or likely to reach that level over the next year. Apple has made enough screen size tweaks to iPads over the years that there’s no guarantee any given model will have 100% native software screen support. iPad Pros graduated from Apple’s prior 9.7-inch standard iPad size to 10.5- and 11-inch displays, but as of our 2018 iPad Pro review, there was very little software optimized for the newest screen — most apps displayed with black borders. All four 12.9-inch iPad Pro generations have kept the same screen size, but the 2018 model lost its Home button, requiring small software tweaks. Even today, some apps continue to have black borders on the 12.9-inch Pro screen (shown below). It’s fair to say that most major iPad apps have proper support for both the 11-inch and 12.9-inch displays, with the vast majority also supporting multiple split-screen multitasking features. 
However, there are plenty of legacy iPadOS-ready exceptions that still have small black bars on at least 11-inch iPad Pros, such as NBC’s SNL app, as well as numerous games, like Activision’s Geometry Wars 3 and Namco’s Pac-Man CE DX. These are in addition to iPhone-focused apps that still haven’t been optimized for any iPad screen — including Instagram, DoorDash, and Postmates — and continue to run in portrait mode with large black bars. All of this could change. On the iPad side, Apple says that by April 30, 2020 “all apps that run on iPhone must support all iPhone screens and all apps that run on iPad must support all iPad screens” — assuming developers want to submit or resubmit apps for App Store approval. Additionally, Apple’s Catalyst software is strongly incentivizing third-party developers to create Mac apps that also run on iPads, with device-specific interfaces. But we’ll have to wait and see whether these initiatives push developers to make good use of current-generation iPad Pro displays and capabilities or they’ll just bow out of future app updates. [ Updated : Hours after publication of this review, Apple pushed back the April 30 deadline to June 30, once again giving developers additional time to comply with its “all iPad screens” requirement.] Once the new Magic Keyboard is out, the iPad Pro’s only limitations compared with a MacBook Air will be its software and iPadOS. I’m optimistic that both will continue to improve over time, but despite Apple’s numerous iPad multitasking iterations, I still find the Mac’s windowed approach far superior for work and wish the iPad Pros made better use of their large, beautiful screens. 5. Storage capacities, pricing, and conclusions Although the new iPad Pro was only modestly refreshed for 2020, Apple does deserve some credit for one other tweak this year: value pricing. 
Beyond updating the camera system, Apple has boosted each entry level model’s storage capacity to 128GB, enabling most customers to get a perfectly viable device for $799 (11-inch) or $999 (12.9-inch). Previously, the entry level models were stuck with only 64GB, a marginal capacity for many prospective uses. While Apple still makes plenty of money off upgraded storage, it has shaved $50 off the cost of 256GB ($899/$1099), 512GB ($1099/$1299), and 1TB ($1299/$1499) models and quietly includes 6GB of RAM inside every iPad Pro — not just the 1TB 12.9-inch model, which previously had 6GB while all other models had only 4GB. Although cellular models still carry steep $150 premiums, Apple’s educational discounts knock $50 off any 11-inch model or $100 off any 12.9-inch model. All of this is to say that both new models deliver superior value for the dollar, compared with their predecessors, though the differences aren’t night and day. Moreover, if you factor in the cost of a $299+ iPad Pro Magic Keyboard and/or $129 Apple Pencil, an iPad Pro still isn’t going to be a small purchase. And as I noted earlier, this year’s models don’t feel anywhere near as future-proof as their predecessors did in 2018. The iPad Pro’s lead over pure tablet rivals was huge at that point, but A12Z is only a small step forward. After nearly two decades of reviewing Apple products, the biggest challenge I face in recommending 2020’s iPad Pros is institutional knowledge — users have been burned before with half-step upgrades and seven-month hardware refreshes. Consequently, this isn’t a model I’d recommend to most purchasers of the 2018 iPad Pro, and I’m hesitant to suggest that anyone else buy into it — absent a very concrete need for either 2018-caliber flagship tablet performance or AR-specific features. No one outside Apple knows whether there will be another, fancier iPad Pro release later this year , or whether it will wait until 2021. 
But since the A12 series will be at least two full processor generations behind at that point, I feel certain that the next model will represent as giant a leap forward for the iPad’s performance as the lidar sensor could be for its AR functionality. So if you’re considering the purchase of a 2020 iPad Pro, here’s my advice: Unless you expect to aggressively use the improved camera system, look for a discontinued 2018 model instead. It runs 99.99% of software identically, delivers virtually the same performance, and works with a larger variety of cases that are readily available from retailers during the COVID-19 outbreak. The 2020 iPad Pro is a slightly better piece of hardware, but its improvements won’t matter to most users, and at Apple’s flagship prices you should expect more than just small year-over-year gains. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,520
2,020
"Apple's choice to keep stores closed is a smart call during stupid times | VentureBeat"
"https://venturebeat.com/2020/04/03/apples-choice-to-keep-stores-closed-is-a-smart-call-during-stupid-times"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Opinion Apple’s choice to keep stores closed is a smart call during stupid times Share on Facebook Share on X Share on LinkedIn Apple Store UTC in San Diego, California. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. If it wasn’t obvious before, it’s now crystal clear that we’re living in dangerous times — an era when government officials somehow continue to question the seriousness of the deadly coronavirus , and medical experts require not just safety masks but security details. Less than two weeks ago, U.S. President Donald Trump said he wanted businesses and churches reopened by Easter — April 12 — and a leaked memo from Apple detailed the company’s plan to start reopening some brick and mortar stores within the same general timeframe. 
At that point, an argument could have been made for reopening select stores in unaffected areas — say, cities without bans on public gatherings, or countries that haven’t yet experienced outbreaks — but even so, getting back to business as usual seemed at least somewhat premature. Trump’s claim at the time was that the economic cost of shutting America down to fight COVID-19 was more dangerous than the disease itself, but health officials have universally disputed that, noting that death tolls could climb into the hundreds of thousands or millions. Today, over 55,000 people have died , and more than a million have been diagnosed as infected, numbers that are expected to dramatically rise by April 15. So it’s not a surprise that Apple reversed its prior plans this week, letting employees know (in another leaked memo ) that it’s now planning to keep its retail stores closed until early May. Though that’s a full month away right now, Apple retail SVP Deirdre O’Brien wrote that the company continues “to monitor local conditions for every Apple facility on a daily basis” and will make “reopening decisions on the basis of thorough, thoughtful reviews and the latest guidance from local governments and public health experts.” O’Brien’s latest memo leaves ambiguous whether the company’s May reopening plans are limited to U.S. stores, but they’re certainly covered. Those of us who have lived through prior outbreaks (albeit at a much greater distance than this) know from experience that scientists, epidemiologists, and other medical professionals try their best to manage these situations, but work with incomplete and evolving data that requires course corrections over time. Yesterday’s best practice may be tomorrow’s mistake; reasons for cautious optimism may give way to utter caution, or flip in the opposite direction. 
People who try to make business decisions based on the current state of medical guidance are taking a risk, regardless of whether they opt for boldness or caution. Either way, this clearly isn’t the right time for a bold resumption of the prior status quo. Beyond the changing federal message, uneven measures taken by individual U.S. states and cities have apparently given some people the impression that they needn’t self-isolate or even follow the most basic social distancing guidelines. This has become a problem in places with gathering bans, where some people openly disobey the rules, as well as places without bans, where hoping people will exercise good judgment is apparently too much to ask, and COVID-19 infections continue to grow. In the absence of a uniform, unambiguous federal policy that keeps everything closed until the outbreak subsides, the best we can hope for is that companies put the long-term interests of their employees and customers ahead of their short-term desire to generate revenue. Apple may have been ready to start moving in the wrong direction last month, and it’s still not doing everything right across the board , but keeping at least its U.S. retail locations closed through early May was the right call for public health. Here’s hoping that other retailers — and government officials — are wise enough to follow its example. "
16,521
2,020
"Snap jumps 20% as coronavirus spurs use in Q1 2020 | VentureBeat"
"https://venturebeat.com/2020/04/21/snap-earnings-q1-2020"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Snap jumps 20% as coronavirus spurs use in Q1 2020 Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. ( Reuters ) — Snap on Tuesday beat Wall Street estimates for quarterly revenue and user growth for its Snapchat app as more people seek entertainment while they stay at home during the global coronavirus pandemic. Snapchat, known for its disappearing messages, said it saw a jump in usage in the last week of March compared with the end of January, as people increasingly used the app to communicate with friends and family. Usage also increased for Snapchat’s original content and in-app games. Shares of Snap jumped as much as 20% to $14.80 in trading after the bell. Revenue rose 44% in the quarter, but the company’s net loss decreased only slightly as Snap said it continues to invest heavily in features like augmented reality technology. 
Daily active users (DAU) on Snapchat rose 20% to 229 million in the first quarter ended March 31, compared with a year earlier. The figure stood at 218 million in the fourth quarter. DAU, a widely watched metric by investors and advertisers, beat analysts’ average estimate of 224.68 million, according to Refinitiv data. Revenue, which Snap earns mainly by selling advertising on the app, increased 44% from a year earlier to $462.47 million. The company said higher revenue in the first two months of the quarter offset lower growth in March, when advertisers began to tighten marketing budgets as non-essential stores closed amid the pandemic. Analysts had expected revenue of $428.80 million in the first quarter. The company said it would shift resources on its sales team to serve advertisers in industries like gaming, home entertainment, and consumer packaged goods, which are expected to see higher demand from people stuck at home. “Direct response” advertisers, or those seeking to increase sales through their ads rather than name recognition, were a bright spot, said Evan Spiegel, CEO of Snap, in prepared remarks. For example, Snap could help movie studios pivot to digital or streaming releases, he said. Snap said it would not provide its usual guidance for the next quarter, given the uncertainty caused by the coronavirus. “These high growth rates in the beginning of the quarter reflect our investments in our audience, ad products, and optimization and give us confidence in our ability to grow revenue over the long term,” Spiegel said. Research firm eMarketer downgraded growth estimates for the global advertising industry this year to 7% from 7.4% — a difference of $20 billion — due to the coronavirus, in a report last month. 
Average revenue per user in the first quarter was $2.02, up from $1.68 in the prior year. Snap’s net loss declined slightly to $305.9 million, or 21 cents per share, from $310.4 million, or 23 cents per share, a year earlier. "
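The per-user figure above follows from the totals Snap reported. As an illustrative cross-check using only the numbers quoted in this article (note: Snap's official ARPU calculation uses average DAU over the quarter, so dividing by quarter-end DAU is an approximation that happens to land on the same value here):

```python
# Cross-check Snap's reported Q1 2020 figures (all inputs taken from the article).
revenue_m = 462.47              # Q1 2020 revenue, in millions of dollars
dau_m = 229                     # daily active users at quarter end, in millions

arpu = revenue_m / dau_m        # approximate average revenue per user
print(f"ARPU: ${arpu:.2f}")     # ≈ $2.02, matching the reported figure

# The reported 44% year-over-year growth implies roughly this Q1 2019 revenue:
implied_prior_revenue_m = revenue_m / 1.44
print(f"Implied Q1 2019 revenue: ${implied_prior_revenue_m:.0f}M")  # ≈ $321M
```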
16,522
2,020
"Facebook apps now used monthly by more than 3 billion people | VentureBeat"
"https://venturebeat.com/2020/04/29/facebook-earnings-q1-2020"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook apps now used monthly by more than 3 billion people Share on Facebook Share on X Share on LinkedIn Facebook Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Facebook today reported earnings for its first fiscal quarter of 2020, including revenue of $17.74 billion and net income of $4.9 billion, compared to revenue of $15.1 billion and net income of $2.43 billion in Q1 2019. Year-over-year revenue is up 17%. Facebook earnings beat analyst estimates that predicted Facebook would earn $17.5 billion in revenue and report earnings per share of $1.74. Microsoft earnings also beat analysts’ estimates today, while Alphabet , which like Facebook draws the majority of its revenue from advertising, announced earnings above analysts’ estimates on Tuesday. 
In a call with analysts after the close of markets today, Facebook CEO Mark Zuckerberg said, “For the first time ever, there are now more than 3 billion people actively using Facebook, Instagram, WhatsApp, or Messenger each month.” That’s up from 2.99 billion people as of March 31. Monthly use of the main Facebook app grew 10%, up from 2.38 billion in Q1 2019 to 2.6 billion. Daily active users are also up 11%. Zuckerberg said Facebook expects to take a “meaningful economic hit” throughout the public health emergency, referred to advertising as a volatile industry sensitive to macroeconomic trends, and said he’s worried fallout from COVID-19 will be worse than some people are predicting. Zuckerberg believes the efficacy of shelter-in-place orders will largely determine the economic fallout from COVID-19, and he expressed concern that shelter-in-place orders are being lifted too soon in some areas. “I worry that reopening certain places too quickly before infection rates have been reduced to very minimal levels will almost guarantee future outbreaks and worsen longer-term health and economic outcomes,” he said. Facebook reported that advertising revenue was flat in the first weeks of April, following a significant reduction in advertising and ad pricing at the end of March amid the economic downturn caused by COVID-19. “After the initial steep decrease in advertising revenue in March, we have seen signs of stability reflected in the first three weeks of April, where advertising revenue has been approximately flat compared to the same period a year ago, down from the 17% year-over-year growth in the first quarter of 2020,” the company said in a statement reporting earnings results. 
“The April trends reflect weakness across all of our user geographies as most of our major countries have had some sort of shelter-in-place guidelines in effect.” In other activity brought on by COVID-19, the social media company said it plans to inform users when they’ve been exposed to COVID-19 misinformation. Facing competition from Houseparty and Zoom , whose usage rates have soared amid shelter-in-place orders, WhatsApp recently shared plans to expand video call participation capacity from four to eight people and Facebook launched Messenger Rooms for video calls with up to 50 users. Facebook grew its headcount by 28% to 48,268 employees worldwide in Q1 2020. In response to COVID-19, Facebook expects to slow its headcount growth and construction activity in the year ahead. Due to economic uncertainty, Facebook told analysts executives will not provide specific revenue guidance on Q2 2020. “We expect our business performance will be impacted by issues beyond our control, including the duration and efficacy of shelter-in-place orders, the effectiveness of economic stimuli around the world, and the fluctuation of currencies relative to the U.S. dollar,” Facebook CFO David Wehner said during the call. Facebook listed a $5 billion FTC settlement and $5.7 billion investment in Jio among major Q1 2020 expenses. "
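The headline growth rate in this report can be recomputed from the totals quoted above. A quick illustrative sanity check, using only figures that appear in the article:

```python
# Verify Facebook's reported year-over-year growth from the quoted quarterly totals.
revenue_q1_2020 = 17.74   # in billions of dollars
revenue_q1_2019 = 15.1
net_income_2020 = 4.9
net_income_2019 = 2.43

rev_growth = (revenue_q1_2020 / revenue_q1_2019 - 1) * 100
print(f"Revenue growth: {rev_growth:.0f}%")        # ≈ 17%, as reported

income_growth = (net_income_2020 / net_income_2019 - 1) * 100
print(f"Net income growth: {income_growth:.1f}%")  # roughly doubled year over year
```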
16,523
2,020
"Twitter beats Q1 2020 revenue estimates as coronavirus drives engagement | VentureBeat"
"https://venturebeat.com/2020/04/30/twitter-beats-q1-2020-revenue-estimates-as-coronavirus-drives-engagement"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Twitter beats Q1 2020 revenue estimates as coronavirus drives engagement Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Twitter reported that its Q1 revenue increased 3% to top analysts’ estimates as it continued to struggle outside the U.S. due to product issues and coronavirus quarantines. For the first three months of 2020 , Twitter had $808 million in revenue, up from $786.9 million for the same period one year ago and well ahead of the $776 million projected by analysts. That bump in revenue seemed to come mainly from Twitter’s efforts to more effectively monetize its current user base. Twitter uses a self-invented metric called “Monetizable Daily Active Usage” (mDAU) to track the efficiency of its advertising system. 
Twitter defines mDAU as “users who logged in or were otherwise authenticated and accessed Twitter on any given day through Twitter.com or Twitter applications that are able to show ads.” For Q1, the company said mDAU grew 24% year-over-year to 166 million, up from 134 million the previous year and 152 million in the previous quarter. In a letter to shareholders , the company attributed that increase to “seasonal strength, ongoing product improvements, and global conversation related to the COVID-19 pandemic.” As a result, U.S. revenue was $468 million, up 8% year-over-year, but international revenue was $339 million, down 4% year-over-year. Twitter blamed the latter on problems it continues to have with personalization and data settings for its mobile app, as well as the impact of COVID-19 in the Asia region. Twitter reported a loss of $8 million for the quarter, or $.01 per share, beating the average estimate of a $.02 per share loss. Due to uncertainty surrounding the coronavirus, the company did not offer any guidance on future earnings. "
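The growth rates and the regional split quoted above are internally consistent, which a short illustrative check confirms (all inputs are the figures reported in this article; the $1M gap in the regional sum is just rounding):

```python
# Sanity-check Twitter's Q1 2020 figures as quoted in the article.
revenue_m = 808.0          # Q1 2020 revenue, in millions of dollars
prior_revenue_m = 786.9    # Q1 2019 revenue
mdau_m, prior_mdau_m = 166, 134
us_rev_m, intl_rev_m = 468, 339

print(f"Revenue growth: {(revenue_m / prior_revenue_m - 1) * 100:.1f}%")  # ≈ 2.7%, reported as 3%
print(f"mDAU growth: {(mdau_m / prior_mdau_m - 1) * 100:.0f}%")           # ≈ 24%, as reported
print(f"US + international revenue: ${us_rev_m + intl_rev_m}M")           # $807M vs. $808M total (rounding)
```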
16,524
2,020
"E3 2020 cancellation: Game industry reacts to physical vs. digital marketing | VentureBeat"
"https://venturebeat.com/2020/03/11/e3-2020-cancellation-game-industry-reacts-to-physical-vs-digital-marketing"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages E3 2020 cancellation: Game industry reacts to physical vs. digital marketing Share on Facebook Share on X Share on LinkedIn Bethesda event at E3 2019 Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. The Entertainment Software Association announced today that the coronavirus forced it to cancel the Electronic Entertainment Expo , the big game trade that takes place (well, usually) every June in Los Angeles. More than 65,000 industry professionals and fans go to the event, which is in its 25th year. But the show had a lot of problems with major vendors pulling out, such as Sony , Electronic Arts, Blizzard, and others. We asked different leaders of the game industry for their reaction to this and the earlier cancellation of the Game Developers Conference. We wondered what this means for physical versus digital marketing of games, whether more events will be affected, and if digital events can accomplish the same tasks. Here’s what they said. 
Renee Gittins, executive director of the International Game Developers Association Above: Renee Gittins is executive director of the IGDA. The cancellation of two of the largest events for game developers to connect and market their games will certainly affect marketing strategies for 2020 and beyond. I believe we will see more online announcements planned by larger organizations, while smaller studios will come together to support each other for their own announcements. The IGDA is launching a program to promote the game and fundraising launches of our members to our audience of over 100,000 game developers and fans. We hope that programs like this will help soften the blow from the loss of these events. Events planned for later in the year are likely to wait to see how COVID-19 handles the warmer weather in the northern hemisphere. We have already seen a large push of social and influencer marketing in games, and it will likely grow with events being cancelled. For most consumers, there is little difference between digital and physical events, unless they are one of the 100,000 plus attending events like Gamescom and PAX West in person. However, these changes would more significantly affect press and developers, as in-person connections and demos are more rich, supportive, and valuable. The IGDA has expanded its online communities and support, from the IGDA Discord to regular webinars. While the IGDA has always been a great resource for best practices, white papers, and studies, we also have active online communities through our local chapters, special interest groups, and the entire international organization. These resources and communities can support all developers around the world affected by these event cancellations. 
Michael Condrey, head of Take-Two’s 31st Union studio Above: Michael Condrey is the founder of a new studio for 2K. It isn’t easy to cancel tentpole events like E3 and GDC, but I’m proud that our industry is taking a proactive approach to the COVID-19 outbreak. Our studio priority, with support of Take Two and 2K, is to protect the health of our employees and do what’s right for the broader SF Bay Area community of developers and gaming fans. The canceling of E3 has some implications from a marketing standpoint, as it’s historically been one of the premier events that showcase the great new games of the upcoming holiday season. Industry events represent much more than just promoting new games, however. While we can show new gameplay in a digital way, having the opportunity to spend time with peers and fans in the community is valuable and inspiring. I’ve attended nearly every E3 over the past two decades, and some of my fondest moments include sharing our game’s first public demo timed with E3. Knowledge sharing and the advancement of our craft at GDC holds a special place in my heart. And nothing in our industry matches the excitement of standing at the opening of Gamescom as a sea of fans race through the halls in search of their favorite games. That said, we are in a new day and age, both with social distancing due to COVID-19, and the power of digital and social marketing lifting games to unprecedented heights. I suspect that Gamescom gets impacted this year, but like imagining an NBA game with no fans in the stands, I don’t think the digital events can capture the energy, excitement and anticipation in the same way that live events have captured for our industry. Michael Pachter, managing director at Wedbush Securities Above: Michael Pachter at GamesBeat 2016. It definitely puts more pressure on individual companies to attract attention for their games. 
Without a central focus on the spectacle of E3, game announcements will trickle out, and it will be more difficult to attract a large audience for each. Yes, I think we’ll get a better feel for whether digital and social marketing can replace the spectacle and pizzazz of the E3 conference itself. Gamescom is late August, so they should have the luxury of another eight weeks to see what happens. My best guess is that it will not get canceled, since the virus follows a very clear curve and will likely be past the crisis stage by June. However, if the conference organizers are forced to decide soon, they will probably have to cancel. No, digital events don’t accomplish the same thing. There is something to be said for critical mass from an industry conference. Mihai Pohontu, CEO of game development agency Amber Above: Mihai Pohontu, CEO of Amber. Our business is reliant on contact with publishers, in order to pitch new game concepts and understand their publishing slate needs, in order to assess whether there’s a profile match between our studios and the genres/platforms they’re targeting. While these contacts can happen via remote meetings, our experience has been that nothing is as effective as in-person interactions. Not only is information exchanged in conference meetings, but we can also establish bonds of trust and even form lasting friendships. There’s always the element of serendipity, as you can make new connections at networking events and hear of opportunities that otherwise wouldn’t be available. Amber is working to contain the damage caused by the cancellation of events across the industry, but I expect there will be a significant impact on the game development community at large, particularly on small indie studios who don’t have a biz dev infrastructure or relationships to rely on. 
Mike Vorhaus, CEO of Vorhaus Advisors

Above: Mike Vorhaus, CEO of Vorhaus Advisors

The major sponsors of E3 have been trying to quit, or have quit, E3, and this has been going on for years. I think this year will be another nail in the long-term coffin of E3. I don't think anyone really believes that these conferences are important for consumer sales (E3 used to be when you were booking all your physical distributors in May for the fall); rather, these are important [in-person meeting] opportunities for seeing old friends and building the brand of the company with the industry people. I don't think anyone is going to be very sad about the demise of E3. I imagine Gamescom will be canceled if things don't quiet down. It is all going to be a function of time and the spread of the disease. I am not worried this will hurt my business much. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,525
2020
"Xbox Series X specs revealed: Here's what matters and why | VentureBeat"
"https://venturebeat.com/2020/03/16/xbox-series-x-specs-revealed-heres-what-matters-and-why"
"Xbox Series X specs revealed: Here's what matters and why

Above: Generations of games live inside this box.

Microsoft revealed more details about its upcoming next-gen console this morning, and that includes the full Xbox Series X specs. But while it's always fun to stare blankly at a list of gigabytes and gigahertz, let's dive in and pick apart what matters. Here is the full list of the Xbox Series X specifications direct from Microsoft:

Above: Xbox Series X has got it where it counts, kid.

The most important features in this list are the CPU, the storage, and the GPU. Let's break down each one.

Why the Xbox Series X CPU matters

The Xbox Series X has an AMD CPU that uses the company's Zen 2 architecture. This means it uses an enhanced 7nm process that can cram 15.3 billion transistors onto a chip about the size of the Xbox One X's.
The CPU has 8 cores that all run at a very fast clock speed for a console. Developers will have an option between 3.8 GHz with hyperthreading turned off or 3.66 GHz with hyperthreading enabled. What does that mean? Hyperthreading (AMD calls it SMT) is the tech that enables a CPU to ingest data from two lanes. Think of the core as a mouth and the threads as hands. With hyperthreading, one hand can actively feed the mouth while the other hand is getting the next bite ready. Hyperthreading is not something that most games take advantage of — at least not yet. And that's why developers have an option. Expect most games through the first year or so to stick with 8 cores and 8 threads at 3.8 GHz. A Zen 2 chip with 8 cores at that clock speed is going to chew through most current-gen games with no issue. This is a massive upgrade over Xbox One and PlayStation 4. Hyperthreading will then give the Xbox a certain amount of future-proofing. The rest of the computing world has embraced multiple CPU threads, and it's only a matter of time before games do the same, especially with AMD's high core-count CPUs with hyperthreading selling so well in the desktop market.

Why the Xbox Series X SSD storage matters

Storage is one of the most talked-about aspects of the next-gen consoles. Even before Microsoft started talking about the Xbox Series X, Sony was talking up its next-gen solution to reduce load times. Well, now we know what Microsoft is putting inside the box. It's a 1TB custom NVMe SSD. An SSD is a solid-state drive. That's a bunch of superfast flash storage that doesn't rely on a spinning disc like traditional hard drives. NVMe means Non-Volatile Memory Express, and it's a newer interface that is capable of a higher data throughput than the old SATA interface.
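To put those interface differences in perspective, here is a rough back-of-the-envelope comparison: the time to read a full 40 GB game (an assumed, illustrative size) at typical peak sequential-read rates of roughly 100 MB/s for a console hard drive, 550 MB/s for a SATA 3 SSD, and 3,500 MB/s for an NVMe drive.

```python
# Illustrative load-time math: time to read a 40 GB game at each
# drive's approximate peak sequential-read throughput.
GAME_SIZE_MB = 40 * 1000  # 40 GB, expressed in megabytes

drives = {
    "console HDD": 100,   # MB/s, ballpark for current-gen consoles
    "SATA 3 SSD": 550,    # MB/s, roughly the interface ceiling
    "NVMe SSD": 3500,     # MB/s, high-end drives
}

for name, mb_per_second in drives.items():
    seconds = GAME_SIZE_MB / mb_per_second
    print(f"{name}: {seconds:.0f} seconds")  # 400, 73, and 11 seconds
```

Peak interface numbers overstate real-world load times, which also depend on decompression and CPU work, but the order-of-magnitude gap is the point.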
We have NVMe SSDs on PC, and for now, even that is overkill. No game maxes out the read speeds of even a SATA SSD. For reference, SATA 3 is capable of transferring about 550 MB per second. NVMe, meanwhile, can run as high as 3,500 MB per second. This is another example of Microsoft preparing the Xbox Series X to last for years. Games may not max out SATA today, but that was largely due to the criminally slow console HDDs (maybe 100 MB per second at best) holding everything else back. Putting NVMe in consoles should enable a whole new generation of games that don't just load levels faster but also stream in data faster during play.

What about external storage?

Using a superfast NVMe SSD inside the Xbox poses a problem, though: most external storage solutions are too slow. This will make expanding space to save games expensive and, even worse, confusing. Microsoft has an answer for this, though. It's putting an expansion-card slot on the back that will accept a 1TB NVMe SSD that is identical to the internal storage. This will ensure that games run exactly as they should, whether they're on an internal or external device. But even this solution isn't perfect. The Xbox Series X has full backward compatibility with Xbox One and also robust support for Xbox 360 and the original Xbox. Those games do not need such a futuristic drive. Microsoft has an answer for this as well: you can plug an external HDD into the system's USB 3.2 port. This should give you access to massive amounts of space for older games. And USB 3.2 is fast enough that you could even take advantage of the loading speeds of an external SSD.

Why the Xbox Series X GPU matters

The last major hardware pillar for Xbox Series X is the GPU. It has 12 teraflops of power with 52 compute units running at 1.825 GHz. Again, what does all of that mean? It doesn't matter too much. It's more important that it's running on a custom version of AMD's new RDNA 2 architecture.
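Those GPU numbers do line up, for what it's worth. Peak FP32 throughput is conventionally computed as compute units × shaders per compute unit × 2 operations per clock (a fused multiply-add counts as two) × clock speed. Assuming the standard 64 shaders per RDNA-family compute unit, 52 CUs at 1.825 GHz works out to the quoted 12-teraflop figure:

```python
# Back-of-the-envelope check of the Xbox Series X teraflop figure.
compute_units = 52
shaders_per_cu = 64   # standard for AMD RDNA-family compute units
ops_per_clock = 2     # a fused multiply-add counts as two FP32 ops
clock_hz = 1.825e9    # 1.825 GHz

flops = compute_units * shaders_per_cu * ops_per_clock * clock_hz
print(f"{flops / 1e12:.2f} TFLOPS")  # prints 12.15 TFLOPS
```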
AMD hasn't launched any RDNA 2 GPUs yet. But the first RDNA video cards are powerful and efficient. Their performance-per-watt is one of their most impressive attributes. And that means that, with console optimizations, developers may get some astonishing results out of RDNA 2. Digital Foundry is already reporting those kinds of jumps in performance. A two-week-old port of Gears 5 running on Xbox Series X with minimal optimizations is already performing on par with a PC using an RTX 2080 from Nvidia. That was a $700 GPU. "
16,526
2020
"Why the PlayStation 5's DualSense controller is special | VentureBeat"
"https://venturebeat.com/2020/04/07/why-the-playstation-5s-dualsense-controller-is-special"
"Why the PlayStation 5's DualSense controller is special

Above: Sony PlayStation 5 controller, the DualSense.

The PlayStation 5 is launching this holiday, and Sony is releasing a new controller to go along with the system. The PS5's DualSense is a follow-up to the DualShock gamepads. It features the same basic structure, with symmetrical dual analog sticks and the familiar face buttons. But Sony is building on that foundation with new features such as haptic feedback and adaptive-resistance triggers. The idea is to enable players to feel their games. "We had a great opportunity with PS5 to innovate by offering game creators the ability to explore how they can heighten that feeling of immersion through our new controller," PlayStation platform boss Hideaki Nishino wrote in a blog post.
“This is why we adopted haptic feedback, which adds a variety of powerful sensations you’ll feel when you play, such as the slow grittiness of driving a car through mud. We also incorporated adaptive triggers into the L2 and R2 buttons of DualSense so you can truly feel the tension of your actions, like when drawing a bow to shoot an arrow.” DualSense also includes multiple microphones, which enables you to communicate with friends even if you don’t have a headset. Sony is also dropping the “Share” button and replacing it with “Create.” This introduces new ways for players to create and share content from their gameplay. “Our goal with DualSense is to give gamers the feeling of being transported into the game world as soon as they open the box,” said Nishino. “We want gamers to feel like the controller is an extension of themselves when they’re playing – so much so that they forget that it’s even in their hands.” Sony is sending the DualSense out to developers now. "
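As a thought experiment, the bow-drawing example Nishino gives could map game state to trigger resistance roughly like this. Everything here is hypothetical for illustration: the function name, the 0-255 resistance scale, and the quadratic ramp are inventions, not part of Sony's actual (non-public) PS5 SDK.

```python
def bow_trigger_resistance(draw_fraction: float, max_resistance: int = 255) -> int:
    """Map bow draw progress (0.0 = slack, 1.0 = fully drawn) to a
    hypothetical trigger-resistance level that ramps up with tension."""
    clamped = min(max(draw_fraction, 0.0), 1.0)
    # Quadratic ramp: resistance builds faster near full draw,
    # loosely mimicking how a real bow feels harder to pull at the end.
    return round(max_resistance * clamped ** 2)

print(bow_trigger_resistance(0.0))  # slack: 0
print(bow_trigger_resistance(0.5))  # half drawn: 64
print(bow_trigger_resistance(1.0))  # fully drawn: 255
```

In a real game, a function like this would run every frame, with the result sent to the controller so resistance tracks the animation.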
16,527
2020
"Apple releases iOS 13.5 beta with coronavirus exposure notification (Updated: Google too) | VentureBeat"
"https://venturebeat.com/2020/04/29/apple-releases-ios-13-5-beta-with-coronavirus-exposure-notification"
"Apple releases iOS 13.5 beta with coronavirus exposure notification (Updated: Google too)

Leading into the expected June debut of iOS 14, Apple has been testing what was expected to be the final beta version of iOS 13 — previously numbered “13.4.5.” Today, the company is changing up the beta cycle with the release of iOS 13.5, a revised version that includes the coronavirus exposure notification system it co-developed with Google. Prior to now, features in the new iOS point release were relatively trivial, including an updated music link sharing feature for social networks, and bug fixes. But the addition of the exposure notification tool is significant. It will enable users of both iPhones and Android phones to optionally cooperate in sharing anonymized Bluetooth proximity data, flagging potential COVID-19 contacts.
If a user is diagnosed with the virus, an anonymous notification can be passed on to people who were within Bluetooth proximity of the user's phone, alerting them to the prospect of infection. Apple and Google have worked aggressively to build trust in their contact tracing solution, adding more privacy-focused features into the system last week. But it remains unclear whether a critical mass of users will actually adopt the feature, and whether it will practically deliver positive results. The United States and some EU countries have backed the initiative, accepting the tech companies' approach to user privacy and decentralized data, while the United Kingdom has signaled that it will instead rely on its own app and sharing solution. Scientists and researchers have expressed concern about the potential privacy implications. The COVID-19 Exposure Notifications feature appears as an on-off switch in iOS 13.5, disclosing that the iPhone is using Bluetooth to "securely share your random IDs with nearby devices," while giving users the choice to share their diagnosis with others. Authorized apps will be able to provide notifications if you've been exposed, as well as perform on-device calculations of exposure risk level based on estimated distance, duration, and other factors. It's unknown at this stage how significant battery drain will be when using the feature. A smaller feature is designed to speed up iOS 13.5's authentication of users wearing face masks to protect against the virus. iPhones with Face ID will now quickly display the passcode entry screen when a mask is detected, rather than delaying for a full scan attempt.
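Conceptually, the on-device exposure-risk calculation combines those signals into a single score. The sketch below is a toy illustration under invented weights and thresholds; the real Apple/Google framework exposes its own configurable risk parameters, and this is not its actual formula.

```python
def exposure_risk(attenuation_db: float, duration_minutes: float) -> float:
    """Toy 0-100 exposure risk score: closer contacts (lower Bluetooth
    signal attenuation) held for longer score higher."""
    # Rough proximity weight from attenuation: a strong signal suggests
    # close contact, a weak one suggests distance. Thresholds are invented.
    if attenuation_db < 50:
        proximity = 1.0
    elif attenuation_db < 70:
        proximity = 0.5
    else:
        proximity = 0.1
    # Cap the duration contribution at 30 minutes of contact.
    duration_weight = min(duration_minutes, 30) / 30
    return round(100 * proximity * duration_weight, 1)

print(exposure_risk(45, 30))  # close, sustained contact: 100.0
print(exposure_risk(75, 10))  # distant, brief contact: 3.3
```

An app would compare scores like these against a health authority's threshold before deciding whether to notify the user.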
Described as “beta 3” because it carries forward the release cycle from iOS 13.4.5, iOS 13.5 is available now for download from Apple's developer site, as well as an over-the-air update for iPhones running the prior beta OS. The prior iOS 13.4 beta cycle took 1.5 months from start to finish, and the final version of iOS 13.5 is expected to be released in mid-May, expanding the number of users with access to the new features. Apple's software development tool Xcode 11.5 has also been released with developer API-level support for the feature, and although its iPadOS, macOS, tvOS, and watchOS operating systems aren't expected to add contact tracing, they've all been bumped to the .5 suffix as well. Update at 10:30 a.m. Pacific: A private Google Play services update beta is also being released today, enabling some Android Studio developers to begin testing the exposure notification feature. Android devices will not require a full system update; the new feature will be enabled on phones running Android 6.0 or later. Both companies plan to release sample code for developers on Friday. "
16,528
2019
"Amazon reports $59.7 billion in Q1 2019 revenue: AWS up 41%, subscriptions up 40%, and 'other' up 34% | VentureBeat"
"https://venturebeat.com/2019/04/25/amazon-earnings-q1-2019"
"Amazon reports $59.7 billion in Q1 2019 revenue: AWS up 41%, subscriptions up 40%, and ‘other’ up 34%

Above: The Amazon logo is seen at the Young Entrepreneurs fair in Paris.

Amazon today reported earnings for its first fiscal quarter of 2019, including revenue of $59.7 billion, net income of $3.6 billion, and earnings per share of $7.09 (compared to revenue of $51.0 billion, net income of $1.6 billion, and earnings per share of $3.27 in Q1 2018). North American sales were up 17% to $35.8 billion, while international sales grew 9% to $16.2 billion. Analysts had expected Amazon to earn $59.65 billion in revenue and report earnings per share of $4.72. The retail giant thus only slightly beat on revenue but blew past expectations on earnings per share. The company's stock was flat in regular trading but up about 1% in after-hours trading.
Amazon gave second-quarter revenue guidance in the range of $59.5 billion to $63.5 billion, compared to a consensus of $60.88 billion from analysts.

AWS, subscriptions, and ‘other’

Amazon Web Services (AWS) continued to be the star of the show, growing 41% in sales to $7.7 billion. AWS thus accounted for about 13% of Amazon's total revenue for the quarter. AWS is the cloud computing market leader, ahead of Google Cloud and Microsoft Azure. Subscription services were up 40% to $4.3 billion. That mainly constitutes Amazon Prime, which the company is expanding to offer deals at places like Whole Foods. Amazon's "other" category, which mostly covers the company's advertising business, jumped 34% to $2.7 billion in revenue. The company knows plenty about what its customers want to buy, or even don't want to buy, and it's increasingly leveraging that for its advertising business.

Fire TV, Amazon Future Engineer, and Alexa

Amazon is notorious for not sharing numbers unless they're very good. The company revealed in today's report that Fire TV now has more than 30 million active users. Amazon CEO Jeff Bezos decided not to focus on the money in this earnings report, but rather to talk up the company's latest education initiatives. "The son of a working single mom, Leo Jean Baptiste grew up speaking Haitian Creole in a New Jersey home without internet access. He's also one of our inaugural group of 100 high school seniors to receive a $40,000 Amazon Future Engineer scholarship and Amazon internship," Bezos said in a statement. "He rose to the top of his class and is set to study computer science at college this fall, with the dream of getting a job in machine learning.
Our passion for invention led us to create Amazon Future Engineer so we could help young people like Leo from underrepresented groups and underserved communities across the country. In addition to 100 college scholarships a year, we're funding computer science classes in 1,000 high schools and counting, and inspiring younger kids to explore coding through coding camps and after-school programs. We love this program, and we can't wait to see what Leo and his fellow future engineers invent." Last quarter, Bezos talked about Alexa, but only in passing. Amazon is nowhere near ready to break out the voice assistant in its earnings reports. Alexa is simply contributing to overall Amazon retail sales, which is of course the company's main revenue driver. "
16,529
2020
"Driving customer engagement with personalized financial experiences (VB Live) | VentureBeat"
"https://venturebeat.com/2020/03/06/driving-customer-engagement-with-personalized-financial-experiences-vb-live"
"VB Live

Driving customer engagement with personalized financial experiences (VB Live)

Presented by Envestnet | Yodlee

Personalized insights and superior customer experiences are now the norm in the financial services industry. How do you compete? For key findings from the most recent Forrester report on customer advocacy in the financial services sector, register now for this VB Live event! Register here for free.

Money and financial health are deeply emotional and personal issues for most consumers, and for most banks, credit card issuers, insurers, and wealth management firms, tapping into that vein has been a challenge. Instead, firms have relied on branch and agent distribution for their competitive advantage — but that avenue is closing fast. Not only are nimble fintech companies delivering online and digital customer experiences at a fraction of the cost of legacy IT, these tools also help them connect with customers on a new level.
And that's the new compete-or-die advantage: delivering customer experiences that demonstrate an obsession with connecting to customers' financial needs in a deeply personalized way in order to earn their loyalty. Forrester Research calls it "customer advocacy": when a customer feels that their financial services company is committed to acting in their best interest, rather than in the interest of its own bottom line. Customers reward the company in return: Forrester found customers who rate their financial services firms high on customer advocacy are more likely to consider those firms for future purchases — while firms whose customers rate them lowest for customer advocacy have the fewest customers who would buy from them again. Loyal customers invest more, borrow more, and buy more products from that firm. And that loyalty is what drives stronger retention, increased future purchase intent, greater share of wallet, and improved brand advocacy. Customer advocacy starts with providing superior customer experiences that are effective and frictionless and that leave customers feeling good. But there's a big difference between just delivering good customer experiences and actually putting customer needs first. Financial services firms need to find ways to demonstrate that they truly value their customers' business, understand their financial goals, and are always working to help them improve their financial well-being. Forrester found that there are four cornerstones of customer advocacy. Firms need to: (1) keep things simple, (2) act benevolently, (3) be transparent, and (4) build trust by continually helping customers improve their financial well-being. Simplicity means delivering frictionless experiences every time and ensuring that every interaction is as easy as possible.
In other words: resolving problems in just one call, explaining products simply in accessible language, making it easy to open an account, keeping claims processing straightforward, and so on. Transparency isn't just a nice-to-have but a need-to-have, with regulators increasingly demanding accountability from the financial services sector. Consumer demand isn't far behind. It's here that digital customer tools especially shine. They offer customers ways to explore a company's policies as well as control their relationship with the company, from how often a firm contacts them to how it uses their personal information and more. Benevolence encompasses both customers' financial health and a financial firm's connection to the community and the environment. That could mean policies that take customer circumstances into account and offer support, as well as a firm's commitment to doing good in its community, whether by donation or volunteering. Trust has been hard to earn in the financial services sector ever since the global financial crisis. Forrester found that even now many Americans still do not feel that financial firms are actually their allies. Companies can earn trust by demonstrating their diligence in protecting customer data, for instance, or by offering support in emergencies, protecting their credit, and so on. It all boils down to this: financial services firms need to prove that they always have their customers' best interests at heart, in both their digital and human touch points. To get there, firms need to dial down the hard sell and shift their focus toward connecting with customers and helping them accumulate wealth. That takes working closely with customer experience colleagues who know how to design experiences that leave customers feeling good and have them coming back again and again.
To learn more about the key drivers of customer advocacy at leading financial service firms, why improving customer perceptions is not only good for your brand but good for your bottom line, and a look at the key findings from Forrester's customer advocacy survey, register now for this VB Live event. Don't miss out! Register here for free.

You'll learn:

How advocating for customers drives a sustainable competitive advantage
Why the wealth management sector scored highest for customer advocacy in a recent Forrester survey
How customer advocacy is linked to increased future purchase intent
How to improve customer engagement through hyper-personalized digital banking experiences

Speakers:

Alyson Clarke, Principal Analyst, Forrester
Dustin Walsey, Co-founder and CEO, Buckle
Jim Del Favero, Chief Product Officer, Personal Capital
Katy Gibson, VP, Application Products, Envestnet | Yodlee
Evan Schuman, Moderator, VentureBeat "
16,530
2016
"Vida Health raises $18 million to connect people with chronic diseases to health coaches | VentureBeat"
"https://venturebeat.com/2016/12/08/vida-health-raises-18-million-to-connect-chronic-disease-sufferers-with-health-coaches"
"Vida Health raises $18 million to connect people with chronic diseases to health coaches

Above: Vida Health

Vida Health, an online health platform that connects people suffering from chronic diseases with health coaches, has raised $18 million in a Series B round led by Canvas Ventures, with participation from Nokia Growth Partners (NGP) and Aspect Ventures. Founded in 2014, Vida Health targets those with conditions such as diabetes, depression, anxiety, and high blood pressure, and pairs them with their own mentor — this could be a personal trainer, nutritionist, nurse, therapist, or other support person. It's also open to general "wellness coaching" for those seeking to live healthier lives, and Vida says its platform has serviced more than 30,000 individuals since its inception two years ago.
Replete with native mobile apps, the Vida platform can also integrate with many popular health and fitness trackers, including Fitbit, MyFitnessPal, and Apple Health, giving coaches direct access to their users’ vital stats. This lets them formulate tailored plans and also allows them to adjust the program based on data and to make recommendations in real time over text, voice, or video chat. Above: Vida Health The company had previously raised a $5 million Series A round in 2014, and this latest cash influx will allow it to grow its platform “to serve more consumers who are managing chronic conditions or simply want to improve their health,” according to a statement. “Since launching two years ago, we have seen men and women use the Vida platform to dramatically improve their health and even reverse chronic conditions, like diabetes and hypertension,” said Vida cofounder and CEO Stephanie Tilenius. “For many, it’s transformed their lives. The potential to prevent, manage, and reverse chronic health conditions that affect half of all Americans, and reduce our nation’s healthcare burden, is huge.” A quick look at the numbers reveals the size of the market Vida is looking to tap. Almost 40 percent of U.S. adults are considered obese, and more than 70 percent overweight. Almost 10 percent of the population (29.1 million people) are thought to have diabetes, and more than a third (86 million people) are in the pre-diabetes stage. According to the Centers for Disease Control and Prevention (CDC), as much as $2.5 trillion is spent on chronic disease care in the U.S. each year. Though Vida is open to individual consumers, it also touts its services to companies looking to improve the health of their employees — thus cutting the cost of healthcare. Clients include Steelcase, eBay, and FICO. The enterprise represents a growing market for digital health platforms. 
Back in August, Accolade closed a $93.6 million funding round to help employers cut healthcare costs, with a platform that guides employees through the “costly, complex, and fragmented” world of healthcare. And such demand isn’t limited to health improvement — last month, BetterUp raised $12.9 million for a platform that connects employees with professional development coaches, while Everwise nabbed $16 million to connect protégés with mentors. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,531
2,020
"Tyto Care raises $50 million to grow its telehealth examination and diagnostic platform | VentureBeat"
"https://venturebeat.com/2020/04/07/tyto-care-raises-50-million-to-grow-its-telehealth-examination-platform-globally"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Tyto Care raises $50 million to grow its telehealth examination and diagnostic platform Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. With social distancing, self isolation, and remote working emerging as the standard behavioral protocol to combat the COVID-19 outbreak, this is creating new opportunities for fledgling industries and technologies to step up and show their true worth. And that includes telehealth. While the ability to deliver health care services online and through other virtual communication conduits is not new, COVID-19 ushers in a new sense of urgency to enable medical examinations from afar. 
And it’s against that backdrop that Tyto Care, a New York-headquartered telehealth startup with Israeli roots, today announced that it had raised $50 million in a round of funding co-led by Insight Partners, Olive Tree Ventures, and Qualcomm Ventures, with participation from Orbimed, Echo Health, Qure, Teuza, and others. Remote care Founded in 2012, Tyto Care has developed a handheld exam kit that anyone can buy. The accompanying mobile app and clinician dashboard connect the patient with health care professionals to receive a diagnosis, treatment plan, and prescriptions — all without leaving their home. The Tyto Care kit is available for $300 from Best Buy in the U.S., or from Tyto Care’s private health system partners in several countries, including the U.K., Canada, Spain, France, Switzerland, Russia, Israel, Thailand, and Uruguay. Above: Tyto Care: A mother examines her child on behalf of a remote doctor. With Tyto, a health care provider can examine the patient’s lungs, heart, throat, ears, skin, abdomen, and body temperature. This can help diagnose all manner of conditions, including ear infections, allergies, coughs and respiratory issues, cold and flu, fevers, and more. Above: A physician using Tyto Care The kit bundle includes everything required for these assessments, including an otoscope for ear inspections and a stethoscope for the heart, lungs, and abdomen. Above: Tyto Care device, add-ons, and phone. In addition to the money it makes from selling hardware, Tyto Care’s business is built around a software-as-a-service (SaaS) model, with health care systems paying a fee to use the company’s platform. The COVID effect Countless telehealth startups have raised sizable investments in recent times — in the past month alone, K Health and 98point6 raised nearly $100 million between them to bring various AI-powered smarts to the telehealth realm. 
The global telehealth market is currently pegged as a $25 billion industry, according to a recent MarketsAndMarkets report , and this figure had been expected to double within five years. However, with the coronavirus forcing industries across the spectrum to adapt, the true number could end up being much higher. Tyto Care said that it had already seen a threefold growth in sales last year, and in the wake of the COVID-19 outbreak hospitals and other health-focused organizations have increased their use of Tyto Care’s telehealth solution to examine quarantined patients remotely. “Telehealth is heeding the call of the COVID-19 pandemic, and we are proud that our unique solution is aiding health systems and consumers around the world in the fight against the virus,” said Tyto Care cofounder and CEO Dedi Gilad. “This new funding comes at a pivotal moment in the evolution of telehealth and will enable us to continue to transform the global healthcare industry with the best virtual care solutions.” Prior to now, Tyto Care had raised around $57 million from big-name investors including Walgreens, and with another $50 million in the bank the company said it’s well-financed to expand its commercialization in the U.S., Europe, and Asia — as well as bring new AI and machine learning-based home diagnostics services to market. "
16,532
2,020
"Medici raises $24 million as telehealth demand surges | VentureBeat"
"https://venturebeat.com/2020/04/27/medici-raises-24-million-as-telehealth-demand-surges"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Medici raises $24 million as telehealth demand surges Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. The COVID-19 crisis has been a boon for many businesses, particularly those specializing in remote communication and collaboration. Microsoft Teams and Zoom’s video conferencing tools have surged in demand , while web-based visual design platform Lucid locked down $52 million in financing in just two weeks as investors scramble to back companies capable of thriving during the pandemic. Telemedicine is another field that has seen unprecedented demand as patients seek to access medical care without breaching social-distancing protocols. In March, virtual health consultations grew by 50% , according to Frost and Sullivan research, with general online medical visits on course to hit 200 million this year — up significantly from the 36 million anticipated before COVID-19 struck. 
Against this backdrop, telehealth startup Medici today announced that it has raised $24 million in a series B round of funding. Founded out of Austin, Texas in 2016, Medici is essentially a WhatsApp for remote medical care and serves as an all-in-one platform for messaging, voice calls, and video chat. Patients can search for and connect with doctors, veterinarians, and other health care providers through the mobile app , while doctors can choose to receive payments for their telehealth visits directly through the app. Above: Medici mobile app Additionally, Medici allows doctors to issue e-prescriptions during or after a consultation and offers in-app translations to circumvent language barriers. It also comes with $1 million in liability insurance. Medici had previously raised nearly $50 million, and with another $24 million in the bank it is looking to “accelerate its growth” and capitalize on a “huge uptick” in patient signups and consultations. As with other telehealth platforms, demand for Medici has gone through the roof in the wake of the COVID-19 crisis — the company said between February and April it experienced a nearly 1,500% surge in new patient registrations. While many general consultations can be carried out remotely, certain types of examinations are difficult to perform through a virtual platform, particularly when specialist equipment is needed. This is why a number of companies have developed special home-use kits that send vital data to health care professionals. Earlier this month, New York-based Tyto Care closed a $50 million round of funding for a telehealth examination and diagnostic platform that includes an otoscope for ear inspections and a stethoscope for heart, lungs, and abdomen. 
"
16,533
2,020
"Google open-sources faster, more efficient TensorFlow runtime | VentureBeat"
"https://venturebeat.com/2020/04/29/google-open-sources-faster-more-efficient-tensorflow-runtime"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google open-sources faster, more efficient TensorFlow runtime Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Google today made available TensorFlow RunTime (TFRT) , a new runtime for its TensorFlow machine learning framework that provides a unified, extensible infrastructure layer with high performance across a range of hardware. Its release in open source on GitHub follows a preview earlier this year during a session at the 2020 TensorFlow Dev Summit, where TFRT was shown to speed up core loops in a key benchmarking test. TFRT is intended to address the needs of data scientists looking for faster model iteration time and better error reporting, Google says, as well as app developers looking for improved performance while training and serving models in production. Tangibly, TFRT could reduce the time it takes to develop, validate, and deploy an enterprise-scale model, which surveys suggest can range from weeks to months (or years). 
And it might beat back Facebook’s encroaching PyTorch framework , which continues to see rapid uptake among companies like OpenAI, Preferred Networks, and Uber. TFRT executes kernels — math functions — on targeted hardware devices. During this development phase, TFRT invokes a set of kernels that call into the underlying hardware, focusing on low-level efficiency. Compared with TensorFlow’s existing runtime, which was built for graph execution (executing a graph of operations, constants, and variables) and training workloads, TFRT is optimized for inference and eager execution, where operations are executed as called from a Python script. TFRT leverages common abstractions across eager and graph executions; to achieve even better performance, its graph executor supports the concurrent execution of operations and asynchronous API calls. Google says that in a performance test, TFRT improved the inference time of a trained ResNet-50 model (a popular algorithm for image recognition) by 28% on a graphics card compared with TensorFlow’s current runtime. “These early results are strong validation for TFRT, and we expect it to provide a big boost to performance,” wrote TFRT product manager Eric Johnson and TFRT tech lead Mingsheng Hong in a blog post. “A high-performance low-level runtime is a key to enable the trends of today and empower the innovations of tomorrow … TFRT will benefit a broad range of users.” Contributions to the TFRT GitHub repository are currently limited, and TFRT isn’t yet available in the stable build of TensorFlow. But Google says that it’ll soon arrive through an opt-in flag before eventually replacing the existing runtime. 
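The eager-versus-graph distinction described above can be sketched in a few lines of plain Python. This is a conceptual toy, not TFRT or TensorFlow code: eager mode runs each operation the moment it is called, while graph mode first records operations and executes them later, which is what gives a runtime like TFRT room to schedule independent kernels concurrently.

```python
# Toy illustration of eager vs. graph ("deferred") execution.
# Conceptual sketch only -- TFRT's real kernels live in C++/MLIR.

def eager_add(a, b):
    # Eager: the operation runs the moment it is called.
    return a + b

class Graph:
    """Records operations now, executes them on run() -- the deferred
    style that lets a runtime reorder or parallelize independent ops."""
    def __init__(self):
        self.ops = []  # recorded (fn, input-names, output-name) tuples

    def add_op(self, fn, inputs, output):
        self.ops.append((fn, inputs, output))

    def run(self, feeds):
        values = dict(feeds)
        for fn, inputs, output in self.ops:
            values[output] = fn(*(values[i] for i in inputs))
        return values

# Eager: the result is available immediately.
print(eager_add(2, 3))                      # 5

# Graph: build first, execute later.
g = Graph()
g.add_op(lambda x, y: x * y, ("a", "b"), "prod")
g.add_op(lambda p, c: p + c, ("prod", "c"), "out")
result = g.run({"a": 2, "b": 3, "c": 4})
print(result["out"])                        # 10
```

In the real system the deferred form is what TFRT's graph executor exploits for concurrent and asynchronous kernel dispatch; the toy above only shows the ordering difference between the two styles.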
"
16,534
2,017
"Twitter explains why it won't disclose the number of daily users. Kinda. | VentureBeat"
"https://venturebeat.com/2017/08/02/twitter-explains-why-it-wont-disclose-the-number-of-daily-users-kinda"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Twitter explains why it won’t disclose the number of daily users. Kinda. Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Following Twitter’s earnings release last week , I called its refusal to release the number of daily users “indefensible.” But in a series of letters with the U.S. Securities and Exchange Commission, Twitter tried to defend it anyway. For the moment, the SEC says it’s satisfied with those explanations — though Twitter is still not going to give us the number. The debate was disclosed yesterday when Twitter made public a series of “Comment Letters” it had exchanged with the SEC. These are typically letters initiated by the SEC when it has questions regarding a company’s securities filings. They remain secret until the inquiry is complete, at which point the company makes the full exchange public. 
So let’s start there: On July 3 , the SEC said it was satisfied with Twitter’s responses and indicated that it was ending the correspondence for now. The SEC sent its first letter back on March 31 , when it requested more information about several issues. These included how Twitter calculated its tax liabilities, its non-GAAP earnings, and why it stopped talking about metrics like Tweet impressions. The SEC also wanted a clearer definition of the metric “Daily Active Users/Usage.” Was this an average of daily users over the last month of the quarter or over the entire quarter? In a response on April 28 , Twitter said DAUs were calculated over the entire quarter. The company also emphasized what it has said publicly: It’s now more focused on growing DAUs than monthly active users: For the fourth quarter of 2016, the Company’s product strategy was focused on making existing users (i.e., MAUs) more active by improving the relevancy of Tweets sent to them in notifications and in their timelines. The Company believes that these initiatives had a greater effect on more frequent users because, in part, the platform learns more about these users through their interactions with the platform and can do a better job through product improvements providing more relevant content to such users to bring them back on another day. In a follow-up letter on May 10 , the SEC seemed to grudgingly acquiesce. But then it wondered (as many of us have) if DAUs are Twitter’s most important metric, shouldn’t it disclose the actual number? To wit: Please explain why you present the actual number of MAUs but only the percentage change in DAUs, and tell us how the percentage change information provides an investor with a clear understanding of user engagement on your platform. Also, in the proposed disclosures provided in response to prior comment 2, you state that prior to the third quarter of fiscal 2016, you discussed the ratio of MAUs to DAUs. 
Please tell us whether management believes this ratio is a key metric used to measure user engagement on your platform or tell us what key metrics you use to measure such engagement. Twitter replied on June 2 that giving the actual number of DAUs might be too confusing for investors: The absolute number of DAUs is less important than the percentage change in DAUs because the key factor is whether engagement is increasing or decreasing on a relative basis…The Company also focuses investors on percentage change rather than absolute DAU numbers to avoid confusion when comparing the Company with other companies that disclose information regarding DAUs, but use different definitions of DAUs that may include different segments of their respective user bases. Twitter cited Facebook as an example. It said Facebook counted DAUs as people who use the Facebook application or the Messenger application. Twitter said such a comparison would not be fair to Twitter: For example, Facebook discloses total DAU, but includes in that number users who only log into its separate messaging mobile application without breaking out how many DAUs come just from that application. Accordingly, investors would not be able to compare performance between the Company and this other company. And that was fine and dandy for the SEC, at least for now, though I wouldn’t expect analysts and shareholders, who are perhaps a tad more intelligent than Twitter thinks, to stop asking for the DAU number. Bloomberg, which first spotted the Comment Letters , did note that Twitter made at least one concession on this issue. It said after its last earnings call, in response to analyst questions, that the number of DAUs is less than 50 percent of its 328 million MAUs. 
That’s worth noting because in its letters Twitter had sold the SEC on the idea that the percentage didn’t matter: Prior to disclosing percentage changes in DAU, the Company discussed the MAU-to-DAU ratio on a few prior occasions to provide investors with a perspective about whether engagement (DAU) was tracking user growth (MAU). Those discussions are no longer relevant given the Company discloses percentage changes in DAU, so investors are able to see how DAU growth is tracking MAU growth. Additionally, the Company notes that disclosure of the MAU-DAU ratio would indirectly disclose the absolute number of DAU, which is a disclosure that the Company believes will shift focus away from the percentage change in DAU, which is currently a more relevant measure of user engagement trends. Finally, on a fun note, the SEC asked Twitter whether it expected to ever become a taxpayer. In other words: Is the company ever going to generate enough profits to have to pay into the U.S. federal treasury? Twitter’s response: not anytime soon. Even if the company does ever generate real, actual profits, it has lost so much money over the years that it will likely be able to use those losses to offset any profits in future years: The Company respectfully advises the Staff that the determination that the Company has not been, and is not expected to be, a taxpayer for the foreseeable future in certain jurisdictions, such as the U.S., was based upon the Company’s GAAP losses in these jurisdictions, which is an approach consistent with the Company’s previous disclosures. 
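Twitter’s claim that the MAU-to-DAU ratio would “indirectly disclose the absolute number of DAU” is simple arithmetic, which a few lines make concrete. The 45% ratio below is hypothetical; Twitter only conceded that DAU is less than 50 percent of its 328 million MAUs.

```python
# If Twitter disclosed a DAU-to-MAU ratio, absolute DAU would follow
# directly from the published MAU figure. The 45% ratio here is a
# hypothetical value for illustration, not a disclosed number.

mau = 328_000_000            # Q1 2017 MAU, as reported
hypothetical_ratio = 0.45    # assumed DAU/MAU ratio (not disclosed)

implied_dau = int(mau * hypothetical_ratio)
print(f"Implied DAU: {implied_dau:,}")   # Implied DAU: 147,600,000

# Twitter's actual concession only bounds the number from above:
upper_bound = mau * 0.50
print(f"DAU < {upper_bound:,.0f}")       # DAU < 164,000,000
```

So any disclosed ratio pins down the absolute figure exactly, which is why the company resisted publishing one.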
"
16,535
2,019
"Twitter is slowly perfecting the art of inventing nonsensical performance metrics | VentureBeat"
"https://venturebeat.com/2019/04/23/twitter-is-slowly-perfecting-the-art-of-inventing-nonsensical-performance-metrics"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Twitter is slowly perfecting the art of inventing nonsensical performance metrics Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. In its latest effort to innovate around its business model of opacity, Twitter officially today unveiled its latest self-invented metric, dubbed “monetizable daily active usage.” Naturally, this being Twitter, the company decided to retire the more standard Monthy Active User metric during a quarter in which it announced MAUs were actually good, MAUs rose to 330 million in the first quarter of 2019, up from 321 million the previous quarter. Still, that’s slightly down from one year ago, an indication that Twitter continues to stagnate more than 13 years after its founding. Not to mention remaining a cesspool of bigotry, racism, and misogyny that it can’t seem to fix. But, first things first. Let’s find some happier metrics! The company announced last quarter that it would focus on mDAU. 
And in doing so, it did not even have the courtesy to explain why the “m” would be lower case. Twitter defines mDAU as “users who logged in or were otherwise authenticated and accessed Twitter on any given day through Twitter.com or Twitter applications that are able to show ads.” Somewhat amazingly, Twitter also notes: “Additionally, our calculation of mDAU is not based on any standardized industry methodology and is not necessarily calculated in the same manner or comparable to similarly titled measures presented by other companies.” Yup. That’s ’cause Twitter just made it up. A search of SEC filings over the last four years reveals that exactly one company uses this metric : Twitter. Another fun disclosure about Twitter’s so-called metrics: While these numbers are based on what we believe to be reasonable estimates for the applicable period of measurement, there are inherent challenges in measuring usage and user engagement across our large user base around the world. Furthermore, our metrics may be impacted by our information quality efforts, which are our overall efforts to reduce malicious activity on the service, inclusive of spam, malicious automation, and fake accounts. Translation: These metrics represent our best efforts though we can’t be sure they’re right but we’re trying our best but this stuff is hard so ¯\_(ツ)_/¯. So, Grand Canyon-sized caveats aside, how did this magical mystery metric mDAU pan out? According to Twitter’s earnings report, mDAUs grew from 120 million to 126 million over the course of 2018, and grew this past quarter to 134 million. Eureka! Growth! One could note that this also indicates that a sizable portion of those 330 million MAUs — some 196 million — don’t fall into this more elite, refined mDAU bucket. These are effectively worthless bottom-feeding parasites from a business model-monetization-advertising-value chain point of view. 
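The 196 million figure is just the gap between the two reported numbers, worth a quick sanity check:

```python
# Deriving the count of MAUs that fall outside Twitter's new
# "monetizable" daily-active-user bucket, from the reported figures.

mau = 330_000_000     # Q1 2019 monthly active users (final MAU disclosure)
mdau = 134_000_000    # Q1 2019 monetizable daily active users

non_mdau = mau - mdau
print(f"{non_mdau:,}")   # 196,000,000 MAUs outside the mDAU bucket
```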
Rather than getting more people onto Twitter’s platform, which appears to be futile, Twitter’s main mission now is to stick some ads in front of these freeloading riffraff. Still, growth is growth is growth, disproving that assertion by the Talking Heads that “Facts don’t do what I want them to.” This is 2019 and facts do in fact do what publicly traded companies want them to. Of course, Twitter has made earnings metric obfuscation something of a specialty. Back in 2017, the company began emphasizing “daily average usage,” in which it insisted “usage” was synonymous with “users.” It highlighted at the time double-digit percentage growth in this DAU category. But just for yuks, it said it would not release the absolute numbers of DAUs, just percentage increases or decreases. I pointed out then that this was absurd: Twitter’s unwillingness to share the underlying number is indefensible. If DAUs are the metric the company wants to be judged by, then the data would seem to be material to investors. Period. Instead, analysts have made a cottage industry out of trying to calculate just how many DAUs Twitter actually has. So, at least mDAUs come with underlying numbers. Progress of a sort, I suppose. Assuming they’re more or less accurate. But ditching MAUs makes it hard to measure longer-term progress and just serves to point out again and again that Twitter has pretty much maxed out the number of people on planet Earth who want to be on Twitter. Which, perhaps, is the real point. From now on, mDAUs will be the metric Twitter wants Wall Street to watch. And if Apple can stop reporting unit sales of its most important product — well, who is to say what any company is required to tell investors? Let the Olympic Games of Inventing Metrics begin, and may the company with the most lawyers and accountants win. 
"
16,536
2,020
"Twitter hits $1 billion in quarterly revenue, ‘monetizable daily users’ jump 21% to 152 million | VentureBeat"
"https://venturebeat.com/2020/02/06/twitter-hits-1-billion-in-quarterly-revenue-monetizable-daily-users-jump-21-to-152-million"
"Twitter hits $1 billion in quarterly revenue, ‘monetizable daily users’ jump 21% to 152 million Twitter's profile page on Twitter.com Twitter has hit $1 billion in quarterly revenues for the first time in its history as it touts improved advertising income and an increase in active users. The social networking giant announced its Q4 2019 financial and user metrics this morning, revealing revenues of $1.01 billion — a year-on-year (YoY) increase of 11% on the record $909 million reported last year, and a quarter-on-quarter (QoQ) rise of around 22% compared to Q3 2019. The bulk of Twitter’s income comes from advertising, which constituted $885 million, or 88% of its revenue, during the last quarter. This was an annual increase of 12%, and Twitter noted that its domestic U.S. market underpinned much of the uptick. 
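The headline growth rates above can be sanity-checked against the absolute figures in the report. A minimal sketch (the figures come from the article; the `yoy_growth` helper is just for illustration):

```python
def yoy_growth(current, prior):
    """Year-on-year growth, as a rounded percentage."""
    return round((current / prior - 1) * 100)

# Figures from the report: $1,010M vs. $909M revenue; 152M vs. 126M mDAU.
print(yoy_growth(1010, 909))  # → 11, matching the reported 11% revenue growth
print(yoy_growth(152, 126))   # → 21, matching the reported 21% mDAU growth
```

Both rounded percentages line up with the numbers Twitter reported.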
“We reached a new milestone in Q4, with quarterly revenue in excess of $1 billion, reflecting steady progress on revenue product and solid performance across most major geographies, with particular strength in U.S. advertising,” said Twitter CFO Ned Segal. Twitter beat analyst estimates on both revenue and users — hitting 152 million monetizable daily active users (mDAU) during the last quarter, up 21% on the 126 million reported last year and 5% on the previous quarter. Twitter defines an mDAU as anyone who logs in through Twitter.com or its mobile apps and is able to view advertisements, so this excludes TweetDeck or third-party clients. Domestically, Twitter increased its mDAU by 4 million from last year and 1 million from the previous quarter, with its U.S. figure now sitting at 31 million. Improvements According to Twitter, user growth was driven chiefly by product improvements it made last year relating to “increased relevance” of content in users’ Home timeline and better notifications. “Our work to increase relevance and ease of use delivered 21% mDAU growth in Q4, with more than half of the 26 million mDAU added in 2019 directly driven by product improvements,” Twitter cofounder and CEO Jack Dorsey said. “Entering 2020, we are building on our momentum — learning faster, prioritizing better, shipping more, and hiring remarkable talent, all of which put us in a stronger position as we address the challenges and opportunities ahead.” One of these challenges relates to the pervasiveness of toxicity on the Twitter platform. During the previous quarter’s earnings, the company said it now proactively removes half of all abusive tweets, meaning it uses automated tools to scrub abuse without waiting for manual reports. And in a U.S. election year, Twitter — along with the other big technology platforms — is also keen to highlight its efforts to thwart fake news and misinformation. 
“In Q4, we increased our efforts to protect the integrity of election-related conversations and proactively limit the visibility of unhealthy content on Twitter, resulting in a 27% decline in bystander reports on tweets that violate our terms of service,” the company said. "
16,537
2,019
"How video games can address climate change | VentureBeat"
"https://venturebeat.com/2019/05/12/how-video-games-can-address-climate-change"
"How video games can address climate change 65% of Americans play video games. So what better way to educate people about an issue like climate change than with a video game? That was the thinking behind two recent games — Eco and Jupiter & Mars — that hit the market. We had the good fortune of having the leaders of the studios that made those games speak at our GamesBeat Summit 2019 event last month. Their talks illustrate the differences in the approaches they took in making games to raise awareness about climate change. Sam Kennedy, CEO of the environmentally focused game developer Tigertron , spoke at the close of our conference with Amy Jo Kim, founder of Game Thinking and co-creator of games like The Sims. “A lot of us in the game industry — probably some of you — work for a while. Then we want to do something with meaning, something that impacts the world, and is more than just entertainment,” said Kim, while introducing Kennedy. “Sam went all in on that.” Kennedy said, “As a studio, we wanted to do games that could ideally inspire people about the Earth and the environment. The topic that was front and center in our minds was climate change. We are seeing the effects today, but if you project out in the years to come, it is really frightening.” Jupiter & Mars debuted on Earth Day 2019, April 22, on the PlayStation 4 and PSVR. Proceeds of the game will help charities. And John Krajewski, CEO of Strange Loop Games , spoke with Eric Gradman, chief technology officer and mad man at Two Bit Circus, about the creation of Eco. I really like how these sessions, which were inspired by game developer Dave Taylor, turned out. Aspyr enabled Kennedy to fly out from New York and give the talk. Above: John Krajewski (left), CEO of Strange Loop Games, and Eric Gradman, CTO and mad man at Two Bit Circus. Krajewski’s team at Strange Loop Games created Eco, a multiplayer simulation game that requires players to work together to create a society that can shoot down a meteor before it destroys their planet. Our PC gaming editor Jeff Grubb got obsessed with the educational game in the past year. Strange Loop Games worked with a couple of universities and had a grant from the U.S. Department of Education. The company also raised money via Kickstarter. The game debuted on Windows, Mac, and Linux in early 2018. The results are fascinating. Check out the full talk below. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. 
Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. "
16,538
2,020
"The DeanBeat: Will cloud gaming make climate change worse? | VentureBeat"
"https://venturebeat.com/2020/01/31/the-deanbeat-will-cloud-gaming-make-climate-change-worse"
"The DeanBeat: Will cloud gaming make climate change worse? Sometimes I like to think about long-term problems. Since the Australian wildfires have put the issue of climate change on my mind, I’m wondering whether our visions for great things in technology and gaming, such as cloud gaming or the Metaverse, will make the problem worse. At the moment, cloud gaming is probably not producing enough pollution to be on anybody’s list of the top contributors to climate change. But if the dreams of cloud gaming companies come true, then we’ll need to start worrying about their contribution to the problem. And it’s great to have games that raise awareness about the issue of climate change — The Climate Trail, Jupiter & Mars, and Eco — but then there is the small irony that if those games become really popular, then they will also contribute to climate change. 
Data centers melting the polar ice caps? Above: The Climate Trail deals with climate change. In the big picture, data centers and the tech gadgetry that connects to them are a concern. SoftBank and Arm predict that the internet of things — everyday devices that are smart and connected — could reach more than a trillion units by 2035. To date, Arm’s customers have shipped 150 billion processors. Those things are going to connect to data centers, over 5G networks or other internet connections. A Bloomberg story said that power efficiency gains in data centers have bottomed out, according to the Uptime Institute. “Even with efficiency gains, data center electricity demand is voracious and growing; that growth has a number of implications for the power grid and for power utilities,” the Bloomberg story said. Add to that the problem of the slowing of Moore’s Law, the 1965 prediction by Intel chairman emeritus Gordon Moore that the number of transistors on a chip would double every couple of years. That was a proxy for continuous electronics progress, meaning that as long as Moore’s Law continued, computing would become more efficient, doing more computations for the same cost or less power. Intel, one of the world’s biggest chip makers, has struggled with its transition to the next doubling. That has raised concern that the law that held up for the past 55 years is coming to its end as we approach the limits of the laws of physics. That slowdown comes at a bad time as the demand for data centers rises. Where cloud gaming makes this worse Above: Are data centers going to melt the polar caps? Some things about cloud gaming concern me. Some wags have figured out that streaming high-end games consumes something like 100MB a second. 
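That estimate implies some stark arithmetic. A back-of-envelope sketch, assuming the column's rough figures (100MB per second of streaming, a 100-hour playthrough, and a common 1TB monthly ISP cap):

```python
# Back-of-envelope math for the streaming claim above.
# All inputs are the column's rough estimates, not measurements.
MB_PER_SECOND = 100   # the per-second streaming figure cited above
HOURS_OF_PLAY = 100   # e.g. a full Red Dead Redemption 2 playthrough
CAP_TB = 1.0          # a common ISP monthly data cap

total_tb = MB_PER_SECOND * 3600 * HOURS_OF_PLAY / 1_000_000
print(f"Total streamed: {total_tb:.0f} TB against a {CAP_TB:.0f} TB cap")
# → Total streamed: 36 TB against a 1 TB cap
```

At those numbers, a single playthrough would exceed the cap many times over; even at more realistic bitrates an order of magnitude lower, it would still brush up against it.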
Based on that, if you play a game like Red Dead Redemption 2 (with more than 100 hours of gameplay), then that one game played across a month could exceed your monthly data cap with cable providers. That’s a lot of computing usage, and it will put pressure on data centers. If we’re going to live in a series of connected virtual worlds, which are like being inside video games like in Ready Player One or The Matrix , I can’t imagine that’s going to make this problem of electricity consumption any better. What are the answers? Above: The African rainforest, turning brown from climate change I have been asking chip industry executives about this question. Arm’s CEO Simon Segars told me that “Moore’s law aside, I think that the deployment of IoT and the AI processing of data can do a lot to help with some of the issues of climate change. We’ve been a believer, and publicly outspoken, on the role that technology can play in addressing all of the U.N. global goals, whether it’s to do with climate change, quality of water, education, whatever. If you look across the global goals, technology can help with all of them. There’s a lot of inefficiency. This thermostat cranking out freezing cold air when we’re all not enjoying it — a wall switch would help here. But there’s really a lot of inefficiency in the world. There’s low-hanging fruit here that doesn’t take much to address. We have the technologies we need for that now.” Most of the responses I have received from executives such as Arm’s Drew Henry , AMD’s Mark Papermaster , and others fall into this kind of category. Sure, the internet of things will consume material and energy resources, but it will make us more efficient. But I don’t see anyone really making nuanced arguments about how to architect the data centers and the internet of things in the right way. A study in Nature estimated that data centers consume about 0.3% of the world’s electricity but are on their way to becoming a far bigger slice of the pie. 
It also raised concern about the rise of cryptocurrencies such as Bitcoin, since the structure of blockchain — seeking to verify a fact through the coordination of a lot of computers — is a real energy hog. Nvidia acknowledges that cloud gaming data centers have an impact on the environment, and it is thinking about ways to make its efforts carbon neutral. But it doesn’t have a solution yet. Meanwhile, Microsoft and Google have committed to making their data centers carbon neutral or negative. That means employing alternative energy sources such as solar and other clever ideas. Offsetting the demand for electricity Above: One of Google’s data centers. Norman Liang, a game industry observer, noted that cloud gaming could end up saving money if it means that players will buy less hardware in the future. If you can stream great games with high-end graphics and play them on any piece of hardware, even old machines, then you don’t have to buy as many consoles or PCs. Old game consoles and PCs are big sources of electronic waste, as owners have no secondary or long-term use for outdated technologies. In that way, companies that spend more on capabilities in the cloud could offset spending by consumers at the edge. There is also a lot of “dark fiber” throughout the world, or fiber optic cables that have been under-utilized when it comes to transporting data. By tapping this resource, the world’s networks could become more efficient without incurring more expenses or power consumption. “I’m an optimist by nature. What opportunity do we have, when we have more data available to us than we’ve ever had, and more computation than we’ve ever had?” said Papermaster, the chief technology officer at Advanced Micro Devices. “You look at what was announced by the Department of Energy with the Frontier supercomputer, where we’re partnering with HPE Cray to deliver 1.5 exaflops of computing in 2021. 
If you marry that with, as you said, this massive amount of data we’ve never had before, what analytics can we run that we never have before that can improve society? What can we do, based on that kind of analysis, that can inform us on how to contain climate change?” Let’s hope that the optimists are right, but I’d sure love to see real studies on this issue and how to change our direction if that’s necessary. "
16,539
2,018
"Earthrise: the story behind our planet's most famous photo | Photography | The Guardian"
"https://www.theguardian.com/artanddesign/2018/dec/22/behold-blue-plant-photograph-earthrise"
"Earthrise: the story behind our planet's most famous photo. When Bill Anders took this photograph from the Apollo spacecraft on Christmas Eve in 1968, our relationship with the world changed forever. The Earth from Apollo 8 as it rounded the dark side of the moon. Photograph: Nasa/AFP/Getty Images. Sat 22 Dec 2018 05.02 EST. This photograph is now half a century old. It was taken by the astronaut Bill Anders on Christmas Eve 1968 as the Apollo 8 spacecraft rounded the dark side of the moon for a fourth time. When Earth came up over the horizon, Anders scrabbled for his Hasselblad camera and started clicking. In that pre-digital age, five days passed. The astronauts returned to Earth; the film was retrieved and developed. In its new year edition, Life magazine printed the photo on a double-page spread alongside a poem by US poet laureate James Dickey: “And behold / The blue planet steeped in its dream / Of reality, its calculated vision shaking with the only love.” This was not quite the first look at our world from space. Lunar probes had sent back crudely scanned images of a crescent Earth shrouded in cloud. 
A satellite had even taken a colour photo that, in the autumn of 1968, the radical entrepreneur Stewart Brand put on the cover of his first Whole Earth Catalog. The next edition, in spring 1969, used Anders’s photograph, by now known as Earthrise. Brand’s catalogue was a DIY manual for the Californian counterculture, a crowdsourced compendium of life hacks about backpacking, home weaving, tantra art and goat husbandry. Its one-world, eco ethos was a weird offshoot of the macho tech of the space age – those hunks of aluminium run on rocket fuel and cold war rivalries. But then looking back at Earth was itself a weird offshoot of the moon missions. It just happened that Apollo 8’s aim – to locate the best lunar landing sites – needed high-res photography, which was also good for taking pictures of planets a quarter of a million miles away. Brand was one of a group of environmental activists who felt that an image of “Spaceship Earth” would bring us all together in watchfulness and care for our planetary craft and its precious payload. “Earthrise”, though, did more than just corroborate this gathering mood. With its incontestable beauty, a beauty that had needed no eye of a beholder for billions of years, it caught the human heart by surprise. The crew of the Apollo 8 spacecraft (Bill Anders, 3rd left) following the lunar orbital mission, 27 December 1968. The Earth pictured in Earthrise looks unlike traditional cartographic globes that mark out land and sea along lines of latitude and longitude. Slightly more than half the planet is illuminated. The line dividing night and day severs Africa. Earth looks as if it is floating alone in the eternal night of space, each part awaiting its share of the life-giving light of the sun. Apart from a small brown patch of equatorial Africa, the planet is blue and white. At first glance it seems to have the sheen of blue-veined marble. But look closer and that spherical perfection softens a little. 
Earth divulges its true state as oceanic and atmospheric, warmly welcoming and achingly vulnerable. The blue is light scattered by the sea and sky. The white is the gaseous veneer that coats our planet and lets us live. You can just make out the “beautiful blue halo”, with its gentle shift from tender blue to purple black, that Yuri Gagarin noticed on his first low-orbit flight. That halo is our fragile biosphere, and is all that stands between us and the suffocating void. Fifty years ago the biochemist James Lovelock was working for Nasa and developing a theory of Earth as a single, self-regulating superorganism. It had reached a homeostatic state conducive to life, he believed, against odds as long as “surviving unscathed a drive blindfold through rush-hour traffic”. Lovelock’s theory, later named Gaia after the Greek goddess of the Earth, makes a special kind of sense when you gaze at our planet from the moon. The poet Archibald MacLeish caught it best for a piece in the New York Times on Christmas Day 1968 – oddly, when only the Apollo 8 crew had seen Earthrise. All MacLeish had to go on was the live television broadcast by the astronauts on 23 December, when Jim Lovell had pointed his camera out of the cabin window and captured a coarse monochrome image of Earth from 175,000 miles away. “To see the Earth as it truly is, small and blue and beautiful in that eternal silence where it floats,” MacLeish wrote, “is to see ourselves as riders on the Earth together, brothers on that bright loveliness in the eternal cold.” MacLeish had jumped the gun by a few days; other writers had anticipated this sight centuries earlier. In his fantastical narrative The Man in the Moon (1638), the author and divine Francis Godwin has his hero fly to the moon in a machine harnessed to a flock of wild swans. 
As he ascends into space, the world’s landmasses diminish, not just in size but in significance – Africa is “like unto a pear that had a morsel bitten out upon one side of him” – while the ocean seems “like a great shining brightness” and the whole Earth “masks itself with a kind of brightness like another moon”. Godwin grasped that from space Earth would look terraqueous, and far more aqua than terra. In Jules Verne’s Around the Moon (1870), three adventurers, fired to the moon by a giant space gun, look back at Earth and see “its delicate crescent suspended in the deep blackness of the sky” and “its light, rendered blueish by the thickness of its atmosphere”. When the narrator of HG Wells’s The First Men in the Moon (1901) sees Earth from a spaceship, its three-dimensionality starkly reveals itself in the dance of sunlight and shadow on its surface. “The land below us was twilight and vague,” he writes, “but westward the vast grey stretches of the Atlantic shone like molten silver.” These early sci-fi writers guessed correctly that, from space, a world made of water and air would shimmer as if it were alive. They saw Earthrise before the astronauts did. Fanciful space flights, full of backward glances at our world, occur in very ancient texts. The lesson is always the same: don’t imagine you matter. The vastness of the world, which renders your own life so little, is itself a speck of dust adrift in the vast cathedral of space. Cicero ’s Republic imagines the dead Roman general Scipio Africanus , hero of the second Punic war, appearing to his grandson, Scipio Aemilianus, in a dream. The younger Scipio finds himself in the heavens gazing down on Carthage, which he will later destroy in the third Punic war. The world has shrunk to the point where he is “scornful of our empire, which covers only a single point upon its surface”. The Apollo 8 crew (from left): James Lovell, Bill Anders and Frank Borman. 
“O how ridiculous are the boundaries of mortals!” wrote Seneca , imagining Earth from the same cosmic perspective. A fable by the second-century satirist Lucian of Samosata tells of a “sky-man” who flies to the moon. Looking back at Earth, he sees “how little there was for our friends the rich to be proud of … The widest-acred of them all had not a single Epicurean atom under cultivation.” The ancients knew how minuscule our preoccupations would seem from afar. The whole earthers of the 1960s thought that photographic proof might help us to see this obvious truth in new ways. Border disputes, imperial wars, the enslavement of other peoples, the spoiling of the planet for selfish and ephemeral gain: all would be exposed as what the cosmologist Carl Sagan called “the squabbles of mites on a plum”. And yet, to a mite, that plum is everything. The power of Earthrise as an image derived partly from its being a picture of the plum taken by a mite – one of the first three to escape the plum’s gravity. With the brown-grey desert of the moon as contrast, the Earth shone as if alert to its singularity. As Marina Benjamin writes in her book Rocket Dreams (2003), this made it “difficult not to imbue the planet with exemplariness”. Earthrise was edited for anthropocentric ends. The Apollo 8 crew saw Earth to the side of the moon, not above it, and to them it seemed tiny. Anders compared it to being “in a darkened room with only one visible object, a small blue-green sphere about the size of a Christmas-tree ornament”. Nasa flipped the photo so that Earth seemed to be rising above the moon’s horizon, and then cropped it to make Earth look bigger and more focal. Earthrise was an Earth selfie, taken by earthlings. The Indus river basin in Pakistan, photographed from a satellite. Since 1968 the earthlings have had many visual reminders of their cosmic irrelevance. 
Their home planet has been demoted to what Sagan called the “pale blue dot”, the tiny fleck of Earth, no bigger than a pixel, in a photo of the solar system taken in 1990 by the Voyager 1 probe from 3.7bn miles away. But we prefer to ignore the evidence. In our daily lives we are all flat earthers. We carry on thinking that the sun rises in the east and sets in the west. Nor is our stewardship of the planet any less culpably forgetful. We still squabble over the juiciest bits of the plum; we still fight and die over its pinprick empires; the rich and powerful seem more ludicrously puffed-up than ever. Did the 22-year-old Donald Trump, when he first saw Earthrise, feel awed and humbled by human smallness? I’m guessing no. Our species is just as venal and measly-minded as it was half a century ago. Perhaps more so, now that the technology that excites us is not the rocket blasting us into deep space, but the computer coding that blasts us through a wormhole into cyberspace: a human-built universe composed of our own self-admiring obsessions, exhilarating and exhausting enough to fill a lifetime. Still, Earthrise must have changed something. What’s seen can’t be unseen. Perhaps it flits across your mind when you open Google Earth and see that familiar virtual globe gently spinning. Just before you click and drag to fly yourself to some portion of the world no bigger than an allotment, you may briefly take in, with a little stomach lurch, that this slowly revolving sphere holds close to 8 billion people, living out lives as small and short and yet meaningful as the universe is infinite and eternal and yet meaningless. On that gigantic, glistening marble, mottled with blue-white swirls, lies everyone. This article was amended on 22 December 2018 to correct a distance and on 26 December to correct a picture caption. 
"
16540
2020
"Amazon deploys thermal cameras at U.S. warehouses to scan for fevers faster | VentureBeat"
"https://venturebeat.com/2020/04/18/amazon-deploys-thermal-cameras-at-u-s-warehouses-to-scan-for-fevers-faster"
"Amazon deploys thermal cameras at U.S. warehouses to scan for fevers faster The logo of Amazon is seen at the company logistics center in Boves, France, September 18, 2019. ( Reuters ) — Amazon has started to use thermal cameras at its warehouses to speed up screening for feverish workers who could be infected with the coronavirus, employees told Reuters. The cameras in effect measure how much heat people emit relative to their surroundings. This requires less time and contact than forehead thermometers, earlier adopted by Amazon, the workers said. Cases of the virus have been reported among staff at more than 50 of Amazon’s U.S. warehouses. This has prompted some workers to worry about their safety and walk off the job. Unions and elected officials have called on Amazon to close buildings down. 
The use of cameras, previously unreported, shows how America’s second-biggest corporate employer is exploring methods to contain the virus’ spread without shuttering warehouses essential to its operation. U.S. states have given Amazon the green light to deliver goods with nearly all the country under stay-at-home orders. In France , Amazon has closed six of its fulfillment centers temporarily — one of the biggest fallouts yet from a dispute with workers over the risks of coronavirus contagion. Other companies that have explored using the thermal camera technology include Tyson Foods and Intel. The camera systems, which garnered widespread use at airports in Asia after the SARS epidemic in 2003, can cost between $5,000 and $20,000. This week and last, Amazon set up the hardware for the thermal cameras in at least six warehouses outside Los Angeles and Seattle, where the company is based, according to employees and posts on social media. Thermal cameras will also replace thermometers at worker entrances to many of Amazon’s Whole Foods stores , according to a recent staff note seen by Reuters and previously reported by Business Insider. The company performs a second, forehead thermometer check on anyone flagged by the cameras to determine an exact temperature, one of the workers said. An international standard requires the extra check, though one camera system maker said the infrared scan is more accurate than a thermometer. How widely Amazon will deploy the technology at a time when camera makers are grappling with a surge in demand could not be determined. A Whole Foods representative said cameras ordered weeks ago were starting to arrive for use. Amazon confirmed that some warehouses have implemented the systems to streamline checks. 
The company is taking temperatures “to support the health and safety of our employees, who continue to provide a critical service in our communities,” it said in a statement. Early this month, Amazon said it would offer face masks and start checking hundreds of thousands of people for fevers daily at all its U.S. and European warehouses. Associates walk up to a Plexiglas screen, and an employee on the other side scans their forehead by pointing a thermometer through a small hole. That process has not been without challenges. A worker performing temperature checks in Houston said his proximity to associates made him uncomfortable , in spite of the screen separating them. “I didn’t sign up for this,” he said. A Los Angeles-area employee, who also spoke on condition of anonymity, said a line once formed outside her warehouse, and employees could not receive masks until after they had entered the building and had their temperatures taken. The thermal camera system is faster, two other workers said, with no stopping in front of a screen necessary. The cameras connect to a computer so an employee at a distance can view the results, one said. Amazon did not disclose whose gadgets it was using. One of the employees, at a warehouse outside Seattle, said the technology came from Infrared Cameras in Texas. Reached by phone, ICI CEO Gary Strahan said he would not confirm or deny his company’s working with Amazon. Other purveyors include U.K.-based Thermoteknix and U.S.-based FLIR Systems. 
"
16541
2020
"Apple and Google build more privacy and flexibility into Bluetooth contact tracing tech | VentureBeat"
"https://venturebeat.com/2020/04/24/apple-and-google-build-more-privacy-and-flexibility-into-bluetooth-contact-tracing-tech"
"Apple and Google build more privacy and flexibility into Bluetooth contact tracing tech Apps using Apple and Google’s contact tracing solution, due out soon, will be able to encrypt metadata associated with smartphones and randomly generate keys for identifying phones to make it harder to identify an individual or track people. A seed version of the app will be released Tuesday for iOS users, supporting Apple devices released within the last 4 years, in order for public health authorities to begin testing it. Apple and Google spokespeople, as well as privacy advocates, insist that high levels of user trust are needed for a voluntary app approach to succeed. Those spokespeople said the updates are the result of conversations with key stakeholders around the world. 
As part of other updates introduced today, contact events will now be recorded in five-minute intervals for a maximum of 30 minutes. The update will also let developers making apps for public health officials customize signal strength and duration thresholds to define which contacts are dangerous and who receives a smartphone alert after a person tests positive for COVID-19. Agreement about what level of signal strength or duration of contact can result in COVID-19 transmission and thus merits a contact event is not uniform, and allowing local control of what constitutes a credible threat lets local officials decide what’s best, not Apple or Google. Standards for what constitutes a dangerous exposure could change based on new knowledge or local conditions. For example, officials may be able to increase sensitivity in areas where outbreaks are particularly bad. On April 10, Apple and Google, creators of the most popular mobile operating systems in the world, announced an unprecedented partnership to create a common API for Android and iOS smartphone apps to exchange and record contact events. Rather than use cell phone tower triangulation or GPS, the method relies on Bluetooth Low Energy signals and performs computation and stores contact events locally on smartphones in what’s called a decentralized process. Many privacy advocates and the makers of privacy-conscious apps in the U.S. and EU agree that decentralization, or tracking without the use of a centralized repository, does away with the prospect that a single hack can reveal sensitive user information for all participants. 
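The recording scheme described here (five-minute intervals, capped at 30 minutes) and the tunable signal-strength and duration thresholds can be sketched in a few lines. This is an illustrative sketch only, not the actual Apple-Google API; the function names and the default threshold values below are invented for the example.

```python
INTERVAL_MINUTES = 5       # contact events are recorded in 5-minute intervals
MAX_RECORDED_MINUTES = 30  # and capped at 30 minutes, per the announcement

def recorded_duration(actual_minutes):
    """Round a contact down to whole 5-minute intervals, capped at 30."""
    intervals = int(actual_minutes // INTERVAL_MINUTES)
    return min(intervals * INTERVAL_MINUTES, MAX_RECORDED_MINUTES)

def is_risky_contact(duration_minutes, avg_rssi_dbm,
                     min_duration=10, rssi_cutoff=-60.0):
    """Decide whether a contact merits an alert.

    Health authorities tune both thresholds; these defaults are made up.
    A higher (less negative) RSSI means a stronger signal, i.e. a closer
    contact, so risk requires both enough time and enough signal strength.
    """
    return duration_minutes >= min_duration and avg_rssi_dbm >= rssi_cutoff
```

Apps built for public health authorities would supply their own thresholds in place of the invented defaults above, which is the local control the article describes.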
Members of the TCN protocol, a group of about a dozen organizations working with cryptology, AI, and Bluetooth experts that believe in decentralized contact tracing, have collaborated with Apple and Google since the tech giants agreed to work together about a month ago. The update will include more detailed Received Signal Strength Indicator (RSSI) information. Apps using the Apple-Google API will also be able to record the number of days since an exposure event has occurred. Also part of the update today: Apple and Google will now refer to the solution as an exposure notification app instead of a contact tracing app. In phase II of the partnership, due out in the coming months, smartphones using Android or iOS will automatically track contact events. Users will have to update their phones and then opt in to participate. Using this approach, people won’t have to download the app. But if a person tests positive, a doctor can suggest they download the app to alert people who might have been exposed. European Union officials earlier this week asserted that contact tracing apps must surpass a 60% installation threshold in order to be effective. Apple and Google spokespeople last week declined to predict what level of app download is effective. Dissenters to the idea identify high levels of COVID-19 spread by asymptomatic users as a major reason why the Apple-Google solution and others like it could fail, while supporters say such tracking is essential to reopening society and economic activity. Privacy and public health officials say ample testing availability is essential if contact tracing apps, working alongside human contact tracers, are to play an effective role in fighting COVID-19. 
"
16542
2020
"Machine learning could check if you’re social distancing properly at work | MIT Technology Review"
"https://www.technologyreview.com/2020/04/17/1000092/ai-machine-learning-watches-social-distancing-at-work"
"Machine learning could check if you’re social distancing properly at work By Karen Hao Andrew Ng’s startup Landing AI has created a new workplace monitoring tool that issues an alert when anyone is less than the desired distance from a colleague. Six feet apart: On Thursday, the startup released a blog post with a demo video showing off its new social distancing detector. On the left is a feed of people walking around on the street. On the right, a bird’s-eye diagram represents each one as a dot and turns them bright red when they move too close to someone else. The company says the tool is meant to be used in work settings like factory floors and was developed in response to the request of its customers (which include Foxconn ). It also says the tool can easily be integrated into existing security camera systems, but that it is still exploring how to notify people when they break social distancing. One possible method is an alarm that sounds when workers pass too close to one another. A report could also be generated overnight to help managers rearrange the workspace, the company says. Under the hood: The detector must first be calibrated to map any security footage against real-world dimensions. A trained neural network then picks out the people in the video, and another algorithm computes the distances between them. Workplace surveillance: The concept is not new. Earlier this month, Reuters reported that Amazon is also using similar software to monitor the distances between its warehouse staff. The tool also joins a growing suite of technologies that companies are increasingly using to surveil their workers. There are now myriad cheap off-the-shelf AI systems that firms can buy to watch every employee in a store, or listen to every customer service representative on a call. 
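The "under the hood" pipeline ends in a simple geometric step: once people are detected and their positions mapped to real-world coordinates, flagging violations is pairwise distance checking. A minimal sketch of that final step, assuming calibration already yields ground-plane (x, y) positions in feet (this is not Landing AI's code):

```python
import math

def too_close_pairs(positions, min_feet=6.0):
    """Return index pairs of people standing closer than min_feet apart.

    positions: list of (x, y) ground-plane coordinates in feet, one per
    detected person, as produced by the calibration + detection stages.
    """
    flagged = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            if math.hypot(dx, dy) < min_feet:  # Euclidean distance
                flagged.append((i, j))
    return flagged
```

The flagged pairs are what a bird's-eye display would color red, and a nightly report could simply aggregate them over time.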
Like Landing AI’s detector, these systems flag up warnings in real time when behaviors deviate from a certain standard. The coronavirus pandemic has only accelerated this trend. Dicey territory: In its blog post, Landing AI emphasizes that the tool is meant to keep “employees and communities safe,” and should be used “with transparency and only with informed consent.” But the same technology can also be abused or used to normalize more harmful surveillance measures. When examining the growing use of workplace surveillance in its annual report last December, the AI Now research institute also pointed out that in most cases, workers have little power to contest such technologies. “The use of these systems,” it wrote, “pools power and control in the hands of employers and harms mainly low-wage workers (who are disproportionately people of color).” Put another way, it makes an existing power imbalance even worse. 
"
16543
2020
"Newzoo: Global esports will top $1 billion in 2020, with China as the top market | VentureBeat"
"https://venturebeat.com/2020/02/25/newzoo-global-esports-will-top-1-billion-in-2020-with-china-as-the-top-market"
"Newzoo: Global esports will top $1 billion in 2020, with China as the top market Newzoo's latest revised forecast for esports revenue growth. Global esports revenues will surpass $1 billion in 2020 for the first time — without counting broadcasting platform revenues, according to market researcher Newzoo. China is the largest market by revenues, with total revenues of $385.1 million in 2020, followed by North America, with total revenues of $252.8 million. Newzoo noted that it has re-evaluated the size of the esports market, based on improved methodologies. Some media have been critical of Newzoo’s hype around esports in the past. Globally, the total esports audience will grow to 495.0 million people in 2020, Newzoo said. Esports Enthusiasts (people who watch more than once a month) make up 222.9 million of this number. 
In 2020, $822.4 million in revenues—or three-quarters of the total market—will come from media rights and sponsorship. Above: Esports audience growth In the coming year, the global esports economy will generate revenues of $1.1 billion, a year-on-year growth of 15.7%. Most of these revenues (74.8%) will come from sponsorships and media rights, which will total $822.4 million, a 17.2% increase from last year. Consumer spending on tickets and merchandise will total $121.7 million, while another $116.3 million will come from game publishers’ investments into the esports space, via supporting tournaments through partnerships or as white-label projects with professional tournament organizers. Newzoo also said the global esports audience will reach 495.0 million this year, made up of 222.9 million Esports Enthusiasts and a further 272.2 million Occasional Viewers. In 2020, the average revenue per Esports Enthusiast will be $4.94, up 2.8% from 2019. “As the esports market matures, new monetization methods will be implemented and improved upon,” said Remer Rietkerk, head of esports at Newzoo, in the report. “Likewise, the number of local events, leagues, and media rights deals will increase; therefore, we anticipate the average revenue per fan to grow to $5.27 by 2023.” Above: Newzoo’s forecast for the top revenue streams for esports. Mobile has unlocked esports for emerging markets—a trend visible in countries like Vietnam, where titles like PUBG Mobile and Garena Free Fire have exploded in popularity. As such, emerging esports markets will show the highest compound annual growth rate (CAGR for 2018-2023), with regions such as Southeast Asia (24.0% CAGR), Japan (20.4%), and Latin America (17.9%) accelerating to close the gaps between themselves and older, more developed esports markets. 
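The growth figures in the report are compound annual growth rates (CAGR). For reference, a CAGR and the projection it implies can be computed as below; the function names and sample numbers are illustrative, not Newzoo's.

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate as a fraction, e.g. 0.17 for 17%."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

def project(start_value, rate, years):
    """Project a value forward at a fixed annual growth rate."""
    return start_value * (1.0 + rate) ** years
```

Note that the report's CAGRs cover 2018-2023, so they cannot be applied directly to the 2020 baseline figures quoted in the same sentences.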
China will remain the largest esports market in 2020, with revenues of $385.1 million. These revenues will grow with a CAGR (2018-2023) of 17.0% to reach $540.0 million by 2023. Most of these revenues will come from sponsorships, which will grow from $187.1 million in 2019 to $222.4 million in 2020. Digital goods will be the fastest-growing revenue stream toward 2023, growing from $7.1 million in 2020 to $17.2 million by 2023. North America will be the second-largest region in terms of revenues with $252.5 million, followed by Western Europe as the third-most revenue-generating region with $201.2 million in 2020. China will be host to the largest esports audience with 162.6 million in 2020, followed by North America with an audience of 57.2 million. Rietkerk said, “Our data highlights that 2019 was a seminal year for many teams, with tremendous growth in traditional revenue streams such as sponsorship. Meanwhile, leagues have been moving toward a ‘homestand’ system in which teams play at their own venues. This potentially opens the door to increased matchday revenues for teams, including returns from ticketing and concessions, as well as larger merchandise revenues.” And Rietkerk added, “The market is also maturing in entirely new ways, with innovative revenue streams starting to develop, such as streaming and digital goods. These are new ways to monetize that are not available to traditional sports; they also demonstrate a growing understanding of the competitive advantages esports has over sports. These revenue streams have become pioneering ways for teams, organizers, and publishers to grow the business.” With all of the dynamic changes, Rietkerk acknowledged that Newzoo needs to keep its model for the industry fresh. In 2019, there were 885 major events. Together, they generated $56.3 million in ticket revenues, up from $54.7 million in 2018. Total prize money in 2019 reached $167.4 million, a slight increase from 2018’s $150.8 million. 
Above: Esports audience size The League of Legends World Championship was 2019’s biggest tournament by live viewership hours on Twitch and YouTube, with 105.5 million hours. The Overwatch League was the most-watched league by live viewership hours on Twitch and YouTube, generating 104.1 million hours. The esports audience will grow to 495.0 million globally in 2020. Esports Enthusiasts will account for 222.9 million of this number, up 25 million year on year, and will increase with a CAGR (2018-2023) of 11.3% to 295.4 million in 2023. Meanwhile, the number of global Occasional Viewers will hit 272.2 million in 2020, up from 2019’s 245.2 million. This number will grow with a CAGR (2018-2023) of 9.6% to 351.1 million in 2023. In 2020, 2.0 billion people will be aware of esports worldwide, an increase from 2019’s 1.8 billion. China will continue to be the country/market that will contribute most to this number, with 530.4 million esports-aware people. "
16544
2019
"Microsoft is bringing Visual Studio to the browser, unveils .NET 5, and launches ML.NET 1.0 | VentureBeat"
"https://venturebeat.com/2019/05/06/microsoft-visual-studio-online-net-5-ml-net-1-0"
"Microsoft is bringing Visual Studio to the browser, unveils .NET 5, and launches ML.NET 1.0 Build is Microsoft’s annual developer conference, which makes Visual Studio and .NET the stars of the show. Build 2019 is no different: Microsoft previewed new Visual Studio features for remote work, unveiled the .NET roadmap, and launched ML.NET 1.0. In April, Microsoft launched Visual Studio 2019 for Windows and Mac. Two notable features were Visual Studio Live Share, a real-time collaboration tool included with Visual Studio 2019, and Visual Studio IntelliCode, an extension offering AI-assisted code completion. At Build 2019, Microsoft shared that IntelliCode’s capabilities are now generally available for C# and XAML in Visual Studio 2019 and for Java, JavaScript, TypeScript, and Python in Visual Studio Code. 
And IntelliCode is now included by default in Visual Studio 2019, starting in version 16.1 Preview 2. The company also previewed an algorithm that can locally track your edits — repeated edit detection — and suggest other places where you need that same change. But that’s just the tip of the iceberg. Visual Studio is going remote. Microsoft is experimenting with features that let developers work from anywhere, on any device. The company today announced a private preview for three such new capabilities: remote-powered developer tools, cloud-hosted developer environments, and a browser-based web companion tool. If the future of work is remote , Microsoft wants to be ready. The top-requested Visual Studio Live Share feature on GitHub is individual remote development. Enter Visual Studio Remote Development, an alternative to using SSH/Vim and RDP/VNC, which lets Visual Studio users connect their local tools to a WSL, Docker container, or SSH environment. Available in private preview, the tool supports C# and C++. The ability to develop against remote machines brings plenty of advantages, Microsoft says, including being able to work on a different OS than the deployment target of your application, being able to leverage higher-end hardware, and having multi-machine portability. The next private preview lets developers provision fully managed cloud-hosted development environments on-demand. A cloud-hosted developer environment means developers spend less time onboarding new team members, moving between tasks, and installing dependencies and more time coding. The new service lets you spin up a cloud-based environment whenever you need to work on a new project, pick up a new task, or review a PR. And, of course, these environments can be connected to Visual Studio 2019 and/or Visual Studio Code. 
Microsoft also announced the private preview of Visual Studio Online , a new web-based editor based on Visual Studio Code. From online.visualstudio.com , you can access your remote environments and edit code in a browser. Visual Studio Online will support Visual Studio Code workspaces, Visual Studio’s projects and solutions, as well as IntelliCode and Live Share. It means you can join Visual Studio Live Share sessions or perform pull request reviews on the go. .NET 5 and beyond Microsoft also announced that it is skipping .NET 4 to avoid confusion with the .NET Framework, which has been on version 4 for years. Going forward, developers will be able to use .NET to target Windows, Linux, macOS, iOS, Android, tvOS, watchOS, WebAssembly, and more. .NET Core 3 will be succeeded by .NET 5, featuring new .NET APIs, runtime capabilities, and language features. Calling it .NET 5 makes it the highest version Microsoft has ever shipped and indicates that the company hopes it is the future for the .NET platform. .NET Core 3 closes much of the remaining capability gap with .NET Framework 4.8, enabling Windows Forms, WPF, and Entity Framework 6. .NET 5 will build on this work, Microsoft says, combining .NET Core , .NET Framework , Xamarin , and Mono (the original cross-platform implementation of .NET) into a single platform. Microsoft made three promises for .NET 5: Java interoperability will be available on all platforms. Objective-C and Swift interoperability will be supported on multiple operating systems. CoreFX will be extended to support static compilation of .NET (ahead-of-time – AOT), smaller footprints and support for more operating systems. Additionally, .NET 5 will provide both Just-in-Time (JIT) and Ahead-of-Time (AOT) compilation models. JIT has better performance for desktop/server workloads and development environments. AOT has a faster startup and a small footprint, which is required for mobile and IoT devices. 
.NET 5 will offer one unified toolchain supported by new SDK project types and a flexible deployment model (side-by-side and self-contained EXEs). Microsoft also shared its .NET roadmap. First, .NET Core 3 will ship in September. Next, .NET 5 will ship in November 2020, with the first preview available in the first half of 2020. Microsoft then intends to ship a major version of .NET once a year, in November. “This new project and direction is a game-changer for .NET,” Microsoft declared. “With .NET 5, your code and project files will look and feel the same no matter which type of app you are building. You will have access to the same runtime, API, and language capabilities with each app.”
ML.NET 1.0
Private previews and roadmaps aside, Microsoft also had a notable launch today: ML.NET 1.0. Hitting general availability at Build 2019 is fitting, given that ML.NET 0.1 was introduced last year at Build 2018. You can download ML.NET 1.0 now from here. ML.NET is an open source and cross-platform framework that runs on Windows, macOS, and Linux. ML.NET’s internal version has been used for almost a decade to power Microsoft products like PowerPoint’s Design Ideas, Windows Hello, Power BI Key Influencers, and Azure Machine Learning. The framework makes machine learning accessible for .NET developers ( samples ) so they can build AI into their applications with custom machine learning models. ML.NET lets developers create and use machine learning models targeting scenarios such as sentiment analysis, issue classification, forecasting, recommendations, fraud detection, image classification, and so on. ML.NET comes prepackaged with a set of transforms for data processing, ML algorithms, ML data types, and extensions that provide access to TensorFlow for deep learning scenarios and ONNX, among others.
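The featurize-train-predict flow that a framework like ML.NET wraps for scenarios such as sentiment analysis can be sketched language-agnostically. ML.NET itself is consumed from C#; the following is a deliberately tiny Python stand-in (bag-of-words features plus a perceptron on invented toy data) meant only to show the shape of such a pipeline, not ML.NET's actual API:

```python
# Conceptual sketch of a sentiment pipeline: featurize text, fit a
# simple model, score new input. Data and model are illustrative only.
from collections import defaultdict

def featurize(text):
    """Bag-of-words: map each lowercase token to its count."""
    feats = defaultdict(int)
    for token in text.lower().split():
        feats[token] += 1
    return feats

def train(samples, epochs=10):
    """Perceptron over sparse features; labels are +1 / -1."""
    weights = defaultdict(float)
    for _ in range(epochs):
        for text, label in samples:
            feats = featurize(text)
            score = sum(weights[f] * v for f, v in feats.items())
            if label * score <= 0:          # misclassified: update weights
                for f, v in feats.items():
                    weights[f] += label * v
    return weights

def predict(weights, text):
    score = sum(weights[f] * v for f, v in featurize(text).items())
    return 1 if score > 0 else -1

data = [("great product love it", 1),
        ("terrible waste of money", -1),
        ("love the great design", 1),
        ("terrible support never again", -1)]
w = train(data)
print(predict(w, "love it"))         # expected positive (+1)
print(predict(w, "terrible money"))  # expected negative (-1)
```

In ML.NET the same three stages appear roughly as data loading, a chain of transforms, and a trainer appended to a pipeline that is then fit and used for prediction.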
With ML.NET 1.0 released, Microsoft is looking forward to the next features, including:
- Improved AutoML experience for all ML scenarios
- Deep learning support with TensorFlow and Torch
- Support for other data sources (e.g., SQL, Cosmos DB)
- Scale-out on Azure
- Improved tooling support for Model Builder and the ML.NET CLI
- ML at scale with .NET for Apache Spark integration
- New ML types in .NET
- Additional ML tasks
Additionally, Microsoft is introducing new ML features and tooling experiences in Visual Studio. Automated machine learning (AutoML) takes a data set and automatically works through featurization and algorithm selection to build the best-performing model. You can leverage the AutoML experience in ML.NET using the ML.NET command line interface (CLI, available now in preview), the ML.NET Model Builder (a Visual Studio extension now in preview), or by using the AutoML API directly. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat’s AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
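At its core, the AutoML loop just described is "fit several candidates, score each on held-out data, keep the winner." Here is a minimal Python illustration of that selection loop; the three toy "models" and the data are invented for the example and have nothing to do with ML.NET's real search space:

```python
# Sketch of the AutoML selection loop: build every candidate model on
# training data, score it on a holdout set, and keep the best scorer.

def majority_class(train):
    labels = [y for _, y in train]
    top = max(set(labels), key=labels.count)
    return lambda x: top

def nearest_neighbor(train):
    def model(x):
        return min(train, key=lambda p: abs(p[0] - x))[1]
    return model

def threshold_at_mean(train):
    cut = sum(x for x, _ in train) / len(train)
    return lambda x: 1 if x >= cut else 0

def auto_select(train, holdout, builders):
    """Fit every candidate on `train`, score on `holdout`, and return
    the (name, model, accuracy) of the best performer."""
    best = None
    for name, build in builders:
        model = build(train)
        acc = sum(model(x) == y for x, y in holdout) / len(holdout)
        if best is None or acc > best[2]:
            best = (name, model, acc)
    return best

train = [(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)]
holdout = [(2.5, 0), (7.5, 1), (1.5, 0), (8.5, 1)]
name, model, acc = auto_select(train, holdout, [
    ("majority", majority_class),
    ("1-nn", nearest_neighbor),
    ("mean-threshold", threshold_at_mean),
])
print(name, acc)
```

Real AutoML additionally searches featurizations and hyperparameters, and uses smarter strategies than exhaustive evaluation, but the contract is the same: data in, best-scoring pipeline out.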
16545
2020
"ProBeat: WFH tips I've learned after working from home for 12 years | VentureBeat"
"https://venturebeat.com/2020/03/13/probeat-work-from-home-wfh-tips-remote-work-team"
"Opinion ProBeat: WFH tips I’ve learned after working from home for 12 years
I’ve been working from home my whole career. Online journalism happens to be well suited for remote work, but not every job is. With the current COVID-19 coronavirus, more and more companies are telling their employees to work from home. As the outbreak has grown, some employees chose to WFH, then individual team leaders encouraged their teams to avoid the office. And suddenly, WFH has become mandatory policy for entire companies. Whether your job is ideal for working from home or not, this is your reality for the foreseeable future. And that’s why I figure now is the time to share what I’ve learned over the years. As I’ve argued before, the future of work is remote. While I’m publishing this list now, I hope it will be useful long after the COVID-19 pandemic is over — for employees as well as team leaders.
I’ve broken the tips up into those two sections, but regardless of whether you report to someone or someone reports to you, I would encourage you to read them all.
10 work from home tips for you
Without further ado, here are my work from home tips:
- Don’t work from your bed. At the very least, use a table (if a desk is not available) and a chair.
- Don’t wear the same thing as you do to bed. What you wear doesn’t matter (yay), but make sure you change.
- Set work hours for yourself. Make sure the line between work time and you time is as distinct as possible.
- Drink water throughout the day, and don’t forget to eat.
- Overcommunicate with your team. It’s easy to forget that you’re all experiencing work differently because you’re not in the same physical space.
- Before you join a video call, check yourself and your surroundings. If it’s not required, disable video.
- Get up. Leave your computer, stretch, and walk around.
- Take advantage of your space. Cook for yourself, work out, run an errand — do whatever you couldn’t normally do in a physical office.
- Go out in the evening. Especially if you work from home for multiple days, make sure you have activities that take you away from your computer and out of the house.
- Ask when you need help. You can’t count on someone noticing that you’re struggling with something or hearing audible sighs.
You don’t need to apply all these tips at once, and some may not work for you at all. Whether you’re transitioning to WFH now or have done so for a long time, the most important thing is to figure out what works for you. And as you do so, make sure you’re communicating what works and what doesn’t with your manager and teammates.
9 work from home tips for the team leader
Speaking of which, if you are leading a remote team, here are a few more tips:
- Discuss hours, goals, or targets for your team members. Make clear what you expect them to get done and then let them do it.
- Don’t ask where your teammates are. Let them volunteer the information.
- Offer your time and make sure your team members know you’re available. You can’t just drop by their desk, but you can always check in via chat, email, video chat, or whichever tool your company prefers.
- Ask which tools are working and which aren’t. Be open to changing the tools and pushing management to make adjustments.
- Have regular meetings at set times. Offer to adjust those times to accommodate various schedules and time zones.
- Figure out how to make remote work more fun for your team. Share links that you come across online so that, just like in a physical office, the day’s discussion isn’t strictly work related.
- Encourage everyone to take breaks. Lead by example.
- Make sure your tone isn’t being misinterpreted or misread. Add exclamation marks, emojis, and whatever else is necessary to accurately reflect how you would have said it in person.
- Tell your team members when you’re going mobile and when you’re signing off. Remind them when and how to reach out if they need you.
Some of these may also work even if you aren’t leading a team — to apply yourself or as a reminder of what your boss might be struggling with. Did I miss something? I’m sure I did. Let me know your tip. ProBeat is a column in which Emil rants about whatever crosses him that week. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
16546
2019
"Office 365 gains AI-powered presenter coach and educational 3D models | VentureBeat"
"https://venturebeat.com/2019/09/25/office-365-gains-ai-powered-presenter-coach-and-educational-3d-models"
"Office 365 gains AI-powered presenter coach and educational 3D models
Back in June, alongside an updated PowerPoint Designer, Microsoft unveiled Presenter Coach, an AI-powered PowerPoint feature designed to provide guidance with respect to pacing, tone, and attention. Today, the company announced that Presenter Coach will launch this week for Office 365 customers on the web, alongside inking in Office for the web, new Whiteboard templates, and 3D lesson plan models. Presenter Coach is in public preview, and the inking features are now generally available in PowerPoint for Windows and Mac. Digital pen annotation in Slide Show on PowerPoint hit the web this week, as did Whiteboard templates in public preview on Windows 10 (rolling out to iOS within a few days). As for the 3D models and lesson plans, they’re generally available to Office 365 subscribers in Windows.
Presenter
As you might recall, Presenter Coach walks users slide by slide through presentations and provides real-time feedback on cadence, profanity, and phrases that might be considered culturally insensitive. It also alerts presenters when they appear to be reading slides verbatim. At the end of each rehearsal session, it provides a detailed report with metrics like filler words used and their frequency, problematic slides, words per minute, and speed over time. For instance, Presenter Coach detects the pace of speech and recommends changes that might help audiences better retain facts and figures. If a user inserts a disfluency like “um,” “ah,” “like,” “actually,” or “basically,” or makes a potentially gender-charged reference like “you guys” or “the best man for the job,” it will recommend alternatives. “Public speaking doesn’t have to be nerve-wracking,” wrote Microsoft 365 corporate vice president Jared Spataro. “Our public preview of Presenter Coach in PowerPoint for the web uses the power of AI to help business professionals, teachers, and students become more effective presenters.”
Ink in Office
PowerPoint has long supported inking features in some form or another, enabling users to handwrite words and convert them into text or draw shapes like hearts or clouds. But annotating slides directly while presenting hasn’t been possible — until now. Above: Inking in PowerPoint. Starting this week, Office users on the web can dispense with laser pointers in favor of real-time scribbling directly on slides. Annotation complements the Ink Relay feature in Slide Show, which conceals and reveals inked content written on slides and exposes the order in which ink was drawn.
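The reporting math behind Presenter Coach metrics such as filler-word frequency and words per minute is simple once speech has been transcribed. As an illustration only, here is a toy Python version that operates on an already-transcribed string; the real feature layers speech recognition and trained models on top, and its filler list and formulas are not public:

```python
from collections import Counter

# Hypothetical filler list, taken from the disfluencies the article names.
FILLERS = {"um", "ah", "like", "actually", "basically"}

def rehearsal_report(transcript: str, duration_seconds: float) -> dict:
    """Toy rehearsal summary: words per minute plus each filler word
    found in the transcript and how often it occurred."""
    words = transcript.lower().split()
    fillers = Counter(w.strip(".,!?") for w in words
                      if w.strip(".,!?") in FILLERS)
    return {
        "words_per_minute": round(len(words) / (duration_seconds / 60), 1),
        "filler_counts": dict(fillers),
    }

report = rehearsal_report(
    "Um so this quarter we, um, basically doubled revenue", 6.0)
print(report)
```

Pace-of-speech feedback then reduces to comparing the words-per-minute figure against a comfortable target range.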
Whiteboard templates and lesson plans
Templates in Microsoft’s cross-platform Whiteboard sketchpad are as they sound: Each provides tips for running activities, along with structures and outlines that expand to fit content. At launch, you’ll find templates for KANBAN sprint planning, SWOT (strengths, weaknesses, opportunities, and threats) analysis, project planning, learning, and more, all of which can be added with a tap of the insert button in the app’s toolbar. Above: Templates in Whiteboard. In somewhat related news, Office 365 now boasts a set of 23 education-based 3D models, which live in the existing 3D model gallery. They join the new lesson plans by Lifelique, a company creating interactive 3D K-12 science curriculum aligned to NGSS and Common Core standards. Topics range from geology and biology to outer space. Above: 3D educational content in Office 365. “These engaging models help parents and teachers quickly communicate comprehensible and retainable information to students,” wrote Spataro. “The new lesson plans complement the models to create a comprehensive learning experience.” "
16547
2015
"10 tech podcasts you should listen to now | VentureBeat"
"https://venturebeat.com/2015/04/03/10-tech-podcasts-you-should-listen-to-now"
"10 tech podcasts you should listen to now
Don’t ever say there’s nothing good on. There’s a whole lot of podcasts about technology. The New & Noteworthy collection of tech podcasts on iTunes holds 2,000 of them. And some really are worth your attention. Podcasts first popped up nearly a decade ago, following the ascent of Apple’s iPod. In the past year or so, though, it feels like there’s been a renaissance for the form, especially with the growth of Serial, WBEZ’s offshoot of radio show This American Life. But Serial doesn’t cover technology. Here are 10 tech podcasts that are worth following in this “golden age.”
1) Talking Machines
Former radio journalist Katherine Gorman has teamed up with Ryan Adams, a Harvard professor teaching classes on machine learning, to do a twice-monthly podcast on that very topic. Talking Machines is still pretty new, but it’s already landed interviews with prominent figures in machine learning, including Facebook’s Yann LeCun and Google’s Ilya Sutskever.
2) Andreessen Horowitz’s a16z podcast
One of the most prominent Silicon Valley venture capital firms happens to make one of the most compelling tech podcasts. Partner Benedict Evans regularly provides great analysis of mobile apps and operating systems, and chief executives of Andreessen Horowitz companies, like Mesosphere’s Florian Leibert, talk about what they’re seeing out there in the world. And the sound is always crystal-clear — just what you would expect from an operation like Andreessen Horowitz.
3) Product Hunt Radio
In this weekly podcast from Product Hunt, that growing San Francisco startup maintaining a daily list of products that launch, founder and chief executive Ryan Hoover speaks with founders, venture capitalists, tech journalists, and product people.
4) Partially Derivative
This weekly podcast is an excuse for Chris Albon and Jonathan Morgan, two data scientists working at nonprofit tech company Ushahidi, to drink beer and talk about data science. They make it easy for laypeople to get excited about algorithms, often by discussing real-world examples.
5) The O’Reilly Data Show Podcast
It’s always great when Ben Lorica, O’Reilly Media’s chief data scientist, comes through with another installment of his podcast. It’s about big data, and it’s nerdy, and it’s great. Previous guests include Apache Kafka guru Jay Kreps and graph analytics expert Carlos Guestrin.
6) The Wired.co.uk podcast
The journalists working at the United Kingdom branch of Wired consistently turn out witty and interesting conversations. Oftentimes they skip the most prominent tech topics of the day and instead talk at length about offbeat and really interesting subjects. And they all have such great accents.
7) Hello World
Shawn Wildermuth is a Microsoft-centric programmer who works at Atlanta software-development company WilderMinds. Every month or so, Wildermuth turns out a new episode of Hello World, which is basically a long and winding conversation with an experienced developer.
8) Stacey Higginbotham’s Internet of Things podcast
Stacey Higginbotham has outfitted her house with a whole bunch of Internet-connected devices. She wrote about the Internet of Things (IoT) in articles and talked about it on podcasts at Gigaom, until it abruptly closed last month. Now Higginbotham has started her own new IoT podcast, and it’s off to a great start.
9) Exponent
Each week, this podcast delivers a thoughtful conversation between Ben Thompson, founder of the tech blog Stratechery, and James Allworth, a coauthor (along with Clayton M. Christensen and Karen Dillon) of the best-selling business book How Will You Measure Your Life? I can feel myself getting smarter as I listen to this one.
10) What to Think — VentureBeat’s podcast!
There’s no way I’m leaving What to Think off the list. Each week my colleagues and I talk about the biggest stories appearing on VentureBeat, and we interview people like Tim Draper, Walter Isaacson, and John McAfee.
I realize I’m leaving out some very good tech podcasts. Go ahead and talk about your favorites in the comments. "
16548
2020
"AMD gained share against Intel in x86 processor market in Q4 | VentureBeat"
"https://venturebeat.com/2020/02/05/amd-gained-share-against-intel-in-x86-processor-market-in-q4"
"AMD gained share against Intel in x86 processor market in Q4
AMD's new Threadripper costs $4,000.
Advanced Micro Devices gained market share against Intel in the x86 microprocessor business for the quarter that ended December 31, according to market researcher Mercury Research. AMD gained share in the three major processor markets: laptops, desktops, and servers. That marks nine consecutive quarters of growth for AMD. Oddly, Intel claimed in its fourth-quarter earnings report on January 23 that it was gaining market share. “In 2019, we gained share in an expanded addressable market that demands more performance to process, move, and store data,” Intel CEO Bob Swan said in a statement. But asked for clarification, an Intel spokesperson said, “We define our total addressable market much broader than server CPUs.” For instance, Intel said its networking chip business grew $5 billion in 2019 revenue, and its internet of things (IoT) and Mobileye businesses were up 11% and 26%, respectively. Above: AMD’s market share is growing.
But Mercury reported that AMD’s overall share in x86 processors (not counting semi-custom chips and IoT) was 15.5%, up 3.2 percentage points from a year ago. AMD’s share hasn’t been that high since the fourth quarter of 2013. In laptops, AMD’s share was 16.2% in the fourth quarter, up 4 percentage points. In desktops, AMD’s share was 18.3%, up 2.4 percentage points. In servers, AMD’s share was 4.5%, up 1.4 percentage points. Client share for AMD was 17%, up 3.5 percentage points. AMD said reception for the Ryzen 9 3950X and the third-generation Threadripper family for gamers explained the share gains in desktop. In servers, AMD has been ramping up its second-generation Epyc processors. For the previous third quarter that ended September 30, AMD had a 14.6% share of the overall x86 processor market, up 4 percentage points from a year ago, according to Mercury. “I think the unit share gains show good progress, and also that AMD has a ways to go, particularly in the server market, where it started near 0% share before Epyc launched,” said Patrick Moorhead, an analyst at Moor Insights & Strategy. "
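Each figure Mercury quotes is a current share plus its year-over-year change in percentage points, so the implied year-ago shares fall out by simple subtraction. A quick worked check of the article's numbers:

```python
# (current share %, year-over-year gain in percentage points),
# as quoted from Mercury Research above.
gains = {
    "overall":  (15.5, 3.2),
    "laptops":  (16.2, 4.0),
    "desktops": (18.3, 2.4),
    "servers":  (4.5, 1.4),
    "client":   (17.0, 3.5),
}
# Year-ago share = current share minus the gain.
year_ago = {seg: round(cur - delta, 1) for seg, (cur, delta) in gains.items()}
print(year_ago)
```

This is just arithmetic on the reported figures, not additional Mercury data; it confirms, for example, that AMD's overall share a year earlier was about 12.3%.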
16549
2019
"The truth about hypercasual games | VentureBeat"
"https://venturebeat.com/2019/03/24/the-truth-about-hypercasual-games"
"Guest The truth about hypercasual games
Crossy Road is getting even more content.
Hypercasual games have seen unprecedented growth over the past year. Using the average lifetime value of hypercasual players based on data from ironSource’s platform on user level revenue, we estimate the approximate market for hypercasual games to be in the region of $2 billion to $2.5 billion in annual revenue. Even Goldman Sachs is getting in on the game with a $200 million investment in hypercasual powerhouse Voodoo. That’s why we decided to take an in-depth look at the size of the market for hypercasual games, what’s fueled its growth, and what its impact has been on the wider industry — is it cannibalizing other genres and just how sustainable is it really?
What supported hypercasual’s growth
To understand the effect that hypercasual games have had on the industry, it’s important to understand why this genre exploded at such a fast rate. There are two main reasons for this — the demographics and behavior of today’s mobile game players, and what kind of games appeal to them. The image of the gamer is not what it used to be, with a third being over 45 and women representing 55 percent of the market — not the typical image of the hardcore gamer. The EEDER report on mobile and tablet gaming further reveals that how and when people play games is also changing. Instead of lengthy playing sessions, the No. 1 time people play games is while multitasking at home, followed by waiting for someone, traveling, taking a break, and then in the bathroom. In all of these situations, players are increasingly looking for low-commitment entertainment they can enjoy in short bursts. Hypercasual games fit well with this trend, as they are click-to-play. Taken together, this updated picture of the gamer is one ideally placed to respond well to hypercasual games.
It’s an installs-per-mille game
The second reason is related to hypercasual’s move from relying on cross-promotion as its primary user acquisition strategy to cracking UA at scale. As ad monetization became a more lucrative revenue source and ARPU went up, hypercasual developers were able to bid more competitively in the UA market. They also honed their design capabilities, investing heavily in creative optimization in order to drive up their IPMs (installs per thousand impressions). For example, a hypercasual game with a high-performing playable ad could bid 40 cents and generate an IPM of 50, versus a midcore developer bidding $5 with a less powerful creative driving an IPM of 3. This ultimately allowed hypercasual developers to make relatively low bids compared with other genres and still generate eCPMs that were extremely competitive.
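The bid-versus-IPM example above is really an eCPM calculation: effective revenue per thousand impressions is the cost-per-install bid multiplied by installs per thousand impressions. A quick check of the article's numbers (a simplified illustration that ignores fees and margins, not ironSource data):

```python
def ecpm(cpi_bid: float, ipm: float) -> float:
    """Effective revenue per 1,000 impressions: each of the `ipm`
    installs generated per thousand impressions pays the CPI bid."""
    return cpi_bid * ipm

hypercasual = ecpm(0.40, 50)  # 40-cent bid, strong playable ad, IPM of 50
midcore = ecpm(5.00, 3)       # $5 bid, weaker creative, IPM of 3
print(hypercasual, midcore)
```

Despite bidding less than a tenth as much per install, the hypercasual advertiser out-earns the midcore one per impression, which is exactly why high-IPM creatives let hypercasual developers win auctions with low bids.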
Hypercasual’s effect on the wider gaming market
One of the biggest concerns around hypercasual games is whether this new genre has helped grow the market as a whole — bringing in players that are new to mobile gaming — or whether it has simply cannibalized other genres. And if it has brought new users to the market, are they ones that can be converted to players of non-hypercasual games? On the face of it, yes, hypercasual games have brought new players to the market. A few years ago, Facebook and Google were the only real viable options for UA, since triple-A, midcore, and casual publishers focused on IAP, rather than ads, as their main source of monetization, which meant there wasn’t a huge amount of in-game inventory to market on. The arrival of hypercasual publishers like Voodoo, Kwalee, and Playgendary brought a huge influx of impressions into the market, not only expanding the overall amount of available inventory but also doing so with inventory ideally suited for hypercasual UA campaigns. We can back this up after analyzing the aggregated data of 2.5 billion users in the ironSource network over two years, of which 660 million play hypercasual games. Some 520 million of those 660 million play both hypercasual and IAP games, and interestingly, 101 million of that 520 million played a hypercasual game first. This effectively means that 20 percent of new gamers who play both IAP and hypercasual games first played a hypercasual game and only then moved to IAP games — hypercasual games confer an almost ‘nurturing’ effect on new gamers, warming them up for IAP games.
Quality of the new inventory
So we’ve proven that hypercasual games can bring in new gamers to the industry, but are these users high-value players who will go on to make in-app purchases or engage heavily with ads?
Here is what we saw:
- Hypercasual users on average see 4.8 video ads, two times more video ads compared to users playing games in other categories
- Hypercasual users install on average ten times more apps compared with users playing games in other categories (per 1,000 daily active users)
- Hypercasual users install on average five times more apps for IAP advertisers than users playing games in other categories
Even if the quality of these users is lower on average, hypercasual games still provide a huge opportunity for IAP advertisers to acquire a lot of new users. It’s then up to them to ensure they adjust bids and optimize creatives so that they’re only acquiring quality ones. Who’s converting best on hypercasual supply? We’ve established the huge growth in installs that hypercasual has brought to the market, and with it, a surge in the number of available impressions. But who’s winning these users? Today, the majority of hypercasual inventory is sold to other hypercasual games, or to cross-promo campaigns from the same publisher — either way, it’s hypercasual advertisers running campaigns on hypercasual supply. But it didn’t start out that way. Initially, IAP advertisers represented the big spenders on hypercasual inventory simply because they were already doing UA at scale. While the conversion success of hypercasual campaigns on hypercasual inventory has shifted the balance more towards hypercasual advertisers, we still see IAP advertisers securing a third of the available inventory. It’s also worth mentioning brand advertisers as a group buying on hypercasual supply, as part of a wider trend of brands buying more game inventory. In many ways, hypercasual inventory represents an appealing segment for brands, since their audiences typically tend to be incredibly diverse. Hypercasual in 2019: Sustainable or not? Despite the huge growth in hypercasual games, they are unsustainable as a standalone genre. 
Somewhere along the line, money has to be generated as users move from one game to the next, and in a solely hypercasual world dominated by ads as opposed to IAP, that doesn’t happen enough. We see that about 60 percent of hypercasual inventory is sold to other hypercasual advertisers – in this equation, no money changes hands. Out of the remaining 40 percent, about 33 percent goes to IAP advertisers and 7 percent to brands — this advertising spend is what makes the hypercasual economy sustainable. The future of the hypercasual economy is therefore heavily dependent on whether IAP games learn how to effectively buy on hypercasual inventory and continue to increase budgets. Just like the learning curve of buying on Facebook, IAP buyers must learn optimization techniques, like improved creatives, to better understand hypercasual users and how to monetize them after install. Additionally, brands are increasingly understanding the value that in-game advertising with rewarded video can bring them in terms of high-quality, engaged users and high viewability. So, as brands increase their spend in-game and as IAP games learn to buy more effectively on hypercasual, the category will continue to grow throughout 2019. Will hypercasual be around in 2019? Definitely. And I’m sure the rest of the industry will be watching this space to see exactly how. A veteran entrepreneur with 12 years’ experience, Omer Kaplan is the CRO and cofounder of ironSource. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. Join the GamesBeat community! Enjoy access to special events, private newsletters and more. 
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,550
2,020
"Adjust's Control Center automates repetitive tasks of mobile advertising | VentureBeat"
"https://venturebeat.com/2020/02/18/adjusts-control-center-automates-repetitive-tasks-of-mobile-advertising"
"Adjust’s Control Center automates repetitive tasks of mobile advertising Adjust is automating repetitive marketing tasks. Mobile measurement company Adjust has launched a technology called Control Center to automate mobile advertising, reducing repetitive workflows. The Berlin-based mobile measurement and fraud prevention company is launching its new product as part of a larger suite, dubbed Adjust Automate. With its release, Adjust aims to simplify the process of mobile advertising management. Mobile has become the new undisputed king of digital , and App Annie predicts marketers will invest over $240 billion in mobile ad spend in 2020. But as the industry grows, the process behind ad management has become increasingly complex. 
According to new research by Adjust:
- 81% of marketers surveyed said their company was planning to increase its marketing or advertising automation budget in 2020.
- Marketers handle an average of 19 advertising campaigns across approximately 14 different networks, highlighting the scale and complexity behind current marketing campaigns.
- When asked about the three biggest pain points in their roles, marketers listed merging and acting on disparate sources of data, individually updating bids and budgets, and accurate campaign management.
Adjust’s data came from a survey conducted by Censuswide , which polled one hundred user acquisition managers and digital marketers based in the United States. Paul H. Müller, cofounder and chief technology officer at Adjust, said that mobile is one of the most sophisticated and technical channels in marketing today, but it relies on a huge amount of manual work. Marketers have to adjust 250 distinct bids and spend limits every day, he said. That means that even a moderate number of campaigns can become complex to keep updated. Adjust’s Control Center was built to simplify these processes. Designed as a cross-app, cross-partner, and cross-network dashboard, it lets marketers view data across all their apps and campaigns and act on it. Above: Adjust surveyed the biggest pain points for mobile marketers. The product follows Measure, which focuses on attribution and analytics, and Protect, which encapsulates the company’s fraud prevention and cybersecurity solutions. The company claims these three product suites make marketing simpler, smarter, and more secure for the 32,000 apps working with Adjust. With Control Center, marketers can offload manual, routine tasks, leaving them free to focus on being creative, Müller said. 
The product also has the potential to be an equalizer in mobile marketing, increasing the number of campaigns one person can manage and allowing smaller teams to compete with larger marketing departments, he added. Control Center will be available as a separate package for clients and integrated into their existing dashboard, along with an Enterprise version that is fully customizable for the most sophisticated of advertisers. The launch follows a period of growth for Adjust. In 2019, the company announced multiple acquisitions and hires of top talent. It also raised $227 million. In 2020, Adjust said it will focus on consolidating its existing products to become the growth engine for the mobile marketing ecosystem. In an email, Adjust said that Adjust Control Center is a marketing automation tool (part of the Adjust Automate product suite) that provides marketers with actionable analytics right in the Adjust dashboard. It’s a pivot table that collects campaigns from across all apps and connects them with the cost data of different channels. This way, the marketer has a centralized overview of all their campaigns and the decisions they need to make to change them. The marketer can actually adjust campaign bids and volumes right from within this view, so they no longer need to go in and out of different interfaces to do the same job. Another powerful feature is the Rule Engine. It enables the marketer to define a set of conditions that automatically change a campaign’s bids and volumes. For example, if a campaign is ROAS-positive after seven days, the marketer may want to automatically increase the campaign volume by 10%. This means that, with the Rule Engine, the marketer can keep all campaign bids and volumes up to date around the clock without doing any clicking. In a nutshell, Adjust Automate aims to provide the marketer with a series of tools to cut down on the repetitive part of the job — freeing up time to actually focus on the creative work. 
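A rule engine of the kind described boils down to condition-action pairs evaluated against each campaign's metrics. The sketch below is a minimal illustration of that pattern, not Adjust's actual API; the Campaign fields, Rule structure, and campaign name are all assumptions:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Campaign:
    # Hypothetical campaign metrics; a real dashboard exposes many more.
    name: str
    age_days: int
    roas: float   # return on ad spend; above 1.0 means profitable
    volume: int   # e.g. a daily budget or impression cap

@dataclass
class Rule:
    condition: Callable[[Campaign], bool]
    action: Callable[[Campaign], None]

def run_rules(campaigns: list[Campaign], rules: list[Rule]) -> None:
    """Apply each rule's action to every campaign whose condition holds."""
    for campaign in campaigns:
        for rule in rules:
            if rule.condition(campaign):
                rule.action(campaign)

# The rule from the article: if a campaign is ROAS-positive after
# seven days, automatically increase its volume by 10%.
boost_winners = Rule(
    condition=lambda c: c.age_days >= 7 and c.roas > 1.0,
    action=lambda c: setattr(c, "volume", int(c.volume * 1.10)),
)

campaigns = [Campaign("puzzle_us_ios", age_days=9, roas=1.3, volume=1000)]
run_rules(campaigns, [boost_winners])
# campaigns[0].volume is now 1100
```

Running such a loop on a schedule is what keeps bids and volumes current "around the clock" without manual clicking.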
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! "
16,551
2,018
"HTC will make Vive Focus standalone VR headset available in North America and dozens of other markets | VentureBeat"
"https://venturebeat.com/2018/11/08/htc-will-make-vive-focus-standalone-vr-headset-available-in-north-america-and-dozens-of-other-markets"
"HTC will make Vive Focus standalone VR headset available in North America and dozens of other markets Dan O'Brien of HTC says the Vive Focus is going worldwide. HTC said it will make the Vive Focus standalone virtual reality headset available in North America and dozens of other markets. The move is part of an expansion aimed at taking VR into enterprise markets, where customers aren’t as sensitive to higher hardware prices. Previously, the headset was available only in China. Dan O’Brien, general manager of the Americas for HTC Vive, said at an event in San Francisco that the Vive Focus will be available for developers at $600, and enterprises can buy the product for $750 in 37 new markets, including the U.S. and Europe. Vive Advantage and Vive Advantage+ are new services aimed at getting enterprises to adopt VR headsets with enterprise support. 
There will also be a new six-degrees-of-freedom (6DoF) controller for the Vive Focus in the coming months. This allows people to use the Vive Focus with both hands in VR, in contrast to the current 3DoF controller for use with one hand. Hugo Swart, senior director of product management at Qualcomm Technologies, said mobile XR (extended reality) can drive technology in the enterprise. He said the standalone category, which sits between smartphones and PCs, fits nicely in the enterprise, with advantages such as wireless and two-hand controller experiences. “Enterprises are using VR to collaborate and engage with a complete solution,” said O’Brien. He said enterprises are already using VR to train thousands of people in everything from manufacturing to medical enterprises. In the past, the Vive Focus was available in China and other limited markets. The Vive Focus isn’t quite as powerful as the HTC Vive Pro or HTC Vive, which both use PCs for processing. The Vive Focus, however, does not need to be attached to a PC, as it has its own Qualcomm Snapdragon 835 processor as its brain. It has three hours of battery life and a screen resolution of 1,440 x 1,600 pixels per eye. It also has inside-out tracking, meaning it does not need separate sensors to be put up around your room. O’Brien said that 65 percent of people surveyed felt that VR could be used in training and simulation, and 59 percent felt it could be used in education. “We felt collaboration in a professional environment was an unsolved category,” O’Brien said. Vive Sync is a workforce collaboration tool targeted at this market. You can mark up shared documents and store them for future use. As many as 20 people can be in the same VR space. “Vive Sync offers us the ability to meet in groups in virtual reality,” he said. 
The device will use HTC’s Viveport as its single store, with apps available for multiple kinds of headsets from different vendors. For example, Shadow Creator’s Shadow VR head-mounted display will work with the Vive Wave platform, meaning it is Vive compatible, and apps for it are available in the Viveport store. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
16,552
2,019
"HTC Vive Pro Eye hands-on: Gaze into VR’s future with foveated rendering | VentureBeat"
"https://venturebeat.com/2019/01/10/htc-vive-pro-eye-hands-on-gaze-into-vrs-future-with-foveated-rendering"
"HTC Vive Pro Eye hands-on: Gaze into VR’s future with foveated rendering HTC Vive Pro Eye. Unexpectedly announced at an early CES 2019 media event, HTC’s latest and highest-end VR headset is the Vive Pro Eye — an upgraded version of the already premium Vive Pro with integrated eye-tracking hardware. The eye tracking can be leveraged for in-app controls, analysis of user attention during training sessions, and foveated rendering. If you’re not already familiar with foveated rendering , it’s about to be a big deal for VR. Cameras inside a headset precisely and quickly track the position of your pupils, enabling the GPU to know where it needs to focus its rendering resources — and where it can skimp. 
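The saving comes from simple geometry: only a small circle around the gaze point is shaded at full rate, while the periphery is shaded at a fraction of it. A rough back-of-the-envelope sketch, where the foveal radius and periphery shading fraction are illustrative assumptions (not HTC's figures) and the screen is idealized as a unit square:

```python
import math

def foveated_pixel_fraction(fovea_radius_frac: float,
                            periphery_shade_frac: float) -> float:
    """Fraction of full-rate shading work on an idealized 1x1 screen,
    given a circular foveal region of radius `fovea_radius_frac`
    (shaded at full rate) and a periphery shaded at
    `periphery_shade_frac` of full rate."""
    fovea_area = math.pi * fovea_radius_frac ** 2  # full-rate region
    periphery_area = 1.0 - fovea_area              # reduced-rate region
    return fovea_area + periphery_area * periphery_shade_frac

# A foveal circle covering about 13% of the screen with the periphery
# shaded at roughly two-thirds rate works out to ~69% of the full-rate
# load, i.e. a saving in the ballpark of 30 percent.
work = foveated_pixel_fraction(0.2, 0.65)
```

Shrinking the foveal circle or shading the periphery more coarsely increases the saving, which is why fast, accurate pupil tracking is the enabling piece.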
One Vive Pro Eye developer said that with foveated rendering the GPU was saving 30 percent of its power compared with standard rendering, headroom that can be used to conserve energy or to increase detail within the area viewed by the pupil. The technology would be ideal for high-resolution gaming, but Vive Pro Eye is specifically being marketed at enterprise customers, and HTC suggests the still-unconfirmed price will be another step up from Vive Pro. Most gamers aren’t willing to pay extra for the standard Vive Pro, so game developers aren’t likely to make eye-tracking games for the Vive Pro Eye. One key selling point of the headset is a rapid-fire setup process that lets users enjoy a powerful VR experience with little manual adjustment — nothing more than turning an IPD knob to properly align the displays horizontally with your eyes. Ideally, this would be automated, but the headset uses blue dots to show you exactly where your eyes are located, and you just have to turn the dial to align two circles with the dots. Above: The knob to adjust IPD is a tiny black dial on the edge of the headset, shown here on the left, but normally on the right when worn. Once that’s done, Vive Pro Eye’s screens go grey, and a blue dot moves through five positions to see that your pupils are able to register center, northwest, northeast, southeast, and southwest positions. Cameras hide inside the unit’s dark, well-padded chassis to continuously monitor your gaze. Above: A view into Vive Pro Eye’s interior, including gaze-tracking cameras. HTC believes that the likely customers for Vive Pro Eye will be luxury retailers, businesses seeking new communications tools, professional athletes, and other enterprise users. During CES, the company offered a bunch of demos to show off the Vive Pro Eye’s capabilities. 
Here are a few examples. ZeroLight BMW test drive The only Vive Pro Eye demo to use both eye tracking and foveated rendering, this app allows the user to experience a retail showroom style VR walkaround of the BMW M5, plus the opportunity to sit down in the car and watch it take off at a racetrack. Developer ZeroLight showed off the foveated rendering feature with two modes. Split-screen allowed viewers to compare “standard rendering,” where everything is rendered with the same level of detail, versus foveated, where only a portion of the screen was rendered in high detail and other parts were rendered with less detail. The difference was nowhere near as apparent as when ZeroLight activated a green/red mode that used green to indicate the high-detail rendering circle where your eye is looking and red for everything else. Pixelization is much more clearly evident in this mode — just look at the red in the screenshot above — but the key is that when the feature is working normally on Vive Pro Eye, the average user gets the benefits without even noticing it’s there. MLB Home Run Derby This demo used eye tracking in a basic way, permitting gaze alone to enable or disable menu settings by flipping switches if you looked at them for a few seconds. While the feature sounds boring, it actually enables a headset wearer to completely do away with a traditional controller for the purposes of starting up a game and then grab a buttonless controller, such as a tracked baseball bat, and start swinging. Home Run Derby worked well on the Vive Pro Eye, even though HTC doesn’t expect that the headset will be used by gamers. Company reps suggest that sports training apps will instead be used by professional athletes and coaches to make sure players are focusing and performing to the best of their ability. Training, simulations, social VR, and more Several other demos showed off potential uses of the Vive Pro Eye’s gaze-tracking feature. 
Ovation showed a public speaker training app that uses eye tracking to make sure you’re properly focusing on your audience, rather than on your teleprompter or notes, when addressing a crowd. Lockheed Martin’s Prepar3D is a complex flight simulator, augmenting the user’s physical flight stick and throttle controls with gaze tracking that can be used to activate the numerous subsystems within a fighter jet for a drone shoot-down mission. A business-focused social VR application enabled up to 20 people to co-exist in a virtual presentation space where objects, videos, and PowerPoint presentations could be easily called up for group viewing, sketching, and collaboration. Gaze tracking was used in an extremely basic way to make the eyes of 3D avatars actually show where people were looking during their sessions together, but in my demo, my collaborator’s eyes weren’t being tracked. It’s presently unclear which company is providing the eye-tracking hardware: Tobii has said it won the HTC supply deal, but an HTC press release clearly lists 7invensun as the provider. Numerous demos at the event were experiencing hiccups with the eye-tracking feature, requiring software reboots that appeared to have something to do with eye-tracking-specific driver issues — but when it worked, it worked very well. Regardless of where the new feature’s hardware comes from, HTC and its partner have plenty of time to debug the software between now and the second quarter of 2019, when Vive Pro Eye is expected to be released. I expect that the final version of the headset and its apps will work flawlessly with proper code. "
16,553
2,019
"Microsoft Teams passes 20 million daily users, up more than half in 4 months | VentureBeat"
"https://venturebeat.com/2019/11/19/microsoft-teams-passes-20-million-daily-users-up-more-than-half-in-4-months"
"Microsoft Teams passes 20 million daily users, up more than half in 4 months ( Reuters ) — Microsoft said on Tuesday its workplace messaging app, Teams, has more than 20 million daily active users, up from 13 million in July. The software maker offers the app as part of some Office 365 business packages, as well as a free version. Teams allows users to chat, share files, make calls, and hold web video conferences. Microsoft Teams, used by companies such as General Electric and SAP, competes with Slack. Slack, whose customers include Electronic Arts and Nordstrom, reported more than 10 million daily active users in the second quarter ended July 31. Slack’s shares fell 8.4% following the news, to $21.18. They are down 45% from the close of their first day of trade in June. Microsoft Teams also competes with Workplace by Facebook and Cisco’s Webex Teams. 
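The headline's "up more than half" checks out with quick arithmetic on the figures in the story:

```python
def growth_pct(old: float, new: float) -> float:
    """Percentage growth from an old value to a new one."""
    return (new - old) / old * 100

# Teams: 13 million daily users in July, 20 million in November
teams_growth = growth_pct(13, 20)  # about 53.8%, i.e. more than half
```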
( Reporting by Ambhini Aishwarya in Bengaluru, editing by Rashmi Aich and Clarence Fernandez. ) "
16,554
2,020
"HTC is selling a standalone Cosmos Elite headset and tracking faceplate | VentureBeat"
"https://venturebeat.com/2020/03/31/htc-is-selling-a-standalone-vive-cosmos-headset-and-tracking-faceplate"
"HTC is selling a standalone Cosmos Elite headset and tracking faceplate You can add a mod for external tracking to the HTC Vive Cosmos VR headset. The Vive Cosmos had a shaky start , but HTC has made a lot of improvements to its latest VR headset. And those updates (which I’ll cover soon) may make you want to pick one up to play games like Half-Life: Alyx. This is especially true if you already own the original Vive. To encourage those VR enthusiasts to upgrade, HTC plans to sell a standalone version of the Cosmos Elite head-mounted display. The company is also beginning sales of the External Tracking Faceplate, which enables the Cosmos Elite to work with the Vive’s lighthouse trackers. The Vive Cosmos Elite headset begins shipping in April for $549. You also have the option of purchasing the External Tracking Faceplate for $199. 
The add-on enables your base stations to track the original Cosmos, which otherwise relies on its built-in inside-out tracking cameras. The faceplate begins shipping in Q2. The Cosmos Elite comes with the faceplate included. It may all seem confusing if you’re just getting into VR. But the best option if you’re starting from zero (and already have a powerful PC) is to get a bundled system. This could mean the $700 Cosmos bundle. Or you could get the $899 option that comes with the Vive Cosmos Elite, the base stations, and controllers. But today’s announcement from HTC is about catering to the VR early adopters. Those customers have already made investments in VR hardware, and now they want to pick and choose where they upgrade. The standalone Cosmos Elite and External Tracking Faceplate enable that choice. "
16,555
2,020
"The DeanBeat: The coronavirus knocks down the dominoes of the GDC | VentureBeat"
"https://venturebeat.com/2020/02/28/the-deanbeat-the-coronavirus-knocks-down-the-dominoes-of-the-gdc"
"The DeanBeat: The coronavirus knocks down the dominoes of the GDC Plague Inc. is no longer available on the iOS App Store in China. The dominoes tipped over for the Game Developers Conference in the past week, as fears of the coronavirus (COVID-19) prompted more big game companies to drop out of the conference. Unity, Microsoft, and Epic Games bailed out in quick succession on Thursday. And before that, Sony , Electronic Arts , PUBG, Kojima Productions, and Facebook/Oculus all dropped. [ Updated : 4:55 p.m. 2/28/20 — GDC has confirmed it will be postponed until the summer]. The GDC is an institution that draws 29,000 developers to San Francisco each March, and it’s a bellwether for the game industry when it comes to innovation, issues, and big launches. 
But the GDC got pummeled after a new case of the coronavirus was discovered in Northern California on Wednesday, the Centers for Disease Control warned of out-of-control spreading, and San Francisco’s mayor made an emergency declaration about the outbreak. The U.S. State Department has issued travel advisories. Other events, from Mobile World Congress in Barcelona to Facebook’s F8 developer conference scheduled for May, have also been canceled. The GDC going down would be just a blip in this larger state of affairs around the globe. Heck, it’s possible the Tokyo Olympics will be canceled. We’ve made the tough decision to cancel our on-the-ground activity at GDC 2020, due to current conditions with COVID-19. The health and safety of our employees, partners and friends is our top priority. More info to come on what we’ll be sharing online. https://t.co/xkujzb4v5c — Unity (@unity3d) February 27, 2020 Like many of my colleagues and professionals in the game industry, I’m struggling with this hysteria. It’s a bit like the aging video game Plague Inc. (which disappeared from iOS in China this week), where you see a fictionalized view of a pandemic. The GDC said this morning the show will take place as expected in March, and it is “watching closely for new developments around” the coronavirus. A cancellation feels inevitable, but I would be sad if that happened. Fear of a ghost town Above: The Epic Games event at GDC 2019. The fear is running deep, spooking the stock markets and broader society far beyond GDC and far beyond many other panics that I have witnessed. Each time one of the big game companies decided to skip GDC, they cited the health and safety of their employees. I see on social media that a number of other attendees are bailing out. I see a lot of dark humor about this. 
GDC is looking great this year. pic.twitter.com/77ERQAZMf1 — Martin van der Wolf (@Martin_Wolf) February 27, 2020 I’ve joked that this is how the zombie apocalypse begins, and it reminds me of the dark days of E3, back in 2007 and 2008, when the show went down from 50,000 attendees to 5,000 or so. It was a ghost town. A Thursday poll of more than 1,000 game developers by the game developer site Gamesmith showed that 35% are still going, 14% are undecided, and 34% are definitely not going due to the coronavirus. Another 14% said they weren’t going for other reasons. As someone who runs a conference (GamesBeat Summit), I can sympathize with the GDC. A lot of paranoia is out there. But it’s also heartbreaking to see, as it is one of my favorite conferences of the year. And it feels like an overreaction. But I recognize that far brighter people than me can make pronouncements about how safe it is to attend. Many who want to go are saying, “Don’t panic.” Their common refrain is that more people die from the flu every year than the coronavirus has killed so far. Irresponsible to attend? Above: GDC 2019’s #1ReasonToBe panel. In fact, you could argue attending would be irresponsible. What if you pick up the coronavirus and pass it on to more vulnerable people who may die because of it? What if you pick up the germs and bring them home to your family? Those are strong arguments that trump our own selfish need for getting business or networking meetings done. Let’s not forget what’s at stake, and how fast this crisis has developed day by day. A month ago, the virus had killed 106 people and infected 4,515 people. As of today, it has killed more than 2,800 people and infected more than 82,000. It’s nowhere near the 646,000 the flu kills in a year. But doesn’t the growth rate of the virus, now present in 48 countries, strike fear in the heart of everybody? Some of this fear is turning into anger. 
We’ve seen a lot of hate directed against Chinese people for allegedly starting this mess. Some GDC attendees are trying to get their money back from airlines, hotels, and the show itself. The folks facing this anger have to tiptoe delicately. Dear @Official_GDC , Please reconsider your refund policy for folks who are deciding not to attend your conference in order to prioritize their health and wellness. Many developers spend their hard earned money to attend, and they shouldn't be punished for their caution. – Adam — ᴀᴅᴀᴍ ʙᴏʏᴇs (@amboyes) February 27, 2020 The GDC [updated] says those who reserved hotels through its site will get refunds. Meanwhile, I continue to get pitches from people still going to GDC, including a number of smaller companies that would be tough to meet with if the giants were still going to the show. It’s probably a good time to consider online-only events like Rami Ismail’s GameDev.World event. But I don’t want to get into too much of that conversation now, as it suggests opportunism. I’m more in a state of mourning over what GDC is facing. I would like to see or interview many people, and now I know I won’t see them. It will be sad and lonely to see a lesser show when games have become a huge industry, pushing well above $100 billion and breaking into mainstream culture like never before. I see some amazing panels at GDC, and I appreciate them so much. It’s not just the GDC that is going to take a hit from this virus. Esports is riding a crest, with lots of deals happening and tons of viewers flocking to watch these events. But while they are broadcast online, the excitement around esports events is that they take place in physical places, where people gather in close proximity and cheer on their favorite cyberathletes. It, too, is suffering from the coronavirus. Above: Plague Inc. Evolved What solace do we have? At least people can stay at home, disconnected physically from each other, and play online games. 
At home, they can play games like Plague Inc., though they should heed the warning from the developers: This is fiction, not a scientific model. Online games should see a kind of boom while this virus runs its course. And as I think about playing some Call of Duty online, I would remind myself and everyone else about how nice it is to meet with friends and strangers and build new connections. As of now, I’m still going to GDC. A part of me feels like this setback has a small silver lining, as it will take us back to the classic GDC shows of the past. In recent years, I haven’t had time to check out many sessions or panels. That’s because big game companies show up with game previews that can last for hours, and platform companies stage sponsored press conferences that I have to cover for news. This lessens the chances for me to go to panels and hear about the clever ideas of prominent developers or listen to alternative viewpoints from indie devs. This year, all of that must-attend clutter is out of the way, as the big companies have bailed and the little folks are left. But I’m thinking about how much time I want to be there. And what would have to happen before I would decide on my own — assuming GDC doesn’t pull the plug itself — not to go? If a lot of the big companies bail, then other folks probably will as well. Based on that poll, GDC might shrink to below 10,000 people. Will it be safe for you to go? I’m not giving medical advice. But some big companies clearly aren’t taking the risk. And as one of my own event advisers said, “answer hazy, ask again later,” as the old Magic 8-Ball would have told us. P.S. All of you Californians and Super Tuesday voters, don’t forget to vote in the primaries. Your vote counts. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. 
Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. Join the GamesBeat community! Enjoy access to special events, private newsletters and more. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,556
2,020
"Streaming 101: 4 tips for getting started | VentureBeat"
"https://venturebeat.com/2020/02/08/streaming-101-4-tips-for-getting-started"
"Guest Streaming 101: 4 tips for getting started Above: StarCraft II in action. In 2014, Amazon bought Twitch.tv for an eye-popping $1 billion. The popular streaming platform now claims to have more than 100 million monthly users, giving people a platform for building their brand as a gamer and a streamer. From my experience building the esports broadcasting livestream ESV TV, I learned that working as a streamer is just as much about how you engage millions of viewers as it is about your skills as a gamer. Here are several tips for getting started as a streamer: 1. Find your niche The No. 1 mistake streamers make when starting out is that they feel obligated to stream the most popular games. Streaming is all about engagement, and overextending yourself across a variety of popular games makes it almost impossible to get noticed. 
That’s why I recommend narrowing in and identifying your niche. StarCraft II is where I got started. It’s not because it was the most popular game back then; it’s because it was the game that most interested me. I enjoyed playing it, so I wanted others to have that same experience. Later, when I mentored Choi “CranK” Jae Won for three years, I helped him discover his own niche within StarCraft II. We worked to have him stream playing on the U.S. server, which no one in South Korea was doing at the time. This presented a great opportunity for CranK to enter the English-speaking esports realm. He eventually became one of the most popular StarCraft II streamers in the world, ranking No. 1 on the entire North American server. So, before you go streaming all the popular games, recognize it can limit your level of engagement unless you’re the absolute best of the best. I’m not necessarily saying you need to find your StarCraft II, but streaming multiple, less mainstream games makes it easier to establish a lot of niche groups of followers. From there, you can try your hand at the big guns. 2. Be consistently unique The key to engaging your audience is being unique, interesting, consistent, and clear. Explain why you’re doing what you’re doing (such as completing a weekly challenge in Fortnite to unlock something) and your strategy behind achieving that goal. Strategy is key because it’s at the core of gaming. Develop your own formula and method for your approach and stick to it. Dr. Disrespect is one streamer who has really taken being unique to a new level. He dresses up in an extreme, flight suit-esque costume with sunglasses. That may seem a bit over-the-top, but by creating the unique persona of “Dr. Disrespect” and sticking to it, he’s attracted millions of followers (and controversy as well). 
One thing to consider: before you go all in on a wacky persona, it’s a good idea to run your stream by a friend or family member who isn’t deep in the gaming community. Get their outsider’s perspective. The goal is for your stream to appeal to both niche and broad audiences. Tyler Blevins is the prime example of this. Ninja, as he’s known in gaming, turned himself into the most popular streamer in the world by capitalizing on the Fortnite trend shortly after it launched in 2017. At the time, he had only 500,000 followers and streamed multiple different games. Ninja and Fortnite quickly grew alongside one another, with Ninja accumulating two million Twitch followers in the span of about six months. After streams, subscriptions, and eventually sponsorships started piling up, his family recognized how professional this really was turning out to be. Despite his success, he told CNBC that he doesn’t see gaming skill alone as a viable career path for everyone, so it’s important not to drop everything and start gaming your tail off. Ninja made sure to do well in school and keep up with other extracurricular activities like playing soccer in order to diversify his interests and skills to make himself stand out. 3. Navigating sponsorships The biggest mistakes streamers make when navigating sponsorships are a lack of professionalism and a lack of diversified interests. As a gaming streamer, you face an immediate misconception that maturity and professionalism are not top of mind. It’s up to you to smash that assumption by honing your skills and persona, and transforming yourself into a powerful brand. Back up professional behavior with a professional email address and corresponding social media channels. Keep them all equally professional in their content and style. Most sponsorship managers won’t have time to research your social and website pages, so back up your pitch or proposal with numbers. Highlight how many followers you have and how many times your content has been viewed. 
Essentially, it’s a sales pitch. Give your proposal, present the numbers, and show you’re unique and interesting. Consider yourself a brand and build yourself around that brand. Many streamers aren’t able to break out beyond the gaming world. They may be a big deal among gamers but don’t transcend into the mainstream as a household name. Ninja is an exception because of the way he interacts with his followers and navigates sponsorships on more than just his Twitch streams. He broke the Twitch streaming record for concurrent views in 2018 by playing with the popular rapper Drake. He engages across all of his social platforms. As with any business, this gives your brand more reach and allows for more creativity and personality. When he was first starting out, his family often questioned whether he was spending too much time on his “craft.” Had Ninja not taken himself seriously, perhaps he wouldn’t be where he is today. Sponsors will look for something that separates you from the crowd, so it’s important to be unique across channels and to take yourself seriously so that others do the same. 4. Interactive streams garner a following Interacting with your audience is a great way to give people a reason to watch your stream. My stream, ESV TV, became a top 500 stream on Twitch.tv because of the emphasis we put on the quality of engagement. We optimized our streams during peak North American broadcasting hours and integrated more competitions with prizes and recognition. Viewers don’t want to be told to watch your stream; they simply want to enjoy it. Consider this: If 80% of your stream is spent begging people to follow you, that only leaves 20% of time spent showcasing your flashy skills and engaging your audience. If you casually interact with your viewers instead, in a way that doesn’t break your speech pattern or make what you’re saying seem like a script, and you make sure to give recognition to your regulars, the audience will connect with you and keep coming back. 
Consistency in your approach will also maximize viewership in the long run. Streamers who operate on a consistent schedule (Monday, Wednesday, and Friday nights, for example) are much more likely to gain and retain followers than those who stream with no rhyme or reason. Treat yourself and your stream like a TV show — same time, same day. Skill is great, but unless you’re the best in the world at your chosen game, it will not carry you to success alone. This is especially important for streamers who are just starting out. Holding on to regular viewers will pay off not just with the number of followers you gain, but also with the quality of engagement you earn. Having regular viewers makes your stream casual yet genuine, which viewers yearn for in the digital world. Once you’ve established a community of regular followers through interactive engagement, it’s important that you spend time figuring out who your audience base is and acknowledge the loyal regulars. Think of yourself as a celebrity, someone with whom everyone wants to have the chance to engage. Unlike other sports, streaming makes this engagement easier for fans and players, so make small talk with your followers and reward your biggest supporters. It’s important to show your supporters that you’re in it for their viewing pleasure just as much as for your own gaming expertise. This goes back to helping you develop a persona — much like with Ninja or CranK — that can boost your popularity over time. Becoming a streamer can be intimidating, but if it’s your passion these tips will help set you up for success. Are you ready to get started? Patrick Soulliere II is Global Esports and Gaming Marketing Manager at Ballistix (A Micron Company). 
"
16,557
2,019
"Facebook lets users transfer images directly to Google Photos as part of data portability program | VentureBeat"
"https://venturebeat.com/2019/12/02/facebook-lets-users-transfer-images-directly-to-google-photos-as-part-of-data-portability-program"
"Facebook lets users transfer images directly to Google Photos as part of data portability program Above: Facebook: mobile app and website Facebook will allow users to transfer all of their photos and videos out of its platform and directly into Google Photos, the first major step in an ongoing initiative to address concerns — and new regulations — around data portability. The announcement comes two months after the social networking giant published a white paper that sought to address some of the key issues involved in making data portable between online services. However, today’s news pertains more directly to the open source Data Transfer Project (DTP) announced last year by Google, Facebook, Microsoft, and Twitter, with Apple joining the party just a few months ago. 
Ultimately, the DTP is all about reducing friction on behalf of both service providers and their users when it comes to moving data from one platform to another. The effort involves the major technology platforms working together to develop APIs that bridge their respective services so users don’t have to manually download and then re-upload their content. “At Facebook, we believe that if you share data with one service, you should be able to move it to another,” said Steve Satterfield, Facebook’s director of privacy and public policy, in a blog post. “That’s the principle of data portability, which gives people control and choice while also encouraging innovation.” The bottom line with data portability tools is that they have to be genuinely useful to the end user. Facebook already offers a data export tool that gives users a copy of some of the data the company holds on them — but this in itself doesn’t make the data all that usable, which is why companies have to work together to ensure their services play nicely. “We’ve learned from our conversations with policymakers, regulators, academics, advocates, and others that real-world use cases and tools will help drive policy discussions forward,” Satterfield added. “That’s why we’re developing new products that take into account the feedback we’ve received and will help drive data portability policies forward by giving people and experts a tool to assess.” Data portability The new data transfer tool — which can be accessed from within the settings menu in each user’s Facebook account — is available now in Ireland as part of the pilot phase, and Facebook has committed to rolling it out globally in early 2020. The company has also said that all exported content will be encrypted. 
Above: Facebook Data Transfer Tool for Google Photos Given the other partners currently signed up to the DTP, it’s likely we’ll see this tool expand so that photos and videos stored on the various platforms belonging to Facebook, Twitter, Microsoft, Google, and Apple can be moved seamlessly between them. When this will happen, and the extent to which it’s supported, is currently unknown. While Facebook says it firmly believes in data portability, it’s worth noting that antitrust probes are currently looking into the stranglehold big tech companies hold on users’ data, and a growing number of regulations around the world are specifically addressing the need for data portability in digital products. Europe’s General Data Protection Regulation (GDPR), which took effect last May , stipulates that users be able to easily transport their data between services. And the California Consumer Privacy Act ( CCPA ), which will take effect on January 1, 2020, also has provisions for data portability, as evidenced by this excerpt from section 1798.100(d): A business that receives a verifiable consumer request from a consumer to access personal information shall promptly take steps to disclose and deliver, free of charge to the consumer, the personal information required by this section. The information may be delivered by mail or electronically, and if provided electronically, the information shall be in a portable and, to the extent technically feasible, in a readily usable format that allows the consumer to transmit this information to another entity without hindrance. Being able to transfer photos and videos between services is just one step toward making data truly portable, and many argue that users should be able to transfer their entire social graph to make it easier for rival social networks to compete with Facebook’s dominance. Indeed, Facebook has hinted that it will be producing more portability tools in future — though what they look like only time will tell. 
“We want to build practical portability solutions people can trust and use effectively,” Satterfield said. “To foster that trust, people and online services need clear rules about what kinds of data should be portable and who is responsible for protecting that data as it moves to different services.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
16,558
2,017
"Microsoft acquires Kubernetes experts Deis from Engine Yard | VentureBeat"
"https://venturebeat.com/2017/04/10/microosoft-acquires-kubernetes-experts-deis-from-engine-yard"
"Microsoft acquires Kubernetes experts Deis from Engine Yard Above: Microsoft office sign. Microsoft today announced that it has acquired the team behind Deis — which offers tools and services for working with applications packaged inside containers, many of which can run on a single physical server — from Engine Yard. In 2015 Engine Yard, a company that offers a platform as a service (PaaS) developers can use to build and run applications, acquired OpDemand, the startup through which the Deis open source software tools were first developed. Now the team is moving on to Microsoft. Terms of the deal weren’t disclosed. 
“We expect Deis’ technology to make it even easier for customers to work with our existing container portfolio including Linux and Windows Server Containers, Hyper-V Containers and Azure Container Service, no matter what tools they choose to use,” Scott Guthrie, executive vice president of Microsoft’s Cloud and Enterprise Group, wrote in a blog post. Engine Yard, which was founded in 2006, hasn’t announced much since the OpDemand acquisition. Microsoft, meanwhile, has been focusing on supporting open source tools such as Docker, Kubernetes, and Apache Mesos in order to encourage further cloud adoption, even when developers aren’t using Microsoft’s Windows Server operating system. Kubernetes, a container orchestration tool, is Deis’ primary focus. “Deis gives developers the means to vastly improve application agility, efficiency and reliability through their Kubernetes container management technologies,” Guthrie wrote. This acquisition comes a few months after public cloud market leader Amazon Web Services (AWS) introduced Blox open source container-management tools, which are designed to work with AWS’ EC2 Container Service (ECS) for hosting container-based applications. Google initiated the Kubernetes project in 2014. The Google Cloud Platform offers the Google Container Engine (GKE) based on Kubernetes. Last year Microsoft hired Kubernetes cofounder Brendan Burns from Google. 
"
16,559
2,018
"Kubernetes and microservices: A developers' movement to make the web faster, stable, and more open | VentureBeat"
"https://venturebeat.com/2018/05/06/kubernetes-and-microservices-a-developers-movement-to-make-the-web-faster-stable-and-more-open"
"Feature Kubernetes and microservices: A developers’ movement to make the web faster, stable, and more open Above: Google's Kelsey Hightower speaks at KubeCon. The four years that William Morgan spent as an engineer at Twitter battling the Fail Whale gave him a painful view into what happens when a company’s rickety web infrastructure gets spread too thin. But while Twitter’s instability was highly publicized, Morgan realized that the phenomenon existed to some degree across the web, as companies were building applications in ways that were never intended to handle such scale. The result: Applications and software were becoming too expensive, too hard to manage, and too slow to deploy; they required too many developers and caused too much downtime. 
After leaving Twitter in 2014, Morgan wanted to use some of the lessons he had learned to help other companies reimagine the way they build applications for the web. That led to the founding in 2015 of Buoyant, whose application development tools have become part of an insurgent movement to fundamentally transform the way software and services are designed for the web. Referring to what is in some cases dubbed “microservices” or “cloud native computing,” the development philosophy holds that breaking applications into smaller, self-contained units can significantly reduce costs and time needed to write, deploy, and manage each one. The result should be a web that is faster yet more stable. And just as compelling to proponents, it should deliver a more open web that makes it easier for users to change cloud platforms. While such shifts in development philosophy typically take many years, cloud native computing has caught fire and is having a big moment. Even though it remains small overall, the uptake and enthusiasm have even taken advocates like Morgan by surprise. “It’s kind of incredible how rapidly this has been growing,” he said. “It speaks to the fact that people are focused on the right thing, which is solving actual problems. What we are seeing here is just a beginning. But I think this could fundamentally change everything about web development.” That optimism was on display this week at the latest edition of a conference called KubeCon + CloudNativeCon Europe 2018 in Copenhagen, Denmark. The event drew Morgan, along with 4,300 other attendees from around the world. While conferences are never a perfect barometer of an industry, that figure is up from the 500 people who attended the first such gathering in November 2015. 
The event is organized by the Cloud Native Computing Foundation, an open source organization that operates under the umbrella of the Linux Foundation. CNCF was created just over three years ago to shepherd this new movement. The current momentum around cloud native computing seems to be a function of both the right solution and the right timing. As web development has evolved, there has been a tendency to develop “monolithic” applications — that is, software that contains most or all parts of the code for a given company or service. Over time, those code bases have grown to massive sizes and become hugely complex, which has led to a wide array of problems. Developing and maintaining such applications can take an enormous number of developers. Even for companies that have made the necessary investments and hired those developers, making any changes or updates can be cumbersome and take weeks. For others, the resources needed to build the technology can seem like an insurmountable challenge. “Software has gotten a lot more complex,” said Ben Sigelman, cofounder and CEO of LightStep, a San Francisco-based startup that makes performance management tools for microservices. “It’s gotten a lot more powerful, but it crossed a threshold where the complexity of the code to deliver those features requires hundreds and hundreds of developers. And once you have hundreds of developers working on the code, you’re in a dangerous place. It’s an efficiency issue, and it can lead to paralysis.” The solution, according to adherents of cloud native computing, is to break these big slabs of code into self-contained functions or features. This modular approach, particularly if it’s based on open source tools, would ideally make constructing web services and maintaining them far more efficient. Each piece could be deployed or updated rapidly, by fewer developers, without having to worry that it’s going to blow up the entire code base. 
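The contrast the article draws can be made concrete with a toy sketch: one capability runs as its own tiny HTTP service, and a second component consumes it over the network rather than sharing its code base. This is a minimal illustration in Python's standard library, not any particular company's stack; the "inventory" and "storefront" names are invented here.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# One self-contained service owning a single capability (inventory).
# It can be deployed, scaled, or updated without touching its callers.
class InventoryHandler(BaseHTTPRequestHandler):
    STOCK = {"widget": 3}

    def do_GET(self):
        item = self.path.strip("/")
        body = json.dumps({"item": item, "count": self.STOCK.get(item, 0)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), InventoryHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# A second "service" (here just a function) that consumes the first one
# over the network, as it would across containers or pods.
def storefront_in_stock(item: str) -> bool:
    with urlopen(f"http://127.0.0.1:{port}/{item}") as resp:
        return json.loads(resp.read())["count"] > 0

result = storefront_in_stock("widget")
print(result)  # True
server.shutdown()
```

Because the two pieces talk only over HTTP, either side can be rewritten or redeployed independently, which is the efficiency argument made above.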
This concept of microservices has been floating around for some time. But it got a big boost in 2013 when San Francisco-based Docker released its first product. Docker helped popularize the use of “containers,” a technology that places all the necessary pieces for an application to run in one package. That allows the application to be moved across different platforms and operating systems without having to be rewritten. The following year, Google announced Kubernetes, a project some of its engineers had been developing to enable the deployment and management of containerized applications. While Google felt that Kubernetes could provide a powerful boost to cloud-based services, it also recognized that its acceptance would be limited as long as it was seen as a Google project, according to Kelsey Hightower, Google’s Kubernetes community member and co-chair of the KubeCon conference. So Google approached the Linux Foundation about open-sourcing Kubernetes. Those talks led to the creation of CNCF, which also counted Twitter, Huawei, Intel, Cisco, IBM, Docker, Univa, and VMware among its founding members. Rather than being just a Kubernetes project, the foundation decided to take a wider view, positioning itself as a body that would oversee and encourage development of all the pieces needed to build applications using this new model. CNCF says it now has 20 projects — including Kubernetes — in some stage of development. Just as critically, CNCF now counts every major cloud service provider as a member. Over the past 12 months, Dan Kohn, executive director of CNCF, said he has been surprised by how quickly industry partners, many of them cloud platform rivals, have bought into the movement and joined the foundation. Microsoft announced it was joining last July. Amazon Web Services joined last August and hosted a networking event for developers at KubeCon. 
While Docker was already a founding member, it announced it was going to become more deeply involved by donating another of its tools, Containerd, to CNCF last year and doing more to support Kubernetes. “The aspiration was always to get everyone around the table,” said Kohn. “Frankly, I didn’t expect it to happen so quickly.” Above: Dan Kohn, executive director of Cloud Native Computing Foundation. It wasn’t necessarily an easy decision for the companies. The fight to win in the public cloud space is a fierce one. Tools like Kubernetes and microservices eliminate some competitive advantages because they allow for easier data portability. For companies just moving into the cloud-based world, the ability to avoid the risk of vendor lock-in is another appeal of cloud native computing. “Everyone is so conscious of software coming from one source,” said Gareth Rushgrove, a Docker product manager. “They are worried they might not be able to get out from under one vendor solution. That’s one of the reasons CNCF and this community have been so useful.” So what was the catalyst? Most likely these companies saw the reality of where the market was heading as use of Kubernetes continued growing quickly. More companies are under pressure to digitize, and they increasingly see cloud native computing as a faster and easier way to get there. “The users are what is really driving this,” said Abby Kearns, executive director of open source organization Cloud Foundry, a CNCF sister group under the Linux Foundation umbrella. “They need to digitally transform and become technology companies. If you’re a bank, and you’re not transforming and trying to become a technology company, where does that leave you? You’ve got a ton of fintech companies coming for your customers.” At the three-day conference, organizers announced that Chinese internet giant JD.com had 20,000 servers running Kubernetes, making it one of the largest adopters in the world. 
A report commissioned by LightStep, which surveyed 353 developers from companies around the world, found that 86 percent expect microservices will become the default development architecture within five years. And the three-day KubeCon was packed with participating companies making announcements designed to expand the ecosystem of related products and services. Jason McGee, vice president and CTO of IBM Cloud Platform, said his company is betting big on cloud native solutions because they have the potential to help its customers move faster. Microservices are allowing companies to mix and match containerized, open source solutions so they don’t have to build everything from scratch. “I can build a microservice that you can re-use,” McGee said. “That will allow the overall industry to go faster. Right now, we spend a lot of time re-solving the same problems.” Naturally, this frenzy has sparked interest from venture capitalists. Buoyant has raised $14 million. And LightStep has raised $27 million over two rounds after Sigelman initially told potential Series A investors to hold off because he wasn’t sure how quickly users might embrace microservices. “I knew that the idea made sense,” said Sigelman, who worked at Google for nine years before striking out on his own. “I didn’t know about the timing. I told Series A investors to wait for it. I knew it was going to happen. I didn’t know when.” For all this optimism, there are still plenty of skeptics who see cloud native computing and microservices as an overhyped development fad. Even CNCF members openly acknowledge that many challenges lie ahead. Speaking on stage to open the conference, Kohn posed the question “Is our software good enough?” and then answered it by declaring, “No.” Part of the issue is that while there are many potential benefits to making the transition to microservices, older tools for things like communicating between applications and monitoring app performance won’t work in this environment. 
That same LightStep survey found that 99 percent of developers surveyed reported some “challenges” in using microservices, with more than half saying it was increasing their operational challenges. This raises the question: Is the move to microservices simply trading an old set of problems for new ones? Put another way, will the promised benefits significantly outweigh the headaches? Sigelman says the answer is yes, noting that the new architecture can potentially reduce the risk of a single failure bringing down someone’s entire system. “The operational efficiencies will be there regardless,” he said. “You’re going to get rid of some really profound problems.” Of course, that’s also an opportunity for people like Sigelman, whose company is making tools to solve some of those new issues. And indeed, that was true of many of the startups on hand, as many of the product announcements were aimed at plugging those gaps and bolstering the overall maturity of this approach. Still, conference organizers sought to infuse the gathering with a greater sense of urgency right from the start. Alexis Richardson, founder and CEO of Weaveworks and a CNCF board member, kicked off the conference by issuing a challenge to the audience. The wave of excitement around this market had propelled their movement further and faster than anticipated, he said. But adrenaline wasn’t enough. With more attention and more interest being showered on cloud native computing and microservices, the movement needed to grow up even more quickly and make sure all the pieces are in place to deliver on its hefty promises. “Think of CNCF and those first couple of years being like a startup,” Richardson said. “The startup phase is over.” (Disclosure: The Linux Foundation paid for VentureBeat’s travel expenses to Copenhagen for the KubeCon + CloudNativeCon Europe 2018 event. Our coverage remains objective.) 
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,560
2,020
"Apple and Google partner on Bluetooth interoperability for COVID-19 tracing apps | VentureBeat"
"https://venturebeat.com/2020/04/10/apple-and-google-partner-on-bluetooth-interoperability-for-covid-19-tracing-apps"
"Apple and Google partner on Bluetooth interoperability for COVID-19 tracing apps. Apple and Google today said they’re working together on Bluetooth interoperability between Android and iOS devices to empower coronavirus tracking apps for smartphones. Apple and Google own the world’s two most widely used mobile operating systems. The news was announced today in a joint Apple-Google statement and will enable tracking of close proximity between people across Android and iOS devices. “First, in May, both companies will release APIs that enable interoperability between Android and iOS devices using apps from public health authorities. These official apps will be available for users to download via their respective app stores,” the statement reads. 
Apple and Google also plan to create a Bluetooth tracing platform that will allow users to opt in and share their tracking history with government health authorities tracking the spread of the coronavirus. Apple and Google have faced questions in recent days from U.S. Senators about COVID-19 location data and data collection practices. Empowering third-party developers making proximity tracing apps could help power automated contact tracing, which proponents say may be crucial to resuming normal life and economic activity in the coming weeks. “Through close cooperation and collaboration with developers, governments and public health providers, we hope to harness the power of technology to help countries around the world slow the spread of COVID-19 and accelerate the return of everyday life,” the joint statement reads. Bluetooth apps for contact tracing are being considered by a growing number of nations. Private Kit: Safe Paths, for example, is now in conversations with over 30 countries around the world as well as the World Health Organization and the U.S. Department of Health and Human Services for its coronavirus tracing app. On Thursday, makers of Private Kit: Safe Paths said they achieved a breakthrough in interoperability between Android and iOS devices. Draft Bluetooth and cryptography documentation released as part of the Apple-Google news says the contact tracing method will use Bluetooth Low Energy (BLE) and a 32-byte tracing key, a cryptographically protected code, to log contact between devices. 
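The key-derivation scheme described in that draft documentation can be sketched in a few lines. This is a rough, illustrative reconstruction only: the info strings ("CT-DTK", "CT-RPI"), byte widths, and endianness here follow my reading of the early draft and may not match the final specification.

```python
import hmac
import hashlib
import os

def hkdf_sha256(key: bytes, info: bytes, length: int) -> bytes:
    """Minimal HKDF (RFC 5869) with an empty salt, using SHA-256."""
    prk = hmac.new(b"\x00" * 32, key, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# The 32-byte tracing key, generated once per device per the draft spec.
tracing_key = os.urandom(32)

def daily_tracing_key(tk: bytes, day_number: int) -> bytes:
    # A 16-byte key derived for each day from the device's tracing key.
    info = b"CT-DTK" + day_number.to_bytes(4, "little")
    return hkdf_sha256(tk, info, 16)

def rolling_proximity_id(dtk: bytes, time_interval: int) -> bytes:
    # The 16-byte identifier actually broadcast over BLE, rotated
    # periodically so devices cannot be tracked long-term.
    msg = b"CT-RPI" + time_interval.to_bytes(1, "little")
    return hmac.new(dtk, msg, hashlib.sha256).digest()[:16]

dtk = daily_tracing_key(tracing_key, day_number=18_362)
rpi = rolling_proximity_id(dtk, time_interval=42)
print(len(dtk), len(rpi))  # 16 16
```

The design intent is that only the short-lived identifiers ever leave the device; a diagnosed user can later reveal daily keys so others can check for matches locally.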
Makers of existing apps like COVID Watch who succeeded in exchanging anonymized code between iOS and Android devices for the purposes of coronavirus tracing say they’ve encountered Android bugs and that iPhones can’t run the tracking app in the background, requiring users to keep their phones open for Bluetooth tracking to work for iOS devices. TraceTogether, an app launched by government authorities in Singapore, encountered similar issues with iOS devices. Privacy advocates in favor of decentralized methods of Bluetooth tracking with smartphones call it one of the most privacy-conscious methods of contact tracing available today. In response to the news, ACLU surveillance and cybersecurity counsel Jennifer Granick said the effectiveness of contact tracing apps will depend on trust and voluntary use and should not include any centralized repository of user data. “To their credit, Apple and Google have announced an approach that appears to mitigate the worst privacy and centralization risks, but there is still room for improvement. We will remain vigilant moving forward to make sure any contact tracing app remains voluntary and decentralized, and used only for public health purposes and only for the duration of this pandemic,” Granick said in a statement shared with VentureBeat. Updated 11:54 to add ACLU statement. "
16,561
2,018
"How Prague's Avast went from Soviet-era security project to $4.5 billion IPO | VentureBeat"
"https://venturebeat.com/2018/11/12/how-pragues-avast-went-from-soviet-era-security-project-to-4-5-billion-ipo"
"How Prague’s Avast went from Soviet-era security project to $4.5 billion IPO. The Avast VirusLab at its Prague headquarters. Early on a Thursday morning last May, executives from Prague-based Avast crowded onto a podium in the London Stock Exchange to cheer the start of trading for the cybersecurity company’s stock. They were understandably thrilled to be part of a $4.5 billion IPO, a milestone that placed them firmly in one of capitalism’s most coveted clubs. Yet Avast’s almost 30-year path to this point has been singular. The brainchild of a couple of Prague researchers working in a Soviet-era lab, Avast grew methodically and with little fanfare, surviving political and technological upheaval to become one of the most recognized names in endpoint cybersecurity for consumers and small businesses. 
With the IPO behind it and cybersecurity a hotter market than ever, the pride of Eastern European entrepreneurs now seems poised for an explosive decade. “Enterprise security gets a lot of attention, and people think that consumer products are primitive by comparison,” said Avast CEO Vince Steckler. “But what we have is an extremely sophisticated product based on machine learning and cloud infrastructure. It’s a major operation to protect millions of people around the world. And really, we’re still just getting started.” Above: Avast CEO Vince Steckler Roll over, Vladimir Lenin While Prague’s historic city center bursts with architectural and cultural sites that recall centuries of history, Avast’s gleaming headquarters a few kilometers to the south may be the region’s greatest monument to the changes experienced since the Czech Republic slipped out of the Soviet bloc. The company built this HQ and moved into it in early 2016. In terms of physical form and ambiance, it could be dropped into the heart of Silicon Valley and it would fit right in. The building is 162,000 square feet, and Avast occupies seven floors. This includes 45 meeting rooms, a cafeteria that serves free food all day, indoor picnic tables where employees can eat together, a fitness area, a hammock room, pool tables, a golf simulator, a movie theater, a library, and a kids’ room. With wide work spaces and staircases, the interior is designed to spark chance encounters, random conversations, and, hopefully, an energetic and innovative culture. Above: Avast headquarters in Prague. It could not be further from Avast’s humble beginnings, when Eduard Kučera and Pavel Baudiš founded it in 1988. The pair had met at the Research Institute for Mathematical Machines in Czechoslovakia. 
It was there that Baudiš was examining a floppy disk when he spotted a virus — and wrote a program to remove it. That inspired him to ask Kučera to join him in creating a “cooperative” called Alwil to develop the software they named Avast. And had history not overtaken them, that might have been that. But in November 1989, the Velvet Revolution swept the streets of Prague, toppling the government, lifting playwright and poet Vaclav Havel to the presidency, and leading to the exit of Soviet troops and influence. Alwil became a joint partnership that produced an antivirus program for Windows 95, bringing it global sales and attention. But while Avast drew high marks for its technical quality, the business was troubled. With no money for marketing, it had to rely mostly on word of mouth to sell its software, which was loaded on disks. The founders decided to try a new approach as they fended off buyout offers. In 2001, they switched to a freemium model. The basic antivirus software became free, and users could pay for certain premium features. The user base exploded, growing from 1 million in 2004 to 20 million in 2006. The business side was now stable, if not hugely profitable. But acquisition offers continued to roll in, particularly after 2004, when the Czech Republic joined the European Union. Startups around the Prague area became tantalizing buyout targets for companies looking for not only products, but a way to tap into Eastern Europe’s technical talent pool. Determined to remain independent, the cofounders made another big change. They hired Steckler, a former Symantec executive who had been in cybersecurity for over a decade, to take on the CEO role in 2009. “I thought they had to really up their game,” Steckler said. “They had a great product, and it was a product that could stand by itself. But it was still not widely known, and there was not a clear path to monetization.” Above: Avast’s cafeteria. 
Big changes, big appetite Steckler launched a number of changes to turbocharge the business. A year after he came aboard, the company was renamed Avast and raised $100 million in venture capital. He also pumped resources into sales, marketing, and public relations — which had a total of one employee when he first joined. Within four years, Avast had 200 million users and had grown to 350 employees. By now, of course, the first wave of the internet bubble was long gone, and the world had moved from selling software on disks to downloads, thanks to the growing adoption of broadband. That gave Avast an even more efficient way to distribute its antivirus product for Mac and PC desktop computers. At the same time, consumer vulnerability was growing exponentially as the number of viruses and malware incidents soared. But Steckler’s most dramatic move came in 2016, when Avast announced it would pay $1.3 billion to acquire AVG, another antivirus company that had been based in the Czech Republic since its founding in 1992. AVG had a similar freemium model, plus a relationship with Microsoft, and its own large user base. In announcing the deal at the time, Steckler said the combined companies would have 400 million endpoints and a larger geographical reach, allowing Avast to feed even more data into the threat detection network it used to constantly update its products. The backbone of Avast’s security is a cloud-based network that combines machine learning and artificial intelligence to detect threats and develop solutions. Any device running Avast software is constantly looking for malicious files, and anything suspicious is funneled into that network for analysis. If a problem is detected, updates are pushed out to protect users. The deal also gave Avast ownership of AVG’s mobile security products. While 60 percent of Avast’s revenues today still come from desktop software, mobile is growing rapidly. 
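The client side of the funnel described above (scan locally, escalate anything unknown to the cloud) can be sketched generically. Everything here is illustrative: the hash-lookup design, verdict names, and databases are my own simplification, not Avast's actual protocol.

```python
import hashlib

# Toy verdict databases; a real endpoint client would sync these from
# the vendor's cloud network and do a network lookup on a cache miss.
known_bad: set[str] = set()
known_good: set[str] = set()

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def classify(data: bytes) -> str:
    """Return a verdict for a file's contents.

    Anything not already known locally is flagged for cloud analysis,
    mirroring the funnel-suspicious-files-upstream design described
    in the article.
    """
    digest = sha256(data)
    if digest in known_bad:
        return "block"
    if digest in known_good:
        return "allow"
    return "submit-for-analysis"

# Simulate the cloud pushing out an update after analyzing a sample.
known_bad.add(sha256(b"malicious payload"))
print(classify(b"malicious payload"))   # block
print(classify(b"never seen before"))   # submit-for-analysis
```

The interesting part of the real system is what happens in the "submit-for-analysis" branch, where machine learning models score the sample and push verdicts back to every endpoint.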
“We are in a rapidly changing industry, and this acquisition gives us the breadth and technological depth to be the security provider of choice for our current and future customers,” Steckler said in a 2016 statement. “Combining the strengths of two great tech companies, both founded in the Czech Republic and with a common culture and mission, will put us in a great position to take advantage of the new opportunities ahead.” Above: The view of Prague from Avast’s HQ. Innovation and IoT blues The combined companies have been on a tear. Avast’s revenue grew from $340 million in 2016 to $652 million in 2017. Of course, that period also included the May 10 IPO in London. Steckler said the IPO was a chance to further raise Avast’s profile, strengthen its brand, and deliver a firm reminder that the company remains healthy and independent. “It’s a step in the company’s evolution,” he said. “This has been a labor of love for our founders for a long time. But [we have been] interested in being independent. And being listed on an exchange helps engineer trust.” However, after all the excitement, the start was not auspicious. The stock closed down the first day and continued to tumble for several weeks, from a first-day price of £246.00 per share to £208.20 in late June. The company’s first earnings report helped turn things around. For the first six months of 2018, revenue jumped to an adjusted $394.3 million from $359.2 million for the same period one year ago. The stock closed Thursday at £293.65. Above: Avast goes public. As the company continues to ride this growth, it’s also looking to the future. It is developing a product called Avast Smart Home Security — a platform for protecting IoT devices — for launch in the coming months. Steckler said this is part of a long-term vision pushed by the blurring lines between different platforms, such as desktop, mobile gadgets, and connected objects. “What people are going to care about is protecting their online life,” he said. 
“Right now, people think of protection for a single device. Those distinctions disappear when you move to a smartphone. But you can’t run endpoint protection on something like a security camera in your home. So you have to be protected at the network level.” Meanwhile, Steckler believes Avast’s location remains one of its strongest selling points, along with being positioned in a fast-growing, technically challenging market like endpoint security. The company now has a total of 1,700 employees, including 750 in Prague and 350 in Brno, where AVG was based. “Czechs are very well-educated and very well-respected in technical fields,” he said. “We don’t need to have people calling on retailers since we can distribute to them over the internet. There’s no go-to-market issue. This is just a fascinating location to be operating from. I think it’s a far better place to operate from than Silicon Valley or London.” "
16,562
2,019
"How 'adversarial' attacks reveal machine learning's weakness | VentureBeat"
"https://venturebeat.com/2019/11/05/how-adversarial-attacks-reveal-machine-learnings-weakness"
"How ‘adversarial’ attacks reveal machine learning’s weakness. Professor Jamal Atif of the Université Paris-Dauphine speaks at the France is AI conference on October 23, 2019. The use of computer vision technologies to boost machine learning continues to accelerate, driven by optimism that classifying huge volumes of images will unleash all sorts of new applications and forms of autonomy. But there’s a darker side to this transformation: These learning systems remain remarkably easy to fool using so-called “adversarial attacks.” Even worse is that leading researchers acknowledge they don’t really have a solution for stopping mischief makers from wreaking havoc on these systems. “Can we defend against these attacks?” said Nicolas Papernot, a research scientist at Google Brain, the company’s deep learning artificial intelligence research team. 
“Unfortunately, the answer is no.” Papernot, who is also an assistant professor at the University of Toronto, was speaking recently in Paris at the annual France is AI conference hosted by France Digitale. He was followed later in the morning by Jamal Atif, a professor at the Université Paris-Dauphine, who also addressed the growing threat of adversarial attacks to disrupt machine learning. At its most basic, an adversarial attack refers to introducing some element into a machine learning model’s input that is designed specifically to make it identify something incorrectly. During Papernot’s presentation, he cited this example from a recent research paper: On the left, the machine learning model sees the picture of the panda and correctly identifies it with a moderately high degree of confidence. In the middle, someone has overlaid this pixelated image that is not necessarily visible to the human eye onto the panda image. The result is that the computer now is almost certain that it is a gibbon. The simplicity of this deception highlights a couple of weaknesses. First, image recognition for machine learning, while it may have greatly advanced, still remains rudimentary. Papernot noted that to “teach” machines to recognize various images of cats and dogs, one needs to keep the parameters and the images fairly basic, introducing quite a bit of bias into the sample set. Unfortunately, that makes the jobs of hackers much easier. Papernot pointed out that to disrupt these systems, which are often using publicly available images to learn, one doesn’t need to hack into the actual machine learning system. An external party can detect that such a system is searching for such images to learn, and from there it’s fairly easy to reverse-engineer the questions it’s asking and the parameters it has set. 
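The panda-to-gibbon trick comes from the fast gradient sign method (FGSM): nudge each input feature a small step in the direction that increases the model's loss. A toy sketch on a four-feature "image" and a logistic-regression classifier (stand-ins for the convnet and real image in the paper) shows the same confidence collapse:

```python
import math

# A tiny linear "image classifier": sigmoid(w . x + b) is the
# probability the input belongs to class 1.
w = [0.5, -0.25, 0.75, -1.0]
b = 0.1

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

def fgsm(x, y, eps):
    """Fast gradient sign method: perturb each feature by eps in the
    direction that increases the cross-entropy loss for true label y."""
    p = predict(x)
    grad = [(p - y) * wi for wi in w]  # dLoss/dx for this model
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

x = [1.0, 0.2, 0.4, 0.3]        # correctly classified as class 1
x_adv = fgsm(x, y=1, eps=0.6)   # small, bounded per-feature change
print(predict(x), predict(x_adv))  # confidence flips across 0.5
```

For a deep network the gradient comes from backpropagation rather than a closed form, but the attack is otherwise the same, which is why a perturbation invisible to humans can flip a label.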
“You can choose the question the model is asking, and you find a way to make the model make the wrong prediction,” he said. “You don’t even need to have internal access. You can send the input, and see what prediction it’s making, and extract the model. You can use that process to replicate the process locally.” From there, it’s relatively straightforward to introduce some kind of deception that tricks the machine learning system into learning all the wrong things. “What this means is that an adversary really doesn’t need to know anything about your model to attack,” he said. “They just need to know what problem it is trying to solve. They don’t need very many resources to steal your model and attack it.” Indeed, he said his own experiments with such extraction attacks found that they were successful up to 96% of the time. Of course, it’s one thing if an automated system is mistaking a cat for a dog. It’s another if it’s the basis of a self-driving car algorithm that thinks a stop sign is a yield sign. Such attacks are already being conducted in the physical world, with people placing marks on signs to trick self-driving cars. Recently, scientists at Northeastern University and the MIT-IBM Watson AI Lab created an “adversarial t-shirt” that sported printed images designed to let the wearer fool person detection systems. While AI ethics tends to get the most public attention, researchers are increasingly concerned about the issue of adversarial attacks. Atif said during his presentation that while the issue was first identified over a decade ago, the number of research papers dedicated to the topic has “exploded” since 2014. For the upcoming International Conference on Learning Representations, more than 120 papers on the topic have been submitted. Atif said this growing interest is driven by a desire to find some kind of solution, which so far has remained elusive.
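The extraction attack Papernot describes can be sketched end to end: query a black-box model, record its predictions, and fit a local substitute. The linear "victim" model and the query budget below are illustrative assumptions, not his experimental setup:

```python
import numpy as np

# A black-box "victim": the attacker sees only hard 0/1 predictions.
rng = np.random.default_rng(1)
secret_w = rng.normal(size=8)           # the victim's hidden parameters

def victim(inp):
    return (inp @ secret_w > 0).astype(float)

# Attacker: send inputs and record the model's answers.
queries = rng.normal(size=(2000, 8))
labels = victim(queries)

# Fit a substitute model by least squares on {-1, +1} targets, which
# approximates the victim's decision boundary from its answers alone.
sub_w, *_ = np.linalg.lstsq(queries, 2 * labels - 1, rcond=None)

# The substitute agrees with the victim on most fresh inputs and can now
# be attacked offline, e.g. to craft adversarial examples that transfer.
fresh = rng.normal(size=(500, 8))
agreement = np.mean((fresh @ sub_w > 0) == (victim(fresh) > 0.5))
```

The point of the sketch is the one Papernot makes: the attacker never touches the victim's internals, only its input/output behavior.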
Part of the problem is that while a machine learning system has to maintain a defined set of parameters, the variety of adversarial attacks is so extensive that there is no way to guess all the possible combinations and teach the system to defend itself. Researchers have tried experiments such as separating a machine learning system into several buckets that perform the same task and then comparing the results, or interpreting additional user behaviors, such as which images get clicked on, to determine whether an image was classified correctly. Atif said researchers are also exploring greater use of randomization and game theory in the hopes of finding more robust ways to defend the integrity of these systems. So far, the most effective strategy is to augment the training photos with examples of adversarial images to at least give the machine learning system some basic defense. At its best, such a strategy has gotten accuracy back up to only 45%. “This is state of the art,” he said. “We just do not have a powerful defense strategy.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,563
2,019
"Google's AI lets you make music that sounds like Bach | VentureBeat"
"https://venturebeat.com/2019/03/20/googles-ai-lets-you-make-music-that-sounds-like-bach"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google’s AI lets you make music that sounds like Bach Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A Google Doodle due out later today will use AI to let you create a melody in a style that mimics composer Johann Sebastian Bach. After you pick your key and the music is harmonized, the tune can be shared on Facebook or Twitter or downloaded as a Musical Instrument Digital Interface (MIDI) file. Users can also turn any classical melody they create into a harmonious rock song. Doodles are drawings or interactive experiences shared on Google.com, often to celebrate noteworthy people or anniversaries of important events. This is the first Doodle created with artificial intelligence and the first to use tensor processing units, a company spokesperson told VentureBeat in an email. The Doodle is served up with TensorFlow.js and will be available to play with for the first time at 9 p.m. 
Pacific Time later today and on Google.com for 48 hours. The Bach Doodle is the product of Google’s Doodle team, its People and AI Research team (PAIR), and the open source Magenta project for making music using machine learning. A Magenta-powered Music Transformer debuted last December. To make the Doodle possible, the group trained CocoNet, a machine learning model that harmonizes music, using 306 of Bach’s chorale harmonizations. “[Bach’s] chorales always have four voices, each carrying their own melodic line, while creating a rich harmonic progression when played together,” Google AI program manager Lauren Hannah-Murphy wrote in a blog post. “This concise structure makes them good training data for a machine learning model. So when you create a melody of your own on the model in the Doodle, it harmonizes that melody in Bach’s specific style.” The Bach Doodle may be the first powered by AI, but it’s not the first tie-up between AI and the Doodle team, as Ryan Germick, principal designer on the Google Doodle team, also oversees the personality team for Google Assistant. That team helps decide how Google Assistant responds when you ask about its favorite color or its taste in music. This is the latest in a long line of interactive Doodles. To play with a larger list of interactive Doodles, visit the Doodle Archive.
"
16,564
2,019
"OpenAI's Sparse Transformers can predict what comes next in lengthy text, image, and audio sequences | VentureBeat"
"https://venturebeat.com/2019/04/23/openais-sparse-transformers-can-predict-what-comes-next-in-lengthy-text-image-and-audio-sequences"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI’s Sparse Transformers can predict what comes next in lengthy text, image, and audio sequences Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Some months back, OpenAI debuted an AI natural language model capable of generating coherent passages from millions of Wikipedia and Amazon product reviews, and more recently, it demonstrated an AI system — OpenAI Five — that defeated 99.4% of players in public Dota 2 matches. Building on those and other works, the San Francisco research organization today detailed Sparse Transformers, an open source machine learning system it claims can predict what comes next in text, image, and sound sequences 30 times longer than was previously possible. “One existing challenge in AI research is modeling long-range, subtle interdependencies in complex data,” wrote OpenAI technical staff member Rewon Child and software engineer Scott Gray in a blog post. 
“Previously, models used on these data were specifically crafted for one domain or difficult to scale to sequences more than a few thousand elements long. In contrast, our model can model sequences with tens of thousands of elements using hundreds of layers, achieving state-of-the-art performance across multiple domains.” A reformulation of Transformers — a novel type of neural architecture introduced in a 2017 paper (“Attention Is All You Need”) coauthored by scientists at Google Brain, Google’s AI research division — serves as the foundation of Sparse Transformers. As with all deep neural networks, Transformers contain neurons (mathematical functions loosely modeled after biological neurons) arranged in interconnected layers that transmit “signals” from input data and slowly adjust the synaptic strength — weights — of each connection. (That’s how the model extracts features and learns to make predictions.) Uniquely, though, Transformers have attention: Every output element is connected to every input element, and the weightings between them are calculated dynamically. Above: Corpora memory usage before and after recomputation. Attention normally requires creating an attention matrix for every layer and every so-called attention head, which isn’t particularly efficient from a computational standpoint. For instance, a corpus containing 24,000 samples of two-second audio clips or 64 low-resolution images might take up 590GB and 154GB of memory, respectively — far greater than the 12GB to 32GB found in the high-end graphics cards used to train AI systems. OpenAI’s approach minimizes memory usage by recomputing the matrix from checkpoints; the 590GB data set described above totals just 9.2GB after recomputation, and the 154GB compresses to 2.4GB.
Effectively, the largest memory cost becomes independent of the number of layers within the model, allowing said model to be trained with “substantially greater” depth than previously possible. Because a single attention matrix isn’t particularly practical for large inputs, the paper’s authors implemented sparse attention patterns where each output computed weightings only from a subset of inputs. And for neuron layers spanning larger subsets, they transformed the matrix through two-dimensional factorization — a step they say was necessary to preserve the layers’ ability to learn data patterns. Above: Generating images with Sparse Transformers. In experiments involving Sparse Transformers models trained on popular benchmark data sets including ImageNet 64, CIFAR-10, and Enwik8 and containing as many as 128 layers, the researchers say they achieved state-of-the-art density estimation scores and generated novel images. Perhaps more impressively, they even adapted it to generate five-second clips of classical music. The researchers concede that their optimizations aren’t well-adapted to high-resolution images and video data. However, they pledge to investigate different patterns and combinations of sparsity in future work. “We think … sparse patterns [are] a particularly promising avenue of research for the next generation of neural network architectures,” Child and Gray wrote.
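The sparse attention patterns described above can be sketched as a mask over the attention matrix. The snippet builds a strided pattern of the general kind the paper uses, where each position attends to a recent local window plus every stride-th earlier position; the exact pattern and sizes here are illustrative, not OpenAI's implementation:

```python
import numpy as np

# Strided sparse attention mask: mask[i, j] is True when output position i
# is allowed to attend to input position j.
def strided_mask(seq_len, stride):
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        for j in range(i + 1):                   # causal: attend backward only
            local = i - j < stride               # recent local window
            summary = (i - j) % stride == 0      # strided "summary" positions
            mask[i, j] = local or summary
    return mask

m = strided_mask(16, 4)
dense_connections = 16 * 17 // 2                 # full causal attention: 136
sparse_connections = int(m.sum())                # far fewer connections
```

Because each row has roughly stride + i/stride entries instead of i, the number of computed weightings grows much more slowly with sequence length, which is what makes tens of thousands of elements tractable.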
"
16,565
2,019
"Resemble AI launches voice synthesis platform and deepfake detection tool | VentureBeat"
"https://venturebeat.com/2019/12/17/resemble-ai-launches-voice-synthesis-platform-and-deepfake-detection-tool"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Resemble AI launches voice synthesis platform and deepfake detection tool Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. AI and machine learning are powerful tools for the synthesis of speech. As countless studies have demonstrated, only a few minutes — and in the case of state-of-the-art models, a few seconds — are required to imitate a subject’s prosody and intonation with precision. Baidu’s latest Deep Voice service can clone a voice with just 3.7 seconds of audio samples, for example, and a recently released implementation from a July research paper makes do with about five seconds. The field’s rapid progress inspired Zohaib Ahmed, a former Magic Leap lead software engineer fresh off of stints at BlackBerry and Hipmunk, to cofound Ontario-based Resemble AI with Saqib Muhammad. 
The pair sought to adapt leading machine learning models for speech synthesis to scale, with the goal of building a service that would enable cloning voices from relatively small data sets. But alongside their voice synthesis product launch, Ahmed and Muhammad launched a tool to detect deepfakes. The two technologies are inextricably linked. Threat of deepfakes Ahmed and Muhammad had the foresight to realize that like any tool capable of creating convincing synthetic audio, Resemble’s platform could be abused by malicious actors. Deepfakes — media that replaces a person in an existing recording with someone else’s likeness — are multiplying, according to Amsterdam-based cybersecurity startup Deeptrace. It identified 14,698 deepfake videos on the internet during its most recent tally in June and July, up from 7,964 last December — an 84% increase within only seven months. It’s troubling not only because deepfakes might be used to sway public opinion during, say, an election, or to implicate someone in a crime they didn’t commit, but because they’ve already been used to swindle at least one firm out of hundreds of thousands of dollars. That’s why the Resemble team several months ago released an open source tool dubbed Resemblyzer, which uses AI and machine learning to detect deepfakes by deriving high-level representations of voice samples and predicting whether they’re real or generated. Given an audio file of speech, it creates a summary vector of 256 values (an embedding) that summarizes the characteristics of the voice spoken, enabling developers to compare the similarity of two voices or suss out who’s speaking at any given moment. “As researchers and entrepreneurs, we are thoughtful about the benefits and/or risks to society of what we are creating,” said Ahmed.
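The comparison step behind a speaker-embedding tool of this kind is straightforward once the embedding exists: reduce each clip to a vector and score voices by cosine similarity. A real system derives the 256 values from audio with a neural network; the vectors below are random stand-ins used only to show the scoring:

```python
import numpy as np

# Compare two voice embeddings by cosine similarity, the usual metric for
# speaker verification and deepfake detection.
def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
speaker_a = rng.normal(size=256)                          # one voice's embedding
speaker_a_clip2 = speaker_a + 0.1 * rng.normal(size=256)  # same voice, new clip
speaker_b = rng.normal(size=256)                          # a different voice

same = cosine_similarity(speaker_a, speaker_a_clip2)      # close to 1.0
different = cosine_similarity(speaker_a, speaker_b)       # close to 0.0
```

A detector can then threshold on this score to flag clips whose "voice" does not match the claimed speaker.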
“When you’re creating your voice on our platform, we take extreme measures to ensure the ownership of the voice.” Cloning voices for media After a soft launch earlier this year, Resemble announced the launch of Resemble Clone. According to CEO Ahmed, it’s meant to target the entertainment industry, with tools designed to optimize generated voices for virtual reality experiences, animated films and television, and audiobooks. “We set out to build a product that helps creatives get over the hurdle of crafting audio content,” said Ahmed. “With more audio content being produced year after year — smart speakers, Airpods, podcasts, audiobooks, and digital characters in virtual and augmented reality — there is a large and growing need for fast and accurate voice cloning. Resemble AI’s unique focus is to empower creatives, so they can control and produce content without sacrificing quality.” From an end-user perspective, the Resemble experience is akin to that of Lyrebird, which was acquired by Groupon founder Andrew Mason’s Descript in September. Like Resemble, Lyrebird had users record statements from real-time, dynamically generated prompts, which fed into cloud-hosted algorithmic models used to shape shareable, bespoke digital voice profiles. Resemble customers needn’t create new recordings, though — existing audio works too, funneled either through a web-based uploader or an API. (Resemble requires three minutes of audio to generate high-quality samples.) And the platform can create fictitious voices with somewhat humanlike emotions and intonations, which can be served to Google’s Dialogflow or any similar natural language understanding engine. Ahmed envisions game developers creating voices from actors during preproduction for scratching and iteration, or wholly novel voices tailored to fit an avatar or character’s personality. Another potential use case is the creation of soundalikes for intelligent assistants and voice apps, like the John Legend and Samuel L.
Jackson voices on Google Assistant and Amazon’s Alexa, respectively. Resemble’s work isn’t entirely novel. Text-to-speech tech startup iSpeech offers comparable voice cloning tools, as do Modulate, Respeecher, and Bengaluru, India-based DeepSync. But investors like Firstminute’s Clara Lindh Bergendorff — who participated in Resemble AI’s $2 million seed funding round alongside Craft Ventures, AET Fund, and Betaworks — believe its media creation focus sets it apart in a text-to-speech market that some expect to be worth $3.03 billion by 2022. “We’re excited by the idea of Resemble making real-time creation and editing of audio content — which today is a painful bottleneck for creatives across industries — as easy and accessible as editing animated visual content,” she said. “Resemble is also well-positioned to ride wider audio waves, from the growth of audio content consumption and voice applications to growth in audio-first devices.” "
16,566
2,018
"A look back at some of AI's biggest video game wins in 2018 | VentureBeat"
"https://venturebeat.com/2018/12/29/a-look-back-at-some-of-ais-biggest-video-game-wins-in-2018"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Feature A look back at some of AI’s biggest video game wins in 2018 Share on Facebook Share on X Share on LinkedIn OpenAI's Dota 2 battle arena. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. For decades, games have served as benchmarks for artificial intelligence (AI). In 1996, IBM famously set loose Deep Blue on chess, and it became the first program to defeat a reigning world champion (Garry Kasparov) under regular time controls. But things really kicked into gear in 2013 — the year Google subsidiary DeepMind demonstrated an AI system that could play Pong, Breakout, Space Invaders, Seaquest, Beamrider, Enduro, and Q*bert at superhuman levels. In March 2016, DeepMind’s AlphaGo won a three-game match of Go against Lee Sedol, one of the highest-ranked players in the world. And only a year later, an improved version of the system ( AlphaZero ) handily defeated champions at chess, a Japanese variant of chess called shogi , and Go. 
The advances aren’t merely shaping game design, according to folks like DeepMind cofounder Demis Hassabis. Rather, they’re informing the development of systems that might one day diagnose illnesses, predict complicated protein structures, and segment CT scans. “AlphaZero is a stepping stone for us all the way to general AI,” Hassabis told VentureBeat in a recent interview. “The reason we test ourselves and all these games is … that [they’re] a very convenient proving ground for us to develop our algorithms. … Ultimately, [we’re developing algorithms that can be] translate[d] into the real world to work on really challenging problems … and help experts in those areas.” With that in mind, and with 2019 fast approaching, we’ve taken a look back at some of 2018’s AI in games highlights. Here they are for your reading pleasure, in no particular order. Montezuma’s Revenge Above: Map of level one in Montezuma’s Revenge. In Montezuma’s Revenge, a 1984 platformer from publisher Parker Brothers for the Atari 2600, Apple II, Commodore 64, and a host of other platforms, players assume the role of intrepid explorer Panama Joe as he spelunks across Aztec emperor Montezuma II’s labyrinthine temple. The stages, of which there are 99 across three levels, are filled with obstacles like laser gates, conveyor belts, ropes, ladders, disappearing floors, and fire pits — not to mention skulls, snakes, spiders, torches, and swords. The goal is to reach the Treasure Chamber and rack up points along the way by finding jewels, killing enemies, and revealing keys that open doors to hidden stages. Montezuma’s Revenge has a reputation for being difficult (the first level alone consists of 24 rooms), but AI systems have long had a particularly tough go of it.
DeepMind’s groundbreaking Deep-Q learning network in 2015 — one that surpassed human experts on Breakout, Enduro, and Pong — scored 0 percent of the average human score of 4,700 in Montezuma’s Revenge. Researchers peg the blame on the game’s “sparse rewards.” Completing a stage requires learning complex tasks with infrequent feedback. As a result, even the best-trained AI agents tend to maximize rewards in the short term rather than work toward a big-picture goal — for example, hitting an enemy repeatedly instead of climbing a rope close to the exit. But some AI systems this year managed to avoid that trap. DeepMind In a paper published on the preprint server Arxiv.org in May (“Playing hard exploration games by watching YouTube”), DeepMind described a machine learning model that could, in effect, learn to master Montezuma’s Revenge from YouTube videos. After “watching” clips of expert players, and using a method that embedded game state observations into a common embedding space, it completed the first level with a score of 41,000. In a second paper published online the same month (“Observe and Look Further: Achieving Consistent Performance on Atari”), DeepMind scientists proposed improvements to the aforementioned Deep-Q model that increased its stability and capability. Most importantly, they enabled the algorithm to account for reward signals of “varying densities and scales,” extending its agents’ effective planning horizon. Additionally, they used human demonstrations to augment agents’ exploration process. In the end, it achieved a score of 38,000 on the game’s first level. OpenAI Above: An agent controlling the player character. In June, OpenAI — a nonprofit, San Francisco-based AI research company backed by Elon Musk, Reid Hoffman, and Peter Thiel — shared in a blog post a method for training a Montezuma’s Revenge-beating AI system.
Novelly, it tapped human demonstrations to “restart” agents: AI player characters began near the end of the game and moved backward through human players’ trajectories on every restart. This exposed them to parts of the game that humans had already cleared, and helped them achieve a score of 74,500. In August, building on its previous work, OpenAI described in a paper (“Large-Scale Study of Curiosity-Driven Learning”) a model that could best most human players. The top-performing version found 22 of the 24 rooms in the first level, and occasionally discovered all 24. What set it apart was a reinforcement learning technique called Random Network Distillation (RND), which used a bonus reward that incentivized agents to explore areas of the game map they normally wouldn’t have. RND also addressed another common issue in reinforcement learning schemes — the so-called noisy TV problem — in which an AI agent becomes stuck looking for patterns in random data. “Curiosity drives the agent to discover new rooms and find ways of increasing the in-game score, and this extrinsic reward drives it to revisit those rooms later in the training,” OpenAI explained in a blog post. “Curiosity gives us an easier way to teach agents to interact with any environment, rather than via an extensively engineered task-specific reward function that we hope corresponds to solving a task.” On average, OpenAI’s agents scored 10,000 over nine runs with a best mean return of 14,500. A longer-running test yielded a run that hit 17,500. Uber OpenAI and DeepMind aren’t the only ones that managed to craft skilled Montezuma’s Revenge-playing AI this year. In a paper and accompanying blog post published in late November, researchers at San Francisco ride-sharing company Uber unveiled Go-Explore, a family of so-called quality diversity AI models capable of posting scores of over 2,000,000 and average scores over 400,000.
In testing, the models were able to “reliably” solve the entire game up to level 159 and reach an average of 37 rooms. To reach those sky-high numbers, the researchers implemented an innovative training method consisting of two parts: exploration and robustification. In the exploration phase, Go-Explore built an archive of different game states — cells — and the various trajectories, or scores, that led to them. It chose a cell, returned to it, explored from it, and, for every cell it visited, swapped in the new trajectory if it was better (i.e., the score was higher). This “exploration” stage conferred several advantages. Thanks to the aforementioned archive, Go-Explore was able to remember and return to “promising” areas for exploration. By first returning to cells (by loading the game state) before exploring from them, it avoided over-exploring easily reached places. And because Go-Explore was able to visit all reachable states, it was less susceptible to deceptive reward functions. The robustification step, meanwhile, acted as a shield against noise: if Go-Explore’s solutions were not robust to noise, it robustified them into a deep neural network with an imitation learning algorithm. “Go-Explore’s max score is substantially higher than the human world record of 1,219,200, achieving even the strictest definition of ‘superhuman performance,’” the team said. “This shatters the state of the art on Montezuma’s Revenge both for traditional RL algorithms and imitation learning algorithms that were given the solution in the form of a human demonstration.”
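The archive-and-return loop described above can be sketched on a toy problem. Here the "game" is a 1-D corridor with a sparse reward at the far end, and a cell is simply the agent's position; the environment, cell definition, and iteration budget are illustrative assumptions, not Uber's code:

```python
import random

# Toy Go-Explore exploration phase: keep an archive mapping each visited
# cell to the best (here: shortest) trajectory found so far, repeatedly
# "return" to an archived cell by restoring its state, then explore from it.
random.seed(0)
GOAL = 20
archive = {0: []}                                 # cell -> best trajectory

for _ in range(2000):
    cell = random.choice(list(archive))           # select a cell to return to
    state, trajectory = cell, list(archive[cell]) # restore state, not replay
    for _ in range(5):                            # explore from that state
        action = random.choice([-1, 1])
        state = max(0, min(GOAL, state + action))
        trajectory.append(action)
        best = archive.get(state)
        if best is None or len(trajectory) < len(best):
            archive[state] = list(trajectory)     # keep the better trajectory

# The archive ends up holding a trajectory that reaches the sparse goal,
# which a robustification step would then turn into a noise-tolerant policy.
reached_goal = GOAL in archive
```

Because the loop restores archived states instead of wandering from the start, it never forgets promising frontiers and never wastes effort re-reaching easy places, which is exactly the advantage the Uber team describes.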
"