id (int64) | year (int64) | title (string) | url (string) | text (string) |
---|---|---|---|---|
16,167 | 2,020 | "Launch Night In Google: How to watch, and what to expect | VentureBeat" | "https://venturebeat.com/2020/09/30/launch-night-in-google-how-to-watch-and-what-to-expect" | "Launch Night In Google: How to watch, and what to expect
During its Launch Night In event, which kicks off today at 11 a.m. Pacific (2 p.m. Eastern), Google is expected to launch new hardware across its product families. Leaks and premature sales spoiled some surprises — eagle-eyed buyers managed to snag Google’s new Chromecast from Home Depot, while Walmart’s mobile app leaked the specs of the Nest Audio smart speaker. Still, there’s a chance Google has an ace or two up its sleeve.
Here’s what we expect to see during this afternoon’s livestream, which can be found here.
Pixel 5 and Pixel 4a 5G
Pixel 5
It’s all but certain Google will announce two smartphones today: The Pixel 5 and Pixel 4a 5G. The Pixel 5 is the follow-up to last year’s Pixel 4, while the Pixel 4a 5G is a 5G-compatible version of the Pixel 4a that launched in August.
While the Pixel 5 might be a successor in name, it’s a potential downgrade from the Pixel 4 in that it reportedly swaps the Qualcomm Snapdragon 855 processor for the less-powerful Snapdragon 765G. Leaks suggest the RAM capacity has been bumped from 6GB to 8GB, which could make tasks like app-switching faster. The Pixel 5 is also rumored to have a 4,080mAh battery, which would be the largest in any Pixel to date.
We expect the Pixel 5 to retain the 90Hz-refresh-rate, 6-inch, 2340×1080 OLED display (19.5:9 aspect ratio) introduced with the Pixel 4, as well as the Pixel 4’s rear-facing 12.2-megapixel and 16-megapixel cameras. (The 16-megapixel camera might have an ultra-wide lens, rather than the Pixel 4’s telephoto lens.) As for the front-facing camera, it’s rumored to be a single 8-megapixel wide-angle affair. Some outlets report that there’s a fingerprint sensor on the rear of Pixel 5, harkening back to the Pixel 3, and Google has apparently ditched the Pixel 4’s gesture-sensing Soli radar in favor of a streamlined design.
Other reported Pixel 5 highlights include IP68-rated water- and dust-resistant casing, sub-6GHz 5G compatibility, and 18W USB-C charging and wireless charging. The Pixel 5 is anticipated to cost around $699 in the U.S., U.K., Canada, Ireland, France, Germany, Japan, Taiwan, and Australia, which would make it far cheaper than the $799-and-up Pixel 4.
Pixel 4a 5G
The Pixel 4a 5G is a tad less exciting, but rumors imply it will sport a larger display than the Pixel 4 (potentially 6.2 inches versus 5.8 inches). It might also share the Pixel 5’s 2340×1080 resolution, processor, and cameras alongside a headphone jack, but supposedly at the expense of other components. The Pixel 4a 5G is rumored to make do with a 60Hz screen refresh rate, 6GB of RAM, a 3,885mAh battery, and Gorilla Glass 3 instead of the Pixel 5’s Gorilla Glass 6, with no IP rating for water or dust resistance.
The Pixel 4a 5G will cost $499, according to Google — a $150 premium over the $349 Pixel 4a. It will be available in the U.S., Canada, the U.K., Ireland, France, Germany, Japan, Taiwan, and Australia when it goes on sale, likely later today.
Chromecast with Google TV and Nest Audio
Chromecast with Google TV
Google’s new Chromecast dongle runs Google TV, a rebrand of Android TV, Google’s TV-centric operating system. Unlike previous Chromecast devices, it ships with its own remote control featuring a directional pad with buttons for Google Assistant, YouTube, and Netflix.
The new Chromecast supports 4K, HDR, and multiple Google accounts, as well as Bluetooth devices and USB-to-Ethernet adapters. But it doesn’t appear to tightly integrate with Google’s Stadia gaming service — at least not out of the box. The Verge’s Chris Welch, who managed to get his hands on a Chromecast unit early this week, reports that he sideloaded the Stadia app without issue and streamed a few titles with an Xbox controller.
The new Chromecast costs $50, or $20 less than the Chromecast Ultra.
Nest Audio
Details about Nest Audio leaked more or less in full on Monday (courtesy of Walmart). The new speaker, which aligns with the design of the Nest Mini and Nest Hub, is covered in a mesh fabric that is 70% recycled and features four status LEDs and Bluetooth connectivity. It stands vertically and is substantially louder than the original Google Home speaker, with Google claiming it provides 75% louder audio and 50% stronger bass. Like the Google-made smart speakers before it, Nest Audio works with other Nest speakers and displays for multiroom audio and leverages Google Assistant for voice-controlled music, podcasts, and audiobooks from Spotify, YouTube Music, and more.
It’s also likely that Nest Audio will pack a dedicated AI chip for workloads like natural language understanding, speech recognition, and text synthesis. Google introduced such a chip with the Nest Mini and Google Wifi last year, claiming at the time that it could deliver up to a teraflop of processing power.
Nest Audio is expected to come in several colors and cost around $100.
" |
16,168 | 2,020 | "Pixel 5 fails to live up to Google's AI showcase device | VentureBeat" | "https://venturebeat.com/2020/09/30/pixel-5-fails-to-live-up-to-googles-ai-showcase-device" | "Pixel 5 fails to live up to Google’s AI showcase device
As widely predicted, Google announced two smartphones during its Launch Night In event today: The Pixel 5 and Pixel 4a (5G). The Pixel 5 is the follow-up to last year’s Pixel 4, while the Pixel 4a (5G) is a 5G-compatible version of the Pixel 4a that launched in August.
Neither phone appears to introduce many AI-powered features that aren’t already available on existing Pixel devices. (Pixel hardware has historically been a showcase for Google’s AI innovations.) Instead, the two handsets seem aimed at nudging the lineup toward the midrange: the focus is affordability rather than cutting-edge technology, along with the recognition that neither phone is likely to make a splash in a highly saturated market.
Google reportedly plans to produce fewer than 1 million Pixel 5 smartphones this year; production of the 5G-capable Pixel 5 could be as low as around 800,000 units.
The Pixel 5 might be a successor in name, but it’s arguably a downgrade from the Pixel 4 in that it swaps the Qualcomm Snapdragon 855 processor for the less-powerful Snapdragon 765G. The RAM capacity has been bumped from 6GB to 8GB, which could make tasks like app-switching faster. The Pixel 5 also has a 4,080mAh battery — the largest in any Pixel to date. Google claims it lasts up to 48 hours on a charge with Extreme Battery Saver, a mode that lets users choose which apps remain awake.
Speaking of the battery, the Pixel 5 introduces Battery Share, a reverse charging feature that can be used to wirelessly recharge Google’s Pixel Buds and other Qi-compatible devices. It’s akin to the Qi reverse wireless charging features found in Samsung’s Galaxy S10 and S20 series.
Above: The Pixel 5.
The Pixel 5 retains the 90Hz-refresh-rate, 6-inch, 2,340×1,080 OLED display (19.5:9 aspect ratio) introduced with the Pixel 4, as well as the Pixel 4’s rear-facing 12.2-megapixel and 16-megapixel cameras. (The 16-megapixel camera might have an ultra-wide lens, rather than the Pixel 4’s telephoto lens.) As for the front-facing camera, it’s a single 8-megapixel wide-angle affair. There’s a fingerprint sensor on the rear of Pixel 5, harking back to the Pixel 3, and Google has ditched the Pixel 4’s gesture-sensing Soli radar in favor of a streamlined design.
Other Pixel 5 highlights include IP68-rated water- and dust-resistant casing, sub-6GHz 5G compatibility, and 18W USB-C charging and wireless charging. There’s also Hold for Me, a Google Assistant-powered feature that waits on hold for you and lets you know when someone’s on the line. (Currently, Hold for Me is only available in the U.S. in English for toll-free numbers, Google says.) Google’s night shooting mode, Night Sight, now works in portrait mode; Portrait Light illuminates portraits even when they’re backlit; and Cinematic Pan creates a “sweeping” video effect by stabilizing and slowing down motion.
The Pixel 4a (5G) is a tad less exciting, but it sports a larger display than the Pixel 4 (6.2 inches versus 5.8 inches). It also shares the Pixel 5’s 2,340×1,080 resolution, processor, and cameras alongside a headphone jack, but at the expense of other components. The Pixel 4a (5G) makes do with a 60Hz screen refresh rate, 6GB of RAM, a 3,885mAh battery, and Gorilla Glass 3 instead of the Pixel 5’s Gorilla Glass 6, with no IP rating for water or dust resistance.
The Pixel 4a (5G) will cost $499, according to Google — a $150 premium over the $349 Pixel 4a. It’s available in the U.S., Canada, U.K., Ireland, France, Germany, Japan, Taiwan, and Australia. The Pixel 5 costs around $699 in the U.S., U.K., Canada, Ireland, France, Germany, Japan, Taiwan, and Australia, which makes it far cheaper than the $799-and-up Pixel 4.
" |
16,169 | 2,020 | "Spider-Man: Miles Morales comes to PS5 this holiday | VentureBeat" | "https://venturebeat.com/2020/06/11/spider-man-miles-morales-comes-to-ps5-this-holiday" | "Spider-Man: Miles Morales comes to PS5 this holiday
Surprise! We’re already getting a new Spider-Man game this year, and it’s coming to PlayStation 5 this holiday season. Sony revealed Spider-Man: Miles Morales during its PS5 event today.
The game stars Miles Morales instead of Peter Parker, although the former can be heard speaking in the reveal trailer.
Miles Morales debuted in the comics in 2011. There, he takes up the mantle of Spider-Man after Peter Parker’s death. The world also became familiar with the character thanks to his starring role in Spider-Man: Into the Spider-Verse, an animated movie released in theaters in 2018. Miles Morales was also a supporting character in Insomniac’s last Spider-Man game.
Spider-Man: Miles Morales could be a launch title for PS5, but we do not know exactly when that system is launching. Sony has just said “holiday 2020.” Evan Narcisse, who wrote the Rise of the Black Panther comic, is working on the game.
Marvel’s Spider-Man came out for PS4 in 2018. It was a huge hit, selling over 13.2 million copies.
Update 6/12/2020 8:20 a.m.: Sony has clarified to The Telegraph that Spider-Man: Miles Morales is a remaster of the last game with the Miles Morales campaign acting as expansion content.
Update 6/12/2020 8:50 a.m.: The plot thickens! Now Bloomberg reporter Jason Schreier says a source tells him that Miles Morales is not an expansion slapped onto a remaster.
NEWS: Spider-Man Miles Morales is *not* an expansion or enhancement or remaster, despite a Sony executive's comments this morning, a source tells Bloomberg News. Nor is it Spider-Man 2. It is a brand-new, standalone game similar in scope to Uncharted Lost Legacy.
— Jason Schreier (@jasonschreier) June 12, 2020 So, honestly, who knows what the game is at this point.
Update 6/12/2020 8:55 a.m.: OK, now Insomniac itself is calling Miles Morales a standalone game.
Marvel's Spider-Man: Miles Morales is the next adventure in the Marvel's Spider-Man universe. We will reveal more about this standalone game at a future date.
#MilesMoralesPS5 pic.twitter.com/GOTAvNhUaF — Insomniac Games (@insomniacgames) June 12, 2020 Of course, that still means it could be on the scale of an Uncharted: The Lost Legacy.
" |
16,170 | 2,018 | "Verizon will offer 5G home broadband starting October 1 for $50 to $70 | VentureBeat" | "https://venturebeat.com/2018/09/11/verizon-will-offer-5g-home-broadband-starting-october-1-for-50-to-70" | "Verizon will offer 5G home broadband starting October 1 for $50 to $70
Above: Verizon launched its 5G Home broadband service in October 2018, and is readying its mobile 5G network now.
Verizon announced today that it will begin offering its long-awaited fixed 5G home broadband service on October 1, 2018, starting in four cities: Houston, Indianapolis, Los Angeles, and Sacramento. Dubbed Verizon 5G Home, the service will be available to current Verizon Wireless customers for $50 per month, and new customers for $70 per month.
The revelation of pricing and date details ends two of the biggest mysteries about Verizon’s fixed 5G service, though those details are interestingly complex. Starting on Thursday, September 13 at 8 a.m. ET, Verizon’s new website FirstOn5G.com will provide information and a preorder opportunity for interested 5G customers in those four cities. Consumers who sign up will get Verizon 5G Home free for three months, after which “current Verizon Wireless customers with a qualifying smartphone plan” will pay $50 per month, and others will pay $70 per month, including all taxes and fees — with no contract.
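For readers tallying up the promotion, here is a minimal back-of-the-envelope sketch of the first-year outlay under the terms described above (three free months, then $50 or $70 per month). The figures are illustrative only and assume the pricing stays flat for the full year.

```python
# Illustrative first-year cost of Verizon 5G Home under the promo terms above.
FREE_MONTHS = 3

def first_year_cost(monthly_rate: float) -> float:
    """Total paid over the first 12 months, given the free-month promotion."""
    paid_months = 12 - FREE_MONTHS
    return paid_months * monthly_rate

print(first_year_cost(50))  # existing Verizon Wireless customers -> 450.0
print(first_year_cost(70))  # everyone else -> 630.0
```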
Equally interesting is the fact that Verizon promises “no additional hardware costs,” and is promising customers “typical network speeds around 300 Mbps and, depending on location, peak speeds of nearly 1 Gig, with no data caps.” The service is being pitched at cord-cutters interested in exiting their wired cable broadband packages, and Verizon has previously said that it will be working with both Apple and Google to offer certain free television services over the network.
Verizon is executing on a plan announced at the beginning of 2018 and claiming that it “is the first company to bring 5G broadband internet service to consumers and is expected to be the first to offer 5G mobile service,” though the latter part of that sentence comes with an asterisk. Rival AT&T has promised to launch the first mobile 5G network in the United States across 12 cities, albeit using mobile hotspots rather than smartphones at first.
Motorola has shown a 5G Moto Mod accessory for its just-released Moto Z3 handset, but the accessory was expected to hit stores in 2019; this may indicate that its U.S. launch date will take place earlier.
Samsung and Ericsson are among Verizon’s confirmed hardware partners for the fixed 5G service.
The carrier separately announced today that it will be branding its 5G offerings as the “Ultra Wideband 5G network,” a marketing effort to focus on its large investments in high-bandwidth fiber and millimeter wave infrastructure to support its home and mobile 5G networks. The company is not alone in making these investments, however, as smaller rival T-Mobile has similarly been building a multi-band 5G network across the country, and AT&T has made significant nationwide investments in its own towers.
" |
16,171 | 2,019 | "Verizon: 5G Home won’t expand until second half of 2019 | VentureBeat" | "https://venturebeat.com/2019/01/29/verizon-5g-home-wont-expand-until-second-half-of-2019" | "Verizon: 5G Home won’t expand until second half of 2019
Above: Verizon CEO Hans Vestberg discusses 5G at the 2019 CES.
Last October, Verizon became the first major carrier to launch commercial 5G services — a feat achieved using pre-standards “5G Home” hardware, which the company said it would upgrade to standards-based 5G technology whenever it became available. During today’s Q4 2018 earnings conference call with analysts, Verizon CEO Hans Vestberg said that standards-based 5G home hardware isn’t coming until the second half of 2019, effectively confirming that the four-city 5G Home network won’t expand much beyond its current footprint until then.
Unlike rival AT&T, which waited for standards-compliant 5G hardware guaranteed to work with 5G smartphones, Verizon wanted to be first to market with some form of 5G, even if that meant using unfinished 5G hardware. To meet its 2018 launch date, it installed pre-standards 5G hardware on towers and in customers’ homes, enabling residential users to reach up to 1Gbps speeds — with one caveat. At some point, the network and home devices would need to be updated, either through firmware or replacement hardware, in either case at the carrier’s expense.
Consequently, Verizon had to choose between building a large 5G network with unfinished parts or taking a more gradual approach. The carrier ultimately opted to launch 5G in only four U.S. cities — Houston, Indianapolis, Los Angeles, and Sacramento — and only in parts of those cities. In December, the company said it would wait until standards-based equipment was available to expand further.
Vestberg said today that Verizon’s 5G was “fully deployed in the four cities we have decided” and confirmed that further expansion was “waiting for the CPE equipment.” He explained that getting 5G smartphones to market has become the “first priority” for device makers this year, with 5G CPE home routers consequently taking a back seat. “We expect this year we’ll see CPE hardware for the standard in the second half of 2019,” explained Vestberg, noting that partners — including Motorola and Samsung — were working to get smartphones ready for launch on Verizon’s network.
Even at this late date, it’s unclear when the 5G phone launches and expansions of Verizon’s network to accommodate them will actually take place. The carrier has dodged multiple requests to offer timetables for its 5G rollouts, most recently at CES and again during the call. Asked by an analyst to provide a sense of its anticipated 5G footprint by the end of 2019 and 2020, Vestberg declined, saying it was “nothing we want to disclose for competitive reasons.”
The holdup appears to be largely hardware-related. Vestberg said Verizon has spent years preparing for its 5G rollout but is waiting for the industry to catch up so it can offer both home and mobile services. “We’re going to go fast as soon as we have all the pieces [in place] for 5G Home and 5G mobility …. We need to see that the ecosystem is … equally [as] ready as Verizon is now.”
" |
16,172 | 2,019 | "Report: Verizon 5G Home service too expensive to scale, attracts few users | VentureBeat" | "https://venturebeat.com/2019/03/22/report-verizon-5g-home-service-too-expensive-to-scale-attracts-few-users" | "Report: Verizon 5G Home service too expensive to scale, attracts few users
Above: Verizon launched its 5G Home broadband service in October 2018, and is readying its mobile 5G network now.
Verizon may have been the world’s first major carrier to launch a commercial 5G network, but a new report suggests that its 5G Home service isn’t practically scalable — its short-range 5G “small cells” are expensive to install, reach too few customers, and might not be economically feasible for a nationwide rollout.
That’s the harsh conclusion of research analysts at MoffettNathanson (via MultiChannel), whose “Peek Behind the Curtain of Verizon’s 5G Rollout” report and follow-up conference call today questioned whether the carrier will be able to scale and make money on its fixed 5G network. The researchers focused on findings in Sacramento, one of the first 5G cities, roughly six months after Verizon launched 5G Home there.
According to the report, only 6 percent of homes in tested areas had access to Verizon’s 5G, and under 3 percent of residences in those areas actually subscribed to the 5G service. Moreover, the report said that the millimeter wave-based “cell radii appear much smaller” than expected, which is to say that even more 5G “small cell” broadcasting units might be needed on towers than was previously thought.
“To us, the most interesting statistic isn’t so much the low take rate as it is the relatively low coverage,” the firm said, “as it illustrates the enormity of the challenge of scaling a small cell network, in neighborhood after neighborhood, across the United States.”
There’s no question that building a millimeter wave-based small cell network is challenging — in equal parts due to the cost of new 5G radio hardware and to zoning considerations. Sensing the potential for local and state approval delays, the FCC voted to cut regulatory red tape and limit local fees that could impede the installation of new 5G small cells. Even with federal support, however, carriers still have to get permission from hundreds of cities and towns. Verizon set up a mini-site to ask citizens to lobby local officials to speed up the necessary approvals.
Verizon has paused its 5G Home expansion well short of full coverage in its initial four cities, explaining at the end of January 2019 that standards-based 5G hardware wouldn’t be ready until later this year. Two weeks later, a Sacramento TV station reported that Verizon had only installed 200 5G radios there, covering under 10 percent of the city, and suggested that a full rollout could take years.
MoffettNathanson suggests that Verizon’s small cell installation costs in Sacramento — a mid-market city ranked 35th in size — are lower than they will be in bigger, denser cities such as New York. The analysts aren’t convinced that Verizon will be able to reach 30 million customers who are already served by fiber cable broadband, as the costs won’t be matched or exceeded by “second player” service revenues.
Verizon’s competitors have differed in their approaches to 5G home broadband service.
T-Mobile and Sprint have touted a combined plan to launch 5G broadband services using devices that do not require millimeter wave small cells.
AT&T has focused largely on mobile 5G but expects customers to use personal hotspots for some of their broadband needs.
We’ve reached out to Verizon for comment and will update this article if and when we hear back. The carrier previously said that it will commence mobile 5G service on April 11 in Chicago and Minneapolis, two cities not involved in the 5G Home rollout, with a 30-city mobile 5G deployment this year.
Based on Verizon’s prior statements, it’s highly likely that the initial four 5G Home cities will be converted to combined mobile and home 5G service later this year under the “5G Ultra Wideband Network” name, as more standards-based 5G hardware becomes available.
" |
16,173 | 2,019 | "Disney+ streaming video service launches on November 12 for $7 per month | VentureBeat" | "https://venturebeat.com/2019/04/11/disney-streaming-video-service-launches-in-november-for-7-per-month" | "Disney+ streaming video service launches on November 12 for $7 per month
(Reuters) — Disney unveiled new details on Thursday about Disney+, a family-friendly digital video subscription that is set to debut later this year and compete with Netflix.
The ad-free service will include movies and TV series from Disney and will feature programming from the Marvel superhero universe, the “Star Wars” galaxy, “Toy Story” creator Pixar animation and the National Geographic channel.
It will cost $7 a month or $70 per year.
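As a quick aside on those two price points, the arithmetic below compares the monthly and annual plans exactly as quoted above; it is a simple illustration and ignores taxes or regional pricing differences.

```python
# Monthly vs. annual Disney+ pricing, using the figures quoted above.
monthly_price = 7    # dollars per month
annual_price = 70    # dollars per year

cost_if_paid_monthly = 12 * monthly_price              # 84 dollars per year
annual_savings = cost_if_paid_monthly - annual_price   # 14 dollars
savings_pct = 100 * annual_savings / cost_if_paid_monthly

print(cost_if_paid_monthly, annual_savings, round(savings_pct, 1))  # 84 14 16.7
```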
When the service debuts on Nov. 12, some of the programming available from Disney’s libraries will include:
- Classic Disney animated movies such as 101 Dalmatians and Bambi
- The entire Pixar catalog, including A Bug’s Life and Cars
- Captain Marvel and three other Marvel films
- The first and second Star Wars trilogies
- Live-action movies such as the Pirates of the Caribbean series, Mary Poppins and The Sound of Music
- More than 5,000 episodes of Disney Channel shows such as Hannah Montana
Disney also will create original programming exclusively for the service. According to the company, that will include:
Star Wars
- A new season of animated series Star Wars: The Clone Wars
- A live-action Star Wars series called The Mandalorian, developed by Jon Favreau
- A TV series starring Diego Luna that is a prequel to the movie Rogue One: A Star Wars Story
Marvel
- A series focused on the villain Loki, played by Tom Hiddleston
- A series starring Elizabeth Olsen as Scarlet Witch
Animation
- Monsters at Work, a series inspired by Pixar hit Monsters Inc. Billy Crystal and John Goodman will return as the voices of Mike and Sulley.
Movies
- Remakes of Disney classics such as Lady and the Tramp and Sword in the Stone
- Noelle, a Christmas fantasy adventure starring Anna Kendrick and Bill Hader
- Togo, starring Willem Dafoe in a story about a famous sled-dog
Television
- A new High School Musical series
- Diary of a Female President, a comedy series about a 12-year-old Cuban-American girl on a journey to become president of the United States
Non-fiction
- Marvel’s 616, a documentary series exploring the intersection between Marvel stories and characters and the real world
- Be Our Chef, a food competition show in which families compete and the winner’s dish will be served at Walt Disney World
- Rogue Trip, a travel guide to places an average tourist is least likely to visit
(Reporting by Lisa Richwine; Editing by Lisa Shumaker and Leslie Adler)
" |
16,174 | 2,020 | "Qualcomm promises more affordable 5G chips and longer-range mmWave in 2020 | VentureBeat" | "https://venturebeat.com/2019/09/06/qualcomm-promises-more-affordable-5g-chips-and-longer-range-mmwave-in-2020" | "Qualcomm promises more affordable 5G chips and longer-range mmWave in 2020
On the running list of issues with early 5G devices, high prices and inconsistent millimeter wave performance have emerged as non-trivial concerns for both consumers and carriers — problems that chip and device makers will need to solve. Today, Qualcomm announced that it’s tackling these issues with several new 5G components, all of which will start appearing in consumer devices next year.
For consumers, the biggest news is that 5G modems will be coming to a much wider price range of devices in 2020: Qualcomm will be expanding 5G support past the current Snapdragon 855 to additional 8-series, 7-series, and 6-series chips next year. While flagship phone customers can look forward to superior 5G performance in a post-855 chip that’s yet to be announced, Qualcomm will also bring 5G to non-flagship models, including a 5G modem and RF solution for 6-series chips, and a fully integrated 7-series 5G system-on-chip manufactured using a 7-nanometer process.
The goal, Qualcomm explains, is “to make 5G accessible to more than 2 billion smartphone users” by broadening the types of devices that can include the technology. All three of the Snapdragon tiers will support both millimeter wave and sub-6 frequencies, as well as both standalone and non-standalone 5G standards, dynamic spectrum sharing, and multi-SIM technologies, enabling OEMs to offer their products globally. Non-phone devices are also likely to benefit from the broader 5G chip support.
Twelve OEMs are already on board to use the integrated 7-series 5G SoC, which is now being called the Snapdragon 7 Series 5G Mobile Platform, and entering commercial production in the fourth quarter of 2019. Devices from LG, Motorola, Nokia/HMD Global, Oppo, and Vivo are expected to hit the market “soon thereafter,” starting in early 2020.
Above: Qualcomm’s new QTM527 millimeter wave antenna, roughly the size of a penny, is designed to be used in home broadband modems.
Qualcomm has also developed a new 5G antenna system, QTM527, that will extend the range of millimeter wave signaling beyond current limitations, specifically for fixed wireless solutions such as home broadband modems. The QTM527 promises to let 5G modems reach millimeter wave towers at typical distances of 1.7 kilometers (1.06 miles) in rural environments or 1.1 kilometers (0.68 miles) in urban settings, though maximum unobstructed distances can be even longer.
Carriers will be able to use the QTM527 to deliver even faster cellular broadband speeds than existing cable modem-rivaling solutions. Based on aggregation of 800MHz of millimeter wave bandwidth, carriers could deliver up to 7Gbps download speeds, which would be seven times faster than the premium gigabit fiber packages offered to consumers. 5G modems with the improved performance are likely to become available in 2020.
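As a rough sanity check on that 7Gbps figure, peak throughput scales with the aggregated channel bandwidth times the achievable spectral efficiency. The sketch below is not Qualcomm's methodology; the spectral-efficiency value is an assumption chosen only to show that roughly 8 to 9 bits per second per hertz over 800MHz lands in the advertised range.

```python
# Rough peak-throughput estimate: bandwidth (Hz) x spectral efficiency (bit/s/Hz).
# The efficiency value below is an illustrative assumption, not a Qualcomm figure.
def peak_throughput_gbps(bandwidth_mhz: float, bits_per_sec_per_hz: float) -> float:
    return bandwidth_mhz * 1e6 * bits_per_sec_per_hz / 1e9

print(peak_throughput_gbps(800, 8.75))  # -> 7.0 (Gbps), in line with the claim
```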
Going forward, Qualcomm says that it’s going to place an increased marketing emphasis on the value of its end-to-end 5G modem, RF transceiver/front end, and antenna solutions, referring to them collectively as the “Snapdragon 5G Modem-RF System.” While the company isn’t changing the way it’s licensing and selling products to OEMs, it’s hoping to make clear that there are going to be performance and power efficiency advantages when all of these parts come from one vendor, rather than pairing a Qualcomm modem with, say, a Broadcom front end.
Most OEMs are already using end-to-end Qualcomm solutions, the company says, including all major players — though recent partner Apple, currently an outlier in this regard, was notably not mentioned or addressed. Choosing a fully integrated Qualcomm chip and wireless package enables OEMs to focus more on differentiating their devices’ look and feel, rather than on the harder engineering challenge of getting rival chips to perform ideally together.
" |
16,175 | 2,019 | "Verizon's new 5G home router has Wi-Fi 6, Alexa, and self-setup option | VentureBeat" | "https://venturebeat.com/2019/10/21/verizons-new-5g-home-router-has-wi-fi-6-alexa-and-self-setup-option" | "Verizon’s new 5G home router has Wi-Fi 6, Alexa, and self-setup option
If you live in one of a handful of U.S. cities, Verizon has some exciting news today: It’s now offering customers a true second-generation 5G home broadband router with substantial new features, including integrated Wi-Fi 6 support, Amazon’s Alexa digital assistant, and the ability to self-install without waiting for a Verizon appointment.
As a successor to Verizon’s previously released 1A and 1B models, the “new 5G Home Internet router” goes well beyond its predecessors, which included prior-generation Wi-Fi 5 (802.11ac) support that was fine for devices released prior to 2019. The addition of Wi-Fi 6 (802.11ax) support allows the new router to improve throughput and multi-device handling with 2019 devices, such as Samsung’s Galaxy S10 and Note 10 phones and Apple’s iPhone 11s, which are at the vanguard of the latest Wi-Fi standard.
Verizon alternately describes the router as the “first commercially available tri-band 802.11ax (Wi-Fi 6) router to maximize throughput and coverage in the home” and “the first commercially available Wi-Fi 6 router complete with parental controls.” While there are other Wi-Fi 6 routers on the market, none has integrated 5G functionality, and most of them are not as aesthetically neutral as the Verizon offering.
Though it’s unclear at this point who is making the new 5G router for Verizon, the specs are impressive, and the carrier continues to promise “typical” download speeds of 300Mbps with peaks of 1Gbps — some prior 5G home customers have seen typical speeds 1.5 to 2 times faster than that.
There aren’t any specific latency guarantees, beyond that it’s “low latency for a better gaming experience” with “ultra-low lag” during video chats.
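To make the speed comparison above concrete, the snippet below simply multiplies the promised 300Mbps baseline by the 1.5x to 2x reports; it illustrates the implied range, not a measurement.

```python
# Implied "typical" speeds if real-world results run 1.5x-2x the 300Mbps promise.
baseline_mbps = 300
low_mbps, high_mbps = 1.5 * baseline_mbps, 2 * baseline_mbps
print(f"{low_mbps:.0f}-{high_mbps:.0f} Mbps")  # 450-600 Mbps
```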
Verizon’s router also includes standalone speaker functionality with Amazon Alexa support. The device has a 10-watt speaker inside with the ability to deliver music streamed over Bluetooth or Wi-Fi, and it promises full compatibility with over 100,000 Alexa skills. Combining a router with speaker and assistant functionality makes a lot of sense. Google recently debuted Nest Wifi Routers with those features but dropped the ball on including Wi-Fi 6 functionality.
The self-installation option is another major step forward for Verizon’s 5G Home service. Previously, the company required an installer to visit each customer’s home and make adjustments to the 5G hardware — a step needed to guarantee that millimeter wave 5G signals would be picked up through a window and delivered to the router. Verizon’s new router appears to come with a pleasant-looking window unit that can be mounted without the need for an installer.
As of today, Verizon’s 5G Home service is available solely in “parts of” Chicago, Houston, Indianapolis, Los Angeles, and Sacramento. After one year on the market, the scope of coverage is still believed to be quite limited in each city.
The company says it’s still offering the same $50 monthly pricing for existing Verizon wireless customers, and $70 monthly pricing for customers without Verizon phones. Service is free for the first three months, and there’s no additional charge for the new router hardware.
" |
16,176 | 2,020 | "U.S. Cellular claims record-breaking 5G mmWave data call spanning 5km | VentureBeat" | "https://venturebeat.com/2020/09/17/u-s-cellular-claims-record-breaking-5g-mmwave-data-call-spanning-5km" | "U.S. Cellular claims record-breaking 5G mmWave data call spanning 5km
Above: U.S. Cellular demonstrates smart farming via its rural-focused wireless network.
While Verizon, AT&T, and T-Mobile spar over the size and performance of their 5G networks, regional carrier U.S. Cellular has been building its own alternative, focused largely on serving roughly 5 million customers in America’s heartland. Today, U.S. Cellular and network partner Ericsson claimed a significant breakthrough in their efforts to bring meaningfully fast 5G to rural populations. Using a commercial millimeter wave 5G network, they made a data call spanning a record-breaking distance of over 5 kilometers, a significant improvement over the last such milestone.
Millimeter wave 5G’s incredible download speeds have been offset by a relatively limited transmission range, meaning early mobile 5G devices can lose mmWave connectivity when users enter buildings or move a couple of blocks away from transmitting towers. By comparison, “fixed 5G” broadband modems intended for home and small business use are seeing dramatic improvements in range, thanks to Qualcomm’s QTM527 mmWave antenna.
QTM527 debuted in September 2019 with a promised 1-mile rural broadcasting range, but last month Ericsson tests with Casa 5G broadband modems revealed that the hardware was capable of reaching 2-mile distances.
In Janesville, Wisconsin, U.S. Cellular used a QTM527-based fixed 5G broadband modem and Ericsson 5G network hardware for a data session ranging “more than 5 km,” which translates to at least 3.1 miles — 3 times Qualcomm’s initial promise. The companies say this range challenges conventional wisdom that mmWave is only appropriate for high-density network deployments, demonstrating that the high-frequency spectrum can be used to service customers in rural and suburban environments. Practically, each expansion of transmission range means fewer towers will be needed to deliver ultra-fast 5G to businesses and homes.
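The range comparison above is plain unit conversion; the sketch below reproduces it against the earlier QTM527 milestones mentioned in this piece, namely the 1-mile launch claim and the 2-mile Ericsson/Casa test.

```python
# Compare the reported 5 km mmWave data call against earlier range milestones.
KM_PER_MILE = 1.609344

def km_to_miles(km: float) -> float:
    return km / KM_PER_MILE

reported_miles = km_to_miles(5.0)       # ~3.11 miles
vs_launch_claim = reported_miles / 1.0  # ~3.1x the 1-mile launch claim
vs_casa_test = reported_miles / 2.0     # ~1.6x the 2-mile Ericsson/Casa test
print(round(reported_miles, 2), round(vs_launch_claim, 1), round(vs_casa_test, 1))
```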
That could be very good news for U.S. Cellular and for rural customers, who have historically been “underserved” by both wired and wireless service offerings due to a combination of network installation challenges and corporate penny-pinching. Cable and cellular companies alike have balked at the expense of building out the fiber lines and towers needed to deliver truly high-speed internet access to rural users, forcing many potential customers to use slower-speed technologies instead. T-Mobile, which promised to offer 5G to rural and underserved customers, upset rivals by blanketing the U.S. with a slow form of 5G that’s easy to deploy across long distances.
U.S. Cellular appeared set to adopt a similar strategy.
Perhaps unsurprisingly, extended-range signaling appears to compromise the peak bandwidth 5G is capable of delivering up close. The test hit speeds exceeding 100Mbps, which would be a major improvement for sluggish rural broadband, but nowhere near the multi-Gbps download rates mmWave hardware is capable of.
The companies suggest extended-range mmWave technology will enable new 5G use cases. As just one example, bringing high-speed broadband to rural schools, hospitals, and town halls means public buildings will reap benefits ranging from mixed reality services to faster general purpose internet access. But U.S. Cellular will have to actually deploy the mmWave hardware widely before users can take advantage of it — a process that could take a year or two, depending on the company’s ambitions. Since the same technology is available to other carriers, it’s highly likely to fuel new confidence in mmWave’s possibilities within the U.S., as well as elsewhere in the world.
" |
16,177 | 2,020 | "Zoom's daily participants jumped from 10 million to over 200 million in 3 months (Updated) | VentureBeat" | "https://venturebeat.com/2020/04/02/zooms-daily-active-users-jumped-from-10-million-to-over-200-million-in-3-months" | "Zoom’s daily participants jumped from 10 million to over 200 million in 3 months (Updated)
Above: Dance instructor Anneliese Suda teaches a ballet class through Zoom, April 1, 2020
(Reuters) — Zoom’s daily users ballooned to more than 200 million in March from a previous maximum total of 10 million, the video conferencing app’s CEO Eric Yuan said on Wednesday, as the company fought to dispel concerns over privacy and “Zoombombing.”
Update: Yuan was referring to daily meeting participants, which can be counted multiple times, not daily active users, which cannot.
The use of Zoom and other digital communications has soared with political parties, corporate offices, school districts, organizations, and millions across the world working from home after lockdowns were enforced to slow the spread of the coronavirus.
“To put this growth in context, as of the end of December last year, the maximum number of daily meeting participants, both free and paid, conducted on Zoom was approximately 10 million,” Yuan wrote in a letter to Zoom users on Wednesday.
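Taken at face value, the two figures above work out to roughly a twentyfold jump in peak daily meeting participants over about three months; the arithmetic, using the numbers exactly as reported, is simply:

```python
# Growth in Zoom's peak daily meeting participants, per the reported figures.
december_peak = 10_000_000    # approximate daily meeting participants, Dec 2019
march_peak = 200_000_000      # reported daily meeting participants, March 2020
print(march_peak / december_peak)  # 20.0, i.e. a ~20x jump
```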
Yuan said that Zoom usage has taken off over the last few weeks, with more than 90,000 schools across 20 countries using its video conferencing services to conduct classes remotely.
However, the huge influx of users on its platform has raised a lot of issues for the company — mainly around privacy.
“We recognize that we have fallen short of the community’s — and our own — privacy and security expectations,” Yuan said. “For that, I am deeply sorry.”
On Monday, the Federal Bureau of Investigation’s Boston office issued a warning about Zoom, telling users not to make meetings on the site public or share links widely after it received two reports of unidentified individuals invading school sessions, a phenomenon known as “Zoombombing.” A couple of days later, Elon Musk’s rocket company SpaceX banned employees from using the Zoom app in a memo seen by Reuters, saying the app had “significant privacy and security concerns.”
Yuan acknowledged the problems in his letter, saying, “over the next 90 days, we are committed to dedicating the resources needed to better identify, address, and fix issues proactively.”
Microsoft’s business-focused Teams app was used by 1.56 million mobile users on Monday, while Slack had less than 500,000 mobile users.
Research firm Apptopia estimated that Zoom’s daily U.S. mobile user volumes rose to a record 4.84 million for the same day.
Shares of Zoom, which had been on a tear this year, have slipped over the last three days, as the company rushes to plug privacy issues plaguing its platform. The stock, which debuted last year at $36, closed down about 6% at $137 on Wednesday.
(Reporting by Subrat Patnaik in Bengaluru, editing by Bernard Orr.)
" |
16,178 | 2,020 | "Microsoft Teams breaks daily record with 2.7 billion meeting minutes, tops mid-March high by 200% | VentureBeat" | "https://venturebeat.com/2020/04/09/microsoft-teams-breaks-daily-record-with-2-7-billion-meeting-minutes-tops-mid-march-high-by-200" | "Microsoft Teams breaks daily record with 2.7 billion meeting minutes, tops mid-March high by 200%
The remote working shift spurred by COVID-19 continues to drive explosive growth for companies providing the tools and platforms to connect people across distances.
The latest evidence of this trend comes from Microsoft, which today announced that Microsoft Teams set a new daily record of 2.7 billion meeting minutes, up 200% from the 900 million minutes it recorded on March 16, when many lockdowns were just going into effect.
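As a quick sanity check on those figures (simple arithmetic on the numbers quoted above, nothing sourced beyond them), a 200% increase means the daily total roughly tripled rather than doubled:

```python
# Sanity check: 2.7 billion daily meeting minutes versus 900 million on March 16.
march_16_minutes = 900_000_000
new_record_minutes = 2_700_000_000
increase = (new_record_minutes - march_16_minutes) / march_16_minutes
print(f"{increase:.0%} increase")  # 200% increase, i.e. the total tripled
```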
The company had previously reported that Microsoft Teams, which launched worldwide in March 2017 , topped 44 million daily active users in mid-March , after having just passed 32 million DAUs earlier in the month. Back in November, the company had 20 million DAUs.
Microsoft didn’t update the DAU number in this latest report.
The surge has caused some functionality issues for Microsoft Teams, though those issues seem to have been resolved.
The latest numbers come as Microsoft finds itself facing intense competition from Zoom, which last week disclosed that its daily users increased to more than 200 million in March from 10 million three months ago. However, there are growing signs of a backlash against Zoom, owing to some questionable security and privacy practices.
While Zoom has vowed to address those, that may create a competitive opening for Microsoft.
Late last month, Microsoft unveiled several new consumer-oriented features for Teams.
Today, it announced a few additional ones designed to improve meeting management and close the gap with some popular features already available on Zoom.
Microsoft said custom backgrounds are now generally available in Teams and that the ability to upload custom images is coming in November. The “raise hand” feature should be introduced globally this month. Meeting organizers can now end a session for everyone with one click. And organizers will also be able to download reports that track participation, such as when each person enters and leaves a meeting.
Later this year, Microsoft expects to introduce AI-driven real-time noise suppression to reduce background noises.
Future of work
Microsoft also released a new report dubbed the Work Trend Index.
The index draws on signals the company is tracking across its products, including Microsoft 365, Bing, LinkedIn, and other productivity tools, to monitor changes in work habits and productivity.
Among the initial findings, users are now twice as likely to turn on video in Teams as they were a month ago, indicating a desire to feel more connected. Microsoft said total video calls rose 1,000% in March. While people in Norway and the Netherlands used video for 60% of their calls, India saw only 22% video use, likely due to differences in available connectivity.
Microsoft has also seen big increases in the number of streaming events on Teams and the amount of usage on mobile phones. And it has recorded a bigger gap between the first and last calls of the day, likely due to greater flexibility in people’s schedules.
How permanent any of these shifts will be when the coronavirus crisis ends is an open question. But the current lockdowns are certainly exposing a far greater number of people to remote work and learning options.
" |
16,179 | 2,019 | "Square leverages Weebly acquisition to bridge offline and online commerce | VentureBeat" | "https://venturebeat.com/2019/03/20/square-leverages-weebly-acquisition-to-bridge-offline-and-online-commerce" | "Square leverages Weebly acquisition to bridge offline and online commerce
[Image: Square Online Store: Revamped]
Square is overhauling two of its omnichannel merchant products, as the company continues to extend its reach beyond simple mobile payments.
While Square is better known for its core service that allows merchants to accept card payments in-store through a mobile device, the San Francisco-based company has branched out into numerous commerce-related verticals as it looks to capture a bigger share of the small business market.
One of these products is Square Online Store , launched originally as Square Market back in 2013 , which is a free service that allows merchants to set up a basic ecommerce store.
For existing Square merchants, the Online Store offering served as an easy conduit to get online and selling more goods (with Square, of course, powering the payments), though in reality it could be used by anyone to launch their first online storefront. However, Square Online Store was limited in its scope, as it only offered a single page and lacked useful integrations with other ecommerce tools.
The revamped Square Online Store is a different proposition in its entirety, as it now promises access to real-time inventory; integration with Instagram sales; integrated shipping labels; in-store pickup service; support for Square gift cards ; and — crucially — synchronization with Square’s in-store point-of-sale system, Square POS.
Above: Square Online Store: New user interface and features
Interestingly, Square is also now targeting restaurants with the new Square Online Store, allowing them to accept orders online as well as in-store, which should make it a complementary offering to its other food-focused services, including Caviar.
Design for life
This upgrade also helps to shine a light on one of Square’s big acquisitions from 2018: Weebly, a popular website builder and webhosting service. It was clear that Square had big plans to integrate Weebly into its products as part of its omnichannel push, and thus Square Online Store today becomes one of two products that will be fully integrated into Weebly.
The other product that will now integrate with Weebly is Square for Retail , a dedicated point-of-sale platform that Square launched for retailers back in 2017.
Today, the company announced that Square for Retail has been redesigned with a bunch of new features, including the ability to create a website via Weebly and connect their retail catalog to the Square Online Store — this means merchants can now synchronize everything across their online and offline outlets, including pricing, inventory, and all related data.
Above: Square for Retail These are notable upgrades by Square, as it not only joins the dots for merchants already selling offline and online, but it also encourages brick-and-mortar merchants to set up shop on the web through easy-to-use website-building tools. And for customers, these integrations mean that they can order and pay for items through a company’s website and collect their purchases in-store during business hours.
“It’s crucial that sellers are able to reach their buyers on any channel, whether in person, online, or in apps,” said Square’s head of ecommerce David Rusenko.
This is the latest in a line of moves by Square designed to give it a tighter grip on merchants’ activities. Earlier this year, for example, Square launched a new debit card that gives sellers instant access to their funds, in addition to discounts with other sellers within the Square ecosystem.
To use something of a cliché, Square wants to create a seamless experience for both the merchant and the customer. And, importantly, it wants to play a part in all transactions — this is how it makes its money, whether a sale takes place online or offline.
" |
16,180 | 2,020 | "Square launches Online Checkout to take on PayPal | VentureBeat" | "https://venturebeat.com/2020/05/07/square-launches-online-checkout-to-take-on-paypal" | "Square launches Online Checkout to take on PayPal
Square is introducing a new payment tool designed for small businesses looking to rapidly transition to ecommerce.
With billions of people around the world forced to adhere to lockdown and social distancing measures driven by the COVID-19 crisis, this has led to a sizable uptick in people buying things online. Whether this trend represents a permanent shift from the status quo or a temporary blip due to shelter-in-place policies is up for debate. But with the entire retail industry facing significant headwinds for the foreseeable future, if ever there was a time to embrace ecommerce, it would be now.
With Square Online Checkout, the payments processor is looking to capitalize on this shift by offering companies an easier way to accept online card payments — this works on any website, social media profile, instant messaging app, or even SMS.
Crucially, this sees Square challenge PayPal more directly in the online payments sphere.
How it works
Sellers not already using Square can sign up to Online Checkout, and from the dashboard they can create a link for any goods or service that they want to accept payment for — this could be cake-baking or an online fitness class — and give the link a title and a corresponding dollar amount.
Above: Square Online Checkout dashboard From the dashboard, sellers can copy the link and paste it into an email, WhatsApp message, Instagram bio, or anywhere else.
Above: Square Online Checkout link in a social media bio Alternatively, they can save the link as a button, customize it, and embed it on any website or blog. The text on this button can be tweaked so that if someone is looking for a donation rather than a sale, it could read “Donate Now” instead.
Above: Square Online Checkout: Embeddable button With Online Checkout, Square is catering to all manner of business, regardless of whether they want a Square-powered website — or any website, for that matter. An online fitness instructor who is temporarily giving classes over Zoom might not have any desire to create an entire online store, so a solution such as this could work well. In terms of fees, Square charges 2.9% + $0.30 per transaction.
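To make that fee structure concrete, here is a small, hypothetical calculation based only on the rate quoted above (2.9% + $0.30 per transaction); the sale amounts are invented for illustration:

```python
# Hedged illustration of the quoted Online Checkout pricing (2.9% + $0.30 per
# transaction). The sale amounts below are made up.
def square_fee(amount_usd: float) -> float:
    return round(amount_usd * 0.029 + 0.30, 2)

for amount in (10.00, 25.00, 100.00):
    fee = square_fee(amount)
    print(f"${amount:.2f} sale -> ${fee:.2f} fee, seller nets ${amount - fee:.2f}")
```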
Options
There are plenty of other ways to accept online payments, of course, but Square is pitching Online Checkout as a friction-free alternative. The end user doesn’t need a PayPal account, for example, and the merchant may not even want to offer PayPal anyway.
There are comparisons to be made here with Stripe Checkout too. However, Square’s incarnation seems to be a simpler, low-tech option for those looking to accept payments on the fly, similar to PayPal, except here, all the buyer needs is a name, email address, and credit card number to complete the transaction.
It is worth noting that Square Online Checkout also offers Apple Pay and Google Pay as options, though what you see will very much depend on the browser and device that you’re using. For example, iPhone users will only see Apple Pay as an option.
Above: Checkout powered by Square While Square is perhaps better known for point-of-sale software and devices that allow offline merchants to easily accept card payments, it has been pushing deeper into the online realm in recent years. Last year, for example, Square partnered with courier network Postmates to bring on-demand deliveries to more restaurants and retailers. Square also allows businesses to set up their own online store — powered by Square’s payment tools, of course.
In many ways, Square has taken the opposite journey to that of PayPal, which began life as an online payments company before transitioning into offline payments over the past decade. Square, meanwhile, was already acutely aware of the need to support online retailers as its recent activities show — COVID-19’s impact on brick-and-mortar merchants merely underscores the need to double down on these efforts.
" |
16,181 | 2,019 | "Uber restricts drivers' app access in NYC to comply with regulation | VentureBeat" | "https://venturebeat.com/2019/09/17/uber-restricts-drivers-app-access-in-nyc-to-comply-with-regulation" | "Uber restricts drivers’ app access in NYC to comply with regulation
(Reuters) — Uber on Tuesday will begin limiting drivers’ access to its app in New York City to comply with regulation aimed at boosting drivers’ pay and easing congestion in Manhattan, laws that Uber says will have unintended consequences.
Uber’s move to lock out drivers at times and in areas of low demand comes just months after rival Lyft implemented similar measures in response to city regulation.
Both companies oppose the unprecedented rules, saying they will prevent drivers from earning money and cut off low-income New Yorkers in remote areas not serviced by regular taxis, a claim the city rejects.
“Time and again we’ve seen Mayor (Bill) de Blasio’s TLC pass arbitrary and politically-driven rules that have unintended consequences for drivers and riders,” Uber said in a statement on Monday.
New York City’s Taxi and Limousine Commission (TLC) last year implemented several laws challenging the way ride-share companies operate in North America’s largest city, one of the industry’s largest markets.
The agency’s acting commissioner, Bill Heinzen, in a statement on Monday defended the laws, saying they held companies accountable and prevented Uber and Lyft from oversaturating the market at drivers’ expense.
New rules cap the number of app-based, for-hire cars and established minimum pay for the city’s 80,000 ride-share drivers based on how much time they spend transporting passengers.
The laws also limit the time drivers spend “cruising” — driving to or waiting to pick up new passengers. Starting in February, ride-share companies have to reduce cruising rates by 5% and later by 10%, down from currently 41%. Non-compliance can result in fines or even the inability to operate in the city.
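For illustration only, a cruising rate along the lines the article describes could be computed as the share of on-road time spent without a passenger. The field names and numbers in the sketch below are invented, and this is not the TLC's official formula:

```python
# Hypothetical sketch: estimate a cruising rate as minutes without a passenger
# divided by total on-road minutes. Data and field names are made up.
trips = [
    {"passenger_minutes": 22, "cruising_minutes": 14},
    {"passenger_minutes": 35, "cruising_minutes": 20},
    {"passenger_minutes": 18, "cruising_minutes": 16},
]

cruising = sum(t["cruising_minutes"] for t in trips)
total = cruising + sum(t["passenger_minutes"] for t in trips)
print(f"cruising rate: {cruising / total:.0%}")  # 40% for this made-up sample
```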
The rules are aimed at reducing congestion in Manhattan, where ride-share vehicles make up close to a third of peak time traffic, according to the TLC.
Uber said there was no evidence showing the steps would ease congestion. The company supported a $2.75 congestion surcharge implemented for Manhattan ride-share trips earlier this year.
Lyft in June changed its app to lock out drivers during low demand. The company said it supports drivers during the change, for example by showing them areas with high demand or times during which restrictions are lifted.
The New York Taxi Workers Alliance, a union representing taxi and app-based drivers, said the companies were trying to scare drivers.
“Uber is now spreading fear and disinformation to New York drivers, attempting to convince workers that rules protecting their livelihoods are to blame for Uber’s greedy policies,” the union said in a statement.
(Reporting by Tina Bellon in New York; Editing by Dan Grebler)
" |
16,182 | 2,020 | "LinkedIn open-sources DeText, a framework for natural language processing tasks | VentureBeat" | "https://venturebeat.com/2020/07/28/linkedin-open-sources-detext-a-framework-for-natural-language-processing-tasks" | "LinkedIn open-sources DeText, a framework for natural language processing tasks
LinkedIn today released DeText , an open source framework for natural language process-related ranking, classification, and language generation tasks. It leverages semantic matching, using deep neural networks to understand member intents in search and recommender systems. As a general framework, LinkedIn says it can be applied to a range of tasks, including search and recommendation ranking, multi-class classification, and query understanding.
According to LinkedIn senior engineering manager Weiwei Guo, DeText was designed with enough flexibility to meet the requirements of different production services. It’s powered by “state-of-the-art” algorithms incorporated in an end-to-end model where the variables are jointly updated, but it attempts to balance its overall effectiveness with high efficiency.
“The framework allows users to better utilize models and embeddings across real-world applications,” Guo told VentureBeat via email. “It has been applied at LinkedIn across search and recommendation ranking, query intent classification, and query auto-completion, with significant improvements in relevance ranking for members searching people and jobs.”
DeText contains multiple components, all of which can be customized via preloaded templates:
An embedding layer that converts a sequence of words into a matrix, a set of numbers arranged in rows and columns. (Matrices are often used to represent the data that feeds into AI models.)
Models for text encoding, which map text data into fixed-length embeddings, or numerical representations from which algorithms can learn.
An interaction layer that generates features based on the above-mentioned text embeddings.
Feature processing that combines traditional features with the interaction features (deep features) in jointly trained wide linear models and deep neural networks. (In this context, features refer to individual measurable properties and characteristics of phenomena being observed.)
An MLP layer that combines wide and deep features.
Running DeText requires creating and launching a dev environment with the necessary dependencies, including Python. But once it’s installed, an example model can be trained on the sample data set from the GitHub repository.
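To make the component flow above more concrete, here is a minimal, illustrative sketch of a wide-and-deep text ranker written in PyTorch. It is not DeText's actual API or architecture; every class name, dimension, and the mean-pooling "encoder" below are invented purely for illustration:

```python
# Illustrative sketch only: text encoder -> interaction features -> wide + deep
# combination -> MLP, mirroring the component description above, NOT DeText's API.
import torch
import torch.nn as nn

class ToyWideAndDeepRanker(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=64, num_wide_features=8):
        super().__init__()
        # Embedding layer plus a crude text encoder (mean pooling over tokens)
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)
        # MLP over the combined deep (interaction) and wide (traditional) features
        self.deep = nn.Sequential(
            nn.Linear(embed_dim * 2 + num_wide_features, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, query_tokens, doc_tokens, wide_features):
        q = self.embedding(query_tokens)   # fixed-length query embedding
        d = self.embedding(doc_tokens)     # fixed-length document embedding
        interaction = q * d                # simple elementwise interaction feature
        x = torch.cat([interaction, q - d, wide_features], dim=1)
        return self.deep(x).squeeze(1)     # one relevance score per (query, doc) pair

model = ToyWideAndDeepRanker()
scores = model(torch.randint(0, 10000, (4, 12)),   # 4 queries, 12 tokens each
               torch.randint(0, 10000, (4, 30)),   # 4 candidate documents
               torch.randn(4, 8))                  # 8 traditional (wide) features
print(scores.shape)  # torch.Size([4])
```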
“Deep learning-based natural language processing has the potential to deepen how search and recommender systems understand human intent. Yet the ability to leverage models … in commercial applications remains unwieldy due to its heavy computational load, especially when it comes to ranking results and classifying text,” Guo continued. “DeText can be thought of as a cordless drill that allows users to easily swap and optimize natural language processing models, depending on the use case.” LinkedIn’s use of AI is pervasive. In October 2019, the Microsoft-owned platform pulled back the curtains on a model that generates text descriptions for images uploaded to LinkedIn, achieved using Microsoft’s Cognitive Services platform and a unique LinkedIn-derived data set. LinkedIn’s Recommended Candidates feature learns the hiring criteria for a given role and automatically surfaces relevant candidates in a dedicated tab. And its AI-driven search engine employs data like the kinds of things people post on their profiles and the searches candidates perform to produce predictions for best-fit jobs and job seekers. Moreover, LinkedIn’s AI-driven moderation tool automatically spots and removes inappropriate user accounts.
" |
16,183 | 2,018 | "Tobii Pro VR Analytics turns your eye movements into design and training data | VentureBeat" | "https://venturebeat.com/2018/05/30/tobii-pro-vr-analytics-turns-your-eye-movements-into-design-and-training-data" | "Tobii Pro VR Analytics turns your eye movements into design and training data
[Image: Tobii Pro VR Analytics enables companies to create virtual store layouts and packages, seeing which win the most attention from consumers.]
Eye movement tracking might sound like a dystopian nightmare, but as the technology is continuing to improve and beginning to go mainstream, the tracking data is becoming more useful. Tobii, maker of frighteningly accurate eye-tracking technology solutions, today announced Tobii Pro VR Analytics, a tool that enables developers and researchers to visualize what virtual reality headset users are actually looking at in simulated worlds.
While the value of eye-tracking analytics might seem abstract, concrete examples show its game-changing potential across industries. On the retail front, Tobii suggests that stores and brands could test alternative shop layouts and package designs to see which maximize purchasing. Educators can train pilots, surgeons, and factory workers to focus correctly on their tasks while avoiding distractions. And construction companies can lay out building interiors with potential signage and emergency exits, thereby optimizing safety prior to construction.
Tobii foreshadowed the value of eye-tracking analytics last November when it worked with a metal foundry on a VR-assisted worker training program to reduce accidents and reveal aluminum casting process inefficiencies. After employees gave their consent to have eye movements tracked, the foundry examined eye data to better “understand behaviors that are intuitive to a skilled performer but difficult to articulate to the novice,” including the extreme eye-level focus and concentration required to safely make parts out of molten metal.
“Using eye tracking within a VR environment helps us better understand how people navigate around a space,” explained David Watts of the human factors consultancy CCD Design & Ergonomics. “We want to bring evidence into the design process, and the visualizations tell us what people actually look at and how their attention is drawn to different design interventions we make. This methodology is so much more powerful than relying on our own intuition about what does and doesn’t work. It also provides a great visual record to demonstrate behaviors to others in the design team.”
Tobii Pro VR Analytics is a Unity plugin, designed to work with existing Unity 3D environments and PC VR headsets that have Tobii eye-tracking technology. Once enabled, the tool instantly displays visualizations of eye movements, along with navigation and interaction data. Heat, opacity, and scene/path overview maps let researchers see which areas received the most attention, as well as analyzing how multiple people moved through a scene to determine bottlenecks and foot tracking patterns. Tobii Pro VR Analytics is available through Tobii now.
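For a rough sense of what sits behind such visualizations, the snippet below bins simulated gaze points into a simple 2D heat map with NumPy. It is a generic illustration of the idea, not Tobii's implementation or API, and all coordinates and parameters are made up:

```python
# Minimal sketch of the aggregation behind a gaze "heat map": bin recorded gaze
# hit points into a 2D grid so heavily viewed regions stand out.
import numpy as np

def gaze_heatmap(gaze_points, bins=64, extent=(0.0, 1.0, 0.0, 1.0)):
    """gaze_points: array of shape (n, 2) with normalized (x, y) gaze coordinates."""
    x, y = gaze_points[:, 0], gaze_points[:, 1]
    heat, _, _ = np.histogram2d(
        x, y, bins=bins, range=[[extent[0], extent[1]], [extent[2], extent[3]]]
    )
    return heat / max(heat.max(), 1e-9)  # normalize so the hottest cell is 1.0

# Example: 10,000 simulated fixations clustered around a shelf at (0.3, 0.6)
points = np.clip(np.random.normal(loc=[0.3, 0.6], scale=0.1, size=(10000, 2)), 0, 1)
heat = gaze_heatmap(points)
print(heat.shape, heat.max())  # (64, 64) 1.0
```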
" |
16,184 | 2,019 | "Why HTC targeted Vive Pro at gamers, and why Pro Eye won't replace it | VentureBeat" | "https://venturebeat.com/2019/03/03/htc-interview-why-it-targeted-vive-pro-at-gamers-and-why-pro-eye-wont-replace-it" | "Why HTC targeted Vive Pro at gamers, and why Pro Eye won’t replace it
[Image: HTC's Vive Pro.]
HTC has a lot of new VR headsets. There’s much to learn about the Vive Cosmos, but there are two new additions coming to the enterprise side too. The upcoming Vive Pro Eye improves on the first Pro with integrated eye-tracking, for example. Meanwhile, the newly announced Vive Focus Plus succeeds a three-month-old headset with new six degrees of freedom (6DOF) controllers.
Despite surpassing their predecessors, though, neither of these headsets will be fully replacing them. Why is that? I put that question to Vive General Manager Daniel O’Brien at MWC this week. He told me that it was down to the difference between consumer and enterprise markets. “It’s really about — when you’re talking about enterprise — it’s a very long lead sales times,” O’Brien said. “And you’re also talking about time that you need to service and you need to keep supporting those customers. They’ve built business cases around them, they’re going to deploy them, they’ll ramp in that new hardware when they’re ready to ramp it in.” Having previously worked in HTC’s phone division, O’Brien said he understands how that may look to a consumer market. “You’ve got to give your customer enough time,” he added. “And sometimes that cycle can be 12 – 18 months. You’ve got to be very respectful of your customers and how they purchase products and not cause friction to their planning process or else you’re out of business.”
Speaking of the Pro, I also spoke to O’Brien about the decision to sell the kit to consumers too. When HTC introduced the Pro at CES 2018 it seemed marketed toward both consumers and businesses. When the hefty $799 price tag was later revealed (for just the headset), it became clear it was focused on the latter audience. The company caught a lot of flak for the price online. So why sell it to consumers at all? “We just knew on the consumer side if we blocked them out of a higher resolution display and more comfortable headset, we were going to upset them,” O’Brien explained. “And we didn’t want to upset those customers.” He told me that the company was selling “a lot” of Pros on the enterprise side. “I know it seemed confusing in the messaging, but we were just trying not to upset anyone,” he said.
Vive Pro Eye will be much the same case. Prosumers will be able to buy the headset when it launches in Q2, but it’s more built for business use than gaming. Instead, it’s Cosmos that will be HTC’s next consumer-focused VR headset. The device is due to launch this year. HTC remained tight-lipped about it at MWC, however.
" |
16,185 | 2,019 | "HTC releases Vive Pro Eye in North America for $1,599 | VentureBeat" | "https://venturebeat.com/2019/06/06/htc-releases-vive-pro-eye-in-north-america-for-1599" | "HTC releases Vive Pro Eye in North America for $1,599
[Image: HTC Vive Pro Eye.]
Having already released Vive Pro Eye in China and Europe , HTC today began selling the enterprise-class virtual reality headset in North America, and there’s one surprise: It starts at $1,599 (U.S.), rather than its roughly $2,000 price elsewhere. But there may be other costs for U.S. and Canadian customers to consider.
Featuring OLED screen technology running at a 2,880 x 1,600 pixel resolution, Vive Pro Eye is an upgrade to HTC’s standard Vive Pro , differentiated largely by Tobii eye-tracking hardware and support for foveated rendering.
The headset is bundled with two of HTC’s 2018 hand controllers and two tracking base stations, but developers can create apps that you substantially control with your eyes, moving cursors and selecting items just by gazing.
The new headset’s base $1,599 price includes everything mentioned above, plus a link box, DisplayPort cable, USB 3 cable, and necessary power adapters, as well as cleaning and earphone adjustment parts. Tax isn’t included, which will drive the price up somewhat, but shipping is free. If that sounds too expensive to purchase in one shot, HTC has teamed up with PayPal to offer Vive Pro Eye on a 24-month installment plan for around $73 per month, or $1,746 over 2 years.
Another semi-optional expense is the Vive Pro Eye Advantage services package for enterprises, which costs £198 ($252) in Europe and $200 in North America. It adds a two-year warranty, 24-hour email response guarantee, and expedited repairs to the hardware bundle; HTC recommends it to enterprise customers but doesn’t emphasize it as much as it did during the European and Chinese launches.
The Vive Pro Eye is in stock now for immediate shipping and can be ordered here.
Users who don’t need the eye-tracking hardware will get an otherwise identical experience from the less expensive standard Vive Pro, which sells for $1,399 — or $200 less.
" |
16,186 | 2,019 | "Tobii Spotlight's foveated rendering can cut VR graphics load by 57% | VentureBeat" | "https://venturebeat.com/2019/07/30/tobii-spotlight-uses-foveated-rendering-to-cut-vr-graphics-load-by-57" | "Tobii Spotlight’s foveated rendering can cut VR graphics load by 57%
Tobii’s gaze-tracking hardware for VR headsets is getting some serious software enhancements, the company announced today, in the form of full support for foveated rendering — with “even more foveated-related capabilities and benefits moving forward.” The features are being offered as Tobii Spotlight Technology, with significant initial rendering performance benefits.
Foveated rendering is an advanced technology that uses your eye’s position to create a particularly sharp focus point for a GPU to concentrate more of its resources upon, while areas outside that point can be rendered at lower detail levels. In order for the feature to work, the eye’s position needs to be tracked — Tobii’s specialty — and the software needs to have both designated “in focus” and “out of focus” areas, with a rendering pipeline to differentially tackle those areas.
Tobii has worked with Nvidia to achieve variable rate shading foveation in Tobii Spotlight, and says that it’s already seeing GPU rendering load reductions of 57%, bringing the average shading rate using dynamic foveated rendering down to 16% from around 24%. This frees up GPU resources for other purposes, such as boosting frame rates or conserving power, and Tobii expects that it will be used in the future to transfer and stream data that’s been dynamically optimized for the user’s current visual focus, including over low-latency 5G networks.
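To illustrate why gaze-driven foveation reduces shading work, the toy example below shades at full rate only near the gaze point and at lower rates in the periphery, then reports the average shading rate. The zone radii and rates are invented for illustration and are not Tobii's or Nvidia's actual parameters, and the result is not meant to reproduce the 57% figure above:

```python
# Toy illustration of foveated rendering: full-rate shading in the fovea,
# progressively lower rates in the periphery, then compare the average
# shading rate against full-rate rendering. All zone sizes and rates are made up.
import numpy as np

def average_shading_rate(width, height, gaze_xy, fovea_radius=0.10, mid_radius=0.25):
    ys, xs = np.mgrid[0:height, 0:width]
    # distance of every pixel from the gaze point, normalized by image width
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]) / width
    rate = np.where(dist < fovea_radius, 1.0,           # full shading in the fovea
           np.where(dist < mid_radius, 0.25, 0.0625))   # 1/4 and 1/16 rate outside
    return rate.mean()

avg = average_shading_rate(2880, 1600, gaze_xy=(1440, 800))
print(f"average shading rate: {avg:.1%} of full-rate rendering")
```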
“Tobii Spotlight Technology is advanced eye tracking specialized for foveation,” said Tobii CEO Henrik Eskilsson. “Tobii Spotlight Technology directly addresses the ever-increasing demand for computational efficiency. Working with our partners to enable foveated rendering is just the beginning, and Tobii Spotlight Technology reflects our ongoing commitment to innovation in this area for years to come.”
The higher the resolution of the headset, the better the dynamic rendering works compared with fixed rendering, which would otherwise need to fill every pixel of a high-res frame with full detail. Initial results are coming from a Vive Pro Eye headset with Tobii gaze tracking hardware, plus an Nvidia RTX 2070, but the company expects that the dynamic foveated rendering will be able to keep shading manageable even when headset pixel counts reach or exceed 15K (compared with current 2K, 3K, 5K, and 8K models).
" |
16,187 | 2,020 | "HP Reverb G2 virtual reality headset arrives this fall for $600 | VentureBeat" | "https://venturebeat.com/2020/05/28/hp-reverb-g2-virtual-reality-headset-arrives-this-fall-for-600" | "HP Reverb G2 virtual reality headset arrives this fall for $600
[Image: HP Reverb VR headset will cost $600.]
HP is unveiling the HP Reverb G2 virtual reality headset with high-resolution specs the company hopes will attract new enterprise users and consumers. The company is launching the second-generation VR headset in a partnership with both Microsoft and game company Valve. The headset will debut in the fall for $600.
The resolution of the headset is 2,160-by-2,160 per eye, which should help with the visual realism of VR, said John Ludwig, lead product manager for VR at HP, in a press briefing. He said the Reverb G2, which uses lenses designed by Valve, will have 2.5-times the resolution of the Oculus Rift headset, delivering sharper images that enhance the feeling of being transported to another reality.
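As a rough check on that claim, the arithmetic below treats "2.5 times the resolution" as a comparison of total pixel counts and assumes baseline panel specs of 1,280 x 1,440 per eye for the Rift S and 1,080 x 1,200 per eye for the original Rift; those baselines are supplied here, not by HP:

```python
# Back-of-the-envelope resolution comparison (baseline figures are assumptions).
reverb_g2 = 2160 * 2160 * 2   # pixels across both eyes
rift_s    = 1280 * 1440 * 2
rift_cv1  = 1080 * 1200 * 2

print(reverb_g2 / rift_s)    # ~2.53, in line with the quoted 2.5x if Rift S is the baseline
print(reverb_g2 / rift_cv1)  # ~3.6 against the original Rift
```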
“These are brand-new panels, not the same panels the Reverb G1 used, and they come with some amazing improvements in immersiveness,” Ludwig said. “The contrast and brightness are up significantly on these brand new panels. We’ve also reduced the persistence of the pixels. So with the contrast and brightness boost, you get a much better visual experience. With persistence, you get a more comfortable and fluid experience.” Above: HP designed its new VR headset to be comfortable.
HP worked with Valve and Microsoft to enable integration across the Windows Mixed Reality and SteamVR platforms. The new headset is a replacement for the HP Reverb G1 , which launched in March 2019 for $600. That headset had visual flaws that made it feel like you were looking at the world through dirty goggles, but those issues aren’t in the new headset, Ludwig said.
The hope is the product will lure more people into the virtual world. While VR hasn’t lived up to its original promise, it has been making steady gains during the pandemic.
“We at HP have been learning to adapt to this new normal,” said Anu Herranen, director of new product introduction at HP, in a press briefing. “Now the virtual way is the only way for us all. So this new normal has really accelerated and expanded how and when we use VR at HP. There will be a huge population of people working, training, and learning from home.”
Developing a new VR experience
Above: The HP VR headset has 2K-by-2K per eye resolution.
VR has an opportunity because of the pandemic, as Zoom video meetings lack immersive interaction, according to HP, and physical meetings aren’t possible.
In April, SteamVR saw nearly 1 million additional monthly-connected headsets, tripling the previous largest monthly gain. HP believes that by 2021, 25% to 30% of the workforce will be working from home multiple days a week and searching for new ways to collaborate. HP kept features such as high-resolution LCDs in a lightweight design and a 114-degree field of view. It runs at 90 frames per second.
The new device has enhanced audio that HP says will allow the user to experience a real sense of 3D space when immersed in the VR world — for example, letting gamers locate their foes with audio clues. The speakers for the device are similar to those in the Valve Index VR headset.
Above: The HP Reverb G2 VR headset has speakers similar to the Valve Index.
Like other modern headsets, it has inside-out tracking, or four cameras on the headset itself that get rid of the need for external sensors. Windows Mixed Reality also enables 1.4 times more movement capture, maintaining six degrees of freedom without external sensors or lighthouses, Herranen said.
With better resolution, users will be able to see text and textures more clearly, providing a better experience and increased retention. The hand controllers come with new intuitive control features including an optimized button layout, application and game compatibility, and the ability to be pre-paired via Bluetooth for easy setup.
HP designed it to be more comfortable. The headset has manual adjustments for your eye settings and a facemask cushion for better comfort. You can flip the facemask 90 degrees when moving back and forth from the virtual to the real world. And the headset also has better weight distribution and comfort for extended VR sessions. It connects to a PC via a single cable.
U.S. preorders will be available today on HP.com, the SteamVR homepage, and select channel partners.
" |
16,188 | 2,020 | "AI Weekly: A biometric surveillance state is not inevitable, says AI Now Institute | VentureBeat" | "https://venturebeat.com/2020/09/04/ai-weekly-a-biometric-surveillance-state-is-not-inevitable-says-ai-now-institute" | "AI Weekly: A biometric surveillance state is not inevitable, says AI Now Institute
In a new report called “ Regulating Biometrics: Global Approaches and Urgent Questions ,” the AI Now Institute says regulation advocates are beginning to believe a biometric surveillance state is not inevitable.
The report’s release couldn’t be more timely. As the pandemic drags on into the fall, businesses, government agencies, and schools are desperate for solutions that ensure public safety. With measures ranging from tracking body temperatures at points of entry to issuing health wearables to employing surveillance drones and facial recognition systems, there’s never been a greater need to balance the collection of biometric data with individual rights and freedoms.
Meanwhile, a growing number of companies are selling biometric-driven products and services that seem benign but could become problematic or even abusive.
Surveillance capitalism is presented as inevitable to discourage individuals from daring to push back. It is an especially easy illusion to pull off as COVID-19 continues to spread around the globe. People are reaching for immediate solutions, even if that means acquiescing to a new and possibly longer-lasting problem in the future.
When it comes to biometric data collection and surveillance, there’s often a lack of clarity around what is ethical, safe, and legal — and what laws and regulations are still needed. The AI Now report methodically lays out all of those challenges, explains why they’re important, and advocates for solutions. It then provides shape and substance through eight case studies that examine biometric surveillance in schools, police use of facial recognition technologies in the U.S. and U.K., national efforts to centralize biometric information in Australia and India, and more.
All citizens — not just politicians, entrepreneurs, and technologists — need to acquire a working understanding of the issues around biometrics, AI technologies, and surveillance. Amid a rapidly changing landscape, the report could serve as a reference for understanding the novel questions that continue to arise. It would be an injustice to summarize the whole 111-page document in a few hundred words, but it touches on several broad themes.
Laws and regulations pertaining to data, rights, and surveillance are lagging behind the development and implementation of various AI technologies that monetize biometrics or adapt them for government tracking. This is why companies like Clearview AI are thriving — what they do is offensive to many and may be unethical, but it is — with some exceptions — still legal.
The very definition of biometric data remains unsettled, and some experts want to pause the implementation of these systems while we create new laws and reform or update others. Others seek to ban the systems entirely on the grounds that some things are perpetually dangerous, even with guardrails.
To effectively regulate the technology, average citizens, private companies, and governments need to fully understand data-powered systems that involve biometrics and their inherent tradeoffs. The report suggests that “any infringement of privacy or data-protection rights be necessary and strike the appropriate balance between the means used and the intended objective.” Such proportionality also means ensuring a “right to privacy is balanced against a competing right or public interest.” This raises the question of whether a situation warrants the collection of biometric data at all. It is also necessary to monitor these systems for “function creep” and make sure data use doesn’t extend beyond the original intent.
The report considers the example of facial recognition used to track student attendance in Swedish schools. The Swedish Data Protection Authority eventually banned the technology on the grounds that facial recognition was too onerous for the task at hand. And surely there were concerns about function creep; such a system captures rich data on many children and teachers. What else might that data be used for, and by whom? This is where rhetoric around safety and security becomes powerful. In the Swedish school example, it’s easy to see how facial recognition use doesn’t hold up to principles of proportionality. But when the rhetoric is about safety and security, it’s harder to push back. If the purpose of a system is not taking attendance, but rather scanning for weapons or identifying people who aren’t supposed to be on campus, the conversation takes a different turn.
The same holds true of the need to get people back to work safely and protect returning students and faculty from COVID-19. People are amenable to more invasive and extensive biometric surveillance if it means maintaining their livelihood while reducing their risk of becoming a pandemic statistic.
It’s tempting to default to a simplistic position of more security equals more safety , but that logic can fall apart in real-life applications. First of all: more safety for whom? If refugees have to submit a full spate of biometric data at the border or civil rights advocates are subjected to facial recognition while exercising their right to protest, whose safety is protected? And even if there is some need for security in such situations, enhanced surveillance can have a chilling effect on a range of freedoms. People fleeing for their lives may balk at invasive conditions of asylum. Protestors may be afraid to speak freely, which hurts democracy itself. And kids could suffer from the constant reminder that their school is under threat, which would hamper mental well-being and the ability to learn.
A related problem is that regulation may happen only after these systems have been deployed, as the report illustrates with the case of India’s controversial Aadhaar biometric identity project. The report described it as “a centralized database that would store biometric information (fingerprints, iris scans, and photographs) for every individual resident in India, indexed alongside their demographic information and a unique 12-digit ‘Aadhaar’ number.” The program ran for years without proper legal guardrails. In the end, instead of using new regulations to roll back the system or address its flaws and dangers, lawmakers essentially fashioned the law to fit, thereby encoding the problems for posterity.
And then there are issues of how well a given measure works and whether it’s even helpful. You could fill entire tomes with research on AI bias and examples of how, when, and where those biases cause technological failures and result in abuse. Even when models are benchmarked, the report notes, their scores may not reflect how well they perform in real-world settings. Fixing bias problems in AI, at multiple levels of data processing, product design, and deployment, is one of the most important and urgent challenges the field faces today.
Keeping a human in the loop is one way to mitigate the errors AI coughs up. In police departments, biometric scans are used to provide leads after officers run images against a database, and humans can then follow up with any suspects. But these systems often suffer from automation bias, which is when people rely too much on the machine and overestimate its credibility. This defeats the purpose of having a human in the loop and can lead to horrors like false arrests , or worse.
Efforts to improve efficacy also raise moral considerations. Many AI companies say they can determine a person’s emotions or mental state by using computer vision to examine their gait or their face. Though the reliability of such tools is debatable, some people believe their very goal is immoral. Taken to the extreme, such predictive efforts result in absurd research that amounts to AI phrenology.
Finally, none of the above matters without accountability and transparency. When private companies can collect data without anyone knowing or consenting, when contracts are signed in secret, when proprietary concerns take precedence over demands for auditing, when laws and regulations between states and countries are inconsistent, and when impact assessments are optional, human rights are lost. And that's not acceptable.
The pandemic has revealed cracks in governmental and social systems and has brought simmering problems to a boil. As we cautiously return to work and school, the biometrics issue remains front and center. We’re being asked to trust biometric surveillance systems, the people who made them, and those who are profiting from them, all without sufficient transparency or regulation. It’s a steep price to pay for the purported protections to our health and economy. But the AI Now Institute’s latest report provides a map of the territory and can serve as a resource for anyone looking to shape the evolving landscape.
" |
16,189 | 2,020 | "XRSI releases VR/AR user privacy framework, citing 'urgent' need | VentureBeat" | "https://venturebeat.com/2020/09/09/xrsi-releases-vr-ar-user-privacy-framework-citing-urgent-need" | "XRSI releases VR/AR user privacy framework, citing 'urgent' need
Virtual and augmented reality technologies have continued to improve at a brisk pace, with Facebook’s Oculus Quest VR headset and Nreal’s Light AR glasses setting new standards for mobility and comfort. But as the hardware and software evolve, concern over their user privacy implications is also growing. The nonprofit XR Safety Initiative has released its own solution — the XRSI Privacy Framework — as a “baseline ruleset” to create accountability and trust for extended reality solutions while enhancing data privacy for users.
The XRSI Privacy Framework is urgently needed, the organization suggests, as “individuals and organizations are currently not fully aware of the irreversible and unintended consequences of XR on the digital and physical world.” From headsets to other wearables and related sensors, XR technologies are now capable of gathering untold quantities of user biometric data, potentially including everything from a person’s location and skin color to their eye and hand positions at any given split second. But comprehensive regulations are not in place to protect XR users. The National Institute of Standards and Technology has offered basic guidance, while regional laws such as GDPR, COPPA, and FERPA govern some forms of data in specific locations. But XRSI’s document ties them all together and goes further.
Developed and vetted by a group of academics, attorneys, XR industry executives, engineers, and writers, the Framework is a 45-page document with around 25 pages of regulatory and guideline meat that will be of more interest to lawyers and corporate privacy officers than end users. Broadly speaking, the Framework pushes companies such as Facebook to develop and use immersive technologies responsibly , rather than creating tools to harvest as much information from individuals as possible. It uses the aggregated threat of legal consequences to encourage voluntarily appropriate corporate behavior and is designed to get XR stakeholders to think before acting, rather than holding to the classic “ move fast and break things ” mantra.
From a user perspective, the XRSI aims to deliver transparent, easy-to-understand solutions that are inclusive while protecting individual privacy by design and default, including modern understandings of identity and respect for the user's individual characteristics and preferences. It's also timely: As schooling from home gains traction and XR potentially plays a larger role in remote education, the Framework canvasses existing laws protecting both children under 13 and older students against discrimination and inappropriate record keeping, helping XR companies understand their existing and future legal obligations in the scholastic arena.
The XRSI is working with liaison organizations — including Open AR Cloud, the University of Michigan, and the Georgia Institute of Technology — to further develop the Framework beyond its current “version 1.0” status and get it adopted and enforced. While the group credits individual experts from organizations like HERE and Niantic with helping to craft the document, it’s unclear at this stage whether XR platform developers such as Facebook, HTC, or Valve will support the initiative.
" |
16,190 | 2,017 | "How Google's Pixel 2 Now Playing song identification works | VentureBeat" | "https://venturebeat.com/2017/10/19/how-googles-pixel-2-now-playing-song-identification-works" | "How Google's Pixel 2 Now Playing song identification works
The most interesting Google Pixel 2 and Pixel 2 XL feature, to me, is Now Playing. If you’ve ever used Shazam or SoundHound, you probably understand the basics: The app uses your device’s microphone to capture an audio sample and creates an acoustic fingerprint to compare against a central song database. If a match is found, information such as the song title and artist are sent back to the user.
Now Playing achieves this with two important differentiators. First, Now Playing detects songs automatically without you explicitly asking — the feature works when your phone is locked and the information is displayed on the Pixel 2’s lock screen (you’ll eventually be able to ask Google Assistant what’s currently playing, but not yet). Secondly, it’s an on-device and local feature: Now Playing functions completely offline (we tested this, and indeed it works with mobile data and Wi-Fi turned off). No audio is ever sent to Google.
It’s worth noting that Now Playing is turned off by default. You have to explicitly turn it on in the setup flow when first starting your Pixel 2 or Pixel 2 XL, or in Settings (as shown above).
We asked Google to explain how the feature works.
Detection Now Playing uses your device microphone to grab audio samples and machine learning to distinguish which parts are music that can be converted to a digital fingerprint. It then tries to match that fingerprint to the local song database. In optimum conditions, a Google spokesperson estimated that detection should take just a few seconds. If there’s background noise, however, it can take a little longer, we were told.
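Google hasn't detailed the matcher itself, but the general shape of on-device fingerprint lookup is well known from the audio-search literature; the toy Kotlin sketch below is our own illustration (not Google's code), hashing pairs of spectral peaks and picking the catalog entry with the most overlapping hashes.

```kotlin
// Toy illustration of fingerprint matching -- not Google's implementation.
// Input: a spectrogram (one DoubleArray of magnitudes per time frame).
object ToyFingerprinter {
    private const val FAN_OUT = 5 // pair each peak with the next few peaks

    // One hash per pair of nearby spectral peaks: (bin1, bin2, frame gap).
    fun fingerprint(spectrogram: List<DoubleArray>): Set<Long> {
        // The strongest bin per frame stands in for a real peak picker.
        val peaks = spectrogram.map { frame -> frame.indices.maxByOrNull { frame[it] } ?: 0 }
        val hashes = HashSet<Long>()
        for (i in peaks.indices) {
            for (gap in 1..FAN_OUT) {
                val j = i + gap
                if (j >= peaks.size) break
                hashes += (peaks[i].toLong() shl 24) or (peaks[j].toLong() shl 8) or gap.toLong()
            }
        }
        return hashes
    }

    // The "database" here is just song title -> precomputed hash set; the real one
    // ships on the phone and is queried the same way: most overlapping hashes wins.
    fun bestMatch(sample: Set<Long>, db: Map<String, Set<Long>>, minOverlap: Int = 20): String? =
        db.entries
            .map { (title, songHashes) -> title to sample.count { it in songHashes } }
            .filter { it.second >= minOverlap }
            .maxByOrNull { it.second }
            ?.first
}
```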
If you’re seeing a longer delay, and you likely will, that’s because Now Playing isn’t working all the time. To save battery life when you’re listening to continuous music, Now Playing only runs every 60 seconds, the Google spokesperson explained. That means if the last detection was 30 seconds ago, you’ll only get an update on the next song that is playing in another 30 seconds, plus the time it takes for the actual recognition. If after 60 seconds no music is playing, the system will wait for music to be detected before attempting a new song identification. This also explains why long after a song is over, your lockscreen will still show whatever was previously playing.
When a song is identified and displayed on your lockscreen, you have the option to tap on it. After you unlock your phone, you’ll be taken to the Google Assistant where you can learn more about that given song.
The feature is purposefully designed as an “ambient” and “lean back” experience as opposed to on-demand. My colleague Khari Johnson lamented as much in his Pixel 2 review — there’s no historical option or way to see all the songs Now Playing has detected as you go about your day. It’s a “currently playing” feature and that’s it.
Database The Pixel 2’s on-device database for Now Playing is based on Google Play Music’s top songs, the Google spokesperson revealed. Google wouldn’t share the exact number of songs in the database, but the spokesperson did note it’s in the high 10s of thousands (for comparison’s sake, Shazam claims its database features over 11 million songs, but that of course is in the cloud).
If you’re curious, XDA posted over 10,000 songs that the Pixel 2s can detect. While it’s cool to sift through this list, you should know that it’s far from definitive, for multiple reasons.
First up, the Google spokesperson revealed that your device will have a different song database depending on the country you’re in — there’s a different catalog for every country where Pixel 2s are sold. Secondly, Google also confirmed that the database is updated weekly, depending on what’s popular on Google Play Music in your country. This weekly update, which only ever occurs over Wi-Fi, is incremental and replaces the most outdated part of your country-specific database with new songs.
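Nothing stops an ordinary app from scheduling the same kind of weekly, Wi-Fi-only refresh; a hedged sketch using Android's WorkManager, with the worker body and work name as our own placeholders:

```kotlin
import android.content.Context
import androidx.work.*
import java.util.concurrent.TimeUnit

// Placeholder worker: in reality this would download the incremental delta
// for the country-specific song catalog and swap it in.
class SongDbUpdateWorker(ctx: Context, params: WorkerParameters) : Worker(ctx, params) {
    override fun doWork(): Result {
        // fetchAndApplyDelta()  // hypothetical helper
        return Result.success()
    }
}

fun scheduleWeeklyDbUpdate(context: Context) {
    val constraints = Constraints.Builder()
        .setRequiredNetworkType(NetworkType.UNMETERED) // Wi-Fi only, as described
        .build()
    val request = PeriodicWorkRequestBuilder<SongDbUpdateWorker>(7, TimeUnit.DAYS)
        .setConstraints(constraints)
        .build()
    WorkManager.getInstance(context).enqueueUniquePeriodicWork(
        "now_playing_db_update", ExistingPeriodicWorkPolicy.KEEP, request
    )
}
```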
Since Now Playing works automatically and Google made a point to keep it localized, these constraints created a new limitation: local storage. That’s why the feature doesn’t work across all songs in Google Play Music, but rather just the most popular ones (not necessarily most recent, just “top music”).
The Google spokesperson wouldn’t give us an exact size for the database file (which is not surprising, since it changes every week and is based on your country) but did say the whole feature should take up less than 500MB. Again, if you never turn the feature on, don’t worry — you won’t lose this space.
Exclusive At the moment, Now Playing is exclusive to the Pixel 2 and Pixel 2 XL. There are no plans to bring it to other Android devices, not even Google’s own Pixel and Pixel XL.
The Google spokesperson noted that Now Playing requires specific hardware and software changes, but didn’t elaborate exactly what was required aside from saying that the two have to be closely integrated for the feature to work. It’s not out of the question that future devices, with the specific hardware and software changes, could offer Now Playing as well, but Google can’t simply bring the feature to Android Oreo as an update, we were told.
In fact, Google doesn’t view see Now Playing as an Android or even a Google Assistant feature. It’s very much a Pixel 2 feature, at least for now.
Now Playing is something I believe will lead to conversations, and I’d even go as far as betting it will be the most-discussed feature among friends of Pixel 2 and Pixel 2 XL owners. It’s really the perfect storm: Everyone has heard of Shazam and SoundHound, so nobody will feel out of their depth discussing the topic; nothing like Now Playing is available on other phones; and music is always a timeless topic.
For those reasons, I suspect Google will try to keep Now Playing exclusive to its Pixel 2s for as long as possible.
" |
16,191 | 2,019 | "Android Q's Live Caption will only work on 'select phones' | VentureBeat" | "https://venturebeat.com/2019/05/10/how-android-qs-live-caption-works-select-phones" | "Android Q's Live Caption will only work on 'select phones'
At its I/O 2019 developer conference this week, Google showed off Live Caption , an Android Q feature that provides real-time continuous speech transcription. The company touted Live Caption as able to caption any media on your phone. But it turns out that “your phone” can’t be just any Android Q phone. “Live Caption is coming to select phones running Android Q later this year,” a Google spokesperson confirmed.
“It’s not going to be on all devices,” Brian Kemler, Android accessibility product manager, told Venturebeat. “It’s only going to be on some select, higher-end devices. This requires a lot of memory and space to run. In the beginning it will be limited, but we’ll roll it out over time.” As we get closer to Android Q’s launch, Google plans to release a list of sanctioned devices that will offer Live Caption.
This wasn’t clear from Google’s keynote or any of the ensuing coverage. The pitch was that this great on-device machine learning feature was coming in the latest Android release, for everyone to use.
“We believe technology can be more inclusive. And AI is providing us with new tools to dramatically improve experiences for people with disabilities,” Google CEO Sundar Pichai said onstage before showing off Live Caption and Google’s three new accessibility projects.
Afterwards, he added: "You can imagine all the use cases for the broader community too. For example, the ability to watch any video if you're in a meeting or on the subway, without disturbing the people around you." Live Caption works with songs, audio recordings, podcasts, and so on. The feature captions any content that you're streaming, that you've downloaded, or even that you recorded yourself. It doesn't matter if it's from a first-party app or a third-party app — if your phone can play it, your phone can caption it. That also includes games, though Kemler has not tried it with Stadia yet.
On device vs. in the cloud To use Live Caption, you hit one of your phone’s volume buttons and then tap the software icon when the volume UI pops up. Turn it on with a single tap, and as soon as speech is detected, captions will appear on your phone screen. You can double-tap to show more and drag the captions to anywhere on your screen. Kemler explained that Google made Live Caption a movable overlay because it’s not easy for Android to predict where the content will be or what else the user may want to do as they’re reading.
When you enable Live Caption for the first time, Google plans to show a banner explaining the feature.
“Hey, this is what it does. This is what it doesn’t do. Because we took this cloud-based model that was over 100GB and shrank it down to less than 100MB to fit on the device, it’s not going to be quite as perfect or accurate,” Kemler explained. “Not that cloud transcription is perfectly accurate, but it’s going to be a little bit better. But [Live Caption is enough] for apps where that caption content is not available, which remember is the vast majority of user-generated content. Which also, remember, is the vast majority of content. Even if you took YouTube, that’s 400 hours uploaded every minute, and then think of Facebook, Instagram, Snap, all podcasts, etc. Unlike TV and film, which by law are required to have captions, user-generated content doesn’t have it.” Kemler let me play with the feature on a Pixel 3a, and it did indeed work as described. There is no separate app required, no need for a Wi-Fi or data connection, and no perceptible delay. He wouldn’t provide a word error rate target or range for Live Caption, but it’s clearly low enough for Google to confidently include the feature in Android Q.
No transcriptions Live Caption doesn’t save anything. If you want a transcription tool, Google offers Live Transcribe , released in February. Live Transcribe also uses machine learning algorithms to turn audio into real-time captions. But unlike Live Caption, it’s a full-screen experience, uses your smartphone’s microphone (or an external microphone), and relies on the Google Cloud Speech API to caption real-time spoken words in over 70 languages and dialects. You can also type back into it — Live Transcribe is really a communication tool.
Meanwhile, “Live Caption is the notion that, at the OS level, we should be able to caption any media on the device,” Kemler explained. “Not only to make that media accessible to people who can’t hear or who have trouble hearing, but also for people like us. You’re sitting at I/O and you need to watch a video and you want to do so silently. That’s a really important use case. You’re on the train, you’re on the plane, you don’t want audio in certain cases. There are other applications too. Think of learning another language — super helpful to have those captions in that language.” Live Caption relies on the AudioPlaybackCaptureConfiguration API , which is being added as part of Android Q. That’s what makes it possible for the feature to capture your phone’s audio, even if you’ve muted the device.
“We will have a new API that’s available primarily for OEMs to use in the context of live captions,” Kemler elaborated. “It’s in what we call a ‘personal AI environment.’ It’s a very secure environment, and it gets special system privileges, like being able to pull audio, but it has to adhere to a set of principles. So, for instance, you can get captions, but Google would never have access to that audio. It’s just always going to be on the device. You can’t do anything with that audio other than provide those captions. So it’s very important for us that we honor security and privacy. Things that are sensitive stay local on the device.” This is also why Live Caption doesn’t work on phone calls, voice calls, or video calls. And there are no plans to let Live Caption support transcriptions.
“Not for Live Caption. Obviously, we thought about that. But we want the captions to be truly captions in the sense that they’re ephemeral, if they help you understand or consume that experience. But we want to protect the people, the publishers, content, and content owners. We don’t want to give you the ability to pull out all that audio, transcribe it, and then do [whatever they want with it].” Could someone use the API to do that? “Not the way we have it architected.” Language support When showing off Live Caption, Google has hinted that it’s also exploring automatically translating the captions if the content is not in your set language. But that’s a long way off. In fact, putting translations aside, Live Caption is only going to launch with one language supported.
“So, for release we’re going to launch in English,” Kemler confirmed. “And then we’re going to push as hard as we can to add as many other languages as possible. It will also depend a little bit on the devices. So if we go with an approach on Pixel, which is very skewed toward the English language, then we’ll look at the other big languages, like Japanese.” When you unbox your new Android Q device that supports Live Caption, the first time you use the feature, it will have to download the offline model. It won’t be on the device, because Google wants to ensure you’re always using the latest model. Updates to the model will be delivered through Google Play Services. And since only English will be available, it will be straightforward. But one day, likely based on the language you pick in your phone’s initial setup process, your device will download the corresponding offline language model.
That process gets even more complicated when you start thinking about translation.
“Translation is not in the feature set,” Kemler emphasized. “It’s the tip of an iceberg. It looks like a very simple feature, but it has so many different layers to it. Translation requires a completely different pipeline, a completely different UI. We’re focused on nailing the MVP experience, number one. Number two, adding more languages, and getting it out more into the ecosystem. Translation is something that’s super important, but we want to make sure that the core experience is very high quality, is very good, and has a broad reach and broad adoption, before we get into everything we could possibly do with it.” Google must learn to crawl before it can walk. And Translation is more of a run.
“We take a very dumbed-down version of the audio in mono — I think it’s 16 kilohertz — and then put that into the model,” said Kemler. “And if the model has features which add complexity — so things like capitalization and punctuation, that adds latency, it adds processing, and has a battery impact. And then we have to render that into text. So we have all of those things to do. And then ‘Oh, we want to translate on the fly?’ Well, we have to figure out the downloading of that model and then have another layer of processing in that pipeline. So we think, theoretically, it’s obviously something doable — and something intentionally, conceptually, we want to do — but there’s a cost to doing that.” So the team would rather focus on the initial experience and getting users to adopt it and use it, “which we don’t think is going to be any problem. It’s so useful, and so utilitarian. And then we’ll look into doing more wizardry, where we can really optimize that pipeline.” If the number of supported devices is small that will be a problem, as Live Caption won’t reach utilitarian status if most people can’t use it. So, in addition to improving the models and adding more languages, Google will also have to add support for more devices.
“We absolutely want to make the feature as available as possible,” Kemler said.
" |
16,192 | 2,019 | "Pixel 4 and Pixel 4 XL review: Function over form | VentureBeat" | "https://venturebeat.com/2019/10/21/pixel-4-and-pixel-4-xl-review-function-over-form" | "Pixel 4 and Pixel 4 XL review: Function over form
The Google Pixel 4.
Since 2016, Google has each year released a pair of flagship Pixel smartphones designed to showcase the very best of Android. This year saw the debut of the Pixel 4 and Pixel 4 XL , which ship running Android 10. But what’s unusual this time around is that the duo’s hardware is perhaps just as compelling as their software.
There’s a lot to unpack with respect to the Pixel 4 series, including a host of AI-related features alongside the usual design elements, feature upgrades, and hardware tweaks.
A refined design The Pixel 4 and Pixel 4 XL ditch their predecessors’ signature design for an aesthetic that’s more refined — and more functional. Gone is the two-tone rear cover that featured prominently on the original Pixel , Pixel 2 , and Pixel 3 series, replaced with polished and grippy Corning Gorilla Glass 5. It’s easier to grasp than that of the Pixel 3 and Pixel 3 XL, and it’s better able to resist oily fingers and pocket lint.
The Pixel 4 series is IP68-certified to withstand up to five feet of water for half an hour, which puts it on par with the outgoing Pixel 3 series. But both the Pixel 4 and the Pixel 4 XL are a good deal heavier than the Pixel 3 (5.71 ounces versus 5.2 ounces) and Pixel 3 XL (6.8 ounces versus 6.49 ounces), perhaps owing to the curved aluminum frame running the length of the former pair’s sides.
The Pixel 4 series' frame is coated with a soft-touch material that's jet black on all three of the colorways — Clearly White, Just Black, and the limited edition Oh So Orange. The haptics, which Google characterizes as "sharp and textured," feel great. Linear resonant actuator (LRA) motors sit snugly against the outer casing, allowing for fine-grained vibrations triggered by gestures like Active Edge.
Active Edge, which debuted on the Pixel 2 series, invokes Google Assistant when you squeeze the left and right bezels. It's handy in a pinch, but we found that activating it required a vice-like grip even after we turned down the threshold slider.
The Pixel 4 phones trade the visor-style camera housing of the Pixel 3 and Pixel 3 XL for a square module that juts out from the top left. They pack a Sony 12.2-megapixel sensor (f/1.7 aperture, 1.4 μm pixel width, 77-degree field of view) with autofocus and dual pixel phase detection that’s both optically and electronically stabilized. (It’s complemented by a spectral sensor, which bolsters color accuracy, and a banding-reducing flicker sensor.) Alongside the primary camera is a 16-megapixel telephoto camera (f/2.4 aperture, 1.0 μm pixel width, 52-degree field of view) with autofocus that’s also Sony-made. Like the 12.2-megapixel camera, it can zoom up to 8 times (2 times optical), thanks to a combination of optical zoom and Google’s Super Res Zoom technology.
Above: Night Sight on the Pixel 4, which uses AI to boost image brightness.
Above: Night Sight disabled.
Are two cameras better than the Pixel 3 series’ one? Google certainly believes so. It says the 16-megapixel sensor produces 12-megapixel photos captured from the center portion of the sensor, enabling greater zoom reach at higher quality than is achievable with the 12.2-megapixel sensor alone. The company further claims the physical gap between the two cameras enhances the Pixel 4 series’ depth perception, which principally derives from the 1-millimeter separation between the subpixels contained within each sensor.
Above: A picture taken with the Pixel 4 from a dimly lit bar.
Read our impressions in a holistic review of the camera, its computational photography, and the other AI features that the Pixel 4 phones offer.
Google has done away with the fingerprint sensor on the Pixel 4 phones, relegating authentication to Smart Lock (which keeps the Pixel 4 unlocked when it’s on your person, in a geofenced location, or connected to a trusted device) and facial recognition. It’s a perplexing decision, given the effort invested in Pixel Imprint, a security feature of the Pixel 2 series and Pixel 3 series that became more accurate the more it read fingers. But Google argues that facial recognition, in particular, is a faster means of password-free authentication.
Above: A portrait photo taken with the Pixel 4’s front-facing camera.
Flipping the Pixel 4 and Pixel 4 XL around brings you to the curved Gorilla Glass 5-shielded displays and sensor arrays, which is where the face-detecting and gesture-sensing magic happens. The volume rocker is on the left side of the phones, as is the power button, which is curiously accented bright orange on the Clearly White version. On the opposite side is a SIM card tray, which complements an eSIM (Embedded SIM) that can be used simultaneously with a physical SIM in dual standby mode for calls, texts, or data.
A responsive display The Pixel 4 series phones ship with OLED screens that are roughly the same resolution as last year’s Pixel models — Full HD+ (Pixel 4) and Quad HD+ (Pixel 4 XL). The Pixel 4’s screen measures 5.7 inches diagonally (up from the Pixel 3’s 5.5 inches) and has a pixel density of 444 pixels per inch (versus 443 pixels per inch). By contrast, the Pixel 4 XL’s screen is 6.3 inches — the same as the Pixel 3 XL’s — with a pixel density of 537 pixels per inch (compared with 523 pixels per inch on the earlier model).
Minor differences aside, Google says both handsets’ panels have a 100,000:1 contrast ratio and full 24-bit depth (or 16.77 million colors). They’re also certified by the UHDA Alliance to display high dynamic range (HDR) videos, TV shows, and movies, which boast improved brightness, wider color gamuts, and better contrast than their non-HDR counterparts. Unfortunately absent is compatibility with Dolby Vision — Dolby’s proprietary HDR specification — which has slightly more luminosity per square meter and 12-bit color instead of the standard 10-bit.
One of the screens’ headlining features is Smooth Display, which dynamically boosts their refresh rate from 60Hz to 90Hz, depending on the content. Google doesn’t spell out the criteria, unfortunately, but there’s a noticeable improvement in overall responsiveness when it kicks in. Scrolling through apps like Twitter and Gmail and pinching-to-zoom in on photos and webpages feels smoother than on the Pixel 3 and Pixel 3 XL, which max out at 60Hz. Other high refresh-rate handsets include the OnePlus 7, OnePlus 7T, Asus ROG Phone, Asus ROG Phone 2, and Razer Phone 2.
The Pixel 4 series also features Ambient EQ, a carryover from Google’s Nest Hub that automatically optimizes the display’s white balance for ambient lighting conditions. You’ll get a correspondingly warmer color temperature when sunlight streams into your bedroom, for instance, and blue tones when your phone is in a place that registers cooler on the spectrum.
Ambient EQ is subtle enough that you may not notice it if you’re not looking for it, but from our experience, it makes normally blindingly white content like webpages a bit easier on the eyes (particularly at night). That’s doubly true when it’s used in combination with the Night Light setting, which tints the screen amber during a predefined time window to minimize circadian rhythm-interrupting blue light.
The Pixel 4 and 4 XL offer further display customization with three color modes: Adaptive, Boosted, and Natural. Google describes the first as “designed to provide a vivid yet natural rendition of colors that most users prefer,” while Boosted is slightly more saturated compared to the neutral Natural.
Stereo audio, but no headphone jack The Pixel 4 series sports three microphones to better pick up voices by canceling out noise in loud environments, and an earpiece above the phones' display sits opposite a bottom-firing speaker. They're in stereo like the Pixel 3's front-facing speakers but deliver "clear[er]" and "[more] spacious" sound, according to Google. Both indeed seem crisper and clearer to our ears, but there's a slight imbalance between the two that we chalk up to the earpiece's smaller resonance chamber.
For better or worse, the Pixel 4 and Pixel 4 XL follow in their forebears’ footsteps and omit a 3.5mm headphone jack. Google bets buyers will opt for wireless earbuds or headphones that play nicely with the phones’ USB-C 3.1 Gen 1 ports, and that may be a decent bet. The Pixel 4 series has a Bluetooth 5.0 Low Energy chip and support for hi-fi codecs, including Qualcomm’s AptX and AptX HD, as well as Sony’s LDAC.
To be clear: Neither the Pixel 4 nor Pixel 4 XL come bundled with wired headphones or a 3.5mm-to-USB-C adapter. That’s ostensibly because “extra in-box audio accessories [often] end up going to waste,” according to Google, but ditching the adapter seems to us legacy headphone owners like an arbitrarily egregious move.
A reliable workhorse Both the Pixel 4 and Pixel 4 XL are packing Qualcomm’s Snapdragon 855 system-on-chip (SoC), the same SoC inside Samsung’s Galaxy Note10 and last year’s LG V40.
This variation supports Wi-Fi 2.4GHz/5GHz 802.11 a/b/g/n/ac 2×2 MIMO, and it’s faster than the long-in-the-tooth Snapdragon 845, thanks to a 64-bit ARM Cortex design based on Qualcomm’s in-house Kryo 485 processor. Four cores handle the heavy lifting — one prime core clocked at 2.84GHz and three performance cores at 2.42GHz — while four efficiency cores running at 1.8GHz handle less performance-intensive tasks.
Both handsets have 6GB of RAM (which is 2GB more than the Pixel 3 and Pixel 3 XL) and pack both a next-generation Pixel Visual Core and a Pixel Neural Core, which are Google-designed coprocessors that crunch millions to trillions of operations per second. The Pixel Visual Core accelerates the Pixel 4 series’ HDR+ feature, as well as Zero Shutter Lag and Rapid and Accurate Image Super-Resolution (RAISR) technologies.
Zero Shutter Lag eliminates the delay between triggering the phone shutter and the moment the photo is actually recorded. As for RAISR, it uses machine learning to produce high-quality versions of zoomed-in images and speeds up tasks like always-on listening, gesture detection, face unlock, and selected camera features like Top Shot and Frequent Faces.
The Pixel 4 and Pixel 4 XL put all that hardware to good use. They’re multitasking maestros, capable of whizzing through day-to-day tasks. During our testing, apps and webpages rarely reloaded after we switched away from them — a problem that plagued the Pixel 3 series and the budget-oriented Pixel 3a series. And bootup was consistently swift, with reboots clocking in at around 15 seconds or less.
In Geekbench, our benchmarking app of choice, the Pixel 4 scored 3,148 and 10,169 single-core and multi-core scores, respectively. That’s slightly behind the LG G8 (3,458 single-core and 11,101 multi-core score) but head and shoulders above the V40, which achieved a multi-core score of 8,841 in our testing. And it beats out the Vivo Nex S, Samsung Galaxy S9, G7, and Galaxy Note9.
Suffice it to say that the Pixel 4 and Pixel 4 XL are a good deal faster than the outgoing models on paper, but they’re a mixed bag on the battery front.
The Pixel 4 has a 2,800mAh battery, which is 115mAh smaller than the Pixel 3’s (2,915mAh), while the Pixel 4 XL’s battery is larger than the Pixel 3 XL’s at 3,700mAh (up from 3,430mAh). The good news is that they both support fast charging (up to 18W) and wireless Qi charging accessories (up to 10W), including the Pixel Stand that was released last year.
Both the Pixel 4 and Pixel 4 XL make it through a full day on a single charge — a solid 10-16 hours on average. The Pixel 4 predictably has shorter legs, lasting around 12 hours on a charge with light usage (i.e., browsing the web, checking email, and responding to messages). But we managed to eke out several additional hours by switching off Smooth Display and enabling Battery Saver, which reduces network pings and kicks on the system-wide dark theme.
Facial recognition Beneath the Pixel 4’s and Pixel 4 XL’s display is a loudspeaker and a USB-C port, and above it (near the top) are a proximity sensor, ambient light sensor, flood illuminator, infrared camera, and dot projector. Together, they power the phones’ facial authentication and gesture recognition features.
The Pixel 4 series is the first to ship with Google’s facial recognition feature, which can authenticate faces from any portrait or landscape angle. There’s a brief calibration step that involves making a rolling motion with your head as the phones’ sensors learn your facial geometry, but once that’s finished, unlocking the phones takes no more than a glance in the front-facing sensors’ direction.
The Pixel 4 series’ facial recognition flavor supports app sign-in, as well as payments, much like Apple’s Face ID. Google Play Store transactions can be authorized with a quick face scan, as well as in-app and web purchases through Google Pay. But that’s all at your own risk — Google notes that the tech could be fooled by someone who looks a lot like the phone’s owner (e.g., an identical twin).
Motion Sense At Google’s I/O developer conference four years ago, the company’s Advanced Technology and Projects (ATAP) team unveiled Project Soli.
The technology taps high-speed radar sensors and algorithms to detect motion. The Pixel 4 and Pixel 4 XL are the first consumer devices on the market with built-in Soli technology, which in the phones has been branded as Motion Sense.
On its face, Motion Sense might seem like an incremental improvement over LG’s Air Motion — and similar in that it’s able to recognize gestures to skip songs, snooze alarms, and silence phone calls. But Motion Sense can recognize when you reach toward the Pixel 4 or Pixel 4 XL and switch on the screen, and its “skip song” action is compatible with Google Play Music, Apple Music, and Spotify out of the box.
Above: Silencing an alarm with Motion Sense on the Pixel 4.
Our first impressions of Motion Sense are positive, though not overwhelmingly so. It quickly wakes the Pixel 4 from sleep upon detecting a hand or face, as advertised, and it registers music-controlling, call-silencing, timer-dismissing, and alarm-snoozing swipes across the screen from up to a foot away. The trouble is that’s about the extent of its capabilities at present. Google promises that more sophisticated Motion Sense commands are on the way, but mum’s the word on when and which.
For more on the tech behind Motion Sense, as well as impressions of the software supported out of the gate, check out our piece on Project Soli.
Live Caption, Recorder, and the new Google Assistant Google has historically used the Pixel series as a pedestal for its latest natural language understanding (NLU) advances, and the Pixel 4 series is no different. The speech recognition model underlying Google Assistant — Google’s intelligent voice assistant that can summon an Uber, pull up a podcast, perform a search for local businesses, and control thousands of smart home devices from hundreds of brands — has shrunk in size from 100GB to less than 0.5GB. It now works offline, cutting down the latency to “near zero” and speeding the average response by up to 10 times, according to Google.
Google Assistant on the Pixel 4 and Pixel 4 XL features a colorful overlay near the bottom of the screen that pops up when you invoke it, showing recognized words and phrases floating in front of a blurred foreground. Support for Continued Conversations eliminates the need to repeat the "Hey Google" hotword after Google Assistant responds to a query, and Google Assistant now integrates more tightly with first-party apps like Google Maps and Photos. For instance, a question like "Where can I find sugar nearby?" prefills Maps' search bar with "sugar nearby," while a request like "Show me photos from California" surfaces relevant pics from within Photos.
Above: Google Assistant on the Pixel 4 compared with the LG V50.
We peeled back the covers on the new Google Assistant in a previous piece , but here’s the skinny: While it’s wicked fast, it’s not consistently faster than its predecessor. Compared with Google Assistant running on one of our test devices, the LG V50, utterances like “Show me photos from last week” process in about half the time. But responses to other questions, like “Where’s the nearest grocery store?,” “How many calories are in an apple?,” and “How far is it to Toronto from here?” pop in at roughly the same time.
The new Google Assistant has a leg up when it comes to transcription, however. Words appear onscreen the moment they’re sussed out by the offline language model, which is typically instantaneously. It’s also more contextually responsive in apps like YouTube and Google Maps, such that a request like “Search for Italian restaurants nearby” when Maps is running surfaces in-app results straightaway.
Above: Live Caption on the Pixel 4.
The Pixel 4 series’ real-time transcription prowess is evident elsewhere, like in Live Caption.
With a tap of the Pixel 4 or Pixel 4 XL’s volume rocker and the corresponding on-screen shortcut, Live Caption provides real-time continuous U.S. English speech transcription in a moveable overlay for apps like YouTube, Google Podcasts, Google Photos, Amazon Prime Video, and Netflix (but not phone calls, voice calls, or video calls). The customization options are pretty limited — at least at the moment — but you’re able to hide profanity, show labels for sounds (like laughter, applause, and music), and expand the overlay if you so choose.
Live Caption worked flawlessly in our testing, but we’d hesitate to call it a selling point, considering that it’s coming to the Pixel 3 and Pixel 3a series in December.
Above: Google’s new Recorder app on the Pixel 4.
There’s also a new Recorder app that features automatic transcription (again in U.S. English) and audio search for words and phrases, neither of which require an internet connection. It produces a searchable transcript and automatically suggests file titles based in part on your location, and it recognizes audio events like applause, birds, cats, dogs, laughter, music, roosters, speech, phones, and whistling.
Recorder is a strictly on-device affair, which makes it somewhat less useful than its online competition. (Otter lets you share a link to real-time transcriptions with others, for example.) But that was a conscious design decision — Google tells us that sensitive data doesn’t generally leave the Pixel 4 or Pixel 4 XL, like that pertaining to speech recognition and facial recognition. It’s instead secured with a custom-engineered Titan M security chip containing a purpose-built micro-controller and network controller chip that borrows from server security tech.
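The audio-search piece is less exotic than it sounds: once speech is transcribed on the device into timestamped segments, finding a word is an ordinary text search that maps back to audio offsets. A hypothetical sketch (Recorder's actual data model isn't public):

```kotlin
// Hypothetical transcript model: text chunks tagged with their start time.
data class TranscriptSegment(val text: String, val startMs: Long)

// Case-insensitive keyword search that returns the audio positions to seek to.
fun searchTranscript(segments: List<TranscriptSegment>, query: String): List<Long> =
    segments.filter { it.text.contains(query, ignoreCase = true) }
            .map { it.startMs }

// Usage: player.seekTo(searchTranscript(segments, "budget").firstOrNull() ?: 0L)
```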
Software and messaging Google’s Personal Safety app makes its debut with the Pixel 4 and Pixel 4 XL. It lets you quickly share your location and a brief message describing your current situation with multiple contacts. Using location and sensor readings, it’s able to dial 911 automatically if it detects that you’re involved in a car crash, which is a neat idea that’s evidently a work in progress. Google notes that Personal Safety might not be able to detect all crashes, that “high impact” activities could trigger calls to emergency services, and that crash detection is available only in the U.S. for now.
Above: Google’s new Safety app.
Now Playing is a feature that leverages machine learning to listen for millions of tunes in the background and surface matching metadata. It debuted on the Pixel 2 series, but it has improved with the Pixel 4 series, which aggregates the counts of songs recognized across devices to take into account song popularity by region. Matches appear both on the lockscreen and in a list within the settings menu (Settings > Sound > Now Playing).
Rounding out the Pixel 4 series’ features list is Call Screen, which transcribes calls in real time so that you can decide whether to pick up, and Google Duplex , which dials eligible U.S. restaurants in nearly all territories and states to make reservations on your behalf. Also worth spotlighting is Google Lens , Google’s AI-powered search and computer vision tool, which on the Pixel 4 offers suggestions (in English, Spanish, German, Hindi, and Japanese) within the camera app to translate languages, scan documents to PDF, and copy and paste text.
Taking a page from Apple’s online personal setup service, Google now offers one-on-one support sessions — Pro Sessions — through the Google One app in English in the U.S. and Canada.
But it’s not all sunshine and rainbows. Unlike all previous Pixel phones, which came with unlimited lossless photo and video storage in Google Photos, Pixel 4 and Pixel 4 XL owners get only compressed “high quality” image backups. It’s an apparent effort to drive Google One sign-ups. That’s a real shame — unlimited uncompressed photo backups have been a major incentive for purchasing Pixel phones since the series’ inception.
In another discouraging development, the Pixel 4 doesn’t support RCS messaging — which adds features like typing indicators and higher-quality attachments to apps that support it (like Android Messages) — on major U.S. networks like Verizon and T-Mobile. There’s nothing precluding RCS messaging support from arriving in the next few months, but those hoping for a seamless day-one experience will be disappointed.
Android 10 The Pixel 4 series ships running Android 10, which by default swaps the dedicated navigation keys of previous Android versions for iPhone-style gestural controls. A swipe upward from the bottom of the display brings up the home screen, while a swipe from the left or right edge triggers the back action. Swiping up and holding brings up the multitasking menu — a carousel of preview windows representing apps recently run or actively running. A swipe in from the corner summons Google Assistant.
It all sounds simple in theory, but the execution is a different story. If you fail to swipe up far enough before you hold, the multitasking menu won’t launch properly. And apps with hamburger menus (which slide out from the right- or left-hand side of the screen) can be tricky to use without fancy fingerwork. Long-pressing near the bezel of the display where the menu resides or swiping at a roughly 45-degree angle sometimes works, but not always.
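Android 10 does give app developers a partial workaround for exactly this drawer-versus-back-gesture conflict: a view can ask the system to exclude part of the screen edge from gesture detection, within limits the system imposes. A short sketch against the framework API (API 29+):

```kotlin
import android.graphics.Rect
import android.view.View

// Ask the system not to treat this view's edge as a back-gesture zone.
// Android 10 caps how much vertical edge space an app may exclude (roughly 200dp).
fun excludeFromBackGesture(drawerHandle: View) {
    drawerHandle.addOnLayoutChangeListener { v, left, top, right, bottom, _, _, _, _ ->
        v.systemGestureExclusionRects = listOf(Rect(left, top, right, bottom))
    }
}
```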
Perhaps anticipating the frustrations of users new to Android 10, the Pixel 4 series allows you to revert back to three-button navigation and adjustment of the back gesture’s sensitivity via a slider. But the inclusion of those options suggests Google is aware that gestural navigation needs refining.
Above: Android 10 gestures on the Pixel 4.
It’s not all bad news where Android 10 is concerned. System-provided Smart Replies appear directly in messaging app notifications by default, such that alerts from Hangouts and Facebook Messenger prepopulate with messages generated by AI (e.g., “Sure thing,” “Okay,” and “Sounds good”). What’s more useful is that Android 10 recommends actions informed by context; if a friend asks you out to dinner, it will pull up directions right in Google Maps.
In a somewhat related development, Android 10 boasts an improved copy-and-paste experience, which it calls Smart Copy/Paste. It detects and extracts telephone numbers and addresses from lengthy chat messages. If a chat message makes reference to placing a call and includes a number, for instance, Smart Copy/Paste will edit out the text surrounding the relevant phone number automatically.
Additionally, Android 10 introduces a system-wide dark theme, which swaps Android’s traditionally light color palette for black backgrounds and white text. The quickest way to activate it is from a Quick Settings tile, or from the Settings menu (Display > Dark Theme). Alternatively, it can be auto-triggered when Battery saver mode is switched on (Battery > Battery saver > Turn on now).
Above: Android 10’s dark theme.
Apps must explicitly support the dark theme, but a number do already, including Google’s YouTube, Google’s main app, Medium, Reddit, Instapaper, Pocket, iBooks, Kindle, Google Maps, Waze, Slack, and Twitter. Beyond the spiffy look, dark mode should extend the battery life of the Pixel 4 series and other phones with OLED screens: because each OLED pixel emits its own light, pixels displaying black can effectively switch off and draw little to no power. We’ve seen evidence of that so far anecdotally, but sussing out the quantitative difference will require more testing.
Another, less obvious, nice-to-have is Project Mainline , which promises to keep Android 10 devices up to date with code changes delivered via Google Play. Google says it will enable manufacturers to upgrade specific OS components without requiring a full system update by downloading modules in the background that load the next time the device starts up.
Above: Android 10 offers greater transparency with respect to location permissions and sharing.
On the privacy side, Android 10 affords you greater control over when apps can request your location. Apps ask for permission, but now you’ve got more say over when to allow access to your location — such as only while the app is in use, all the time, or never.
Conclusion There’s plenty we didn’t mention about the Pixel 4 series, both great and not-so-great, like the improved sharing tool in the camera app that lets you quickly send photos via third-party apps. The Pixel 4 and Pixel 4 XL will soon join the ranks of devices supporting a location-enhancing version of GPS called dual-frequency GNSS. And unlike Apple’s iPhone 11 series , the Pixel 4 series lacks support for the latest Wi-Fi standard — 802.11ax, or Wi-Fi 6.
But Pixel phones have never been about rocking the boat. They’re flagship handsets, sure, but their specs don’t approach those of the Galaxy Note10, for instance. Still, they are packed with every new feature Google has to offer and serve to showcase the latest in Google’s machine learning expertise.
Few phones this past year came close to besting the quality achieved by the Pixel 3 series’ camera, and the Pixel 4 series appears poised to repeat history (although Apple’s Deep Fusion tech might give it a run for its money). To our knowledge, the Pixel 4 series’ real-time offline transcription capabilities are without equal among smartphones. And while Project Soli is still in its infancy, it has the potential to usher in novel ways of interacting with apps (and perhaps more excitingly, games) through gestures.
The question is whether those refinements justify the premium price tags — $800 for the Pixel 4 (64GB) and $900 for the Pixel 4 XL (64GB), the same as last year’s Pixel 3 series. The Samsung Galaxy S10e and OnePlus 7 Pro, the latter of which features a 90Hz display and OnePlus’ excellent OxygenOS skin, can be had for less than $700.
That’s all to say the Pixel 4 series targets those on the hunt for no-nonsense, performant phones with appreciable (but not earth-shattering) quality of life improvements. It commands a premium for AI features at the expense of specs, which won’t sit well with every prospective buyer. But those unconcerned with specs and willing to pay the Google tax aren’t likely to be disappointed with their purchase.
" |
16,193 | 2,020 | "Google’s federated analytics method could analyze end user data without invading privacy | VentureBeat" | "https://venturebeat.com/2020/05/27/googles-federated-analytics-method-could-analyze-end-user-data-without-invading-privacy" | "Google’s federated analytics method could analyze end user data without invading privacy
In a blog post today, Google laid out the concept of federated analytics, a practice of applying data science methods to the analysis of raw data that’s stored locally on edge devices. As the tech giant explains, it works by running local computations over a device’s data and making only the aggregated results — not the data from the particular device — available to authorized engineers.
While federated analytics is closely related to federated learning , an AI technique that trains an algorithm across multiple devices holding local samples, it only supports basic data science needs. It’s “federated learning lite” — federated analytics enables companies to analyze user behaviors in a privacy-preserving and secure way, which could lead to better products. Google for its part uses federated techniques to power Gboard’s word suggestions and Android Messages’ Smart Reply feature.
“The first exploration into federated analytics was in support of federated learning: how can engineers measure the quality of federated learning models against real-world data when that data is not available in a data center? The answer was to re-use the federated learning infrastructure but without the learning part,” Google research scientist Daniel Ramage and software engineer Stefano Mazzocchi said in a statement. “In federated learning, the model definition can include not only the loss function that is to be optimized, but also code to compute metrics that indicate the quality of the model’s predictions. We could use this code to directly evaluate model quality on phones’ data.” As an example, in a user study, Gboard engineers measured the overall quality of word prediction models against raw typing data held on phones. Participating phones downloaded a candidate model, locally computed a metric of how well the model’s predictions matched words that were actually typed, and then uploaded the metric without any adjustment to the model itself or any change to the Gboard typing experience. By averaging the metrics uploaded by many phones, engineers learned a population-level summary of model performance.
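The mechanics Ramage and Mazzocchi describe, in which each phone scores a candidate model against its own data and only the resulting metric leaves the device, can be sketched in a few lines of Python. The snippet below is purely illustrative and is not Google's implementation; the Device, local_accuracy, and federated_accuracy names are invented for the example, and a production system would run the per-device step on the phones themselves and layer secure aggregation and differential privacy on top.

```python
# Illustrative sketch of federated model evaluation: each simulated device scores
# a candidate model on its own private examples and reports only aggregate counts.
from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple


@dataclass
class Device:
    # (context, next_word) pairs that never leave the device
    local_examples: Sequence[Tuple[str, str]]


def local_accuracy(model: Callable[[str], str], device: Device) -> Tuple[int, int]:
    """Runs on the device: return (correct, total) for its private examples."""
    correct = sum(1 for context, word in device.local_examples if model(context) == word)
    return correct, len(device.local_examples)


def federated_accuracy(model: Callable[[str], str], devices: List[Device]) -> float:
    """Server-side view: fold the per-device counts into one population-level metric."""
    correct = total = 0
    for device in devices:
        c, t = local_accuracy(model, device)  # in reality, uploaded by the phone
        correct += c
        total += t
    return correct / max(total, 1)


if __name__ == "__main__":
    toy_model = lambda context: "the"  # stand-in for a downloaded candidate model
    fleet = [Device([("going to", "the"), ("a bit of", "a")]), Device([("one of", "the")])]
    print(f"Population-level accuracy: {federated_accuracy(toy_model, fleet):.2f}")
```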
In a separate study, Gboard engineers wanted to discover words commonly typed by users and add them to dictionaries for spell-checking and typing suggestions. They trained a character-level recurrent neural network on phones, using only the words typed on these phones that weren’t already in the global dictionary. No typed words ever left the phones, but the resulting model could then be used in the datacenter to generate samples of frequently typed character sequences — i.e., the new words.
Beyond model evaluation, Google uses federated analytics to support the Now Playing feature on its Pixel phones , which shows what song might be playing nearby. Under the hood, Now Playing taps an on-device database of song fingerprints to identify music near a phone without the need for an active network connection.
When it recognizes a song, Now Playing records the track name into the on-device history, and when the phone is idle and charging while connected to Wi-Fi, Google’s federated learning and analytics server sometimes invites it to join a “round” of computation with hundreds of phones. Each phone in the round computes the recognition rate for the songs in its Now Playing history and uses a secure aggregation protocol to encrypt the results. The encrypted rates are sent to the federated analytics server, which doesn’t have the keys to decrypt them individually; when combined with the encrypted counts from the other phones in the round, the final tally of all song counts can be decrypted by the server.
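The cancel-out-in-aggregate idea behind that protocol can be illustrated with ordinary Python. The sketch below is a conceptual toy using additive pairwise masks, not Google's actual secure aggregation scheme (which adds key agreement, dropout recovery, and real cryptography); every name here is invented for the example.

```python
# Toy illustration of secure aggregation via additive masking: each phone blinds its
# song counts with pairwise masks that cancel when the server sums all reports.
import random
from typing import Dict, List

MOD = 2**32  # fixed modulus so masks wrap around cleanly


def masked_report(my_id: int, counts: Dict[str, int], phone_ids: List[int],
                  songs: List[str], round_seed: int = 0) -> Dict[str, int]:
    """Return this phone's counts with pairwise masks folded in (mod MOD)."""
    report = {}
    for song in songs:
        value = counts.get(song, 0)
        for peer in phone_ids:
            if peer == my_id:
                continue
            # Both phones of a pair derive the same mask; the lower id adds it,
            # the higher id subtracts it, so each pair contributes zero in total.
            shared = f"{min(my_id, peer)}:{max(my_id, peer)}:{song}:{round_seed}"
            mask = random.Random(shared).randrange(MOD)
            value = (value + mask) % MOD if my_id < peer else (value - mask) % MOD
        report[song] = value
    return report


if __name__ == "__main__":
    songs = ["song_a", "song_b"]
    phones = {1: {"song_a": 3}, 2: {"song_a": 1, "song_b": 2}, 3: {"song_b": 5}}
    reports = [masked_report(pid, c, list(phones), songs) for pid, c in phones.items()]
    totals = {s: sum(r[s] for r in reports) % MOD for s in songs}
    print(totals)  # only the totals are recoverable: {'song_a': 4, 'song_b': 7}
```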
The result enables Google’s engineers to improve the song database without any phone revealing which songs were heard, for example, by making sure the database contains truly popular songs. Google claims that in its first improvement iteration, federated analytics resulted in a 5% increase in overall song recognition across all Pixel phones globally.
“We are also developing techniques for answering even more ambiguous questions on decentralized datasets like ‘what patterns in the data are difficult for my model to recognize?’ by training federated generative models. And we’re exploring ways to apply user-level differentially private model training to further ensure that these models do not encode information unique to any one user,” wrote Ramage and Mazzocchi. “It’s still early days for the federated analytics approach and more progress is needed to answer many common data science questions with good accuracy … [B]ut federated analytics enables us to think about data science differently, with decentralized data and privacy-preserving aggregation in a central role.”
" |
16,194 | 2,019 | "Glassdoor: 14 of the 25 highest-paying U.S. jobs for 2019 are in tech | VentureBeat" | "https://venturebeat.com/2019/09/17/glassdoor-10-of-the-25-highest-paying-u-s-jobs-for-2019-are-in-tech" | "Glassdoor: 14 of the 25 highest-paying U.S. jobs for 2019 are in tech
Career website Glassdoor today released its 2019 report on the “Highest Paying Jobs in America.” Out of the top 25 U.S. jobs listed, 14 were in tech — one more than last year , and as always, more than in any other industry (health care was next in line, with six jobs).
Glassdoor’s list is ordered by average base salary. For a job title to be considered, it had to have received at least 100 salary reports shared by U.S.-based employees over the past year (similar job titles were grouped together; C-suite level jobs were excluded). Glassdoor also says it applies “a proprietary statistical algorithm to estimate annual median base pay, which controls for factors such as location and seniority.” Without further ado, here are the 14 tech jobs that made the cut:
Enterprise Architect
Software Engineering Manager
Software Development Manager
Applications Development Manager
Solutions Architect
Data Architect
IT Program Manager
Systems Architect
UX Manager
Site Reliability Engineer
Cloud Engineer
Data Scientist
Information Security Engineer
Analytics Manager
The first three tech jobs placed in the top 10 of the full list. Enterprise Architect was fifth overall, Software Engineering Manager grabbed seventh, and Software Development Manager snuck in at tenth. Last year, four tech positions made it into the top 10.
Top 25 highest-paying jobs Here is the full list of the 25 highest-paying jobs in the U.S.:
1. Physician: $193,415 base salary
2. Pharmacy Manager: $144,768 base salary
3. Dentist: $142,478 base salary
4. Pharmacist: $126,438 base salary
5. Enterprise Architect: $122,585 base salary
6. Corporate Counsel: $117,588 base salary
7. Software Engineering Manager: $114,163 base salary
8. Physician Assistant: $113,855 base salary
9. Corporate Controller: $113,368 base salary
10. Software Development Manager: $109,809 base salary
11. Nurse Practitioner: $109,481 base salary
12. Applications Development Manager: $107,735 base salary
13. Solutions Architect: $106,436 base salary
14. Data Architect: $104,840 base salary
15. Plant Manager: $104,817 base salary
16. IT Program Manager: $104,454 base salary
17. Systems Architect: $103,813 base salary
18. UX Manager: $102,489 base salary
19. Site Reliability Engineer: $100,855 base salary
20. Cloud Engineer: $98,626 base salary
21. Attorney: $97,711 base salary
22. Data Scientist: $97,027 base salary
23. Information Security Engineer: $95,786 base salary
24. Analytics Manager: $95,238 base salary
25. Financial Planning & Analysis Manager: $94,874 base salary
Only six job positions weren't making six figures, same as last year.
Update: Glassdoor originally misstated the total for tech jobs.
" |
16,195 | 2,020 | "Accenture: Tech companies' disregard for inclusion drives women away | VentureBeat" | "https://venturebeat.com/2020/09/30/accenture-tech-companies-disregard-for-inclusion-drives-women-away" | "Accenture: Tech companies’ disregard for inclusion drives women away
A joint report from Accenture and Girls Who Code found a massive perception gap between leaders in the tech industry — including C-suite executives and senior human resource officers — and its female-identifying employees. While 77% of leaders think their workplace empowers women, only 54% of these women agree. And while 45% of leaders claim it’s easy for women to thrive in tech-related jobs, only 21% of women overall (and 8% of women of color) feel the same way.
These findings from the report “ Resetting Tech Culture ” are based on online surveys completed by three distinct groups within the United States in 2019: 1,990 tech employees (1,502 of whom identify as women), 500 senior human resources leaders, and 2,700 college students. The researchers then analyzed workplace culture by applying a linear regression model to the survey results, which quantified the impact of different cultural factors on women’s advancement.
According to the report, the disparity is all about culture and opportunity: uncomfortable classroom settings in college, or even high school, combined with less-than-ideal company work environments, lead over 50% of young women in technology roles to drop out of the industry by the age of 35.
Senior human resources leaders are largely responsible for workplace culture. They’re changemakers who determine who is hired, how they work, and what they work on. But according to the survey results, they largely overestimate how safe and welcoming their workplaces are while underestimating how difficult it is for women to build their careers in technology.
This perception gap is key because leadership undervalues inclusion in the workplace and remains focused on hiring women when there’s an existing attrition problem. The report indicates that leaders tend to center their efforts on hiring rather than retaining women. An emphasis on hiring makes it less likely for women to advance in their career within a company; the company then misses out on reduced bias, a more equitable workplace, and an overall improved culture. The report asserts that the corporate world cannot improve at the rate it needs to without the contributions of women.
This report identifies five actionable cultural practices that can curb this trend: strengthening parental leave policies, selecting diverse leaders for senior teams, developing women-specific mentorship programs, rewarding employees for creativity, and scheduling networking events that are open to all team members. It expects that these changes could help ensure up to 3 million early-in-career women will work in technology roles by 2030. That’s almost twice as many as there are right now, according to the report.
Accenture and Girls Who Code say this reset would help to “drive much-needed change: [the] analysis suggests that if every company scored high on measures of an inclusive culture — specifically, if they were on par with those in the top 20% of [the] study — the annual attrition rate of women in tech could drop by up to 70%.” Although the number of women working in technology as a whole has increased, the proportional gender imbalance in technology today is actually greater than it was 35 years ago. This disequilibrium hurts not only women’s earnings and advancement but also the goals of technology companies, because inclusivity and innovation are closely intertwined.
And if technology is the future, these next few years present a golden opportunity to make it work for everyone. Accenture and Girls Who Code believe that this begins with resolving the critical disconnect between tech leaders and their employees through empathy and women-focused policies.
" |
16,196 | 2,019 | "GitHub acquires Semmle to help developers spot code exploits | VentureBeat" | "https://venturebeat.com/2019/09/18/github-acquires-semmle-to-help-developers-spot-code-exploits" | "GitHub acquires Semmle to help developers spot code exploits
GitHub mug.
Microsoft-owned GitHub today announced that it’s acquired Semmle , a San Francisco startup developing an engineering analytics solution for software development process management. The terms of the purchase weren’t disclosed, but GitHub says it’ll make Semmle’s code analysis engine available across public and enterprise repositories through its GitHub Actions tool.
GitHub also revealed this morning that it’s now a Common Vulnerabilities and Exposures (CVE) Numbering Authority. (For the uninitiated, the CVE system provides a reference for publicly disclosed information about security vulnerabilities and exposures.) Going forward, GitHub says it’ll become easier for code contributors to report vulnerabilities directly from repositories, after which they’ll be assigned a CVE ID, posted to the CVE List, and then uploaded to the National Vulnerability Database (NVD).
“Open source has had a remarkable run over the past 20 years. Today almost every software product from any vendor or community includes open source code in its supply chain. We all benefit from the open source model, and we all have a role to play in making open source successful for the next 20 years,” wrote GitHub in a blog post. “Both of these announcements are part of our larger strategy to secure the world’s code.” Semmle originally spun out of research at Oxford in 2006, and soon after attracted clients like Microsoft, Google, Credit Suisse, NASA, and Nasdaq and raised over $31 million in venture capital. (In the last year alone, it saw a two time uptick in new customers.) It provided a free version of its technology to open source programmers to use with their apps, which prior to the acquisition analyzed the commits of tens of thousands of projects.
As GitHub SVP of product Shanku Niyogi explained in a blog post, Semmle’s unique approach to code analysis enables it to make sense of complex data structures and quickly spot all variations of a coding mistake. Researchers using Semmle leverage a declarative, object-oriented query language dubbed QL to suss out vulnerabilities in large codebases, and to share and run searches over many codebases. (Helpfully, Semmle ships with 2,000 queries covering a number of known exploits and their variants.) Niyogi says that to date, over 100 CVEs in repositories have been discovered using its approach, including in high-profile projects like U-Boot, Apache Struts, the Linux Kernel, Memcached, VLC, and Apple’s XNU. “We are excited to bring Semmle to all open source communities and our Enterprise customers,” he added. “As the community grows and contributes their queries, we all help to make software more secure.” These latest developments come months after GitHub revealed that it had acquired Dependabot, a third-party tool that automatically opens pull requests to update dependencies in popular programming languages. Around the same time, GitHub made dependency insights generally available to GitHub Enterprise Cloud subscribers, and it broadly launched security notifications that flag exploits and bugs in dependencies for GitHub Enterprise Server customers.
In May, GitHub revealed beta availability of maintainer security advisories and security policy, which offers a private place for developers to discuss and publish security advisories to select users within GitHub without risking an information breach. That same month, the company said it would collaborate with open source security and license compliance management platform WhiteSource to “broaden” and “deepen” its coverage of and remediation suggestions for potential vulnerabilities in .NET, Java, JavaScript, Python, and Ruby dependencies.
" |
16,197 | 2,020 | "Cobalt raises $29 million to bring its 'pentest as a service' platform to more software teams | VentureBeat" | "https://venturebeat.com/2020/08/20/cobalt-raises-29-million-to-bring-its-pentest-as-a-service-platform-to-more-software-teams" | "Cobalt raises $29 million to bring its ‘pentest as a service’ platform to more software teams
Cobalt.io , a “pentest-as-a-service” platform that lets any business access ethical hackers to stress-test their software, has raised $29 million in a series B round of funding led by Highland Europe.
Penetration testing, or “pentesting,” is a process that strives to identify vulnerabilities and exploit them as a real-world hacker might. The pentesting market is currently pegged at $1.7 billion, a figure that will more than double within five years, according to a MarketsandMarkets report.
Founded in 2013, San Francisco-based Cobalt vets qualified human pentesters and facilitates on-demand tests for clients, who pay a fixed price based on the size of their application and how frequently they want tests to be carried out. Companies receive vulnerability reports via the Cobalt Central dashboard, where they can be assigned directly to relevant developers through their bug-tracking system of choice (e.g., Jira or GitHub).
Above: Cobalt.io pentesting platform
Cobalt Central also allows companies and pentesters to communicate about any vulnerabilities. This two-way interaction creates what Cobalt calls a “dynamic, real-time feedback loop” between developers and pentesters.
Cobalt promises to bring pentesting into the digital era, bypassing PDFs that simply list vulnerabilities and providing a marketplace for certified pentesters and an interface for managing the process from start to finish. Notable existing clients include MuleSoft, Verifone, and Axel Springer.
Automation AI and automation are increasingly infiltrating cybersecurity , which is why automated pentesting platforms should come as no surprise. But Cobalt believes a human-centric approach is best.
“Automation and AI are disruptive forces in the world of enterprise tech, but when it comes to pentesting, the manual element will never become obsolete,” chief strategy officer Caroline Wong told VentureBeat. “While there are many types of security vulnerabilities that can be found using automated platforms, there are entire classes of issues that can only be discovered manually, by humans. These include business logic bypass, race conditions, and chained exploits.”
Cobalt does lean on some automation, however. External pentesters and developers haven’t always worked together effectively, and companies need to be informed immediately when critical vulnerabilities are discovered. This is why Cobalt automates some of the communication and collaboration between the two parties, with tickets and fix-verification triggered automatically.
“Immediate notification of found vulnerabilities to the developer team, and on-demand, asynchronous communication between pentesters and engineers helps newly discovered security issues to get to the right folks so they can get fixed,” Wong said.
Cobalt recruits and assesses its pentesters, with each candidate undergoing a technical assessment and video interview. The company also gathers feedback on an ongoing basis from customers and other team members. Cobalt currently counts 300 pentesters as part of its Cobalt Core team.
“Our pentester community is the lynchpin of our business, so the bar for entrants is high,” Wong said. “It’s a closed and exclusive group, and we do not consider applications without a referral from within the community, within the company, or within our customer base.” Cobalt had previously raised around $8 million, and with another $29 million in the bank it plans to double down on international growth. Other participants in its series B round include Gerhard Eschelbeck, former VP of security and privacy engineering at Google; Adobe’s chief product officer Scott Belsky; Soren Abildgaard; Gary Swart; Elizabeth Tse; and Greg Nicastro.
" |
16,198 | 2,020 | "Software that monitors students during tests perpetuates inequality and violates their privacy | MIT Technology Review" | "https://www.technologyreview.com/2020/08/07/1006132/software-algorithms-proctoring-online-tests-ai-ethics" | "Software that monitors students during tests perpetuates inequality and violates their privacy
By Shea Swauger
Photo by Iris Wang on Unsplash
The coronavirus pandemic has been a boon for the test proctoring industry. About half a dozen companies in the US claim their software can accurately detect and prevent cheating in online tests.
Examity, HonorLock, Proctorio, ProctorU, Respondus, and others have rapidly grown since colleges and universities switched to remote classes.
While there’s no official tally, it’s reasonable to say that millions of algorithmically proctored tests are happening every month around the world. Proctorio told the New York Times in May that business had increased by 900% during the first few months of the pandemic, to the point where the company proctored 2.5 million tests worldwide in April alone.
I'm a university librarian and I've seen the impacts of these systems up close. My own employer, the University of Colorado Denver, has a contract with Proctorio.
It’s become clear to me that algorithmic proctoring is a modern surveillance technology that reinforces white supremacy, sexism, ableism, and transphobia. The use of these tools is an invasion of students’ privacy and, often, a civil rights violation.
If you’re a student taking an algorithmically proctored test, here’s how it works: When you begin, the software starts recording your computer’s camera, audio, and the websites you visit. It measures your body and watches you for the duration of the exam, tracking your movements to identify what it considers cheating behaviors. If you do anything that the software deems suspicious, it will alert your professor to view the recording and provide them a color-coded probability of your academic misconduct.
Depending on which company made the software, it will use some combination of machine learning, AI, and biometrics (including facial recognition, facial detection, or eye tracking) to do all of this. The problem is that facial recognition and detection have proven to be racist, sexist, and transphobic over, and over, and over again.
In general, technology has a pattern of reinforcing structural oppression like racism and sexism.
Now these same biases are showing up in test proctoring software that disproportionately hurts marginalized students.
A Black woman at my university once told me that whenever she used Proctorio's test proctoring software, it always prompted her to shine more light on her face. The software couldn’t validate her identity and she was denied access to tests so often that she had to go to her professor to make other arrangements. Her white peers never had this problem.
Similar kinds of discrimination can happen if a student is trans or non-binary. But if you’re a white cis man (like most of the developers who make facial recognition software), you’ll probably be fine.
Students with children are also penalized by these systems. If you’ve ever tried to answer emails while caring for kids, you know how impossible it can be to get even a few uninterrupted minutes in front of the computer. But several proctoring programs will flag noises in the room or anyone who leaves the camera’s view as nefarious. That means students with medical conditions who must use the bathroom or administer medication frequently would be considered similarly suspect.
Beyond all the ways that proctoring software can discriminate against students, algorithmic proctoring is also a significant invasion of privacy. These products film students in their homes and often require them to complete “room scans,” which involve using their camera to show their surroundings. In many cases, professors can access the recordings of their students at any time, and even download these recordings to their personal machines. They can also see each student’s location based on their IP address.
Privacy is paramount to librarians like me because patrons trust us with their data. After 9/11, when the Patriot Act authorized the US Department of Homeland Security to access library patron records in their search for terrorists, many librarians started using software that deleted a patron’s record once a book was returned. Products that violate people’s privacy and discriminate against them go against my professional ethos , and it’s deeply concerning to see such products eagerly adopted by institutions of higher education.
This zealousness would be slightly more understandable if there was any evidence that these programs actually did what they claim. To my knowledge, there isn’t a single peer-reviewed or controlled study that shows proctoring software effectively detects or prevents cheating. Given that universities pride themselves on making evidence-based decisions, this is a glaring oversight.
Fortunately, there are movements underway to ban proctoring software and ban face recognition technologies on campuses , as well as congressional bills to ban the US federal government from using face recognition. But even if face recognition technology were banned, proctoring software could still exist as a program that tracks the movements of students’ eyes and bodies. While that might be less racist, it would still discriminate against people with disabilities, breastfeeding parents, and people who are neuroatypical. These products can’t be reformed; they should be abandoned.
Cheating is not the threat to society that test proctoring companies would have you believe. It doesn’t dilute the value of degrees or degrade institutional reputations, and students aren’t trying to cheat their way into being your surgeon.
Technology didn’t invent the conditions for cheating and it won’t be what stops it. The best thing we in higher education can do is to start with the radical idea of trusting students.
Let’s choose compassion over surveillance.
Shea Swauger is an academic librarian and researcher at the University of Colorado Denver.
" |
16,199 | 2,019 | "D-Wave previews quantum computing platform with over 5,000 qubits | VentureBeat" | "https://venturebeat.com/2019/02/27/d-wave-previews-quantum-computing-platform-with-over-5000-qubits" | "D-Wave previews quantum computing platform with over 5,000 qubits
A D-Wave machine.
D-Wave Systems today unveiled the roadmap for its 5,000-qubit quantum computer. Components of D-Wave’s next-generation quantum computing platform will come to market between now and mid-2020 via ongoing quantum processing unit (QPU) and cloud-delivered software updates. The complete system will be available through cloud access and for on-premise installation in mid-2020.
Binary digits (bits) are the basic units of information in classical computing, while quantum bits (qubits) are the basic units of quantum computing. Bits are always in a state of 0 or 1, while qubits can be in a state of 0, 1, or a superposition of the two. Quantum computing leverages qubits to perform computations that would be much more difficult for a classical computer. Based in Burnaby, Canada, D-Wave has been developing its own quantum computers that use quantum annealing.
D-Wave is mainly focused on solving optimization problems, so its quantum computers can’t be directly compared to the competition. Indeed, many have questioned whether D-Wave’s systems have quantum properties, and thus performance that classical computers can’t match. In the meantime, D-Wave continues to improve and sell its systems.
In October, D-Wave launched D-Wave Leap , a cloud service for developers to run their open source applications on its quantum computers. Today, the Canadian company promised that you will also be able to run and build applications on the next-generation quantum computing platform by purchasing hours of use through Leap. That way, developers, researchers, governments, institutions, and businesses can access the D-Wave quantum system without breaking the bank.
Pegasus The next-generation quantum computing platform relies on a new chip topology, Pegasus , which D-Wave promises is the world’s most connected commercial quantum system. While its predecessor, Chimera , offered six connected qubits, Pegasus more than doubles the number to 15. Having each qubit connected to 15 other qubits instead of six translates to 2.5x more connectivity, which in turn enables the embedding of larger and more complex problems with fewer physical qubits.
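A quick way to see the six-versus-15 difference for yourself is D-Wave's open source dwave-networkx package, which can generate both topologies as ordinary NetworkX graphs. The snippet below is a rough illustration, not an official benchmark; it assumes the package's chimera_graph and pegasus_graph helpers behave as documented at the time of writing.

```python
# Sketch: compare per-qubit connectivity of the Chimera and Pegasus topologies
# using D-Wave's open source dwave-networkx package (pip install dwave-networkx).
import dwave_networkx as dnx

chimera = dnx.chimera_graph(16, 16, 4)  # C16, the 2000Q-era topology
pegasus = dnx.pegasus_graph(16)         # P16, the next-generation topology

for name, graph in (("Chimera C16", chimera), ("Pegasus P16", pegasus)):
    degrees = [degree for _, degree in graph.degree()]
    print(f"{name}: {graph.number_of_nodes()} qubits, "
          f"up to {max(degrees)} couplers per qubit")
# Expect a maximum of 6 couplers per qubit on Chimera versus 15 on Pegasus.
```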
This animated GIF shows the evolution from the Chimera topology to Pegasus.
D-Wave’s 2000Q quantum computer launched in January 2017 with 2,000 qubits, doubling the size of its predecessor. The successor to 2000Q will do even more, with D-Wave expecting the next-generation platform to offer more than 5,000 qubits. This should give programmers access to a larger, denser, and more powerful graph for building commercial quantum applications.
The next-generation system will also include D-Wave’s lowest-noise commercially available QPUs. The new QPU fabrication technology improves system performance and solution precision, the company claims.
Better software and tools To ensure all these improvements can be leveraged appropriately, D-Wave will also be updating its hybrid software and tools. Developers can run both classical and next-generation quantum platforms in the hybrid rapid development environment using Python. They can also interrupt processing and synchronize classical and quantum tasks to draw maximum computing power out of each system.
The open source tools in D-Wave’s Ocean SDK are written in Python and C. The Ocean SDK now includes compilers for embedding problems on the new Pegasus topology.
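Because those tools are open source, the embedding step can be tried directly. The sketch below uses the minorminer and dwave-networkx packages to map a small fully connected problem onto a Pegasus target graph; it is a minimal illustration of minor-embedding rather than D-Wave's internal compiler, and exact function signatures may vary between Ocean releases.

```python
# Sketch: minor-embed a 10-variable fully connected problem onto a small Pegasus graph
# (pip install minorminer dwave-networkx networkx). Heuristic results vary per run.
import networkx as nx
import dwave_networkx as dnx
import minorminer

source = nx.complete_graph(10)   # an all-to-all toy problem graph
target = dnx.pegasus_graph(6)    # a small Pegasus fragment as the embedding target

embedding = minorminer.find_embedding(source.edges, target.edges)
if embedding:
    chain_lengths = [len(chain) for chain in embedding.values()]
    print(f"Embedded {len(embedding)} logical variables; "
          f"longest chain spans {max(chain_lengths)} physical qubits.")
else:
    print("The heuristic did not find an embedding this run; try again.")
```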
New components of the platform will be available through Leap.
“Quantum computing is only as valuable as the applications customers can run,” said D-Wave’s chief product officer, Alan Baratz. “With the next-generation platform, we are making investments in things like connectivity and hybrid software and tools to allow customers to solve even more complex problems at greater scale, bringing new emerging quantum applications to life. Every decision we’ve made and every decision we’ll make will reflect an ongoing commitment to helping developers learn quantum systems and helping customers build the first commercial quantum killer applications.”
" |
16,200 | 2,020 | "D-Wave announces Leap 2, its cloud service for quantum computing applications | VentureBeat" | "https://venturebeat.com/2020/02/26/d-wave-announces-leap-2-its-cloud-service-for-quantum-computing-applications" | "D-Wave announces Leap 2, its cloud service for quantum computing applications
D-Wave Systems today announced Leap 2, a new version of its quantum cloud service for building and deploying quantum computing applications. Leap 2 is designed to help businesses and developers transition from quantum exploration to quantum production. D-Wave also promised that Advantage, its next-generation quantum system , will be available via Leap 2 this year.
Binary digits (bits) are the basic units of information in classical computing, while quantum bits (qubits) are the basic units of quantum computing. Bits are always in a state of 0 or 1, while qubits can be in a state of 0, 1, or a superposition of the two. Quantum computing leverages qubits to perform computations that would be much more difficult for a classical computer. Based in Burnaby, Canada, D-Wave has been developing its own quantum computers that use quantum annealing.
In October 2018, D-Wave launched Leap, letting developers run their open source applications on its quantum computers.
At the time, D-Wave had about 80 customer applications built on its quantum processors. That number has more than doubled today to over 200. Applications so far have spanned protein folding, financial modeling, machine learning, materials science, and logistics. The company says it gleaned insights from the usage of thousands of users over the past 18 months to build Leap 2.
D-Wave Leap 2 features and pricing On top of real-time access to a D-Wave 2000Q quantum computer , Leap 2 includes three main features:
Hybrid solver service: A managed cloud-based service allowing users to solve large and complex problems of up to 10,000 variables. Users do not need to list complex parameters. The solver automatically runs problems on a collection of quantum and classical cloud resources, using D-Wave’s algorithms to decide the best way to solve a problem.
Problem inspector: Allows more advanced quantum developers to visually see how their problems map onto the quantum processing unit (QPU). By showing the logical and embedded structure of a problem, the inspector displays the solutions returned from the QPU and provides alerts that allow developers to improve their results.
Integrated Developer Environment (IDE): A prebuilt, ready-to-code environment in the cloud for quantum hybrid Python development. The Leap IDE has the latest Ocean SDK set up and configured and includes the problem inspector and Python debugging tools. Seamless GitHub integration means developers can easily access the latest examples and contribute to the Ocean tools from within the IDE.
When D-Wave launched the first version of Leap, the company priced access starting at $2,000 per hour of QPU time per month. Leap 2 is a bit more flexible in that users can upgrade their account for additional time in customizable blocks of Leap “units” for different skill and investment levels. The units can be used for both the QPU and the hybrid solver service. When you sign up for Leap 2, you get a free minute of direct quantum computing access time, which is equivalent to running between 400 and 4,000 problems. Leap 2 also includes 20 minutes of free access to quantum-classical hybrid solvers.
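For a sense of what consuming those units looks like from code, the sketch below submits a toy two-variable problem to the hybrid solver service through the Ocean SDK. It is a generic illustration based on Ocean's public documentation rather than anything specific to this announcement, and it assumes a Leap account with an API token already configured locally (for example via dwave config create).

```python
# Minimal sketch of using the Leap hybrid solver service from the Ocean SDK
# (pip install dwave-ocean-sdk). Requires a configured Leap API token.
import dimod
from dwave.system import LeapHybridSampler

# Toy binary quadratic model: the quadratic term penalizes x0 and x1 both being 1.
bqm = dimod.BinaryQuadraticModel({"x0": 0.0, "x1": 0.0},
                                 {("x0", "x1"): 1.0},
                                 0.0, dimod.BINARY)

sampler = LeapHybridSampler()   # time used here counts against the hybrid-solver quota
sampleset = sampler.sample(bqm)
print(sampleset.first.sample, sampleset.first.energy)
```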
Doubling down on cloud D-Wave was the first company to sell commercial quantum computers and claims it was the first to give developers real-time access to live quantum processors. Other companies, including tech giants like IBM , offer their own cloud services for their quantum computers.
Late last year, major cloud providers Amazon and Microsoft announced plans to join the fray alongside single hardware providers.
In November, Microsoft announced Azure Quantum , a cloud service that lets businesses and developers tap into quantum hardware providers Honeywell, IonQ, or QCI. In December, AWS announced Amazon Braket , a cloud service that lets businesses and developers tap into quantum hardware providers D-Wave, IonQ, and Rigetti.
The impact of major cloud providers entering the quantum computing market remains to be seen. For now, quantum computing providers like D-Wave plan to keep expanding their first-party cloud services while playing ball with the giants. We’ll know soon enough if newer quantum computing players will develop their own cloud services or end up relying on the likes of Amazon and Microsoft.
" |
16,201 | 2,020 | "D-Wave makes its quantum computers free to anyone working on the coronavirus crisis | VentureBeat" | "https://venturebeat.com/2020/03/31/d-wave-makes-its-quantum-computers-free-to-anyone-working-on-the-coronavirus-crisis" | "D-Wave makes its quantum computers free to anyone working on the coronavirus crisis
D-Wave today made its quantum computers available for free to researchers and developers working on responses to the coronavirus (COVID-19) crisis. D-Wave partners and customers Cineca, Denso, Forschungszentrum Jülich, Kyocera, MDR, Menten AI, NEC, OTI Lumionics, QAR Lab at LMU Munich, Sigma-i, Tohoku University, and Volkswagen are also offering to help. They will provide access to their engineering teams with expertise on how to use quantum computers, formulate problems, and develop solutions.
Quantum computing leverages qubits to perform computations that would be much more difficult, or simply not feasible, for a classical computer. Based in Burnaby, Canada, D-Wave was the first company to sell commercial quantum computers, which are built to use quantum annealing.
D-Wave says the move to make access free is a response to a cross-industry request from the Canadian government for solutions to the COVID-19 pandemic. Free and unlimited commercial contract-level access to D-Wave’s quantum computers is available in 35 countries across North America, Europe, and Asia via Leap, the company’s quantum cloud service. Just last month, D-Wave debuted Leap 2 , which includes a hybrid solver service and solves problems of up to 10,000 variables.
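For developers, problems submitted to Leap are typically phrased as binary quadratic models using D-Wave's open source Ocean tools. The sketch below is a minimal illustration with invented weights, not a D-Wave example; the commented-out LeapHybridSampler call is the cloud path and needs a Leap account and API token.

import dimod

# Toy problem with invented weights: pick at most one of two overlapping delivery
# routes. Negative linear biases reward choosing a route; the positive quadratic
# bias penalizes choosing both.
bqm = dimod.BinaryQuadraticModel(
    {"x0": -1.0, "x1": -1.2},   # linear biases
    {("x0", "x1"): 2.0},        # quadratic bias
    0.0,                        # constant offset
    dimod.BINARY,
)

# Small models can be checked locally by brute force ...
best = dimod.ExactSolver().sample(bqm).first
print(best.sample, best.energy)

# ... while the same model can be submitted to Leap's hybrid solver
# (requires a Leap account and API token):
# from dwave.system import LeapHybridSampler
# sampleset = LeapHybridSampler().sample(bqm, time_limit=3)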
Quantum computing and COVID-19 applications D-Wave and its partners are hoping the free access to quantum processing resources and quantum expertise will help uncover solutions to the COVID-19 crisis. We asked the company whether it expects any specific use cases to bear fruit. D-Wave listed analyzing new methods of diagnosis, modeling the spread of the virus, supply distribution, and pharmaceutical combinations. D-Wave CEO Alan Baratz added a few more to the list.
“The D-Wave system, by design, is particularly well-suited to solve a broad range of optimization problems, some of which could be relevant in the context of the COVID-19 pandemic,” Baratz told VentureBeat. “Potential applications that could benefit from hybrid quantum/classical computing include drug discovery and interactions, epidemiological modeling, hospital logistics optimization, medical device and supply manufacturing optimization, and beyond.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Earlier this month, Murray Thom, D-Wave’s VP of software and cloud services, told us quantum computing and machine learning are “extremely well matched.” In today’s press release, Prof. Dr. Kristel Michielsen from the Jülich Supercomputing Centre seemed to suggest a similar notion: “To make efficient use of D-Wave’s optimization and AI capabilities, we are integrating the system into our modular HPC environment.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
" |
16,202 | 2,020 | "Amazon launches Braket quantum computing service in general availability | VentureBeat" | "https://venturebeat.com/2020/08/13/amazon-launches-braket-quantum-computing-service-in-general-availability" | "Amazon launches Braket quantum computing service in general availability
Amazon today announced the general availability of Amazon Braket , a fully managed Amazon Web Services (AWS) product that provides a development environment for exploring and designing novel quantum algorithms. Customers can tap Braket — which launched in preview last December — to test and troubleshoot algorithms on simulated quantum computers running in the cloud to help verify their implementation. Users can then run those algorithms on quantum processors in systems from D-Wave , IonQ , and Rigetti.
In theory, quantum computing has the potential to solve problems beyond the reach of classical computers by harnessing the laws of quantum mechanics to build powerful information-processing tools. Scientific discoveries arising from quantum computing could transform energy storage, chemical engineering, drug discovery, financial portfolio optimization, machine learning, and more. But advances require in-house expertise, access to quantum hardware, or a combination of both. Amazon asserts that managed quantum infrastructure could help facilitate research and education in quantum technologies and accelerate breakthroughs.
Using Jupyter notebooks and existing AWS services, Braket users can assess present and forthcoming capabilities, including quantum annealing, ion trap devices, and superconducting chips. Amazon says partners were chosen "for their quantum technologies" and that customers and hardware providers can design quantum algorithms using the Braket developer toolkit. They can also access a library of prebuilt algorithms and execute either low-level quantum circuits or fully managed hybrid algorithms, as well as select between software simulators running in Amazon Elastic Compute Cloud (EC2) and quantum hardware.
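On the developer side, that toolkit centers on a Python SDK. The following is a minimal sketch rather than Amazon's own sample code, and it runs on the local simulator bundled with the SDK, so no AWS account is needed to try it.

from braket.circuits import Circuit
from braket.devices import LocalSimulator

# Two-qubit Bell-state circuit: Hadamard on qubit 0, then CNOT from qubit 0 to 1.
bell = Circuit().h(0).cnot(0, 1)

# Run on the local simulator; swapping in braket.aws.AwsDevice with a device ARN
# would target a managed simulator or QPU instead.
task = LocalSimulator().run(bell, shots=1000)
print(task.result().measurement_counts)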
In addition to running quantum algorithms, customers can use Braket to run hybrid algorithms, which combine quantum and classical computing systems to overcome limitations inherent in today’s quantum technology. They’re also given access to Amazon’s Quantum Solutions Lab, which aims to connect users with quantum computing experts — including from 1Qbit, Rahko, Rigetti, QC Ware, QSimulate, Xanadu, and Zapata — to identify ways to apply quantum computing inside their organizations.
Amazon says Volkswagen has tested Braket to gain an "in-depth understanding of the meaningful use of quantum computing in a corporate environment." Other early adopters include multinational power company Enel, biotechnology organization Amgen, the University of Waterloo's Institute for Quantum Computing, quantum machine learning startup Rahko, Qu & Co, and the Fidelity Center for Applied Technology.
Amazon Braket is available today in US East (N. Virginia), US West (N. California), and US West (Oregon) AWS Regions, with more regions planned.
Braket competes with Microsoft’s Azure Quantum , a service that offers select partners access to three prototype quantum computers from IonQ, Honeywell, and QCI. But Azure Quantum is still in preview. And other rival offerings from Google and IBM only deliver compute from single, proprietary quantum processors and machines.
In a sign of its commitment to quantum computing research, Amazon unveiled the AWS Center for Quantum Computing last December. The Caltech-based laboratory aims to “boost innovation in science and industry” by connecting Amazon researchers and engineers with academic institutions to develop more powerful quantum computing hardware and identify novel quantum applications.
" |
16,203 | 2,020 | "Intel given greenlight to supply some products to Huawei | VentureBeat" | "https://venturebeat.com/2020/09/22/intel-given-greenlight-to-supply-some-products-to-huawei" | "Intel given greenlight to supply some products to Huawei
( Reuters ) — Intel has received licences from U.S. authorities to continue supplying certain products to Huawei, a company spokesperson said on Tuesday.
With U.S.-China ties at their worst in decades, the Trump administration has been pushing governments around the world to squeeze out Huawei , arguing that the telecoms giant would hand data to the Chinese government for espionage.
From September 15, new curbs have barred U.S. companies from supplying or servicing Huawei.
This week, the state-backed China Securities Journal said Intel had received permission to supply Huawei.
Last week, China's Semiconductor Manufacturing International Corporation (SMIC) confirmed it had also sought permission to continue servicing Huawei. SMIC uses U.S.-origin equipment to make chips for Huawei and other companies.
Huawei, founded in 1987 by a former engineer in China’s People’s Liberation Army, denies it spies for Beijing and says the United States is trying to smear it because Western firms are falling behind in 5G technology.
In what some observers have compared to the Cold War arms race, the United States worries that 5G dominance would give China an advantage Washington is not ready to accept.
(Reporting by Josh Horwitz, editing by Clarence Fernandez.)
" |
16,204 | 2,018 | "Arm unveils 7nm Cortex-A76AE, the 'world's first autonomous-class' car processor | VentureBeat" | "https://venturebeat.com/2018/09/26/arm-unveils-7nm-cortex-a76ae-the-worlds-first-autonomous-class-car-processor" | "Arm unveils 7nm Cortex-A76AE, the 'world's first autonomous-class' car processor
Arm Holdings , the Softbank-owned British semiconductor and software company, is perhaps best known for its role in designing billions of smartphone, tablet, and smartwatch chips sold worldwide. System-on-chips built on Arm architectures have an estimated 90-95 percent share of the smartphone processing market, and Mali, Arm’s graphics processing unit (GPU), is the third most popular in mobile devices. (Arm says that to date, its roughly 1,590 licensees have shipped more than 125 billion Arm-based chips globally, with 21 billion in the fiscal year 2017 alone.) Arm’s automotive efforts tend to get less attention. But it has licensed car hardware in some form or another since 1996 and says that the top 15 automotive chip makers tap its IP to drive 85 percent and 65 percent of in-vehicle infotainment (IVI) and advanced driver-assistance features (like adaptive cruise control and collision avoidance), respectively.
Today, in a bid to further cement its in-car dominance, the company unveiled a refreshed solutions lineup aimed at autonomous technologies.
“Key electronic innovations are going to happen in the IVI cockpit and ADAS space,” Lakshmi Mandyam, vice president of Arm’s automotive division, told VentureBeat in a phone interview. “Safety is the highest priority for car makers we talk with, for both the obvious technology factors associated with autonomous systems controlling all aspects of driving, but also to ensure that human passengers can trust their automated driver. If consumers don’t trust the autonomous systems in their cars are safe, then mass market acceptance of this technology will be slow to happen.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! To that end, Arm’s three new products and services — Arm Safety Ready, Cortex-A76AE, and Split-Lock — emphasize safety above all else.
Above: Arm Safety Ready processors.
Arm Safety Ready has Arm engineers conduct a litany of tests to ensure its chipsets meet rigorous standards, such as ISO 26262 (titled "Road vehicles — Functional Safety"), the International Organization for Standardization's (ISO) baseline for in-car electronic systems safety, and IEC 61508, the safety systems design and deployment standard published by the International Electrotechnical Commission. But it's more than just a certifications program — Mandyam described it as a "one-stop shop" for software, tools, and components intended to reduce the cost of integrating functional safety for OEMs.
“The question we asked ourselves was, ‘How do we give our partners a head start on safety?'” she said. “We’ve developed certified software components, a software test library, compilers, methodologies, [and] comprehensive safety documentation … and we’re working with third-party certification authorities to pre-certify products.” Safety Ready products include the Cortex-A72, Cortex-R5, Cortex-R52, Cortex A53, and Cortex M4, among others Also on the list is the Cortex-A76AE, which Arm’s calling the world’s first “autonomous-class processor.” (The “AE” in Cortex-A76AE stands for “Automotive Enhanced,” a new designator reserved for chips with specific in-vehicle processing features.) The 7nm, sub-30-watt system-on-chip targets ASIL D, the highest degree of autonomous hazard as defined by ISO, and boasts more than 250 KDMIPs of computing power.
Above: The Cortex-A76AE’s Autonomous Compute Complex.
At the core of Cortex-A76AE is the Autonomous Compute Complex, a scalable architecture made up of Cortex-A76AE processors, a dedicated machine learning processor ( Arm ML ), and Arm’s Mali-G76 GPU meshed together with CoreLink CMN-600AE, MMU-600AE, and GIC-600AE interlinks. It supports multi-CPU clusters of up to 64 cores, with V8.2 RAS features plus memory virtualization and protection for machine learning and neural network accelerators, automotive-focused AI processing, and multiple guest operating systems.
In these ways, Mandyam said, the Cortex-A76AE is uniquely tailored to handle the vast amounts of data generated by lidar sensors, cameras, AV and storage controllers, network gateways, and other autonomous car mainstays — not to mention the software routines in operating systems from Baidu, Linaro, AutoWare, and other ecosystem partners.
“It gives OEMs the flexibility to virtualize on a single CPU and still take advantage of the Split-Lock feature,” Mandyam explained. “Our implementation allows OEMs to allocate split clusters based on the applications they have.” The Cortex-A76AE’s other headliner is support for Split-Lock, a feature that allows OEMs to configure CPU clusters in a system-on-chip in two modes: “split mode” for high performance, which leverages two or four CPUs in said cluster, or “lock mode,” which synchronizes one or two CPU pairs for safer, less error-prone processing.
Above: A graphic illustrating the Cortex-A76AE’s Split-Lock feature.
“With the processor or cores locked together, you can achieve higher levels of safety,” Mandyam said. “Each operation is checked automatically in hardware. [That’s faster than] the traditional setup, where operations are sent to another chip to check if an error has occurred. The [Cortex-A76AE] is the first processor of this performance level to incorporate Split-Lock, [and] it’s been a game-changer in our conversations with OEMs and tier-one manufacturers.” Arm claims that in 16-core deployments, Cortex-A76AE can lead to a 10 times reduction in cost and power consumption compared to competing architectures. It’ll begin shipping in vehicles beginning in 2020, and it’s just the start — Arm’s new AE roadmap includes Helios-AE and Hercules-AE, which the company says it’ll detail at a later date.
“Historically, chip manufacturers have had to make design choices that made it more complex for them to implement safety features,” Mandyam said. “We’re introducing a high-performance processor with a best-in-class performance that’s uncompromising.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
" |
16,205 | 2,013 | "This sock tells you your baby's heart rate, sleep position, oxygen levels, and temperature | VentureBeat" | "https://venturebeat.com/2013/08/26/this-high-tech-sock-tells-you-your-babys-heart-rate-sleep-position-oxygen-levels-and-temperature" | "This sock tells you your baby's heart rate, sleep position, oxygen levels, and temperature
New parents who are wondering how their baby is sleeping, whether he’s cold, or whether she’s still on her back, might be able to rest a little easier at night if the first wearable tech product for babies meets its crowdfunding campaign goals this month.
Or, they might be nervously checking their baby’s vitals every five minutes.
Owlet Baby Care is launching a smart sock that gives you information on your child’s heart rate, blood oxygenation levels, sleep quality, skin temperature, and sleep position. That last one is particularly critical, as doctors have said that sleeping face-down is likely a contributing factor in SIDS , sudden infant death syndrome.
“I’m a dad, I’ve got 2 kids,” project cofounder Jacob Colvin told me this morning. “They had RSV [a respiratory infection that causes difficulty in breathing], and I had a really close friend who lost their child while we were actually visiting them. We want to be able to provide parents with peace of mind.” I have three children, and that nagging question — is our baby OK? — is indeed one that forces parents to get up and go check on their infants, particularly when they’re a few months old and starting to sleep in their own rooms.
The expandable sock includes a four-sensor pulse oximeter, which allows the device to measure skin temperature and heart rate simply via a built-in light. It's similar to but more advanced than the typical pulse rate device you might put your finger into in a doctor's office, with two light sources and two photodiodes to ensure good readings even with different-sized or growing infants.
An accelerometer keeps track of a baby's movements, providing insight into sleep quality and triggering an alarm if your child rolls onto his stomach; a thermometer provides temperature; and the sock transmits all the data to your smartphone or your computer, giving you a customizable dashboard into your baby's current health status.
Owlet isn’t crowdfunding via Kickstarter, and there’s a good reason for that.
“Kickstarter doesn’t allow baby products, home health care products, or medical products,” Colvin told me. “And we’d like to be able to have a little more control over our campaign.” Another difference compared to many hardware crowdfunding campaigns is a short timeline to deliver the product. Owlet has already built multiple functioning prototypes of the baby monitor, Colvin told me, because babies, after all, can’t be put on hold. Expecting parents will need their devices in just a few months, not half a year, and Owlet is ready to fulfill that, he said.
Eventually, it will be certified as a medical device as well.
“This device does not require FDA clearance, but we have another version that does,” Colvin told me. “It has an alarm system built into it [that alerts parents if their] child’s oxygen levels drop.” That alarm requires FDA approval, apparently, and part of the crowdfunding proceeds will be put to that notoriously lengthy and complicated process. The company is looking for $100,000 in funding, and has already raised $19,650 of that on the first day of its campaign.
" |
16,206 | 2,014 | "7 wacky wearables that show how the industry is evolving | VentureBeat" | "https://venturebeat.com/2014/10/01/7-wacky-wearables-that-show-how-the-industry-is-evolving" | "Feature 7 wacky wearables that show how the industry is evolving
Who doesn’t love shiny new gadgets? I know I do. But what good are gizmos if they don’t actually, you know, do something ? The wearable tech industry is booming right now, but whether or not it’s accomplishing something productive — other than giving us something that looks cool — isn’t so clear.
So we’ve put together a list of some of the latest (and most prominent) wearables currently on the market or in development to try and sort out why they’re so appealing. We also determine if there are any nuggets of true, world-changing innovation here. Here they are in order from the useful to just the plain whack-a-doodle: meMINI The meMINI is a wearable camera that lets you “handpick your favorite moments, after they happen,” according to its Kickstarter page, which makes it sound like any old recording device on the surface. In actuality, the meMINI records video continuously so that when a moment happens that you want to capture forever, you just have to press the “Recall” button to save the few seconds or few minutes that have just transpired. It’s like a video record button that lets you go back in time. It includes a “magnacatch” that lets you wear it anywhere on your clothing, too. You can get it now for $199.
Narrative Clip Lifeloggers rejoice! The Narrative Clip is a wearable camera the size of a stamp that automatically takes photos of your life. Since you don’t have to consciously take the pictures, the argument here is that you’ll capture moments that you might’ve otherwise missed. It can store up to 4,000 images that you can easily upload to your computer later or share with friends online. It weighs just 0.7 oz so it promises not to hang off your shirt. Starting price is $229.
Mi.mu Now, this is a wearable I can get behind. The Mi.mu is a pair of gloves that uses gesture detection to help wearers control music using the movement of their hands.
Though it didn’t meet its Kickstarter goal, the gloves are still in development and have received considerable endorsement by musician Imogen Heap. No release date has been set as of yet.
Lumo Lift If you’re like me and sit slumped over a keyboard all day, this wearable might actually be useful. The Lumo Lift is a magnet that clips onto your clothes near your collarbone that can sense when you’re slouching. When it does, the device vibrates until you stand or sit up straight again.
It can also measure how good your posture is throughout the day. And it works (of course) with the Lumo Lift app. Expect to pay $100 for this one.
Muse Because who doesn't need their brain "sensed"? At least, that appears to be the thinking behind the Muse, a brain fitness tool that promises to help you better manage stress and improve focus. To use it, just sync the headband's Bluetooth with the dedicated app and go through Muse-guided exercises. For instance, you'll be presented with a beach environment on the app, and if your mind wanders, the weather will change. This helps you improve focus and learn calming techniques. Muse costs $299.
Sensoria Smart Sock Fitness Tracker We've got smart wristbands. The Sensoria Smart Sock Fitness Tracker will be giving us smart socks. This device, which met its funding goal on Indiegogo, consists of "smart" socks that are made of running-friendly fabric that's been infused with textile sensors. Once you've put on the socks, just snap an electronic anklet onto one of them and you'll be able to sync details about your step count, speed, calories burned, and how your feet hit the ground to a dedicated app. What will super smart socks set you back? $149.
SmartWig The SmartWig from Sony is only at the patent stage right now, but it's so unusual, I had to include it here. The SmartWig is designed to cover all of or a portion of the head and is loaded with sensors that are capable of communicating with a whole host of external devices. It has a GPS, lasers for conducting remote PowerPoint presentations, and ultrasound transducers that vibrate when you're getting close to an object. And if you want to complete a task, just touch the wig. Here's my favorite description from the patent: "During a presentation the user may, for example, move forward or backward through presentation slides by simply pushing the sideburns." Now, be honest: Do you see yourself actually using any of these?
" |
16,207 | 2,015 | "'Smart' clothing sales to top 10M in 5 years, study says | VentureBeat" | "https://venturebeat.com/2015/05/04/smart-clothing-sales-to-top-10m-in-5-years-study-says" | "'Smart' clothing sales to top 10M in 5 years, study says
It’s inevitable that the wearable technology we see now, with its various biometric sensors, will start to move from our wrists to different parts of our body. The sensors will start showing up in different parts of our clothing.
One research firm believes this change will happen before the end of the decade. A new report from Tractica says we consumers will be buying more than 10 million pieces of smart clothing yearly by 2020.
There’s nothing very new about biosensing clothing, but this clothing has for the most part been confined to athletic appeal. Sports enthusiasts are using sensor-infused shirts, shorts, sports bras, and socks that provide biometric data on muscle activity, breathing rate, and heart activity zones — all data that is not currently tracked by fitness bands or smart watches. Over the next several years, smart clothing will begin to look less like athletic gear and more like street gear.
It wasn’t too long ago that sales of smart clothing were at a trickle. Tractica said just 140,000 of the garments moved in 2013, almost all of them athletic gear.
“The ultimate wearable computer is a piece of smart clothing that one can wear as a garment or a body sensor that can track and measure specific vital signs,” said Tractica Research director Aditya Kaul in a statement. “Both of these device categories are designed to seamlessly integrate with users’ daily lives.” While body sensor shipments will decrease from 3 million units in 2013 to 1.2 million by 2017, Tractica says, they will rise again to 3.1 million units in 2020. The reason for the downward dip at 2017, Tractica says, is because heart rate monitors will decline in unit volume before newer devices like baby and pregnancy monitors, headbands, posture monitors, and 3D trackers begin to build momentum.
" |
16,208 | 2,018 | "Siren unveils socks with microsensors that detect diabetes problems | VentureBeat" | "https://venturebeat.com/2018/03/28/siren-unveils-socks-with-microsensors-that-detect-diabetes-problems" | "Siren unveils socks with microsensors that detect diabetes problems
San Francisco-based Siren has unveiled socks with microsensors woven in that can detect whether a person faces a diabetic foot problem. The company is also announcing it has raised $3.4 million in seed funding from DCM, Khosla Ventures, and Founders Fund.
More than 100,000 people lose feet or legs to diabetes each year due to ulcers that become infected. About 56 percent of diabetic foot ulcers become infected, and 20 percent of those with infected foot wounds end up with some type of amputation. And 80 percent of the people with diabetes who have foot amputations pass away within five years.
Siren has developed Neurofabric, a textile with microsensors embedded directly into the fabric. Its sensors are completely seamless and virtually invisible to the user, the company said, and Neurofabric can be made on standard industrial equipment, making its production cost-efficient and easily scalable.
Above: Ran Ma is CEO of Siren.
The Siren Diabetic Sock continuously monitors foot temperature so people can detect signs of inflammation, the precursor to diabetic foot ulcers. Monitoring foot temperature is clinically proven to be the most effective way of catching foot injuries, up to 87 percent more effective at preventing diabetic foot ulcers than standard diabetic foot care.
Current solutions for diabetic foot monitoring rely on non-continuous and manual measurement. People who want to monitor foot temperature have to go to the doctor and get six spots on each foot manually measured for temperature, a time-consuming and inefficient process.
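An automated version of that check boils down to comparing paired readings from the two feet. The sketch below is hypothetical: the six generic spots stand in for the anatomical sites used in practice, and the roughly 2.2 degrees Celsius (4 degrees Fahrenheit) asymmetry threshold comes from published diabetic-foot research rather than from Siren.

THRESHOLD_C = 2.2  # asymmetry threshold commonly cited in the research literature

def flag_hotspots(left_temps_c, right_temps_c):
    """Return indices of paired spots whose left/right difference exceeds the threshold."""
    return [
        i
        for i, (left, right) in enumerate(zip(left_temps_c, right_temps_c))
        if abs(left - right) > THRESHOLD_C
    ]

# Six paired readings per foot, mirroring the six manually measured spots described above.
left = [30.1, 29.8, 30.0, 29.5, 30.2, 29.9]
right = [30.0, 29.9, 32.6, 29.4, 30.3, 30.0]
print(flag_hotspots(left, right))  # [2]: the third spot is running hot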
“We built this technology because foot ulcers are the most common, costly and deadly complication for people with diabetes, yet there was no way to continuously monitor for these massive problems,” said Ran Ma, CEO of Siren, in a statement. “Our Neurofabric has endless applications across healthcare, sports, military, and fashion, but it was obvious to us that solving this specific problem is where we had to start, because it impacts so many and can mean the difference between losing a limb or not.” The best thing about the socks is that they look and feel like a regular pair of socks.
Above: Siren Diabetic Socks tell you if you have a risk of an ulcer on your feet.
“The Siren system has become a vital part of my foot care because it helps catch potential problems early,” said Melissa G., who has type 1 diabetes, in a statement. “The socks stay incredibly soft even after washing them, and remain comfortable throughout the day. I love that I can see the temperature of my feet instantly with the app and compare changes from day to day.” The Siren system includes a variety of patented technologies that enable the standard manufacturing of integrated sensors and simultaneous pairing of multiple devices. The socks are machine washable. Customers who order in the next 30 days can pay $19.95 a month.
" |
16,209 | 2,019 | "Pensa Systems uses autonomous drones to track store inventory | VentureBeat" | "https://venturebeat.com/2019/01/13/pensa-systems-uses-autonomous-drones-to-track-store-inventory" | "Pensa Systems uses autonomous drones to track store inventory
Pensa Systems' autonomous inventory drones.
There’s perhaps no retail task more time-consuming than taking stock of merchandise. Inventory tracking — that is, figuring out which products are in stock, which stock is likely to run low in the next week, and so on — is a never-ending battle, as shoppers spend an estimated 40 billion hours picking things off store shelves. It’s also error-prone — according to one study, employees regularly misplace nearly one in 10 items.
However, serial entrepreneur Richard Schwartz believes he has the answer, and it involves airborne drones with brace cages that resemble giant wiffle balls.
Really.
Schwartz is the CEO and founder of Pensa Systems, an Austin startup developing a retail inventory system that taps computer vision algorithms to "understand" what's on store shelves. Pensa has already trialed its platform with Anheuser-Busch InBev — a strategic investor — along with several other brands and retailers in multiple countries. And at the National Retail Federation's annual conference in New York, the company today announced that it has secured fresh capital it will put toward client acquisition.
Signia Venture Partners led the $5 million investment in Pensa, with participation from Commerce Ventures, as well as existing investors ZX Ventures, ATX Seed Ventures, Capital Factory, and RevTech Ventures. This follows the Austin startup’s $2.2 million seed funding round in 2018, bringing its total raised to $7.2 million.
As part of the round, Ed Cluss, a partner at Signia Venture Partners, will join Pensa's board of directors. He'll join Mick Mountz, founder and CEO of Kiva Systems, a warehouse automation and robotics company that was acquired by Amazon for $775 million in May 2012.
“Lack of inventory visibility is an age-old problem for brands and retailers,” Schwartz said. “Retailers and manufacturers go blind staring at products on the shelf to see what is missing or has been misplaced. Advanced artificial intelligence can borrow the best of how people perceive shelf conditions and automate it at scale to continuously read out what is on a shelf at any point in time.” Pensa’s inventory management system, unlike those offered by Bossa Nova , Keonn Technologies, and Simbe Robotics, eschews ground robots for the aforementioned quadcopters, which are equipped with cameras that scan store shelves for stock. With the aid of wirelessly connected Intel edge servers and self-learning algorithms that get better at recognizing products over time, the spherical drones scan and automatically sense shelf conditions with “high accuracy” as they fly between aisles.
According to Schwartz, drones have an advantage when it comes to scalability. They’re cheaper than a few of the robotics products Pensa’s competitors currently provide, in part because they’re subsidized by a data-as-a-subscription model. And in some cases they’re less complex to operate, particularly in stores with unusual and highly compact layouts.
“In-store inventory visibility remains a giant black hole for the retail supply chain,” Schwartz said. “Retailers and brand manufacturers have tried all combinations of robots, cameras, and smart shelving, but these solutions are too expensive, inaccurate, and brittle.” During the pilot with AB InBev, Pensa’s drones collected hourly and daily inventory data in a brick-and-mortar store — IGA Extra Beck — in Montreal, Canada. Over a two-week period, they scanned dry shelves and coolers containing cans, bottles, and packs and managed to detect when an item was out of stock 98 percent of the time.
“Any solution that can help us maintain store integrity and ensure we don’t have out-of-stocks provides the competitive advantage we need,” said IGA Extra Beck owner Todd Beck. “The ability to learn about a potential out-of-stock situation hours ahead of when our manual systems might notify us represents an opportunity to drive incremental sales and make customers happier in the process. The immediate feedback we can get from the Pensa system can offer tremendous value to our business.” Pensa isn’t the only startup using robotics to tackle the logistical challenges of brick-and-mortar businesses.
U.K. supermarket chain Ocado — one of the world’s largest online-only grocery retailers — has engineered a packing system that uses computer vision to transfer goods.
Takeoff Technologies’ platform, which works out of pharmacies, convenience stores, and quick-service restaurants, doubles as a pick-up station, complete with lockers for easy access. Not to be outdone, commerce giant Walmart recently partnered with Alert Innovation in August to deploy AlphaBot , an autonomous fulfillment system capable of picking and transporting the “vast majority” of grocery items.
" |
16,210 | 2,019 | "Simbe Robotics raises $26 million for autonomous inventory robots | VentureBeat" | "https://venturebeat.com/2019/09/12/simbe-robotics-raises-26-million-for-autonomous-inventory-robots" | "Simbe Robotics raises $26 million for autonomous inventory robots
Simbe Robotics' Tally robot.
Robots are coming for the grocery aisle, and that’s because they promise to save storeowners invaluable time by inventorying shelves of stock quickly and accurately. Research and Markets anticipates that the global brick-and-mortar automation market will be worth close to $18.9 million by 2023, which some analysts say could cut down on the billions of dollars in lost revenue traced to misplaced and erroneously priced items.
It’s this opportunity that drove Brad Bogolea, formerly a product manager at mesh networking firm Silver Spring Networks, to found Simbe Robotics in 2015. Together with Willow Garage robotics lab veterans Jeff Gee and Mirza Shah, he loftily sought to transform the retail industry with robots capable of keeping tabs on inventory. Only a few years later, Simbe’s machine — Tally — has navigated more than 25,000 miles in stores operated by over a dozen of the top 250 global brands, including Schnuck Markets, Giant Eagle, Decathlon Sporting Goods, and Groupe Casino.
Building on this momentum, Simbe this morning announced that it’s secured $26 million in a series A equity funding round led by Venrock, with participation from Activant Capital and Valo Ventures. As part of the arrangement, SoftBank Robotics America, SoftBank’s intermediate holding company responsible for its robotics projects, has agreed to help Simbe scale production of Tally to an additional 1,000 units over the next two years.
CEO Bogolea says the newfound funds will be put toward business operations, including R&D and the expansion of Simbe's engineering, sales, marketing, and customer success teams. Recently, Simbe moved into a new South San Francisco headquarters that's six times the size of its previous office, and today, it revealed new board of directors members in iPod and iPhone co-inventor and Nest founder Tony Fadell, Venrock's David Pakman, and Pathbreaker Venture's Ryan Gembala.
“Our investors, both previous and new, provide much more than financial support. They are advocates and trusted advisors who bring invaluable institutional knowledge to all facets of our business,” said Bogolea. “Both our equity financing partners and the SoftBank Robotics team are deeply aligned with Simbe’s vision to revitalize physical retail through data. We are at a pivotal time of growth and value their support as we continue to transform retail at a global scale.” So how’s Tally work? Simbe drives the robot around to create a store map and “teach” it the location of its charging dock. (Simbe claims most customers see Tally achieve baseline accuracy in as little as one week.) Once configured, Tally operates completely on its own, only requiring remote or on-site servicing if something goes wrong.
Tally taps computer vision to tell which products aren’t on a shelf and which (if any) are missing facings, and it uses RFID to take precise inventory counts. A single robot can scan 15,000 to 30,000 products per hour, or around 80,000 SKUs in roughly two hours. That’s compared with the average employee, who spends 20 to 30 hours a week scanning 10,000 to 20,000 products.
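A quick back-of-envelope check of those figures (the scan rates come from the article; the arithmetic is only illustrative):

robot_per_hour = (15_000, 30_000)            # products Tally scans per hour
human_per_hour = (10_000 / 30, 20_000 / 20)  # roughly 333 to 1,000 products per hour
print(f"Tally is roughly {robot_per_hour[0] / human_per_hour[1]:.0f}x to "
      f"{robot_per_hour[1] / human_per_hour[0]:.0f}x faster than manual scanning")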
In a typical setup, Tally performs rounds three times per day: in the morning to validate the previous night's restock, in the afternoon to fill remaining holes with backstock, and in the evening to provide recommendations to the restocking team. It's designed for grocery stores, drug stores, value stores, clothing stores, and consumer electronics stores larger than 5,000 square feet, and it currently captures all items with the exception of bakery, deli, and produce.
Simbe doesn’t sell Tally units. Rather, the company provides the hardware free of charge and levies a monthly fee on its retailer users, which varies depending on factors like deployment size, the number of SKUs scanned, and whether computer vision and/or RFID scanning are actively used.
Simbe competes with a number of companies in the retail automation space, including San Francisco-based Bossa Nova, which last year raised $29 million for a robot that scans store shelves for missing inventory.
Pensa Systems eschews wheeled robots for autonomous quadcopters that track store inventory from overhead. U.K. supermarket chain Ocado — one of the world's largest online-only grocery retailers — has engineered a packing system that uses computer vision to transfer goods. And Takeoff Technologies' platform, which works out of pharmacies, convenience stores, and quick-service restaurants, doubles as a pick-up station, complete with lockers for easy access.
But investors like Venrock partner David Pakman have confidence Simbe’s solutions and vision will keep it well ahead of the pack.
“The Simbe team is building one of the most interesting datasets in retail, capturing inventory and pricing data that hasn’t been available before. With Simbe, retail stores can free up scarce labor to spend more time doing high-value activities like interacting with customers. Retail is changing, and stores must modernize to succeed in today’s environment,” said Venrock partner Pakman. “The Simbe team has impressed me from the start, and they are uniquely equipped to bring this new reality to retail.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
" |
16,211 | 2,013 | "At MailChimp, data science works behind the scenes | VentureBeat" | "https://venturebeat.com/2013/12/05/at-mailchimp-data-science-works-behind-the-scenes" | "At MailChimp, data science works behind the scenes
John Foreman, chief data scientist at MailChimp
REDWOOD CITY, Calif. — Data science doesn’t need to look cool. It doesn’t need to use trendy technologies, either. What it ought to do is solve problems.
And data science has done that for MailChimp, in a variety of applications, as company data scientist John Foreman showed during a presentation today at VentureBeat’s DataBeat/Data Science Summit event.
Internally, scheduling support staff was such a burden for the e-mail marketing service that two people were spending their working hours managing workers’ schedules. “And that’s a really bad idea,” Foreman said.
It was a classical optimization problem, he said. A plugin for Microsoft Excel, called OpenSolver, helped Foreman quickly construct what he described as an “optimal schedule.” Finding spammers among the people who go to MailChimp to send e-mail was another job for data science. One data set MailChimp gets is e-mail addresses to send to. That’s data worth tapping to build a model that can weed out the spammers to solve a business problem.
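The support-staff scheduling problem Foreman describes is a textbook covering linear program, the same class of problem OpenSolver tackles in Excel. A toy Python equivalent, with invented shifts and demand rather than MailChimp's data, looks like this:

import numpy as np
from scipy.optimize import linprog

demand = np.array([3, 5, 4, 2])  # agents needed in four consecutive periods
# x = agents starting each shift; each shift covers two consecutive periods.
cover = np.array([
    [1, 0, 0],  # period 0 is covered by shift 0
    [1, 1, 0],  # period 1 is covered by shifts 0 and 1
    [0, 1, 1],  # period 2 is covered by shifts 1 and 2
    [0, 0, 1],  # period 3 is covered by shift 2
])

# Minimize total headcount subject to cover @ x >= demand (negated for linprog's <= form).
res = linprog(c=[1, 1, 1], A_ub=-cover, b_ub=-demand, bounds=(0, None))
print(res.x, res.fun)  # optimal staffing [3. 2. 2.] with 7 agents in total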
“We’ve been in business over 10 years,” Foreman said. “We have a really great training set for determining who’s a bad actor and who’s not.” Indeed, data science can tell a good bit about the people behind e-mail addresses.
And a user-facing service Foreman highlighted is not exactly glitzy. It just recommends the best time for a customer to shoot out an e-mail. Customers don't have to send their e-mail at what MailChimp identifies as the "time for maximum engagement," but it's an option available to paying customers in the latest version of MailChimp.
There are no heat maps that tell you what you already know or infographics packed with meaningless information to see here. It’s a no-questions-asked, one-line option that could help customers.
There’s a bit of complexity going on, like how the time recommendation is only good for 24 hours, to reflect changes in data, as Foreman wrote in a recent MailChimp blog post.
But that’s abstracted away, so customers can simply get the most out of an e-mail blast every time.
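As a toy illustration of the general idea (pick the hour when a list has historically engaged most), here is a sketch with made-up data; it is not MailChimp’s actual algorithm.

```python
# Toy illustration of the general idea, not MailChimp's algorithm:
# recommend a send hour from when subscribers historically opened e-mail.
import pandas as pd

opens = pd.DataFrame({
    "subscriber_id": [1, 1, 2, 3, 3, 3, 4],
    "opened_at": pd.to_datetime([
        "2013-12-02 09:15", "2013-12-03 09:40", "2013-12-02 20:05",
        "2013-12-02 09:05", "2013-12-04 10:20", "2013-12-05 09:55",
        "2013-12-03 21:30",
    ]),
})

# Count opens per hour of day and pick the busiest hour. Recomputing this
# regularly is one way to read the note that a recommendation is only
# good for 24 hours.
by_hour = opens["opened_at"].dt.hour.value_counts()
print(f"Recommended send hour: {int(by_hour.idxmax())}:00")
```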
Foreman has found that a “not exotic stack” often works. For example, a good old PostgreSQL database can work just fine for solving problems when data is inherently structured. Hadoop and a NoSQL database might not be necessary despite the continuing hype around them. The point is to avoid using a fancy new technology just because it’s name-dropped in a news article and instead circumvent unnecessary risk and complexity.
“A data science team should align itself with the business and serve that business,” Foreman said. “The purpose of the data science team is to lead from the back, not to make headlines.” That might be a hard concept for some executives to accept today, as even hiring data scientists is an achievement to start with. But in time the hype will subside, and Foreman’s views stand to become common sense.
" |
16,212 | 2,019 | "ProBeat: Google's Pixel 4 ups the AI ante to offline language models | VentureBeat" | "https://venturebeat.com/2019/10/18/probeat-googles-pixel-4-ups-the-ai-ante-to-offline-language-models" | "Opinion ProBeat: Google’s Pixel 4 ups the AI ante to offline language models
Google’s Pixel phones are the company’s preferred way of showcasing its AI chops to consumers. Pixel phones consistently set the phone camera bar thanks to Google’s AI prowess. But many of the AI features have nothing to do with the camera.
The Pixel 4 and Pixel 4 XL unveiled this week at the Made by Google hardware event in New York City continue this tradition. Camera improvements aside, the Pixel 4 makes a play for a new arena that Google clearly wants to rule: offline natural language processing.
At Google’s I/O 2019 developer conference in May, multiple executives touted being able to shrink the company’s cloud-based language model, which is over 100GB, to less than 100MB. The smaller model isn’t as accurate, of course, but it can work offline. The competition, whether that be Apple, Amazon, Samsung, or Microsoft, has nothing like it.
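The article doesn’t say how Google compressed the model. As a generic illustration of one common way to shrink a model for on-device use, the sketch below applies post-training quantization with TensorFlow Lite; it is not Google’s pipeline, and the saved-model path is a placeholder.

```python
# Generic illustration of one common way to shrink a model for on-device
# use (post-training quantization with TensorFlow Lite); this is not
# Google's actual speech-model pipeline, and "speech_saved_model" is a
# placeholder path.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("speech_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize weights
tflite_model = converter.convert()

with open("speech_model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```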
Live Caption and Recorder, which debut exclusively when the Pixel 4 and Pixel 4 XL ship on October 22, are the direct result of this improvement. The former was first shown off at I/O and the latter leaked weeks ago. In fact, as a result of the leaks, Google didn’t even talk about Live Caption onstage this week and quickly skimmed over Recorder. But a closer look shows that they are indeed cut from the same cloth.
Update: Google confirmed to me that Live Caption and Recorder use the same underlying speech model with some custom training for the different use cases.
Live Caption and Recorder work only in English. For Live Caption, Google plans to support more languages “in the near future.” For Recorder’s transcription and search functions, more languages are “coming soon.” Coincidence? I think not.
Live Caption and Recorder are Yin and Yang Live Caption provides real-time continuous speech transcription of whatever is playing on your phone. The feature can caption any live media, including songs, audio recordings, podcasts, and so on. Live Caption can be accessed via the volume buttons; it appears as a software icon when the volume UI pops up. As soon as speech is detected, captions will appear on your phone screen. You can double-tap to show more, and also drag the captions to anywhere on your screen. You don’t need to open another app, and you don’t need a Wi-Fi or data connection.
The Recorder app records meetings, lectures, and anything else you point your phone’s microphone at. Like any other similar app, you can save recordings and listen to them later. Recorder goes further, however, by simultaneously transcribing speech, as well as automatically recognizing audio events like applause, birds, cats, dogs, laughter, music, roosters, speech, phones, and whistling. Furthermore, you can search within your recordings to find a specific word or sound. Here as well, you don’t need a Wi-Fi or data connection.
The new Recorder app uses speech recognition and AI to transcribe lectures, meetings, interviews and more—and makes them easy for you to find later. (English only right now, with more languages to come.) #madebygoogle pic.twitter.com/fdKRItuS4b — Google (@Google) October 15, 2019 So Live Caption is for anything coming out of your phone’s speakers and Recorder is for anything coming into your phone’s microphone. That said, Live Caption and Recorder don’t work if you’re on a phone call, voice call, or video call.
Back at I/O, Brian Kemler, Android accessibility product manager, told me Google had no plans to let Live Caption support transcriptions. “Not for Live Caption. Obviously, we thought about that. But we want the captions to be truly captions in the sense that they’re ephemeral, if they help you understand or consume that experience. But we want to protect the people, the publishers, content, and content owners. We don’t want to give you the ability to pull out all that audio, transcribe it, and then do [whatever they want with it].” If you want a transcription, that’s what Recorder is for.
Android 10 required Don’t confuse Live Caption and Recorder with Live Transcribe , which Google released in February. That tool uses machine learning algorithms to turn audio into real-time captions, but it relies on the cloud (specifically, the Google Cloud Speech API ). Live Transcribe is available on 1.8 billion Android devices. Live Caption and Recorder may work on-device, but the number of devices is limited.
Google says that the Pixel 4 and Pixel 4 XL use a Pixel Neural Core for on-device processing. Live Caption is coming to the Pixel 3, Pixel 3 XL, Pixel 3a, and Pixel 3a XL “later this year.” Google is also “working closely with other Android phone manufacturers to make it more widely available in the coming year.” Obviously, none of these have a Pixel Neural Core (Pixel 3 and Pixel 3 XL have a Pixel Visual Core, the Pixel 3a and Pixel 3a XL have neither).
We can conclude that Live Caption will work best on the Pixel 4 and Pixel 4 XL, but Google is clearly able to get it to work without the Pixel Neural Core. (In fact, Kemler showed it to me on a Pixel 3a back in May.) We can conclude the same for Recorder. The app leaked late last month.
Enthusiasts were able to get it to work on various devices, including non-Pixel phones. The only real requirement seemed to be Android 10.
Google’s strategy here seems obvious to me. The company will use the Pixel 4 and Pixel 4 XL to show off Live Caption and Recorder in English. As the company adds more languages and gets comfortable with performance, Live Caption and Recorder will become more widely available. First on older Pixel phones, and eventually on other Android devices.
That way, Google will be able to say it’s bringing cool AI features to more and more people. At the same time, it will ensure that anyone buying the latest Pixel phone is getting its cutting-edge AI features first.
ProBeat is a column in which Emil rants about whatever crosses him that week.
" |
16,213 | 2,020 | "Google Pixel Buds tap AI to alert users to sirens, crying babies, and barking dogs | VentureBeat" | "https://venturebeat.com/2020/08/20/google-pixel-buds-tap-ai-to-alert-users-to-sirens-crying-babies-and-barking-dogs" | "Google Pixel Buds tap AI to alert users to sirens, crying babies, and barking dogs Google Pixel Buds
Google Pixel Buds can now alert you to the sounds of a crying baby, barking dog, or the siren of an emergency vehicle when you’re listening to something and may not otherwise hear the sound. The feature reduces the volume of whatever music or podcast you’re listening to and plays a chime sound to signal an alert. Attention Alerts is part of an AI-powered experimental feature being added to Google’s flagship earbuds today in a larger firmware update.
Google trained AI systems to recognize the trio of sounds by scraping audio from publicly available videos, a company spokesperson told VentureBeat.
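Google hasn’t published how the Pixel Buds model works. As a generic sketch of how an audio-event classifier like this is often built (log-mel features plus a simple classifier), here is an illustration on synthetic clips; nothing below reflects Google’s actual pipeline.

```python
# Generic sketch of an audio-event classifier (log-mel features plus a
# simple classifier); not Google's model. The synthetic "clips" below
# stand in for labeled audio scraped from public videos.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

SR = 16000
t = np.linspace(0, 1.0, SR, endpoint=False)
rng = np.random.default_rng(0)

def features(audio, sr=SR):
    # Average log-mel energies over the clip -> fixed-size feature vector.
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64)
    return np.log(mel + 1e-6).mean(axis=1)

def fake_clip(f0):
    # Stand-in for a one-second labeled training clip: a tone plus noise.
    return np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(SR)

clips = [fake_clip(900), fake_clip(950), fake_clip(200), fake_clip(220)]
labels = ["siren", "siren", "background", "background"]

X = np.stack([features(c) for c in clips])
clf = LogisticRegression(max_iter=1000).fit(X, np.array(labels))
print(clf.predict([features(fake_clip(910))]))  # expect "siren"
```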
Amazon’s Echo speakers have the ability to detect sounds that may be important in a home setting, like alarms, breaking glass, or sounds indicating someone is in your home when you’re away.
Pixel Buds are also getting a more powerful way to translate languages today. They are now able to use Transcribe mode on Google Translate with a voice command, starting with English translations for French, German, Italian, and Spanish speakers. Google also introduced voice commands to turn off earbud touch control and check earbud battery levels today. Conversation mode translations can be helpful, but since they only last a few seconds, they’re not always practical in an actual conversation, where people can speak uninterrupted for longer periods of time. A Google spokesperson told VentureBeat that Transcribe supports listening for up to a couple of hours. Transcribe mode for the Google Translate app launched in March.
Also new today is a feature that recognizes when you’re sharing an earbud with another person. And you can now use Find My Device to get the last known location of your Pixel Buds. Features like the ability to lower volume in loud places, Attention Alerts, and extensive translations are powered by a series of microphones on the exterior of each earbud.
Google made Pixel Buds 2 available for $179 in April.
To justify a price tag slightly higher than that of other popular choices, like Samsung Galaxy Buds or Apple AirPods, Pixel Buds launched with unique features, such as the ability to automatically turn up the volume when the surrounding environment gets noisy, hands-free access to Google Assistant, and conversation mode to power quick translations.
" |
16,214 | 2,020 | "Google says Pixel's Hold for Me feature records and stores audio on-device | VentureBeat" | "https://venturebeat.com/2020/09/30/google-says-pixels-hold-for-me-feature-records-and-stores-audio-on-device" | "Google says Pixel’s Hold for Me feature records and stores audio on-device
One of the just-announced Pixels’ most intriguing features is Hold for Me, a Google Assistant-powered service that waits on hold when you call a retailer, utility, or other business’ toll-free support number. When a human comes on the line, Hold for Me — which will launch in preview in English in the U.S. before expanding to other regions and devices — notifies you with sound, vibration, and a prompt on your screen.
Hold for Me was announced today at Google’s annual hardware event, and the company responded to a list of VentureBeat’s questions afterward. According to a spokesperson, Hold for Me is powered by Google’s Duplex technology, which not only recognizes hold music but also understands the difference between a recorded message — for example, “Hello, thank you for waiting” — and a representative on the line. (That said, a support page admits Hold for Me’s detection accuracy might not be high “in every situation.”) To design the feature, Google says it gathered feedback from a number of companies, including Dell and United, as well as from studies with customer support representatives.
“Every business’ hold loop is different, and simple algorithms can’t accurately detect when a customer support representative comes onto the call,” Google told VentureBeat. “Consistent with our policies to be transparent, we let the customer support representative know that they are talking to an automated service that is recording the call and waiting on hold on a user’s behalf.”
Hold for Me is an optional feature that must be enabled in a supported device’s settings menu and activated manually during each call. In the interests of privacy, Google says any audio processing Google Assistant uses to determine when a representative is on the line is done entirely on-device and doesn’t require a Wi-Fi or data connection. Effectively, audio from the call is not shared with Google or saved to a Google account unless a user explicitly shares it to help improve the feature. (Call data like recordings, transcripts, phone numbers, greetings, and disclosures are stored on Google servers for 90 days before deletion.) If the user doesn’t opt to share audio, interactions between Hold for Me and support representatives are wiped after 48 hours. Returning to a call when a customer support person becomes available stops audio processing.
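Google hasn’t detailed how Duplex decides a person has picked up, so the snippet below is only a hypothetical sketch of the surrounding decision logic: smooth noisy per-frame predictions and notify only after a sustained run of "live representative" frames. The classify_frame stub stands in for a real on-device model.

```python
# Hypothetical sketch of the surrounding decision logic such a feature
# needs: smooth per-frame predictions and notify only after a sustained
# run of "live representative" frames. classify_frame is a stub standing
# in for a real on-device model, not Google's Duplex system.
from collections import deque

def classify_frame(frame):
    # Stub: a real model would return "hold_music", "recorded_message",
    # or "live_rep" from a short window of call audio.
    return frame["label"]

def wait_for_representative(frames, window=8, threshold=6):
    recent = deque(maxlen=window)
    for i, frame in enumerate(frames):
        recent.append(classify_frame(frame) == "live_rep")
        if sum(recent) >= threshold:
            return i  # notify the user that a person seems to be on the line
    return None

# Toy stream: hold music, then a recorded greeting, then a live rep.
stream = ([{"label": "hold_music"}] * 20
          + [{"label": "recorded_message"}] * 5
          + [{"label": "live_rep"}] * 10)
print(wait_for_representative(stream))
```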
Google claims its embrace of techniques like on-device processing and federated learning minimizes the exchange of data with its servers. For instance, its Now Playing feature on Pixel phones, which identifies songs playing nearby, leverages federated analytics to analyze data in a decentralized way. Under the hood, Now Playing taps an on-device database of song fingerprints to identify music near a phone without the need for an active network connection.
Google’s Call Screen feature, which screens and transcribes incoming calls, also happens on-device, as do Live Caption , Smart Reply , and Face Match.
That’s thanks in part to offline language and computer vision models that power, among other things, the Google Assistant experience on smartphones like the Pixel 4 , Pixel 4a and 4a (5G), and Pixel 5.
" |
16,215 | 2,019 | "Twitter Still Can't Keep Up With Its Flood of Junk Accounts, Study Finds | WIRED" | "https://www.wired.com/story/twitter-abusive-apps-machine-learning" | "Twitter Still Can't Keep Up With Its Flood of Junk Accounts, Study Finds Andy Greenberg
Since the world learned of state-sponsored campaigns to spread disinformation on social media and sway the 2016 election, Twitter has scrambled to rein in the bots and trolls polluting its platform. But when it comes to the larger problem of automated accounts on Twitter designed to spread spam and scams, inflate follower counts, and game trending topics, a new study finds that the company still isn’t keeping up with the deluge of garbage and abuse.
In fact, the paper's two researchers write that with a machine-learning approach they developed themselves, they can identify abusive accounts in far greater volumes and faster than Twitter does—often flagging the accounts months before Twitter spotted and banned them.
In a 16-month study of 1.5 billion tweets, Zubair Shafiq, a computer science professor at the University of Iowa, and his graduate student Shehroze Farooqi identified more than 167,000 apps using Twitter's API to automate bot accounts that spread tens of millions of tweets pushing spam, links to malware, and astroturfing campaigns. They write that more than 60 percent of the time, Twitter waited for those apps to send more than 100 tweets before identifying them as abusive; the researchers' own detection method had flagged the vast majority of the malicious apps after just a handful of tweets. For about 40 percent of the apps the pair checked, Twitter seemed to take more than a month longer than the study's method to spot an app's abusive tweeting. That lag time, they estimate, allows abusive apps to cumulatively churn out tens of millions of tweets per month before they're banned.
"We show that many of these abusive apps used for all sorts of nefarious activity remain undetected by Twitter's fraud-detection algorithms, sometimes for months, and they do a lot of damage before Twitter eventually figures them out and removes them," Shafiq says. The study will be presented at the Web Conference in San Francisco this May. "They’ve said they’re now taking this problem seriously and implementing a lot of countermeasures. The takeaway is that these countermeasures didn’t have a substantial impact on these applications that are responsible for millions and millions of abusive tweets." "We found a way to detect them even better than Twitter." Zubair Shafiq, University of Iowa The researchers say they've been sharing their results with Twitter for more than a year but that the company hasn't asked for further details of their method or data. When WIRED reached out to Twitter, the company expressed appreciation for the study's goals but objected to its findings, arguing that the Iowa researchers lacked the full picture of how it's fighting abusive accounts. "Research based solely on publicly available information about accounts and tweets on Twitter often cannot paint an accurate or complete picture of the steps we take to enforce our developer policies," a spokesperson wrote.
Twitter has, to its credit, at least taken an aggressive approach to stopping some of the most organized disinformation trolls exploiting its megaphone. In a report released last week , the social media firm said it had banned more than 4,000 politically motivated disinformation accounts originating in Russia, another 3,300 from Iran, and more than 750 from Venezuela. In a statement to WIRED, Twitter noted that it's also working to curb abusive apps, implementing new restrictions on how they're given access to Twitter's API. The company says it banned 162,000 abusive applications in the last six months of 2018 alone.
But the Iowa researchers say their findings show that abusive Twitter applications still run rampant. The data set used in the study runs only through the end of 2017, but at WIRED's request Shafiq and Farooqi ran their machine-learning model on tweets from the last two weeks of January 2019 and immediately found 325 apps they deemed abusive that Twitter had yet to ban, some with explicitly spammy names like EarnCash_ and La App de Escorts.
In their study, the researchers focused exclusively on finding toxic tweets produced by third-party apps, given the outsize effects of the automated tools. Sometimes the malicious apps controlled accounts that spammers or scammers themselves created. In other cases, they hijacked accounts of users who had been tricked into installing the applications or had done so in exchange for incentives like a boost in fake followers.
Amid the 1.5 billion tweets the researchers started with—Twitter makes only 1 percent of all tweets available through a research-focused API—457,000 third-party applications were represented. The pair then used that data to train their own machine-learning model for tracking abusive apps. They noted which accounts each application posted to, along with factors including the age of the accounts, the timing of tweets, the number of usernames, hashtags, links the tweets included, and the ratio of retweets to original tweets. Most importantly, they observed which accounts were eventually banned by Twitter during the 16-month period they watched, essentially using those bans to denote abusive accounts.
With the resulting machine-learning-trained model, they found they could identify 93 percent of the applications that Twitter would ultimately ban without looking at more than their first seven tweets. "We're in some sense relying on seeing what Twitter eventually labels as malicious apps. But we found a way to detect them even better than Twitter," Shafiq says.
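The study’s exact pipeline isn’t reproduced here, but the shape of the approach described above (per-app features from early tweets, labeled by whether Twitter later banned the app) looks roughly like this sketch on synthetic data.

```python
# Illustrative sketch on synthetic data, not the Iowa researchers' code:
# per-app features like those described above, labeled by whether Twitter
# eventually banned the app.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_apps = 2000

# Hypothetical features from each app's first few tweets: mean account age
# (days), tweets per hour, hashtags per tweet, links per tweet, distinct
# usernames mentioned, and retweet ratio.
X = np.column_stack([
    rng.integers(1, 3000, n_apps),
    rng.exponential(2.0, n_apps),
    rng.poisson(1.0, n_apps),
    rng.random(n_apps),
    rng.poisson(0.5, n_apps),
    rng.random(n_apps),
])
# Hypothetical label: 1 if the app was eventually banned.
y = (0.4 * X[:, 1] + 2.0 * X[:, 3] + rng.normal(0, 0.5, n_apps) > 1.5).astype(int)

model = GradientBoostingClassifier()
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```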
Twitter countered in its statement that the Iowa researchers' machine-learning model was faulty, because they couldn't actually say with certainty which applications Twitter had banned for abusive behavior. Since Twitter doesn't make that data public, the researchers could only guess by looking at which applications had tweets removed. That could have been from a ban, but it could also have resulted from users or applications deleting their own tweets.
"We think the methods used for this research do not accurately measure or reflect the health of our developer platform—principally because the factors used to train the model in this research are not strongly correlated with whether or not an application in fact violates our policies," a spokesperson wrote to WIRED.
But the Iowa researchers note in their paper that they only marked an application as having been banned by Twitter if 90 percent or more of its tweets had been removed. They observed that for popular, benign apps like Twitter for iPhone or Android, less than 30 percent of tweets are removed. If users of some legitimate app do delete their tweets more often, "these would be a small minority, these apps would not be used by a lot of people, and I don’t expect their results would be affected by that," says Gianluca Stringhini, a researcher at Boston University who has worked on previous studies of abusive social media apps.
"So I would expect that their ground truth is reasonably strong." Beyond those educated guesses at which apps had been banned, the researchers also honed their definition of abusive apps by crawling sites that advertised fake followers and downloading 14,000 applications they offered. Of those, about 6,300 had produced tweets in their 1.5 billion-tweet sample, so they also served as examples of abusive apps for the machine-learning model's training data.
One drawback to the Iowa researchers' method was its rate of false positives: They admit that about 6 percent of the apps their detection method flags as malicious are in fact benign. But they argue that the false-positive rate is low enough that Twitter could assign human staffers to review their algorithm's results and catch mistakes. "I don't think it would take more than one person to do this kind of review," says Shafiq. "If you don't aggressively target these applications, they’re going to compromise many more accounts and tweets, and cost many more man-hours."
The researchers agree with Twitter that the company is moving in the right direction, tightening the screws on junk accounts and more importantly, in his view, abusive applications. They noticed that around June 2017, the company did seem to be more aggressively banning bad apps. But they say their findings show that Twitter is still not exploiting machine learning's potential to catch app abuse as quickly as it could. "They’re probably doing some of this right now," Shafiq says. "But clearly not enough."
" |
16,216 | 2,019 | "Google debuts Cloud Data Fusion, connected sheets in BigQuery, and Data Catalog | VentureBeat" | "https://venturebeat.com/2019/04/10/google-debuts-cloud-data-fusion-connected-sheets-in-bigquery-and-data-catalog" | "Google debuts Cloud Data Fusion, connected sheets in BigQuery, and Data Catalog Google data center in Douglas County, Georgia.
Coinciding with the database improvements announced this morning during Google’s annual Cloud Next conference, the Mountain View company announced a slew of new capabilities heading to its data analytics portfolio.
The first is Cloud Data Fusion, a fully managed and cloud-native data integration service that’s available starting this week in beta. Google’s pitching it as a way to ingest, integrate, and manipulate data using a library of open source transformations and over a hundred connectors. They’re mainly controlled through a drag-and-drop interface where data sets and pipelines are represented visually, without code.
Google also introduced Data Catalog in beta, a fully managed and scalable metadata management service with a search interface for data discovery, underpinned by the same search technology that supports Gmail and Drive. It boasts a cataloging system for capturing technical and business metadata, and it integrates with Cloud DLP and Cloud IAM for privileged access and control.
On the BigQuery side of the equation, Google says it’s built a data warehouse migration service to automate data and schema migration to BigQuery from Teradata and Amazon Redshift, as well as data loading from Amazon S3. And it took the wraps off of BigQuery BI Engine, a speedy in-memory analysis service designed to handle complex data sets with “sub-second” query response time and high concurrency.
Also new: connected sheets, a type of Google Sheet spreadsheet that works with the full dataset from BigQuery up to 10 billion rows. (BigQuery has been able to take in data from Google Sheets since 2016, but only up to a point.) Analyses in connected sheets are performed with formulas, pivot tables, and charts as opposed to SQL, and can be visualized as dashboards and shared with anyone within an organization.
BigQuery BI Engine is available in beta starting today through Google Data Studio for interactive reporting and dashboarding, and Google says that in the coming months, Looker and Tableau will be able to leverage it as well. Connected sheets will arrive a bit later.
In other BigQuery news, BigQuery ML, which facilitates the deployment of AI models on data sets inside BigQuery, is gaining new models like k-means clustering (in beta) and matrix factorization (in alpha), and it’s now possible to build and directly import TensorFlow Deep Neural Network models (in alpha). Moreover, Google said that its BigQuery Data Transfer Service, which automates data movement from software-as-a-service (SaaS) apps to Google BigQuery on a scheduled basis, now supports more than 100 apps, including Salesforce, Marketo, Workday, and Stripe.
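For a sense of what the new k-means support looks like in practice, here is a minimal sketch that runs a BigQuery ML CREATE MODEL statement from the Python client; the project, dataset, and table names are placeholders.

```python
# Minimal sketch of training a k-means model with BigQuery ML from the
# Python client. The project, dataset, and table names are placeholders,
# not real resources.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

query = """
CREATE OR REPLACE MODEL `my_dataset.customer_segments`
OPTIONS (model_type = 'kmeans', num_clusters = 4) AS
SELECT total_spend, orders_per_month, days_since_last_order
FROM `my_dataset.customer_features`
"""

client.query(query).result()  # wait for the training job to finish
```

Once a model like this is trained, cluster assignments come back through an ordinary query against the model with ML.PREDICT.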
“From Fortune 500 enterprises to start-ups, more and more businesses continue to look to the cloud to help them store, manage, and generate insights from their data,” said Google Cloud director of product management Sudhir Hasbe. “And we’ll continue to develop new, transformative tools to help them do just that.”
" |
16,217 | 2,020 | "How Google Meet's noise cancellation denoiser works | VentureBeat" | "https://venturebeat.com/2020/06/08/google-meet-noise-cancellation-ai-cloud-denoiser-g-suite" | "Google Meet noise cancellation is rolling out now — here’s how it works
Google is turning on AI-powered noise cancellation in Google Meet today. Like Microsoft Teams’ upcoming noise suppression functionality , the feature leverages supervised learning, which entails training an AI model on a labeled data set. This is a gradual rollout, so if you are a G Suite customer, you may not get noise cancellation until later this month. Noise cancellation will hit the web first, with Android and iOS coming later.
In April, Google announced that Meet’s noise cancellation feature was coming to G Suite Enterprise and G Suite Enterprise for Education customers. Here’s how the company described it: “To help limit interruptions to your meeting, Meet can now intelligently filter out background distractions — like your dog barking or keystrokes as you take meeting notes.” The “denoiser,” as it’s colloquially known, is on by default, though you can turn it off in Google Meet’s settings. (Update on June 30: Google changed its mind and the feature will be off by default, at least initially.) The use of collaboration and video conferencing tools has exploded as the coronavirus crisis forces millions to learn and work from home.
Google is one of many companies trying to one-up Zoom , which saw its daily meeting participants soar from 10 million to over 200 million in three months.
Google is positioning Meet, which has 100 million daily meeting participants as of April, as the G Suite alternative to Zoom for businesses and consumers alike.
Serge Lachapelle, G Suite director of product management, has been working on video conferencing for 25 years, 13 of those at Google. As most of the company shifted to working from home, Lachapelle’s team got the go-ahead to deploy the denoiser in Google Meet meetings. We discussed how the project started, how his team built noise cancellation, the data required, the AI model, how the denoiser works, what noise it cancels out and what it doesn’t, privacy, and user experience considerations (there is no visual indication that the denoiser is on).
Starting in 2017 When Google rolls out big new features, it typically starts with a small percentage of users and then ramps up the rollout based on the results. Noise cancellation will be no different. “We plan on doing this gradually over the month of June,” Lachapelle said. “But we have been using it a lot within Google over the past year, actually.” The project goes back further than that, beginning with Google’s acquisition of Limes Audio in January 2017. “With this acquisition, we got some amazing audio experts into our Stockholm office,” Lachapelle said.
The original noise cancellation idea was born out of annoyances while conducting meetings across time zones.
“It started off as a project from our conference rooms,” Lachapelle said. “I’m based out of Stockholm. When we meet with the U.S., it’s usually around this time [morning in the U.S., evening in Europe]. You’ll hear a lot of cling, cling, cling and weird little noises of people eating their breakfast or eating their dinners or taking late meetings at home and kids screaming and all. It was really that that triggered off this project about a year and a half ago.” The team did a lot of work finding the right data, building AI models, and addressing latency. But the biggest obstacle was forming the idea in the first place, followed by multiple simulations and evaluations.
“It had never been done,” Lachapelle said. “At first, we thought we would require hardware for this, dedicated machine learning hardware chips. It was a very small project. Like how we do things at Google is usually things start very small. I venture a guess to say this started in the fall of 2018. It probably took a month or two or three to build a compelling prototype.” “And then you get the team excited around it,” he continued. “Then you get your leadership excited around it. Then you get it funded to start exploring this more in depth. And then you start bringing it into a product phase. Since a lot of this has never been done, it can take a year to get things rolled out. We started rolling it out to the company more broadly, I would say around December, January. When people started working at home, at Google, the use of it increased a lot. And then we got a good confirmation that ‘Wow, we’ve got something here. Let’s go.'” Corpus data Similar to speech recognition, which requires figuring out what is speech and what is not, this type of feature requires training a machine learning model to understand the difference between noise and speech, and then keep just the speech. At first, the team used thousands of its own meetings to train the model. “We’d say, ‘OK everyone, just so you know we’re recording this, and we’re going to submit it to start training the model.'” The company also relied on audio from YouTube videos “wherever there’s a lot of people talking. So either groups in the same room or back and forth.” “The algorithm was trained using a mixed data set featuring noise and clean speech,” Lachapelle said. Other Google employees, including from the Google Brain team and the Google Research team, also contributed, though not with audio from their meetings. “The algorithm was not trained on internal recordings, but instead employees submitted feedback extensively about their experiences, which allowed the team to optimize. It is important to say that this project stands on the shoulders of giants. Speech recognition and enhancement has been heavily invested in at Google over the years, and much of this work has been reused.” Nevertheless, a lot of manual validation was still required. “I’ve seen everything from engineers coming to work with maracas, guitars, and accordions to just normal YouTubers doing livestreaming and testing it out on that. The range has been pretty broad.” The denoiser in action The feature may be called “noise cancellation,” but that doesn’t mean it cancels all noise. First off, it’s difficult for everyone to agree on what sounds constitute noise. And even if most humans can agree that something is an unwanted noise in a meeting, it’s not easy to get an AI model to concur without overdoing it.
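Google describes training on a mixed data set of noise and clean speech. As a generic illustration of how such training pairs are commonly built (not Google’s pipeline), a clean clip is mixed with noise at a random signal-to-noise ratio and the model learns to map the mixture back to the clean clip; the arrays below are synthetic stand-ins for real recordings.

```python
# Generic illustration of how training pairs for a denoiser are commonly
# built from "a mixed data set featuring noise and clean speech": mix a
# clean clip with noise at a random signal-to-noise ratio and train the
# model to map the mixture back to the clean clip. This is not Google's
# pipeline; the arrays below are synthetic stand-ins for real recordings.
import numpy as np

rng = np.random.default_rng(0)
SR = 16000

def mix_at_snr(clean, noise, snr_db):
    # Scale the noise so the mixture has the requested SNR, then add it.
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise

clean = np.sin(2 * np.pi * 220 * np.linspace(0, 1, SR))  # stand-in "speech"
noise = rng.standard_normal(SR)                           # stand-in noise

snr_db = rng.uniform(0, 20)       # randomize difficulty per example
noisy = mix_at_snr(clean, noise, snr_db)
training_pair = (noisy, clean)    # model input, model target
```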
“It works well on a door slamming,” Lachapelle said. “It works well on dogs barking; kids fighting, so-so. We’re taking a softer approach at first, or sometimes we’re not going to cancel everything because we don’t want to go overboard and start canceling things out that shouldn’t be canceled. Sometimes it’s good for you to hear that I’m taking a deep breath, or those more natural noises. So this is going to be a project that’s going to go on for many years as we tune it to become better and better and better.” On our call, Lachapelle demonstrated a few examples of the feature in action. He knocked a pen around inside a mug, tapped on a can, rustled a plastic bag, and even applauded. Then he did it all again after turning on the denoiser — it worked. You can watch him recreate similar noises (rustling a roasted nut bag, clicking a pen, hitting an Allen key in a glass, snapping a ruler, clapping) in the video up top.
“The applause part was a kind of a strange moment because when we did our first demo of this to the whole team, people broke out in applause and it canceled out the applause,” Lachapelle said. “That’s when we understood, ‘Oh, we’re going to need to have a controller to turn this on and off in the settings because there’s probably going to be some use cases where you really don’t want your noise to be removed.'” Vocal ranges The line for what the denoiser does and doesn’t cancel out is blurry. It’s not as simple as detecting human voices and negating everything else.
“The human voice has such a large range,” Lachapelle said. “I would say screaming is a tough one. This is a human voice, but it’s noise. Dogs at certain pitches, that’s also very hard. So some of it sometimes will slip through. On those kinds of things, it’s still a work in progress.” “Things like vacuum cleaners, we’ve got down really well,” he continued. “I had a big customer meeting the other day with Christina, who’s in Zurich — she leads our support team. And so we were talking with this customer, and all of a sudden I see in the back, her Roomba starts rolling into the room and gets stuck under her desk. She was there trying to talk to the customer and getting rid of the Roomba, and we never heard the Roomba go. It was completely silent. I thought that was kind of the ultimate test. If we can get those kinds of things out — drills, people that have construction next door, people that are sitting in the kitchen and they’ve got the blender going — those kinds of things it’s really, really good at.” A musical instrument will probably also get filtered out. “To a pretty large degree, it does,” Lachapelle said. “Especially percussion instruments. Sometimes a guitar can sound very much like a voice — you’re starting to touch the limits there. But if you have music playing in the background, usually it’ll cut it all out.” What about laughter? “I’ve never heard it block laughter.” What about singing? “Singing works.” Singing goes through, but the musical instruments don’t, “especially if they’re in the background.” Crucially, Google Meet’s noise cancellation is being rolled out for all languages. That might seem obvious at first, but Lachapelle said the team discovered it was “super important” to test the system on multiple languages.
“When we speak English, there’s a certain range of voice we use,” Lachapelle said. “There’s a certain way of delivering the consonants and the vowels compared to other languages. So those are big considerations. We did a lot of validation across different languages. We tested this a lot.” Proximity and amplitude Another challenge was dealing with proximity. This is not a machine learning problem — it’s a “too much noise too close to the microphone” problem.
“Keyboard typing is tricky,” Lachapelle said. “It’s like a step function in the audio signal. Especially if the keyboard is close to the microphone, that bang of the key right next to the microphone means that we can’t get voice out of the microphone because the microphone got saturated by the keyboard. So there are cases where if I’m overloading the microphone, my voice can’t get through. It becomes more or less impossible.” The team factored in distance from the microphone when determining what to filter out. The model thus adapts for amplitude. On our call, Lachapelle played some music from his iPhone. When he put his phone’s speakers right next to the microphone, we could hear the music come through a little bit while his voice, which was coming from further away, distorted a bit. Google Meet did not cancel out the music completely — it was more muffled. When he turned off the denoiser, the music came through at full volume.
“That’s when you see it find that threshold that we were talking about,” Lachapelle said. “You don’t want to have false positives, so we will err on the side of safety. It’s better to let something go through than to block something that really should go through. That’s what we’re going to start tuning now, once we start releasing this to more and more users. We’ll be able to get a lot of feedback on it. Someone out there is going to have a scenario we didn’t think of, and we’ll have to take that into consideration and further the model.” Tuning Tuning the AI model is going to be difficult, given all the different types of noise it encompasses. But the end goal isn’t to get the model to cancel out background noise completely. Nor is it making sure that all types of laughter can get through 100%.
“The goal is to make the conversation better,” Lachapelle said. “So the goal is the intelligibility of what you and I are saying — absolutely. And if the music is playing in the background and we can’t cancel it all out, as long as you and I can have a better conversation with it turned on, then it’s a win. So it’s always about you and I being able to understand each other better.” Making the conversation more coherent is particularly important in the era of smartphones and people working on the go.
“We have a big chunk of users now that are using mobiles, and we’ve never seen this much mobile usage, percentage-wise,” Lachapelle said. “I know we all talk about billions of minutes and so on going on in the system. But of that big chunk, the percentage of mobile users has never been this high. And mobile users are usually in very noisy environments. So for that use case, it’s going to have a huge impact. Here I’m sitting in my little office in Sweden with my fancy mic and my good headphones, probably not what we designed this for. We designed this for noisy environments because people need to talk wherever they are.” Privacy When you’re on a Google Meet call, your voice is sent from your device to a Google datacenter, where it goes through the machine learning model on the TPU , gets reencrypted, and is then sent back to the meeting. (Media is always encrypted during transport, even when moving within Google’s own networks, computers, and datacenters. There are two exceptions: when you call in on a traditional phone, and when a meeting is recorded.) “In the case of denoising, the data is read by the denoiser using the key that is shared between all the participants, denoised, and then sent off using the same key,” Lachapelle said. “This is done in a secure service (we call this borg) in our datacenter, and the data is never accessible outside the denoiser process, in order to ensure privacy, confidentiality, and safety. We’re still working on the plumbing in our infrastructure to connect the people that dial in with a phone normally. But that’s going to come a little bit later because they are a very noisy bunch.” Lachapelle emphasized repeatedly that Google will be improving the feature over time, but not directly using external meetings. Recorded meetings will not be used to train the AI either.
“We don’t look at anything that’s going on in the meetings, unless you decide to record a meeting,” Lachapelle said. “Then, of course, we take the meeting and we put it to Google Drive. So the way we’re going to work is through our customer channels and support and so on and trying to identify cases where things did not work as predicted. Internally at Google, there are meetings that are recorded, and if someone identifies a problem that happened, then hopefully they’ll send it to the team. But we don’t look at recordings for this purpose, unless someone sends us the file manually.” User experience considerations If you’re a G Suite enterprise customer, when Google flips the switch for you this month Meet’s noise cancellation feature will be off by default. You will have to turn it on in settings when you want to filter out “noise.” On the web, you’ll click the three dots at the bottom right, then Settings. Under the Audio tab, between microphone and speakers, you’ll see an extra switch that you can turn on or off. It’s labeled “Noise cancellation: Filters out sound that isn’t speech.” Google decided to put this switch in settings, as opposed to somewhere visible during a call. And there is no visual indication that noise is being canceled out. This means noise will be canceled out on calls and people won’t even be aware it’s happening, let alone that the feature exists. We asked Lachapelle why those decisions were made.
“There’s some people that would perhaps want us to show like ‘Look at how good we are. Right now your noise is being filtered out.’ I guess you could bring it down to user interface considerations,” Lachapelle said. “We’ve done a lot of user testing and interviews of users. We had users in labs last year before confinement, where we tested different models on them. And that combined with — you can see Meet doesn’t have buttons all over the place, it’s a fairly clean UX. Basically, my answer to your question would be, it’s based on the user research we’ve done, and on trying to keep the interface of Meet as clean as possible.” Who controls the noise cancellation? On a typical Google Meet call, you can mute yourself and — depending on the settings — mute others. But Google chose to not let users noise-cancel others. The noise cancellation occurs on the sender’s side — where the noise originates — so that’s where the switch is. While that might make sense in most cases, it means the receiver cannot control noise cancellation for what they hear. The team made that decision deliberately, but it wasn’t an easy one.
“I don’t think the off switch is going to be used much at all,” Lachapelle said. “So putting it front and center might be sort of overloading it. This should just be magic and work in the background. But like again, your ideas are spot on. This is exactly what we’ve been talking about. We’ve been testing. So it really shows that you’ve done a lot of homework on this. Because these are the challenges. And I don’t think any of us is 100% sure that this is the right way. Let’s see how it goes.” If it doesn’t work out, that’s OK. Google has already done the majority of the work. Moving switches around — “I don’t want to say that it’s simple, but it’s simpler than changing the whole machine learning model.” We asked whether alternative solutions could mean having the switch on the receiving end, or even on both ends.
“So we’ll try with this, and we might want to move to what you’re describing, as we get this into the hands of more and more users,” Lachapelle said. “By no means is this work done. This is going to be work that’s going to go on for a while. Also, we’re going to learn a lot of things. Like what controls are the best for the users. How do you make users understand that this is going on? Do they need to understand that this is going on? We think we have an idea of how to get the first step, but beyond that it’ll be a journey with all of our users.” If the current solution doesn’t work, Lachapelle said the team will probably build a few prototypes, do some more user research, and test them out via G Suite’s alpha program.
Cloud versus edge Google also made a conscious decision to put the machine learning model in the cloud, which wasn’t the immediately obvious choice.
“There’s a lot of ways to apply these models,” Lachapelle said. “Some require much beefier endpoints — you need a good computer. You’ve seen some of the stuff that that has been released, some of it as an extension or some of it requires a more powerful graphics card. We didn’t want to go that way. We wanted to make sure that access to this would be possible on your phones, no matter what phone you have, on your laptops. Laptops are getting thinner — they don’t have fans anymore. Loading them too hard with CPU isn’t a good idea. So we decided to see if we could do this in the cloud.” Using the cloud simply wasn’t feasible before.
“Manipulating media in the cloud, just five, six, seven years ago could add 200 milliseconds delay, 300 milliseconds delay,” Lachapelle said. “Our job has always been passing through the cloud as quickly as possible. But now with these TensorFlow processors , and basically the way that our infrastructure is built, we discovered that we could do media manipulation in real time and add sometimes only around 20 milliseconds of delay. So that’s the road we took.” Google did consider using the edge — putting the machine learning model on the actual device, say in the Google Meet app for Android and iOS.
“Of course we thought of it,” Lachapelle said. “But we decided that we wanted to have a more consistent experience across devices. Let’s say that I have an advanced i9 processor and then I get to use [noise cancellation]. But then if I move to my laptop that only has an i3 processor, my voice is so much worse. And so we really tried to see how can we bring this to a large group of people in a consistent way. It’s been about the consistency of the experience.” Google’s decision to use the cloud means you should have the exact same denoised meeting experience on every device. You won’t have to update anything either, not even the Google Meet app on your phone. Noise cancellation will be turned on server-side.
“We really think it’s going to help out a lot,” Lachapelle said. “I’ve worked on echo cancellation, on cleaning up video artifacts in real time, all these things. And this is the first time we can do our signal processing in the cloud. We’re quite excited about it. I think that this can change a lot of the signal processing paradigms. Whereas it used to be very, very complex math, and math that is often limited by the hardware you have — using machine learning models in the cloud instead of the complex math to achieve the same, or better, results.” Speed and cost In addition to training the model on different types of noise, there was another big technical hurdle to overcome: speed.
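To make the real-time constraint concrete before Lachapelle describes the optimization work, here is a back-of-envelope sketch of the budget involved. The 48kHz sample rate and 20ms frame length are assumptions for illustration (only the roughly 20 milliseconds of added delay comes from the interview); this is not Google's pipeline.

```python
# Back-of-envelope: what a ~20 ms cloud-denoising budget implies.
# Assumes 48 kHz audio chopped into 20 ms frames; illustrative only.
SAMPLE_RATE_HZ = 48_000
FRAME_MS = 20

samples_per_frame = SAMPLE_RATE_HZ * FRAME_MS // 1000   # 960 samples per frame
frames_per_second = 1000 // FRAME_MS                    # 50 frames/s per active speaker

# To avoid adding more than about one frame of delay, the model plus the
# round trip through the service has to finish each frame well inside 20 ms --
# for every speaker, in every meeting, all the time.
print(f"{samples_per_frame} samples per frame, "
      f"{frames_per_second} frames per second, "
      f"budget ~ {FRAME_MS} ms of processing per frame")
```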
“Doing this without slowing things down is so important because that’s basically what a big chunk of our team does — try to optimize everything for speed, all the time,” Lachapelle said. “We can’t introduce features that slow things down. And so I would say that just optimizing the code so that it becomes as fast as possible is probably more than half of the work. More than creating the model, more than the whole machine learning part. It’s just like optimize, optimize, optimize. That’s been the hardest hurdle.” Google seems happy with the latency, but there is also a question of cost. It’s expensive to add an extra processing step for every single attendee in every single meeting hosted in Google Cloud.
“There’s a cost associated with it,” Lachapelle acknowledged. “Absolutely. But in our modeling, we felt that this just moves the needle so much that this is something we need to do. And it’s a feature that we will be bringing at first to our paying G Suite customers. As we see how much it’s being used and we continue to improve it, hopefully we’ll be able to bring it to a larger and larger group of users.”
" |
16,218 | 2,020 | "OpenAI goes all-in on Facebook's Pytorch machine learning framework | VentureBeat" | "https://venturebeat.com/2020/01/30/openai-facebook-pytorch-google-tensorflow" | "OpenAI goes all-in on Facebook's Pytorch machine learning framework
In what might only be perceived as a win for Facebook, OpenAI today announced that it will migrate to the social network’s PyTorch machine learning framework in future projects, eschewing Google’s long-in-the-tooth TensorFlow platform. OpenAI is the San Francisco-based AI research firm cofounded by CTO Greg Brockman, chief scientist Ilya Sutskever, Elon Musk, and others, with backing from luminaries like LinkedIn cofounder Reid Hoffman and former Y Combinator president Sam Altman. In a blog post , the company cited PyTorch’s efficiency, scalability, and adoption as the reasons for its decision.
“Going forward we’ll primarily use PyTorch as our deep learning framework but sometimes use other ones when there’s a specific technical reason to do so,” said the company in a statement. “We’re … excited to be joining a rapidly-growing developer community, including organizations like Facebook and Microsoft, in pushing scale and performance on [graphics cards].” OpenAI says that many of its teams have already migrated their work to PyTorch and that they’ll contribute to the PyTorch community in the coming months. Additionally, the company says it plans to make available its Spinning Up in Deep RL educational resource on PyTorch in early 2020, after which point it intends to investigate scaling AI systems with data parallel training, visualizing those systems with model interpretability, and building general-purpose robotics frameworks. (OpenAI is in the process of writing PyTorch bindings for its highly optimized blocksparse kernels, and it says it’ll open-source those bindings in the coming months.) PyTorch, which Facebook publicly released in October 2016, is an open source machine learning library based on Torch, a scientific computing framework and script language that’s in turn based on the Lua programming language. As of March 2018, it incorporates Caffe2, a deep learning toolset pioneered by University of California, Berkeley researchers and further developed by Facebook’s AI Research lab.
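For readers who have not used the framework, a minimal PyTorch training step looks like the following. This is a generic illustration of PyTorch's define-by-run style, not code from OpenAI's projects.

```python
import torch
import torch.nn as nn

# A tiny model and one gradient step; PyTorch builds the computation
# graph on the fly as the Python code executes ("define-by-run").
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(64, 10)   # a batch of 64 random inputs
y = torch.randn(64, 1)    # matching random targets

pred = model(x)           # forward pass
loss = loss_fn(pred, y)
loss.backward()           # autograd computes gradients
optimizer.step()          # update the weights
optimizer.zero_grad()

print(float(loss))
```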
While TensorFlow has been around slightly longer (since November 2015), PyTorch continues to see rapid uptake in the data science and developer community. It claimed one of the top spots for fastest-growing open source projects in the past 12 months, according to GitHub’s 2018 Octoverse report. Facebook recently revealed that in 2019 the number of contributors to the platform grew more than 50% year-over-year to nearly 1,200. An analysis conducted by The Gradient found that every major AI conference in 2019 has had a majority of papers implemented in PyTorch, and O’Reilly noted that PyTorch citations in papers grew by more than 194% in the first half of 2019 alone.
Unsurprisingly, a number of leading machine learning software projects are built on top of PyTorch, including Uber’s Pyro and HuggingFace’s Transformers. Software developer Preferred Networks joined the ranks recently with a pledge to move from its bespoke AI development framework, Chainer, to PyTorch in the near future.
" |
16,219 | 2,018 | "LogDNA raises $25 million to simplify server data logging | VentureBeat" | "https://venturebeat.com/2018/12/11/logdna-raises-25-million-to-simplify-server-data-logging" | "LogDNA raises $25 million to simplify server data logging
Developers have to contend with a firehose of log data every day — terabytes, in some cases. It’s enough to bring even the nimblest of DevOps teams to a crawl, particularly when something goes wrong.
Entrepreneurs Chris Nguyen and Lee Liu set out to address the information overload five years ago with LogDNA , a Mountain View startup that emerged out of an internal tool they’d built for a previous company. LogDNA’s eponymous analytics suite is now used by more than 2,000 customers and tens of thousands of users, growth that’s boosted revenue five times over the previous year. This growth has attracted a throng of investors led by Emergence Capital, which contributed toward a $25 million series B funding round announced today. Y Combinator and Reddit cofounder Alexis Ohanian’s Initialized Capital also participated.
Nguyen, who serves as CEO, said the infusion of new capital will be used to scale LogDNA’s engineering, sales, and marketing teams.
“The evolution of microservices and Kubernetes have led to an inflection point where one size does not fit all when it comes to application insights. We have seen a shift to multi-cloud environments with data residing across multiple different infrastructures and regions,” he said. “LogDNA has a suite of modern offerings that are already resonating … We are excited to be able to expand on our solution and enable teams to view all of their log data regardless of data residency and infrastructure.” LogDNA runs in the cloud or on-premises, and offers a streamlined UI that lets developers narrow down log lines by filters or search using standard terms, excludes, or chained operators. Thanks to built-in auto-parsing and graph support for “most” popular formats, developers can use natural language to jump to specific points in time and create custom alerts for certain queries, hosts, and apps.
Above: A screenshot of LogDNA’s dashboard.
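In spirit, the include/exclude filtering described above boils down to something like the sketch below. This is purely conceptual Python to show the idea of chained terms and excludes; it is not LogDNA's actual query syntax or API.

```python
# Conceptual sketch of narrowing log lines with include terms, excludes,
# and chained (AND) conditions -- not LogDNA's real query language.
logs = [
    "2018-12-11 10:01:02 app=checkout level=error timeout contacting payments",
    "2018-12-11 10:01:03 app=checkout level=info order accepted",
    "2018-12-11 10:01:04 app=search level=error index shard unavailable",
]

include = ["error", "checkout"]   # every term must appear (chained AND)
exclude = ["healthcheck"]         # drop lines containing any of these

matches = [
    line for line in logs
    if all(term in line for term in include)
    and not any(term in line for term in exclude)
]

for line in matches:
    print(line)
```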
There’s a proactive component, as well. Nguyen says the company is investing heavily in machine learning that will prevent server outages and surface data to mitigate outages.
LogDNA is as robust as they come, collecting logs across multiple deployments from more than 30 of the industry’s most popular environments including Amazon Web Services, Heroku, and Elastic. The platform juggles hundreds of thousands of log events per second and more than 20 terabytes per customer per day, Nguyen claims, all while offering SOC2, HITECH, PCI-DSS, and HIPAA-compliant logging.
LogDNA has competition in spades, to be fair — Loggly, Logentries, Sumo Logic, and Scalyr come to mind — but it’s managed to snag big accounts like Instacart, OpenAI, WayUp, Life360, Lime, and Lifesize in part thanks to an attractive pricing scheme. Customers pay per gigabyte — or, if they’re on the smaller side, take advantage of LogDNA’s free plan.
“As every company in the world becomes a software company, a solution that can quickly and securely analyze logs from multiple clouds and on-premise servers is critical to maximize uptime across the board,” Emergence Capital general partner Joe Floyd said. “Our entire team agreed that LogDNA has the core functionality, intuitive interface and smart scaling capabilities to be that indispensable solution, and thus we are thrilled to partner with them to bring this game-changer to market.” LogDNA previously raised $1.3 million in seed funding from Initialized Capital and Skype cofounder Jaan Tallinn in July 2016, and subsequently $6.7 million from Initialized Capital in series A funding last year. To date, it’s raked in close to $34 million.
" |
16,220 | 2,019 | "Coralogix raises $10 million to apply AI to software logs | VentureBeat" | "https://venturebeat.com/2019/11/26/coralogix-raises-10-million-to-apply-ai-to-software-logs" | "Coralogix raises $10 million to apply AI to software logs
Canvassing software logs is tricky business when you’re juggling multiple dev environments. About 50% of logging statements don’t include any information about critical things like variable state at the time of an error, according to GitHub and OverOps surveys, which is perhaps why developers spend an estimated one fourth of their time — more than a full day out of the work week — on troubleshooting.
This unfortunate state of affairs motivated Lior Redlus, Ariel Assaraf, and Guy Kroupp to found Coralogix in 2014. The San Francisco-based startup provides AI-imbued analytics solutions addressing a host of software delivery and maintenance challenges. Its suite automatically clusters log records back to their patterns and identifies connections among those patterns, forming baseline flows for comparison and future study.
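Clustering log records "back to their patterns" generally means collapsing the variable parts of each line (IDs, numbers, durations) into a template and grouping lines that share a template; spikes in rare templates then stand out against the baseline. The sketch below illustrates that idea only; it is not Coralogix's actual algorithm.

```python
import re
from collections import Counter

def to_template(line: str) -> str:
    """Mask variable tokens so lines from the same logging statement match."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line

logs = [
    "user 1042 logged in after 3 attempts",
    "user 7 logged in after 1 attempts",
    "payment 553 failed with code 502",
    "payment 90210 failed with code 502",
]

patterns = Counter(to_template(line) for line in logs)
for template, count in patterns.items():
    print(f"{count}x  {template}")
# A release that suddenly produces a surge of one template (relative to the
# learned baseline counts) is the kind of change such a system flags.
```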
The approach has backers impressed, it would seem. Coralogix today announced a $10 million series A round led by Aleph, bringing its total raised to $16.2 million. StageOne Ventures, Janvest Capital Partners, and 2B Angels also participated in the raise, which CEO Assaraf said will accelerate Coralogix’s work in cybersecurity.
Coralogix’s eponymous software-as-a-service (SaaS) product automatically creates what Assaraf calls “component-level” insights from log data, in part by applying machine learning to software releases to spot quality issues. Scaling from hundreds to millions of logs with integrations for popular languages and platforms like Docker, Python, Heroku, .NET, Kubernetes, and Java, the toolset spotlights anomalies and affords developers access to a full suite of identification, drilldown, correlation, visualization, and remediation tools.
There’s more in tow for customers with a Coralogix subscription. The service can automatically enrich web logs with IP blacklists to identify suspicious activity across tech stacks while issuing alerts when new errors or critical log entries occur in any environment or component. Separately, an integrated security information and event management (SIEM) and intrusion detection system taps machine learning to pinpoint anomalies within network packets, server events, and audit logs.
The log management market is expected to reach $1.2 billion by 2022, according to Research and Markets, and Coralogix isn’t the only startup leveraging AI to surface abnormalities. Mountain View-based LogDNA raised $25 million last December to further develop its AI-powered tools that surface data to mitigate outages. Anodot, which is based in Israel, claims it analyzed over 5.2 billion data points per day within six months to launch its log-monitoring platform. That’s not to mention Moogsoft, Logz.io, Loom Systems, Dynatrace, and Perspica, all of which use some form of AI to expedite DevOps processes.
But despite the competition, Coralogix has managed to grow its client base to over 1,000 brands to date, among them Payoneer, BookMyShow, PayU, Monday.com, Postman, Decathlon, Taylor Stitch, BioCatch, Spot.IM, Hoodline, Camping World, Playbuzz, Fiverr, KFC, and Caesars Entertainment. “My CTO Yoni Farin and I rebuilt Coralogix in 2017 to help companies manage the growing complexity of distributed cloud applications. Here we are two years later, and it’s astonishing what we’ve been able to achieve,” said Assaraf.
Coinciding with this morning’s fundraising announcement, Matt Handler, former VP of sales and Channel at Sumo Logic, and Guy Bloch, former COO of EMEA at Splunk, will join the Coralogix board of directors.
" |
16,221 | 2,020 | "Baidu debuts updated AI framework and voice platform, wireless earbuds | VentureBeat" | "https://venturebeat.com/2020/09/15/baidu-debuts-updated-ai-framework-and-voice-platform-wireless-earbuds" | "Baidu debuts updated AI framework and voice platform, wireless earbuds Baidu Silicon Valley AI Lab in Sunnyvale, California.
Yesterday evening, Baidu held a three-hour virtual keynote to kick off its Baidu World 2020 conference. In addition to voice-activated earbuds dubbed the Xiaodu Pods, the company showcased the newest generation of its natural language processing platform, DuerOS , and its PaddlePaddle machine learning development framework.
PaddlePaddle and Baidu Brain PaddlePaddle 2.0 beta makes substantial changes to the underlying API structure. Baidu says the directory format has been optimized to better support high-level API functions, with scaffolding for mixed-precision training under a dynamic graph. (Mixed-precision training is the combined use of different numerical precisions in a computational method.) There’s a total of 140 new APIs, 155 modified or improved APIs, and upgraded C++ APIs for the framework’s inference library.
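Mixed-precision training typically means doing most of the arithmetic in half precision (FP16) while keeping a full-precision (FP32) master copy of the weights, with loss scaling so small gradients are not rounded to zero. The following is a framework-agnostic NumPy sketch of that idea, not PaddlePaddle's actual API.

```python
import numpy as np

# Mixed precision, conceptually: FP16 compute, FP32 master weights, loss scaling.
rng = np.random.default_rng(0)
master_w = rng.standard_normal((256, 128)).astype(np.float32)  # FP32 master copy
x = rng.standard_normal((32, 256)).astype(np.float32)

w16, x16 = master_w.astype(np.float16), x.astype(np.float16)
y16 = x16 @ w16                                   # forward pass in half precision

loss_scale = 1024.0                               # keeps tiny FP16 grads from flushing to zero
grad_y16 = np.full_like(y16, 1.0 / y16.size) * loss_scale  # stand-in upstream gradient, scaled
grad_w16 = x16.T @ grad_y16                       # backward matmul, also FP16

grad_w32 = grad_w16.astype(np.float32) / loss_scale        # unscale in full precision
master_w -= 1e-3 * grad_w32                       # update the FP32 master weights

print(y16.dtype, grad_w16.dtype, master_w.dtype)  # float16, float16, float32
```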
The PaddlePaddle upgrades dovetail with the release of Baidu Brain 6.0, the newest version of Baidu’s machine learning platform supporting AI industrial applications. The company says Baidu Brain now boasts more than 270 core AI capabilities used by over 2.3 million developers, with an API upgrade that enables AI models to be deployed more efficiently.
Xiaodu Pods Baidu also unveiled Xiaodu Pods, dual-microphone earbuds that are imbued with the company’s Xiaodu assistant and claim to last 28 hours on a charge. The wireless Xiaodu Pods have a noise-canceling feature and can translate between several languages. Via Xiaodu, they also relay information like the weather, turn-by-turn directions, answers to math problems, and local news on command. A special Wandering Earth mode available in English and Chinese allows two users wearing one earbud each to translate conversations into their preferred language in real time.
The Xiaodu Pods arrived with DuerOS 6.0, the latest suite of tools developers can use to plug Baidu’s voice platform into speakers, refrigerators, washing machines, infotainment systems, and set-top boxes. DuerOS 6.0 brings support for low-power voice processing chips and neural beamforming, a technique that employs algorithms to amplify the sound recorded by microphone arrays. Voice recognition error rates are now 46% lower compared with DuerOS 5.0, Baidu claims, and DuerOS 6.0 incorporates a more efficient text-to-speech model (WaveRNN).
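For context on what beamforming buys you, here is the classical delay-and-sum version: align each microphone's signal by the expected arrival delay for the target direction, then average, so speech adds coherently while uncorrelated noise partially cancels. A neural beamformer learns the weighting instead of fixing it. The geometry and sample rate below are assumptions for illustration, not Baidu's implementation.

```python
import numpy as np

SR = 16_000                        # sample rate (assumed)
MIC_SPACING_M = 0.1                # 10 cm between two mics, smart-speaker style (assumed)
SPEED_OF_SOUND = 343.0             # m/s
angle = np.deg2rad(30)             # assumed direction of the talker

t = np.arange(SR) / SR
speech = np.sin(2 * np.pi * 220 * t)               # stand-in "voice"
delay_samples = int(round(MIC_SPACING_M * np.sin(angle) / SPEED_OF_SOUND * SR))

rng1, rng2 = np.random.default_rng(0), np.random.default_rng(1)
mic1 = speech + 0.3 * rng1.standard_normal(SR)
mic2 = np.roll(speech, delay_samples) + 0.3 * rng2.standard_normal(SR)

aligned2 = np.roll(mic2, -delay_samples)           # undo the inter-mic delay
output = 0.5 * (mic1 + aligned2)                   # delay-and-sum

snr_single = np.var(speech) / np.var(mic1 - speech)
snr_summed = np.var(speech) / np.var(output - speech)
print(f"single mic SNR ~ {10 * np.log10(snr_single):.1f} dB, "
      f"after delay-and-sum ~ {10 * np.log10(snr_summed):.1f} dB")
```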
Baidu said over 60 automakers and more than 40,000 developers are working to integrate products with DuerOS. As of March, DuerOS was handling 6.5 billion voice queries per month and 3.3 billion from Baidu’s Xiaodu smart speakers and displays alone. As of July 2019, the platform was on 400 million devices, in 500 vehicle models (and over a million vehicles), and in over 100,000 hotel rooms.
DuerOS hasn’t quite reached the storied heights of Amazon’s Alexa and Alexa Voice Service — they have more than 100,000 third-party apps and thousands of brands signed on, not to mention compatibility with 60,000 smart home devices. But Baidu, which claims DuerOS now has over 4,000 third-party apps, continues to work with heavy hitters like Huawei, Vivo, and Oppo to build the operating system into future flagships. Baidu also works with automakers like BMW, Daimler, Ford, Hyundai, Kia, Chery, BAIC, FAW, and Byton and hotel chains like InterContinental.
Baidu this summer inked a strategic partnership with China’s largest smart device manufacturer Midea, which has 70 million smart home appliances on the market, to sell bundles of appliances with Xiaodu products. In the future, Xiaodu-powered devices will be able to control Midea appliances via the cloud or infrared tech.
Apollo Beyond PaddlePaddle 2.0, DuerOS 6.0, and Baidu Brain 6.0, Baidu announced developments regarding Apollo , its open source software solution for driverless vehicles. In 2021, Weltmeister will launch a car incorporating Apollo’s valet parking that will be able to identify vacant slots in parking garages and allow people to use Tesla-like smart summon functions, Baidu said. In addition, Baidu demoed what it calls 5G Remote Driving Service, a 5G-powered teleoperation service that allows human operators who have completed at least 1,000 hours of cloud-based training to remotely control multiple vehicles simultaneously in emergency situations.
The fifth generation of Baidu’s autonomous driving kit will be released soon, the company said during the keynote. This will arrive alongside the latest version of Apollo: Apollo 6.0. As of today, Apollo has over 600,000 lines of open source code, 45,000 contributors, and 210 ecosystem partners. (That’s up from 400,000 lines of code, 12,000 contributors, and 130 partners in July 2019.) Notably, Baidu recently launched its own robotaxi service leveraging the Apollo platform — Apollo Go Robotaxi — in Beijing with roughly 100 pickup and drop-off areas covering residential and business zones in Yizhuang, Haidian, and Shunyi.
" |
16,222 | 2,013 | "Consoles that won't die: The Atari Jaguar | VentureBeat" | "https://venturebeat.com/2013/04/25/consoles-that-wont-die-atari-jaguar" | "Consoles that won't die: The Atari Jaguar
Read more in the Consoles that won't die series: The Intellivision, The Commodore 64, The SNES, and The NES. Out of This World (known as Another World outside of North America) is a true gaming classic, hitting systems as diverse as the Super Nintendo and iOS since its 1991 arrival on the Commodore Amiga. Creator Eric Chahi has now given his blessing to one more adaptation of the game on an unlikely console.
French computer scientist Sébastien Briais is the man at the heart of the project, and the platform of choice is the Atari Jaguar, a powerful machine that’s notorious for being one of the worst-selling video game consoles of all time.
Jaguar: Atari’s last console Atari led the video game industry in the late ’70s and early ’80s. But this dominance was a distant memory by the time of the Jaguar’s 1993 release — Atari never recovered from the Crash of 1983 that almost killed home gaming in America. Despite claims that its new console was technically superior to its rivals and some impressive software from Atari, the Jaguar simply didn’t sell.
A lack of support from third-party publishers such as Activision, Electronic Arts, or Capcom led to an understocked games catalog, and Atari had more or less accepted defeat by the time the new kids on the block, the Sony PlayStation and Sega Saturn, released in 1995.
Atari’s report to stockholders that year was bleak: “From the introduction of Jaguar in late 1993 through the end of 1995, Atari sold approximately 125,000 units of Jaguar. As of December 31, 1995, Atari had approximately 100,000 units of Jaguar in inventory … There can be no assurance that Atari’s substantial unsold inventory of Jaguar and related software can be sold at current or reduced prices if at all.” Out of This World Briais is an enthusiastic Jaguar programmer, and despite the console’s retail failure 20 years ago, he is confident that it will prove a worthy home for Chahi’s classic cinematic adventure game. “The story began in 2007 when I attended the Atari Connexion in Congis not far from Paris,” says Briais. “This event was organised by the Retro Gaming Connexion. Eric Chahi was invited to the event, and he was very enthusiastic to see some crazy people still having fun coding on old hardware. Some friends of mine and I asked whether he would let us adapt Another World for the Jaguar.”
“The story began in 2007 when I attended the Atari Connexion in Congis not far from Paris,” says Briais. “This event was organised by the Retro Gaming Connexion. Eric Chahi was invited to the event, and he was very enthusiastic to see some crazy people still having fun coding on old hardware. Some friends of mine and I asked whether he would let us adapt Another World for the Jaguar.” Above: Out of this World creator Eric Chahi speaking at the European Game Developers Conference in 2010 Eric Chahi recalls his first meeting with Briais with equal clarity. “The event organizers presented me to [Briais’ programming group] The Removers,” he says. “They asked me if it would be possible to port Another World on Jaguar. I was impressed by their ability to code on this machine. These guys sounded like crazy people, so I immediately said, ‘Yes.'” But the Out of This World Jaguar project remained just a concept until 2010, when Briais finally had the time to seriously work on it. Chahi provided Briais with the original Atari source code, along with the latest data and enhanced graphics from the 15th anniversary edition. “I gave Seb technical info on the game engine,” he says, “and later I resized the graphics to the native size of the Jaguar so that there is no dithering [scattering of pixels to make up for a limited color palette].” With Chahi’s support, Briais managed to not only get the game running but take it to a stage where it was outperforming the original. “About one year ago, [Eric] came to my home and tried a beta version,” says Briais. “I think he was quite impressed by the console, as the game runs very smoothly on it.” “It was like jumping into an alternate reality in the past where someone coded Another World on this computer,” recalls Chahi. “I was amazed by the quality of this version. Seb coded it in assembly language using the advantage of the Jaguar hardware. It is one of the best versions, clearly. The code is so well optimized that if the frame rate is not limited, it can run maybe at least five times faster than the original with all the enhanced graphics.” 1 2 View All The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
" |
16,223 | 2,017 | "IBM's new tool boosts deep learning speed, but only for its hardware | VentureBeat" | "https://venturebeat.com/2017/08/07/ibms-new-tool-boosts-deep-learning-speed-but-only-for-its-hardware" | "IBM's new tool boosts deep learning speed, but only for its hardware
IBM unveiled a new technique today that’s supposed to drastically reduce how much time it takes to train distributed deep learning (DDL) systems by applying a ton of powerful hardware to the task. It works by optimizing data transfer between hardware components that run a deep neural network.
The key issue IBM is trying to solve is that of networking bottlenecks in distributed deep learning systems. While it’s possible to spread the computational load for training a deep neural network out over many computers, that process becomes less and less efficient because of high-latency connections between the hardware doing the actual computation.
PowerAI DDL, a new communication library released in conjunction with an explanatory research paper, aims to improve efficiency by making sure that the systems at play take advantage of all the high-performance connections available. Using PowerAI DDL, IBM was able to train the popular Resnet-50 deep neural network on the ImageNet data set in 50 minutes, using 64 servers, each with four GPUs.
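Data-parallel training, the setting these numbers come from, has each GPU compute gradients on its own slice of the batch and then average the gradients across all workers before every weight update; that averaging step (an allreduce) is exactly the communication PowerAI DDL tries to accelerate. Below is a toy single-process simulation of the idea, illustrative only and not IBM's library.

```python
import numpy as np

# Simulate 4 workers doing data-parallel SGD on a shared linear model.
rng = np.random.default_rng(0)
n_workers, dim = 4, 8
w = np.zeros(dim)                                  # replicated model weights
true_w = rng.standard_normal(dim)

shards = [rng.standard_normal((64, dim)) for _ in range(n_workers)]
targets = [x @ true_w for x in shards]             # each worker's private data shard

for step in range(100):
    local_grads = []
    for x, y in zip(shards, targets):              # in reality: one GPU per shard
        err = x @ w - y
        local_grads.append(x.T @ err / len(x))     # local gradient
    grad = np.mean(local_grads, axis=0)            # "allreduce": average across workers
    w -= 0.1 * grad                                # identical update on every replica

print("remaining error:", float(np.linalg.norm(w - true_w)))
```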
Organizations with enough hardware to really take advantage of PowerAI DDL’s capabilities could see massive improvements in how much time their data scientists have to spend waiting for experiments to run. If experiments run faster, scientists can do more of them, which should produce better results.
IBM’s communication library is being released as part of its PowerAI software package, which allows data scientists and engineers to perform machine learning tasks on the tech titan’s high-performance Power Systems servers. For testing, the company used 64 Power8 S822LC servers, which each come packed with four Nvidia Tesla P100-SXM2 GPUs.
That’s a lot of pricey hardware, but for organizations with cash to burn and a need for high-performance AI computation, it could be just what the doctor ordered.
Releasing the technology through PowerAI should make it easier for people to reap the benefits of IBM’s research, since it’s integrated with an existing piece of software that’s supposed to just run on Power Systems hardware.
However, that ease of implementation comes at a cost: IBM is only releasing PowerAI DDL for its own hardware and won’t be making the code for the system available as an open source project so that it can be reimplemented on other platforms.
That’s in contrast to Facebook’s distributed neural network optimization work, which came out earlier this month. The social networking giant released its code — which enabled the training of Resnet-50 on 256 GPUs in one hour — under an open source license.
(IBM is no stranger to contributing code to deep learning projects, it just chose not to do so in this case.) Despite the distribution differences, both of those papers highlight an important frontier in deep learning research. Both companies’ work shows that there’s more to be done when it comes to improving the speed of machine learning systems. The fruits of this acceleration could also go on to benefit other applications, which could have greater follow-on effects.
One of the things that’s important to note in both cases is that while training Resnet is useful as a benchmark, it’s unclear how those results translate to other applications. While it seems likely that the techniques laid out in IBM’s paper should provide additional performance benefits, the company hasn’t done extensive testing yet.
" |
16,224 | 2,018 | "The RetroBeat: Nintendo 64 Classic could tout 20 great games without Rare | VentureBeat" | "https://venturebeat.com/2018/01/03/the-retrobeat-are-there-enough-games-for-a-nintendo-64-classic-edition" | "The RetroBeat: Nintendo 64 Classic could tout 20 great games without Rare The Nintendo 64 still used cartridges at a time when its competitors switched to disks.
It’s January, but I’m already thinking about Fall. Last September, Nintendo released the SNES Classic Edition.
The November before that, we got the NES Classic Edition.
It makes sense to expect a Nintendo 64 Classic Edition this year.
It would certainly be a hit. The SNES and NES Classics were huge sellers, both managing to top monthly sales charts.
As for the Nintendo 64, Nineties kids grew up with Nintendo’s third major home console. Over the holidays, retro game stores reported that the N64 was their best-selling old console.
The system is home to some of Nintendo’s biggest classics ever, including Super Mario 64 and The Legend of Zelda: Ocarina of Time.
But can Nintendo put together a big enough library to make the mini-console enticing? The NES Classic has 30 games. The SNES Classic has 21. You might think Nintendo should have no problem putting together at least that many, but the prospect faces a few challenges.
Rare is the biggest problem. During the N64 era, this Nintendo-owned studio made many of the console’s most popular games, such as Banjo-Kazooie, Conker’s Bad Fur Day, and Perfect Dark. But Microsoft now owns Rare, and Nintendo doesn’t own the rights to the games the studio made that didn’t feature established Nintendo characters. In other words, anything that wasn’t a Donkey Kong game. Microsoft has already released Rare Replay for the Xbox One, a collection that includes most of Rare’s Nintendo 64 games.
Above: Banjo-Kazooie in Rare Replay.
The Nintendo 64 also has a big third-party problem. Unlike with the NES or SNES, third parties weren’t making many games for the system. Franchises like Final Fantasy jumped ship to PlayStation. And when a classic property, like Castlevania, did get a Nintendo 64 game, it … well, it sucked.
And then you have Goldeneye. This shooter has a double-whammy of issues preventing it from easily appearing on a Nintendo 64 Classic Edition. Rare made it, and it’s a licensed game. The NES and SNES Classic Editions didn’t include any licensed titles. It creates too much expense and legal trouble to do so. That also means no Star Wars games like Rogue Squadron. It also makes wrestling hits like WWF No Mercy unlikely.
So, what does that leave Nintendo with? I’m going to go ahead and see if I can put together a list of at least 20 games for a Nintendo 64 Classic. I’m going to try to mold my list after the ones on the NES and SNES Classics, focusing on games that have at least some renown for quality and not including more than three entries in a single series. I’ll try to have as many genres represented as possible.
1. Super Mario 64
2. The Legend of Zelda: Ocarina of Time
3. The Legend of Zelda: Majora's Mask
4. Kirby 64: The Crystal Shards
5. Yoshi's Story
6. Paper Mario
7. Mario Kart 64
8. Mario Party
9. Mario Golf
10. Mario Tennis
11. Super Smash Bros.
12. Donkey Kong 64
13. Diddy Kong Racing
14. Star Fox 64
15. Pokémon Snap
16. Pokémon Puzzle League
17. F-Zero X
18. Pilot Wings 64
19. Wave Race 64
20. 1080 Snowboarding
Hey, look at that! Even without Rare, licensed games, or third-party titles, Nintendo can come up with 20 good games for a Nintendo 64 Classic Edition. Actually, when I look at it, it's a pretty great list. You have the big single-player classics in Mario and Zelda, racing games with Wave Race 64 and F-Zero X, 2D platformers with Yoshi and Kirby, sports games with 1080 Snowboarding, Mario Golf, and Mario Tennis, a fun fighting game with Super Smash Bros., and puzzle action with Pokémon Puzzle League.
Above: 1080 Snowboarding was one of the greatest extreme sports games ever.
The list also does a good job highlighting the Nintendo 64’s biggest strength: four-player couch co-op. Even without Goldeneye or the classic wrestling games, the list has a lot of great multiplayer support. And just as the SNES Classic included two controllers, it would make sense for a Nintendo 64 Classic to have four.
And while the Nintendo 64 didn’t have many great third-party games, it did have some good ones that could make the list. Bomberman 64 had the series’ classic multiplayer with a surprisingly interesting single-player campaign. And if you really want a first-person shooter on the system, I’m sure Nintendo could get one of the old Turok games. I think Disney owns the rights to the series now (a fact I bet Disney isn’t even aware of anymore). I can’t imagine they’re going to play too hard-to-get over Turok.
I have to admit, I'm usually not a big proponent of the Nintendo 64. I'm more of a Super Nintendo or, heck, even GameCube guy. When I came up with the idea for this week's RetroBeat, I expected to have different results. I thought I'd discover that it was impossible to make a plausible list of 20 games for a Nintendo 64 Classic.
But now that I've done it, I look at the theoretical device and know that I would happily buy one. And I bet consumers would be able to get over the lack of Goldeneye and buy plenty of them too.
The RetroBeat is a weekly column that looks at gaming’s past, diving into classics, new retro titles, or looking at how old favorites — and their design techniques — inspire today’s market and experiences. If you have any retro-themed projects or scoops you’d like to send my way, please contact me.
" |
16,225 | 2,018 | "Apple's A12 Bionic chip runs Core ML apps up to 9 times faster | VentureBeat" | "https://venturebeat.com/2018/09/12/apples-a12-bionic-chip-runs-core-ml-apps-up-to-9-times-faster" | "Apple's A12 Bionic chip runs Core ML apps up to 9 times faster A12 Bionic
Apple’s investing heavily in artificial intelligence (AI). That much was clear from today’s iPhone and Apple Watch unveiling in Cupertino, California.
The new iPhone Xs and iPhone Xs Max boast the A12 Bionic, a 7-nanometer chip that Apple characterized as its “most powerful ever.” It packs six cores (two performance cores and four high-efficiency cores), a four-core GPU, and a neural engine — an eight-core dedicated machine learning processor, up from a two-core processor in the A11 — that can perform five trillion operations per second (compared to 500 billion for the last-gen neural engine). Also in tow is a smart compute system that automatically determines whether to run algorithms on the processor, GPU, neural engine, or a combination of all three.
Apps created with Core ML 2, Apple’s machine learning framework, can crunch numbers up to nine times faster on the A12 Bionic silicon with one-tenth of the power. Those apps launch up to 30 percent faster, too, thanks to algorithms that learn your usage habits over time.
Real-time machine learning-powered features enabled by the new hardware include Siri Shortcuts , which allows users to create and run app macros via custom Siri phrases; Memoji , a new version of Emoji that can be customized to look like you; Face ID; and Apple’s augmented reality toolkit, ARKit 2.0.
The news follows on the heels of Apple’s Core ML 2 announcement this summer.
Core ML 2 is 30 percent faster, Apple said at its Worldwide Developers Conference in June, thanks to a technique called batch prediction. Furthermore, Apple said the toolkit would let developers shrink the size of trained machine learning models by up to 75 percent through quantization.
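The size savings from quantization come from storing each weight in fewer bits, for example mapping 32-bit floats onto 8-bit integers plus a scale factor, which is roughly where an "up to 75 percent" smaller model comes from. The NumPy sketch below shows plain linear 8-bit quantization; it is illustrative and not Core ML's exact scheme.

```python
import numpy as np

# Symmetric linear 8-bit quantization of a float32 weight tensor.
rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 256)).astype(np.float32)

scale = np.abs(weights).max() / 127.0          # one scale factor for the whole tensor
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale     # what the runtime multiplies back in

size_fp32 = weights.nbytes                     # 4 bytes per weight
size_int8 = q.nbytes + 4                       # 1 byte per weight + the scale
print(f"{size_fp32} bytes -> {size_int8} bytes "
      f"({100 * (1 - size_int8 / size_fp32):.0f}% smaller)")
print("max rounding error:", float(np.abs(weights - dequantized).max()))
```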
Apple introduced Core ML in June 2017 alongside iOS 11. It allows developers to load on-device machine learning models onto an iPhone or iPad, or to convert models from frameworks like XGBoost, Keras, LibSVM, scikit-learn, and Facebook’s Caffe and Caffe2. Core ML is designed to optimize models for power efficiency, and it doesn’t require an internet connection in order to get the benefits of machine learning models.
News of Core ML’s update came hot on the heels of ML Kit , a machine learning software development kit for Android and iOS that Google announced at its I/O 2018 developer conference in May. In December 2017, Google released a tool that converts AI models produced using TensorFlow Lite , its machine learning framework, into a file type compatible with Apple’s Core ML.
Core ML is expected to play a key role in Apple’s future hardware products.
In a hint at the company’s ambitions, Apple hired John Giannandrea , a former Google engineer who oversaw the implementation of AI-powered features in Gmail, Google Search, and the Google Assistant, to head up its machine learning and AI strategy. And it is looking to hire more than 150 people to staff its Siri team.
" |
16,226 | 2,018 | "Nvidia unveils Tesla T4 chip for faster AI inference in datacenters | VentureBeat" | "https://venturebeat.com/2018/09/12/nvidia-unveils-tesla-t4-chip-for-faster-ai-inference-in-datacenters" | "Nvidia unveils Tesla T4 chip for faster AI inference in datacenters Nvidia Tesla T4
Nvidia today debuted the Tesla T4 graphics processing unit (GPU) chip to speed up inference from deep learning systems in datacenters. The T4 GPU is packed with 2,560 CUDA cores and 320 Tensor cores with the power to process queries nearly 40 times faster than a CPU.
Inference is the process of deploying trained AI models to power the intelligence imbued in services like visual search engines, video analysis tools, or questions to an AI assistant like Alexa or Siri.
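In code terms, inference is just the forward pass of a model whose weights are already trained and frozen (no loss, no gradients, no weight updates), repeated for every incoming query, which is why raw throughput and latency are what matter. A minimal generic sketch follows (NumPy, not tied to the T4 or TensorRT).

```python
import numpy as np

# "Inference": apply a trained, frozen model to incoming requests.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((512, 256)), np.zeros(256)   # pretend these were trained offline
W2, b2 = rng.standard_normal((256, 10)), np.zeros(10)

def predict(batch: np.ndarray) -> np.ndarray:
    h = np.maximum(batch @ W1 + b1, 0.0)       # ReLU layer
    return (h @ W2 + b2).argmax(axis=1)        # one class label per query

queries = rng.standard_normal((32, 512))       # a batch of 32 incoming requests
print(predict(queries))                        # serving = running this, millions of times a day
```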
As part of its push to capture the deep learning market, two years ago Nvidia debuted its Tesla P4 chip made especially for the deployment of AI models. The T4 is more than 5 times faster than its predecessor, the P4, at speech recognition inference and nearly 3 times faster at video inference.
Analysis by Nvidia found that nearly half of all inference performed with the P4 in the span of the past two years was related to videos, followed by speech processing, search, and natural language and image processing.
Unlike the Pascal-based P4, the T4 utilizes the Turing Tensor Core for GPUs, an architecture expected to fuel a series of Nvidia chips that Huang referred to as the “greatest leap since the invention of the CUDA GPU in 2006.” Since making its debut last month, the Turing architecture has also been utilized to power GeForce RTX graphics chips for real-time ray tracing in video games.
The news was announced onstage today by Nvidia CEO Jensen Huang in a presentation at the GTC conference in Japan.
Also announced today was the launch of the TensorRT Hyperscale Inference Platform, an upgrade to TensorRT that includes the inference-optimizing TensorRT 5, and the NVIDIA TensorRT inference server, containerized software that works with popular frameworks like TensorFlow and can integrate with Kubernetes and Docker.
" |
16,227 | 2,019 | "Huawei's Kirin 990 5G promises up to 2.3Gbps download speeds | VentureBeat" | "https://venturebeat.com/2019/09/06/huaweis-kirin-990-5g-promises-up-to-2-3gbps-download-speeds" | "Huawei's Kirin 990 5G promises up to 2.3Gbps download speeds A mock-up of the Kirin 990 5G system-on-chip.
Despite regulatory setbacks and fiercer competition than ever before, Huawei remains the second-largest smartphone vendor in the world by shipment volume. But phones are just one of many segments the Beijing-based company has invested billions in, with gadgets like tablets, set-top boxes, laptops, and headphones coming in a close second.
Then there’s its chip business. Huawei’s Shenzhen-based, wholly owned fabless semiconductor division, HiSilicon, has grown to become one of the largest integrated circuit designers in China over the last 15 years. Its Kirin lineup competes against chipsets from the likes of Qualcomm, MediaTek, and other Arm licensees, and the newest member of the family — the Kirin 990 5G — is a veritable chart-topper. In point of fact, Huawei says it’s the most powerful processor on the market and that it’s best in class with respect to power efficiency.
A revamped architecture The Kirin 990 5G, which like the Kirin 980 is manufactured on a 7nm process but with extreme ultraviolet lithography, boasts 10.3 billion transistors in all. (That's up from 6.9 billion in the Kirin 980 and 5.5 billion in the Kirin 970.) Contributing to the uptick is a refreshed eight-core architecture with two Cortex-A76 high-performance cores for demanding computation, two Cortex-A76 "middle cores" that juggle everyday workloads, and four Cortex-A55 efficiency cores that field light tasks like music playback and file transfers.
The cores themselves are identical to those in the Kirin 980, but their clock speeds have been increased slightly. The high-performance cores now hit 2.86GHz versus 2.6GHz, while the middle cores reach 2.36GHz compared with 1.92GHz and the efficiency cores get up to 1.95GHz versus the previous maximum of 1.8GHz. According to Huawei, this together confers a 10% single-core and 9% multi-core performance advantage over Qualcomm's Snapdragon 855.
Above: Richard Yu, CEO of Huawei's Consumer Business Group, speaking at IFA 2019. Rarely are all eight of the Kirin 990 5G's cores used simultaneously. Instead, a "flexible scheduling mechanism" ramps them up individually as needed — one efficiency core for music decoding, for example, or three middle cores for turn-by-turn navigation. Demanding apps like high-end games lean on four efficiency cores and two middle cores, or some combination of high-performance cores and middle cores.
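As a rough illustration of that idea (and not Huawei's actual scheduler), the dispatch policy can be sketched as a lookup from estimated workload intensity to a core cluster:

```python
# Simplified sketch of a tri-cluster scheduling policy, loosely modeled on the
# behavior described above. Cluster specs and thresholds are illustrative only.
CLUSTERS = {
    "efficiency":  {"cores": 4, "max_ghz": 1.95},  # light tasks
    "middle":      {"cores": 2, "max_ghz": 2.36},  # everyday workloads
    "performance": {"cores": 2, "max_ghz": 2.86},  # demanding computation
}

def assign_cluster(intensity: float) -> str:
    """Map a 0.0-1.0 workload-intensity estimate to a core cluster."""
    if intensity < 0.3:
        return "efficiency"    # e.g., music decoding
    if intensity < 0.7:
        return "middle"        # e.g., turn-by-turn navigation
    return "performance"       # e.g., high-end gaming

for task, intensity in [("music playback", 0.1), ("navigation", 0.5), ("gaming", 0.9)]:
    cluster = assign_cluster(intensity)
    print(f"{task}: {cluster} cluster, up to {CLUSTERS[cluster]['max_ghz']} GHz")
```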
There's a powerful 16-core graphics chip inside the Kirin 990 5G that's paired with the eight-core processor: the Mali-G76. It packs six more cores than the Mali-G76 in the Kirin 980 and a cache that serves to reduce bandwidth usage and power draw. These and other improvements enable the new Mali-G76 to leapfrog the Snapdragon 855's Adreno 640 chip, according to Huawei, by 6% in terms of overall performance and 20% in efficiency.
When it comes to photography, the Kirin 990 5G features an improved dual image signal processor (ISP) that Huawei says is up to 15% more power-efficient than its predecessor. It’s capable of reducing noise in still images by up to 30% and in videos by up to 20%, thanks to block-matching and 3D filtering (BM3D) and dual-domain reduction techniques, the latter of which takes into account a video’s spatial and frequency domain in identifying and zapping artifacts. Huawei characterizes its performance as “DSLR-level.” Supercharged AI Huawei’s Neural Processing Unit (NPU) made its first appearance at IFA 2017, where it debuted in the Kirin 970. Boiled down to basics, it’s a coprocessor optimized for the sort of vector math that’s the lifeblood of machine learning frameworks like Facebook’s Caffe2 and Google’s TensorFlow. Microsoft’s Translator app taps into it for tasks like scanning and translating words in pictures, and Huawei says its heterogeneous computing structure — HiAI — automatically distributes voice recognition, natural language processing, and computer vision workloads across it dynamically.
Qualcomm’s AI Engine, Samsung’s own Neural Processing Unit, and Apple’s A12 Bionic achieve the same ends through different means, but Huawei claims its homegrown Da Vinci architecture — the evolution of the NPU in the Kirin 970 and 980, which were designed by Cambricon — is far and away the most capable. To this end, it delivers tensor computation under half-precision FP16 and INT8 data types courtesy of two “big” cores (up from the Kirin 980’s single big core) for high-intensity workloads and one “tiny” core for less-intensive computation. A single tiny core is up to 24 times more efficient than a big core for tasks like facial recognition, according to Huawei.
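The FP16/INT8 detail matters because lower-precision math is where NPUs save most of their power and memory bandwidth. As a generic illustration (not Huawei's Da Vinci implementation), symmetric INT8 quantization of a tensor looks like this:

```python
# Generic sketch of symmetric INT8 quantization of an FP32 tensor.
# This illustrates the data-type trade-off, not Huawei's Da Vinci implementation.
import numpy as np

def quantize_int8(tensor: np.ndarray):
    scale = np.abs(tensor).max() / 127.0           # map the largest value to +/-127
    q = np.clip(np.round(tensor / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q_weights, scale = quantize_int8(weights)
max_error = np.abs(weights - dequantize(q_weights, scale)).max()
print("max reconstruction error:", max_error)      # small, at a quarter of FP32 storage
```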
Concretely, Huawei claims the Kirin 990 5G is up to 4.76 times faster than the Kirin 980 on the ETH AI benchmark, an Android app developed by research scientists at ETH Zurich to measure the performance and memory limitations associated with running computer vision algorithms on mobile devices. Perhaps unsurprisingly, the company also says the Kirin 990 5G comes out ahead on all other typical AI benchmarks tested, including MobileNet (Int8) and Resnet50 (FP16).
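Benchmarks of this kind boil down to timing repeated on-device inference runs. A bare-bones version of that measurement, assuming TensorFlow is installed and a float32 TensorFlow Lite model file is available locally (the file name below is a placeholder), might look like this:

```python
# Bare-bones on-device inference timing, in the spirit of mobile AI benchmarks.
# Assumes a float32 TensorFlow Lite model; "mobilenet.tflite" is a placeholder path.
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="mobilenet.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

dummy = np.random.random_sample(tuple(inp["shape"])).astype(np.float32)
runs = 50
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()
    _ = interpreter.get_tensor(out["index"])
elapsed = time.perf_counter() - start
print(f"average latency: {1000 * elapsed / runs:.2f} ms per inference")
```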
Huawei also touts the breadth of the Kirin 990 5G’s computer vision model support, arguing it accelerates 90% of the commonly used computer vision algorithms — including Inception, Deep Lab, VDSR, VGG, and MobileNet-SSD — more effectively than any rival AI chip.
5G The 5G chip arms race kicked off in earnest last year with the announcement of Qualcomm's first 5G solution for mobile devices, and Huawei claims it has in that time managed to far surpass competitors from a technological standpoint. Case in point? The Kirin 990 5G will be one of the first all-in-one, full-frequency 5G chipsets to market later this year, on a die area that's roughly 36% smaller than rival products demonstrated to date. It will support four sub-6GHz antennas in total and both non-standalone (NSA) and standalone (SA) architectures as well as TDD/FDD full frequency bands, but not millimeter-wave.
The chip's performance won't disappoint if benchmark results are to be believed. Huawei claims the Kirin 990 5G's machine learning-based adaptive receiver and split uplink design boost download and upload speeds in "high-movement" weak-signal environments like cars, train stations, and buses, delivering up to 5.8 times the max upload speed of leading chipsets in preliminary testing. In spots with stronger signals, it's theoretically capable of reaching downlink and uplink rates of up to 2.3Gbps and 1.25Gbps, respectively.
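To put those theoretical peaks in everyday terms, a quick back-of-the-envelope calculation shows what a 2.3Gbps downlink means for download times; real-world throughput will be lower:

```python
# Back-of-the-envelope download times at the quoted theoretical peak rate.
def seconds_to_download(size_gb: float, link_gbps: float) -> float:
    bits = size_gb * 8e9                 # decimal gigabytes -> bits
    return bits / (link_gbps * 1e9)      # bits / (bits per second)

for size_gb in (1, 5, 50):               # roughly an album, a movie, a large game
    print(f"{size_gb} GB at 2.3 Gbps: {seconds_to_download(size_gb, 2.3):.1f} s")
```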
Availability There's no doubt about it: The Kirin 990 5G is Huawei's most ambitious chip yet. Only time will tell how it performs in the real world, of course, but we shouldn't have long to wait. Huawei confirmed that the chip will feature prominently in the company's upcoming Mate 30 series, which will be announced at an event in Munich in September.
One thing’s sure: Huawei has Qualcomm firmly in its crosshairs. And if Huawei can deliver on its promises, it might just best its San Diego rival at its own game.
" |
16,228 | 2,019 | "Samsung debuts Galaxy S11-ready 5G Exynos Modem 5123 and Exynos 990 SoC | VentureBeat" | "https://venturebeat.com/2019/10/23/samsung-debuts-galaxy-s11-ready-5g-exynos-modem-5123-and-exynos-990-soc" | "Samsung debuts Galaxy S11-ready 5G Exynos Modem 5123 and Exynos 990 SoC
Just last month, Samsung introduced the Exynos 980 system-on-chip (SoC), combining a CPU, GPU, and 5G modem onto a single energy-efficient die for midrange to premium phones. Now the company is announcing a “premium” alternative named Exynos 990, alongside a separate new modem called the 5G Exynos Modem 5123 — components that could wind up in some of next year’s Galaxy S11 series phones.
The 5G Exynos Modem 5123 is noteworthy because it includes support for both sub-6GHz and millimeter wave 5G networks, plus legacy support for 2G, 3G, and 4G networks. Under ideal circumstances — using ultra-dense 1024-QAM signal encoding — it can wring up to 3Gbps download speeds out of compatible 4G networks, while 8-carrier aggregation enables 5.1Gbps peak speeds on sub-6GHz 5G networks, or 7.35Gbps from mmWave 5G. While most users won’t connect to 4G networks with those sorts of speeds, they’ll likely be able to enjoy Gigabit-plus downloads on 5G towers.
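Those headline figures follow from modulation density and carrier aggregation: 1024-QAM packs log2(1024) = 10 bits into each symbol, and aggregating more carriers multiplies throughput. The arithmetic below is illustrative; the per-carrier rate is an assumed round number, not Samsung's published link budget.

```python
# Rough link-throughput arithmetic: modulation order and carrier aggregation.
# The per-carrier rate below is an illustrative assumption, not Samsung's figure.
import math

def bits_per_symbol(qam_order: int) -> int:
    return int(math.log2(qam_order))       # 1024-QAM -> 10 bits, 256-QAM -> 8 bits

print("256-QAM bits/symbol: ", bits_per_symbol(256))
print("1024-QAM bits/symbol:", bits_per_symbol(1024))

per_carrier_gbps = 0.64                    # assumed illustrative per-carrier rate
for carriers in (2, 4, 8):
    print(f"{carriers}-carrier aggregation: ~{carriers * per_carrier_gbps:.1f} Gbps peak")
```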
Built on the latest 7-nanometer process rather than the 980’s older 8-nanometer technology, the Exynos 990 is more powerful but lacks integrated 5G functionality. While preserving the basic octa-core design, Samsung has shifted from the 980’s twin Arm Cortex-A77s and six Cortex-A55s to two unnamed but “powerful custom cores,” two high-performance Cortex-A76 cores, and four power-efficient Cortex-A55s. On the graphics front, Samsung has upgraded from a Mali-G76 to a Valhall-based Mali-G77 GPU, offering either a 20% boost in graphics performance or power efficiency, as developers prefer.
The Exynos 990’s AI performance is claimed to be “top-class,” though Samsung isn’t making many direct numerical comparisons to the 980’s neural processor. The company says the dual-core neural processing unit and improved DSP can “perform over 10 trillion operations per second,” which might surpass Apple’s A13 Bionic neural engine by a factor of two — assuming the numbers aren’t being combined in an incomparable way. Samsung also touts an image signal processor that can concurrently process data from three image sensors while handling up to six in total, with up to 108-megapixel resolution support.
When the Exynos 990 and Modem 5123 work together, the modem's impressive top speeds are supported by LPDDR5 data rates of up to 5.5Gbps, matching the company's latest 12Gb DRAM chips. There's also a 120Hz refresh-rate display driver, akin to the ProMotion speeds offered by Apple's iPad Pro (but not iPhone 11 Pro) screens. Samsung expects that the new display driver will improve animations and reduce screen tearing on multi-display devices, including foldable phones.
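Note that 5.5Gbps is a per-pin data rate; total memory bandwidth depends on the bus width, which isn't specified here. Assuming a typical 64-bit mobile memory bus, the arithmetic works out as follows:

```python
# Per-pin LPDDR5 data rate -> aggregate bandwidth, assuming a 64-bit memory bus.
# The bus width is an assumption for illustration; it is not specified above.
per_pin_gbps = 5.5
bus_width_bits = 64
total_gbps = per_pin_gbps * bus_width_bits
print(f"{total_gbps:.0f} Gb/s, or about {total_gbps / 8:.0f} GB/s of peak memory bandwidth")
```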
Both of the new chips are going into mass production by the end of 2019, which means they could well be ready in time for next year’s premium Samsung phones, such as certain models of the Galaxy S11, though the company hasn’t specified its consumer intentions for either chip yet. Samsung historically uses its own SoCs and modems in phones shipped in certain markets, while relying on comparably equipped Qualcomm Snapdragon chips for the U.S. and other markets. The new Exynos modem’s millimeter wave support may enable new Galaxy phones to work in the U.S., where the higher-speed, shorter-distance cellular technology is at the core of most early 5G networks. Or Samsung may opt for an upcoming Snapdragon SoC with an integrated 5G modem for greater power savings.
" |
16,229 | 2,019 | "MediaTek's first 5G SoC is the AI-heavy, power-sipping Dimensity 1000 | VentureBeat" | "https://venturebeat.com/2019/11/25/mediateks-first-5g-soc-is-the-ai-heavy-power-sipping-dimensity-1000" | "MediaTek's first 5G SoC is the AI-heavy, power-sipping Dimensity 1000 MediaTek's Dimensity 1000 is its first 5G SoC in a planned family of 5G chips.
It’s no surprise that MediaTek has been working on a 5G system-on-chip (SoC) for some time , with plans to offer a Qualcomm Snapdragon 855 alternative for higher-end 5G devices. Today, MediaTek officially named the chip Dimensity 1000 and provided specs for what it says will be the first in a family of 5G SoCs.
The upshot is that Dimensity 1000 will deliver “extreme energy efficiency” because all of its components are inside one chip, enabling phone makers to preserve internal space for bigger batteries or larger camera sensors. Moreover, the chip’s various cores are all high performers, with a new AI chip standing out most from the pack.
On the processing side, the 7-nanometer, octa-core CPU will include four ARM Cortex-A77 cores with up to 2.6GHz speeds, plus four Cortex-A55 cores running at up to 2GHz. It will be coupled with an ARM Mali-G77 GPU, supporting 120Hz FullHD+ screens, or 90Hz 2K+ displays, as well as AV1 streaming media format support, and either one 80MP/24fps camera or lower resolution multi-camera options.
Dimensity 1000 will also include a new AI processing unit called APU 3.0, which the company says will reach an ETH Zurich AI Benchmark score of 55,828, more than doubling the performance of its prior-generation APU. The chip will aid with camera focus, exposure, white balance, noise reduction, HDR, and face detection, plus "the world's first multi-frame video HDR capability." In terms of wireless capabilities, the company claims that Dimensity 1000 will deliver the fastest throughput of a sub-6GHz SoC, with 4.7Gbps downlink and 2.5Gbps uplink speeds, while also being the first to support dual 5G SIM technology. The chip promises to aggregate two 5G connections to achieve its download speeds, as well as offering 2G to 5G network compatibility.
That said, it won't include millimeter wave support, which means it's intended solely for "global sub-6GHz 5G networks that are launching in Asia, North America, and Europe." But it will be capable of handling 1Gbps Wi-Fi 6 downloads and uploads, as well as rapidly toggling between cellular and Wi-Fi to deliver the best, lowest-latency connection for gaming. MediaTek also says the SoC will support "Bluetooth 5.1+," suggesting the potential of a firmware upgrade beyond the current standard.
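Conceptually, that cellular/Wi-Fi toggling comes down to steering traffic onto whichever link currently reports the better latency. A toy selector along those lines (purely illustrative, not MediaTek's actual logic) might look like this:

```python
# Toy link selector: route latency-sensitive traffic over the currently faster link.
# Purely illustrative; this is not MediaTek's actual connection-management logic.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float
    throughput_mbps: float

def pick_link(links, max_latency_ms=50.0):
    usable = [l for l in links if l.latency_ms <= max_latency_ms]
    candidates = usable or links            # fall back if nothing meets the bound
    return min(candidates, key=lambda l: l.latency_ms)

links = [Link("5G sub-6GHz", 18.0, 1200.0), Link("Wi-Fi 6", 9.0, 950.0)]
print("route gaming traffic over:", pick_link(links).name)
```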
Dimensity 1000 will be found in unspecified devices "later in 2019 and in early 2020." The company says that future Dimensity family chips will be intended for "premium and flagship smartphones."
" |
16,230 | 2,019 | "Qualcomm reveals 5G-ready Snapdragon 865, 765, and 765G | VentureBeat" | "https://venturebeat.com/2019/12/03/qualcomm-reveals-5g-ready-snapdragon-865-765-and-765g" | "Qualcomm reveals 5G-ready Snapdragon 865, 765, and 765G Qualcomm's new Snapdragon 765 and 865 processors are shown next to a U.S. penny coin.
Qualcomm’s annual Tech Summit is underway right now in Maui, Hawaii, and as expected, the company is teasing some major new chipsets: the flagship-class Snapdragon 865 and two variations on a midrange option, called the Snapdragon 765 and 765G. The new chips were introduced on stage by Alex Katouzian, Qualcomm’s general manager for mobile.
While full details will be made public tomorrow, the collective aim of the mobile chips is to "lead and scale 5G and AI in 2020." Qualcomm will offer 5G phone makers the choice between bleeding-edge performance and more affordable options, including one designed to appeal to gamers.
The most widely anticipated chipset is the Snapdragon 865, which the company is simply billing as “the world’s most advanced 5G platform,” designed for flagship smartphones and other devices. Notably, while the 865 promises “unmatched connectivity,” Qualcomm says that the platform includes the company’s standalone Snapdragon X55 5G Modem and RF System rather than a fully integrated modem, and will deliver “truly global 5G.” This suggests that 5G won’t be a mandatory feature for Snapdragon 865 products. It could also give Qualcomm the opportunity to offer an upgraded 5G modem, whenever one is announced in the future. Unlike flagship Snapdragon CPUs, the company has no specific cadence for modem announcements; it introduced the Snapdragon X55 in mid-February 2019 and its predecessor X50 in October 2016. A third-generation modem is known to be in the works, but performance details are fuzzy at best.
Additional Snapdragon 865 features teased without details include 8K video capture and gigapixel image processing speeds. There will also be a fifth-generation AI engine with natural language processing abilities, and 15 trillion operations per second (TOPS).
By comparison, the Snapdragon 765 and 765G are said to include “integrated 5G connectivity” and “advanced AI processing,” with differentiated Snapdragon Elite Gaming experiences. The “G” designation hints that 765G will be a higher performer for gaming devices, akin to this year’s mid-cycle Snapdragon 855+ , which powered several gaming phones.
Additionally, Qualcomm revealed that the 765 chips will have an integrated Snapdragon X52 modem with up to 3.7Gbps downloads, supporting both sub-6GHz and millimeter wave networks. This is notably slower than the peak speeds of Snapdragon X55 modems, but a lot faster than top 4G modems. There will also be a 5 TOPS version of the fifth-generation AI chip, as well as 4K HDR video capture.
Qualcomm will be sharing full details on the 865, 765, and 765G platforms tomorrow, and we’ll be covering the announcements live. The company expects to offer a stream of the reveals here.
" |
16,231 | 2,020 | "Qualcomm details Snapdragon 865 and 765, promises devices in Q1 2020 | VentureBeat" | "https://venturebeat.com/2019/12/04/qualcomm-details-snapdragon-865-and-765-promises-devices-in-q1-2020" | "Qualcomm details Snapdragon 865 and 765, promises devices in Q1 2020 Qualcomm's Snapdragon 865 processor.
After yesterday’s announcement of names and bare concept outlines for its upcoming Snapdragon 865 and 765 mobile platforms , Qualcomm today detailed how the upcoming chipsets will perform, as well as how they’ll differ in 5G features. It also offered an expected on-sale date for the first consumer devices using both platforms: the first quarter of 2020.
Designed for flagship-class devices, the Snapdragon 865 includes all of Qualcomm’s latest wireless and processor components, including a new 2.84GHz Kryo 585 CPU, Adreno 650 GPU, fifth-generation AI engine, and Spectra 480 image signal processor (ISP). Collectively, the improvements pave the way for next-generation Android phones with major leaps forward in camera, graphics, and networking performance.
During the company’s Snapdragon Summit event, Qualcomm engineering SVP Christopher Patrick discussed how the Snapdragon 865 began development three years ago, with features locked in two years ago, then final design and testing taking place in the final year — plus work with OEMs to place the chips inside actual shipping devices. At every step, the chip design and engineering needed to contemplate the battery and space constraints of ever-thinning devices. Despite that long timeline, features such as 8K video recording, 200-Megapixel still image photography, and the like are still cutting edge today.
One of the 865’s biggest improvements is the Spectra 480 ISP, which promises up to 2 gigapixels per second of processing speed for dramatically higher-resolution photography and videography. The chip supports up to 200-megapixel still photos, roughly twice the Snapdragon 855’s upper limits, as well as 8K video capture and unlimited HD 960fps slow-motion capture. Paired with the right camera, the 865 is capable of live-capturing Dolby Vision 4K HDR footage, a first for mobile devices, and can simultaneously snap 64-megapixel photos while recording 4K video.
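The 2-gigapixels-per-second figure is easier to appreciate next to the pixel rates the ISP has to sustain; 8K video at 30 frames per second alone is roughly one gigapixel per second. The frame sizes and rates below are standard values used purely to illustrate the arithmetic:

```python
# Pixel-rate arithmetic behind the ISP throughput claim.
def gigapixels_per_second(width: int, height: int, fps: int) -> float:
    return width * height * fps / 1e9

print("8K @ 30 fps:", round(gigapixels_per_second(7680, 4320, 30), 2), "gigapixels/s")
print("4K @ 60 fps:", round(gigapixels_per_second(3840, 2160, 60), 2), "gigapixels/s")
# A 200-megapixel still every tenth of a second would also hit 2 gigapixels/s.
print("200 MP every 0.1 s:", round(200e6 / 0.1 / 1e9, 2), "gigapixels/s")
```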
Above: Qualcomm's Snapdragon 865 5G Mobile Platform.
The Snapdragon 865’s GPU improvements are tangible, including the first mobile support for 144Hz HDR display refresh rates and between 20% and 100% improved graphics performance compared with the Snapdragon 855 , depending on the specific feature. At 90Hz, the new GPU achieves a 35% power efficiency improvement over the prior chip.
But there’s an even bigger announcement: Android is adding support for updatable GPU drivers, allowing OEMs to release improved GPU settings through on-device app stores. Qualcomm also says that the 865 will deliver forward rendering abilities akin to PCs, including support for lighting and post-processing visual effects that were previously only available on dedicated computers, as well as better sustained performance over long gaming sessions.
There’s also an optional Sensing Hub and Low Power Camera chip solution with under 1 milliamp of power draw for voice wakeup word sensing, and under 1 milliwatt of power draw for an optional always-on camera feature. OEMs can choose whether or not to include this in the Snapdragon 865 or 765, enabling devices to act like a person resting on the beach — capable of responding to noises and opening their eyes when necessary, but mostly relaxing. It can also use audio and video input to determine contextually where you are, such as in your car, exercising, or at home, and change the device’s performance to match the setting.
On the AI front, Qualcomm’s fifth-generation AI Engine, Hexagon 698, delivers 15 trillion operations per second (TOPS) — twice as powerful as the company’s fourth-generation processor, and roughly equivalent to the AI capabilities of MediaTek’s Dimensity 1000 5G system-on-chip.
It now includes a Hexagon tensor accelerator that’s four times faster than before, yet 35% more power efficient. Four threads can run in vector or scalar processing. The new AI Engine can be used for real-time translation of spoken dialogue into either a foreign language text or speech, as well as fusing multi-sensor, multi-input data streams to provide contextual data for voice assistance. It also includes new lossless 50% compression of deep learning bandwidth, freeing up the SoC for other tasks.
Using the company’s Snapdragon X55 modem , the 865 will support global 5G roaming and multi-SIM devices, connecting with all key regions and bands, including both millimeter wave and sub-6GHz frequencies. In addition to promising peak 5G speeds of up to 7.5Gbps, the 865 platform includes a previously-announced FastConnect 6800 Wi-Fi chip (14-nanometer) capable of delivering Wi-Fi 6 speeds nearing 1.8Gbps, while including Super Wide Band voice over Bluetooth for higher-quality audio communications, and 75% improved power efficiency.
There are also two major security-related improvements in Snapdragon 865. Qualcomm announced that, working in collaboration with Google, the chip will support secure driver's license and ID card storage, starting with Android R, so you can just carry your phone around rather than worrying about the last major physical card in your wallet. It also includes a device attestation feature for high-value transactions, enabling a bank to know that both the person and device are authenticated to facilitate a purchase.
Above: Qualcomm’s Snapdragon 765 5G Mobile Platform.
By comparison, the Snapdragon 765 and 765G are two midrange variants, the former made for mainstream smartphones, the latter with higher-performing graphics cores for gaming. Qualcomm is framing their 5G and AI performance as best-in-class, but in reality, many of the key features are unsurprisingly capped at sub-Snapdragon 865 levels. The performance improvements will be obvious to consumers of prior-generation midrange smartphones, but won’t wow flagship device users.
For instance, you get 4K HDR video capture capabilities, but not 8K, as well as the fifth-generation AI Engine mentioned above, albeit performing at a much lower peak than 15 TOPS. Similarly, both models feature an integrated Snapdragon X52 5G modem with 3.7Gbps peak download and 1.6Gbps peak upload speeds, plus global 5G roaming, multi-SIM support, and compatibility with both sub-6GHz and millimeter wave towers. There's also a Kryo 475 octa-core CPU with up to 2.3GHz peak speeds, plus Wi-Fi 6 and Bluetooth 5 support.
The key difference is in the Adreno 620 GPU. Snapdragon 765G promises 20% better graphics performance than the 765, thanks to binning of faster-specced chips. Thus far, Qualcomm is saying only that the 765G will provide “special game extensions and optimizations, smoother gameplay and more enhanced detail and colors” for “select premium tier experiences to gamers.” On the AI front, the 765G will deliver 5.5 TOPS of AI performance, and Qualcomm expects that it will be used for upscaling HD video to 4K, improving charging speeds, and offering contextually aware voice command sensing without excessively draining the battery.
Devices including the new processors will begin to hit the market early in 2020. The Snapdragon 865 will likely begin to show up in benchmarks soon as the SM8250, while the 765 will be the SM7250-AA and the 765G will be the SM7250-AB.
" |
16,232 | 2,019 | "The RetroBeat: Learning to love PlayStation after years of bitterness | VentureBeat" | "https://venturebeat.com/2019/12/05/the-retrobeat-learning-to-love-playstation-after-years-of-bitterness" | "The RetroBeat: Learning to love PlayStation after years of bitterness PlayStation.
I wish I appreciated the original PlayStation back when it was new.
Sony’s first console celebrated its 25th anniversary on December 3. When the PlayStation brand started back in 1994, gaming was a different world. I was also only 8 years old, so I was pretty different myself.
I was a fanboy, and I was all about Sega. I even slept with a Sonic the Hedgehog doll. A Sonic the Hedgehog 2 poster hung in my bedroom. I had a freaking 32X. I told everyone that Ecco the Dolphin was one of the greatest games ever even though I couldn’t get past the second level.
I was used to Nintendo being the enemy, but then along came Sony. And while I held some childish hostility toward Nintendo, I soon had a much better reason to hate the new PlayStation. It killed Sega.
Above: This controller used to make me so angry.
Sad for Saturn At least that’s what I told myself. Now I understand that Sega’s undoing was largely its own fault. The Saturn was an expensive, clunky system that put developers through hell if they wanted to make 3D games … and in 1995, everyone wanted to make 3D games. But as the brat I was, all I saw were all of my friends buying PlayStations while the Saturn soon disappeared into obscurity.
For years, I only associated the PlayStation brand with Sega’s death. That feeling intensified in 2000, when the PlayStation 2 became a quick hit and Sega’s final console, the Dreamcast, soon suffered its own death. It was hard for me to get past that resentment and associate any good feelings toward PlayStation.
Which was silly, because I was playing fantastic PlayStation games all of the time.
Tony Hawk’s Pro Skater was so much fun that I (and every other young boy in the country) considered a career in extreme sports. Metal Gear Solid was the most cinematic gaming experience I had ever seen. Twisted Metal 2 offered some of the most entertaining couch multiplayer I have ever experienced.
But while I would love PlayStation games, I couldn’t forgive the brand. My hard feelings over Nintendo during the 16-bit console wars eventually cooled. Soon, my room was filled with memorabilia celebrating both Sega and Nintendo. I had little figurines of Mario and Sonic standing side by side on my computer desk. PlayStation had no representation.
And it stayed that way for years. I'd still own PlayStation consoles and games. I'd love many of them. But I could never call myself a PlayStation guy.
That sentiment only changed recently. I’m finally feeling nostalgia for the PlayStation. And a lot of that happened because of a device everyone else seemed to hate: the PlayStation Classic.
Above: The PlayStation Classic.
Nostalgia at last Yes, the micro-console has problems. Its library is missing a lot of notable games, including Tony Hawk's Pro Skater, and Sony's bizarre decision to run the European versions of many titles made those games feel sluggish.
But I had fun playing the PlayStation Classic anyway, and it made me appreciate the system’s low polygon, sharp-edged aesthetic. Even aged games like Battle Arena Toshinden are full of personality. I also discovered PlayStation games that I wish I played back when they were new, like Ridge Racer Type 4, which I now realize has one of the best soundtracks of all time.
As I went through the PlayStation Classic library, I began to realize something for the first time. I like the PlayStation. I like its gray color. I like its weird symbols for the face buttons. I even like its logo. Somehow, all of the hate had dissolved. I finally moved on and no longer associate PlayStation with the death of Sega's console business.
So, while Sony may be celebrating 25 years of PlayStation, I’m toasting to a much shorter period of time as a fan of the brand. But all the same, happy birthday, PlayStation.
The RetroBeat is a weekly column that looks at gaming’s past, diving into classics, new retro titles, or looking at how old favorites — and their design techniques — inspire today’s market and experiences. If you have any retro-themed projects or scoops you’d like to send my way, please contact me.
" |
16,233 | 2,020 | "Apple hints at several week iPhone 12 delay during Q3 2020 call | VentureBeat" | "https://venturebeat.com/2020/07/30/apple-hints-at-several-week-iphone-12-delay-during-q3-2020-call" | "Apple hints at several week iPhone 12 delay during Q3 2020 call A conceptual rendering of the iPhone 12 Pro by Ben Geskin and Aziz Ghaus.
Apple’s fiscal Q3 2020 was another one for the record books , surpassing analysts’ expectations despite pandemic-related fears of a global recession. But the next quarter might not be as rosy, Apple CFO Luca Maestri hinted today on a conference call with analysts , as COVID-19-related delays could keep the company’s hugely important iPhone lineup from being updated with new models during the traditional September time frame.
“This is an immensely challenging moment,” Apple CEO Tim Cook told analysts, noting that COVID-19 and racial justice issues loom large over the country, though he and Maestri spotlighted the company’s strong performance and financial resilience despite the troubled U.S. economy. But in subsequent comments, Maestri said the launch of new iPhone models would likely come a few weeks later than in the past, leaving the $399 2020 iPhone SE as the newest model until then. Asked for clarification, Maestri underscored that new iPhones had previously launched in late September and suggested that this year’s launch would be several weeks later.
Modem developer Qualcomm obliquely flagged the potential for a “partial impact” on upcoming quarterly earnings during a conference call this week, attributable to “the delay of a 5G flagship phone launch” by an unnamed OEM. Under U.S. securities laws, publicly traded companies are obliged to disclose advance knowledge of facts that might materially impact their financial performance in the upcoming quarter — an obligation that likely contributed to Apple’s decision to flag the issue today.
Rumors of delays for the new iPhones — believed to include iPhone 12, iPhone 12 Pro, and iPhone 12 Pro Max — have floated for months, though the specific reasons have remained unclear and the consensus time frame for the release has generally been “October.” Depending on whether Apple launches the phones in October or early November, the difference could be trivial or resemble late 2017’s famously odd iPhone X launch.
Early sales of new iPhones, including pent-up demand reflected in the first wave of preorders, are typically included in mid-to-late September revenues.
Apple’s CPU manufacturer TSMC has suggested that it’s on track to deliver 5-nanometer A14 processors for the new phones, which are also expected to use Qualcomm’s already released Snapdragon X55 modems rather than the newer but still unreleased Snapdragon X60.
It’s possible the delays are attributable to last-minute iPhone testing challenges, such as real-world trials of prototype devices during COVID-19 lockdowns and/or Apple’s decision to use custom Broadcom antenna and power amplifier components.
It remains to be seen whether Apple will hold a September event to introduce new iPhone and Apple Watch models, as it has done in the past, or will delay the announcements until October. The company notably turned its 2020 WWDC gathering into an entirely virtual show and pushed it back roughly two weeks from its traditional time. The company generally received praise for the new format, and its regular cadence of announcements has suffered little impact.
" |
16,234 | 2,020 | "Huawei will stop producing Kirin chipsets on September 15 | VentureBeat" | "https://venturebeat.com/2020/08/08/huawei-to-stop-making-kirin-chipsets-on-september-15" | "Huawei will stop producing Kirin chipsets on September 15 A mock-up of the Kirin 990 5G system-on-chip.
(Reuters) — Huawei will stop making its flagship Kirin chipsets next month, financial magazine Caixin said on Saturday, as the impact of U.S. pressure on the Chinese tech giant grows. U.S. pressure on Huawei's suppliers has made it impossible for the company's HiSilicon chip division to keep making the chipsets, key components for mobile phones, Richard Yu, CEO of Huawei's Consumer Business Unit, was quoted as saying at the launch of the company's new Mate 40 handset.
With U.S.-China relations at their worst in decades, Washington is pressing governments around the world to squeeze Huawei out, arguing it would hand over data to the Chinese government for spying. Huawei denies it spies for China. The United States is also seeking the extradition from Canada of Huawei's chief financial officer, Meng Wanzhou, on charges of bank fraud.
In May, the U.S. Commerce Department issued orders that required suppliers of software and manufacturing equipment to refrain from doing business with Huawei without first obtaining a license.
“From September 15 onward, our flagship Kirin processors cannot be produced,” Yu said, according to Caixin.
“Our AI-powered chips also cannot be processed. This is a huge loss for us.” Huawei’s HiSilicon division relies on software from U.S. companies such as Cadence Design Systems or Synopsys to design its chips, and it outsources the production to Taiwan Semiconductor Manufacturing, which uses equipment from U.S. companies. Huawei declined to comment on the Caixin report. TSMC, Cadence, and Synopsys did not immediately respond to email requests for comment.
HiSilicon produces a wide range of chips, including its line of Kirin processors, which power only Huawei smartphones and are the only Chinese processors that can rival those from Qualcomm in quality. "Huawei began exploring the chip sector over 10 years ago, starting from hugely lagging behind to slightly lagging behind to catching up, and then to a leader," Yu was quoted as saying. "We invested massive resources for R&D and went through a difficult process."
" |
16,235 | 2,020 | "Qualcomm: $125 5G phones and Snapdragon 8cx Gen 2 PCs are coming soon | VentureBeat" | "https://venturebeat.com/2020/09/03/qualcomm-expect-125-5g-phones-and-snapdragon-8cx-gen-2-pcs-in-2021" | "Qualcomm: $125 5G phones and Snapdragon 8cx Gen 2 PCs are coming soon
Europe’s annual technology show IFA has been stripped down due to the ongoing COVID-19 pandemic, but that isn’t stopping Qualcomm from making virtual announcements around the event. Unfortunately, you’ll have to wait a little while to see the new devices in action.
Arguably the biggest Qualcomm news at IFA is the company’s expansion of 5G from the midrange Snapdragon 6 series down to an even more affordable tier of phones. This translates to Snapdragon 4 series chips for devices priced from $125 to $250. That’s notably before any carrier subsidies, which means early next year 5G technology will become affordable for an estimated 3.5 billion users around the world, in many cases without contracts. Qualcomm expects phones at the higher end of that price range to sport fancier cameras and screens than the less expensive ones while remaining performant for apps and games.
Qualcomm is also revealing its next PC chipset, the Snapdragon 8cx Gen 2, which signals how smartphone DNA will shape upcoming laptops. The first-generation 8cx was announced in late 2018 and can now be found in Lenovo’s Flex 5G (aka Yoga 5G) laptop , where it enables Windows apps to run with full-day battery life and take advantage of Verizon’s nascent 5G network.
Unsurprisingly, the second-generation 8cx touts typical PC chip speed and performance advantages, promising up to 18% better "total system performance" and 39% better energy performance per watt compared with Intel's latest 10th-generation Core i5 CPU running at 15 watts. And there will once again be a 5G version, though any changes to it are unclear. The most interesting advances are in camera and video performance, which will collectively up the ante for business video conferencing.
As video calling continues to surge , 8cx Gen 2 machines will support 32-megapixel cameras with up to 4K video resolution and HDR, plus the bandwidth for dual 4K60 displays, including a 4K internal laptop screen. Assuming the conferencing software and wireless network are up to the task, the combination of high-resolution camera and screen tech could make calls look gorgeous. On the sonic side, Qualcomm is also including virtual surround sound support, plus its Aqstic echo cancellation and noise suppression features, enabling calls to sound more vivid and cleaner than ever before.
One somewhat overlooked area is the 8cx Gen 2’s dedicated AI engine, which doesn’t appear to be receiving the sort of huge leaps forward that accompanied the original 8cx announcement. Promising 9 TOPS, the AI system will support persistent eye contact in video conferencing, cartoony avatars for video chats as a more private alternative to live video streaming, and 7 times faster threat detection for certain cybersecurity applications — but none of these features is likely to be exclusive to the Gen 2 chipset.
Last up: Qualcomm is also announcing Adaptive ANC, an active noise cancellation feature coming to its premium QCC514x chipset for truly wireless Bluetooth earphones — a category that has been surging in recent months.
The new feature promises peak noise reduction of -25 decibels at certain frequencies, with -5 to -10 decibel reduction across much of the sonic spectrum, while automatically adapting to both the user’s current ear tip fit and environmental conditions. Wearers won’t need to worry about getting a perfectly tight seal, and the system will adjust to jogging and other activities.
No Adaptive ANC products are being announced today, but as Qualcomm research suggests 71% of consumers now want noise cancellation in their wireless earbuds , the technology should appear in multiple form factors soon enough.
Two companies are publicly announcing support for the Snapdragon 8cx Gen 2 today: Acer showed a Spin 7 convertible laptop slated to be available by year’s end, and HP is working on an unnamed business-focused PC. Motorola, Oppo, and Xiaomi are all planning to release 5G devices with Snapdragon 4 series chips in the first quarter of 2021, though specific chip and phone names aren’t being released today.
" |
16,236 | 2,020 | "Apple unveils A14 Bionic processor with 40% faster CPU and 11.8 billion transistors | VentureBeat" | "https://venturebeat.com/2020/09/15/apple-unveils-a14-bionic-processor-with-40-faster-cpu-and-11-8-billion-transistors" | "Apple unveils A14 Bionic processor with 40% faster CPU and 11.8 billion transistors Apple's A14 Bionic processor.
Apple today unveiled its new A14 Bionic processor with the aim of pushing ahead of other smartphone and tablet vendors on computing power and artificial intelligence processing. The new $600 iPad Air will use the ARM-based processor.
The new chip has 11.8 billion transistors and a 40% faster central processing unit (CPU). It also has 30% faster graphics and is made with a 5-nanometer manufacturing process, which is believed to be from chip contract manufacturer TSMC. The 5-nanometer measure refers to the width between circuits on the chip. That width is five billionths of a meter. The top smartphones on the market use 7-nanometer chips today. A smaller width is better because it enables the chips to be smaller, faster, and cheaper.
Apple made the announcement during its virtual event today. The chip has a 16-core neural engine that can execute 11 trillion AI operations per second. The neural engine has twice as many cores as the previous chip's and can perform machine learning computations 10 times faster. The A14 has six CPU cores and four graphics processing unit (GPU) cores.
"The A14 Bionic chip definitely pushed the boundaries," Kevin Krewell, an analyst at Tirias Research, said in an email to VentureBeat. "It uses a 5-nanometer process which is the first high volume 5-nanometer processor for mobile. The emphasis on machine learning with both the neural engine and the CPU ML acceleration shows Apple is on the leading edge of AI for personal computing." The machine learning tech can be used to improve photo resolution. The previous iPad Air used the A12 processor from two generations ago. The new tech enables a 7-megapixel camera with smart HDR, improved low-light performance, and 1080p video capture at 60 frames per second. The back camera has 12 megapixels and support for 4K at 60 frames per second. Battery life is about 10 hours.
The previous 7-nanometer A13 chip had 8.5 billion transistors, while the A12 had 6.9 billion and the A11 had 4.3 billion. Over time, Apple has been improving the AI performance of the chips with its neural engines and ML accelerators.
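For a rough sense of that cadence, the quoted transistor counts work out to the following generation-over-generation growth. This is just a back-of-the-envelope sketch in Python using only the numbers above:

```python
# Transistor counts (in billions) for Apple's recent A-series chips, as quoted above.
counts = {"A11": 4.3, "A12": 6.9, "A13": 8.5, "A14": 11.8}

chips = list(counts)
for prev, curr in zip(chips, chips[1:]):
    growth = (counts[curr] / counts[prev] - 1) * 100
    print(f"{prev} -> {curr}: +{growth:.0f}% transistors")
# A11 -> A12: +60%, A12 -> A13: +23%, A13 -> A14: +39%
```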
Apple is in the midst of shifting its Mac product line from Intel chips to its own internally designed processors based on the ARM architecture. That move will give Apple more control over its processor destiny and allow it to keep more profits that otherwise went to Intel. But the weekend news that Nvidia will buy Arm for $40 billion raises a question for Apple. Nvidia has pledged to openly license the ARM processor architecture to anyone who wants it, and I would expect Apple to continue licensing it.
" |
16,237 | 2,020 | "Qualcomm details Cloud AI 100 chipset, announces developer kit | VentureBeat" | "https://venturebeat.com/2020/09/16/qualcomm-details-cloud-ai-100-chipset-announces-developer-kit" | "Qualcomm details Cloud AI 100 chipset, announces developer kit
During its AI Day conference last April, Qualcomm unveiled the Cloud AI 100 , a chipset purpose-built for machine learning inferencing and edge computing workloads. Details were scarce at press time, evidently owing to a lengthy production schedule. But today Qualcomm announced a release date for the Cloud AI 100 — the first half of 2021, after sampling this fall — and shared details about the chipset’s technical specs.
Qualcomm expects the Cloud AI 100 to give it a leg up in an AI chipset market expected to reach $66.3 billion by 2025, according to a 2018 Tractica report.
Last year, SVP of product management Keith Kressin said he anticipates that inference — the process during which an AI model infers results from data — will become a "significant-sized" market for silicon, growing 10 times from 2018 to 2025. With the Cloud AI 100, Qualcomm hopes to tackle specific markets, such as datacenters, 5G infrastructure, and advanced driver-assistance systems.
The Cloud AI 100 comes in three flavors — DM.2e, DM.2, and PCIe (Gen 3/4) — corresponding to performance range. At the low end, the Cloud AI 100 Dual M.2e and Dual M.2 models can hit between 50 TOPS (50 trillion floating-point operations per second) and 200 TOPS, while the PCIe model achieves up to 400 TOPS, according to Qualcomm. All three ship with up to 16 AI accelerator cores paired with up to 144MB RAM (9MB per core) and 32GB LPDDR4x on-card DRAM, which the company claims outperforms the competition by 106 times when measured by inferences per second per watt, using the ResNet-50 algorithm. The Cloud AI 100 Dual M.2e and Dual M.2 attain 15,000 to 10,000 inferences per second at under 50 watts, and the PCIe hovers around 25,000 inferences at 50 to 100 watts.
Qualcomm says the Cloud AI 100, which is manufactured on a 7-nanometer process, shouldn't exceed a power draw of 75 watts. Here's the breakdown for each card:
Dual M.2e: 15 watts, 70 TOPS
Dual M.2: 25 watts, 200 TOPS
PCIe: 75 watts, 400 TOPS
The first Cloud AI 100-powered device — the Cloud Edge AI Development Kit — is scheduled to arrive in October. It looks similar to a wireless router, with a black shell and an antenna held up by a plastic stand. But it runs CentOS 8.0 and packs a Dual M.2 Cloud AI 100, a Qualcomm Snapdragon 865 system-on-chip, a Snapdragon X55 5G modem, and an NVMe SSD.
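Taking the per-card figures above at face value, efficiency is simple arithmetic; the sketch below just divides peak TOPS by the stated card power, so treat it as a theoretical upper bound rather than measured performance.

```python
# Peak TOPS and card power from the breakdown above; ratios are theoretical peaks.
cards = {
    "Dual M.2e": (70, 15),   # (peak TOPS, watts)
    "Dual M.2": (200, 25),
    "PCIe": (400, 75),
}

for name, (tops, watts) in cards.items():
    print(f"{name}: {tops / watts:.1f} peak TOPS per watt")
# Dual M.2e: 4.7, Dual M.2: 8.0, PCIe: 5.3
```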
The Cloud AI 100 and products it powers integrate a full range of developer tools, including compilers, debuggers, profilers, monitors, servicing, chip debuggers, and quantizers. They also support runtimes like ONNX, Glow, and XLA, as well as machine learning frameworks such as TensorFlow, PyTorch, Keras, MXNet, Baidu’s PaddlePaddle, and Microsoft’s Cognitive Toolkit for applications like computer vision, speech recognition, and language translation.
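Qualcomm's own compiler and deployment flow isn't documented here, but since ONNX is among the supported runtimes, a plausible first step for a developer would be exporting a trained model to ONNX, as sketched below with PyTorch and ResNet-50 (the model named in the benchmark above). The Cloud AI 100-specific compile and load steps are omitted because they aren't described in the article, so this is only an assumed front half of the workflow.

```python
# Assumed pre-deployment step: export a PyTorch ResNet-50 to ONNX so a vendor
# toolchain (such as the Cloud AI 100 compiler referenced above) can take over.
import torch
import torchvision.models as models

model = models.resnet50()                  # randomly initialized; load your own weights in practice
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # standard ImageNet-shaped input
torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",
    input_names=["input"],
    output_names=["logits"],
    opset_version=13,
)
```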
" |
16,238 | 2,020 | "Aclima says bad air quality from California's fires is affecting millions | VentureBeat" | "https://venturebeat.com/2020/08/28/aclima-says-bad-air-quality-from-californias-fires-is-affecting-millions" | "Aclima says bad air quality from California's fires is affecting millions
When 11,000 lightning strikes hit the San Francisco Bay Area on August 16, they ignited two of the largest fires in California history.
Aclima had the pollution measurement devices in place to record the effects on local air quality. The company said the results show millions of Californians are breathing bad air, including many who may not realize it.
Twelve days later, three fires are still burning, and Aclima scientists have examined the impact of these lightning complex fires on air quality. The result is perhaps the most scientific analysis of data from a big fire in modern times (see the video above). For its analysis of the Bay Area fires, Aclima analyzed both its own data and the data collected by regulatory agencies and reported to the Environmental Protection Agency (EPA).
Aclima can measure air quality on a “hyperlocal” level using a fleet of electric cars with pollution-measuring sensors. The company gathers a massive amount of data compared to other pollution measurement collectors through this mobile method. Aclima previously used this data to assess the pandemic’s effect on car travel and pollution levels in San Diego.
Since the fires began, California's inland counties have on some days experienced worse sustained daily air quality than Bay Area counties. But Bay Area counties saw larger maximum levels or spikes before the wind dispersed the smoke plumes and blew them inland. This has been tough for me, as I've been out jogging almost every day. Of the 168 days of lockdown, I have jogged for 159 days. For seven more of those days I have been riding an indoor exercise bike, and most of those were due to the smoke.
Nights are better for walks When looking at diurnal or day-to-night hourly patterns, Bay Area counties experienced worsening daytime and improving nighttime air quality, on average, from August 16 to August 25. That means it’s better to take the dog for a walk at night or in the early morning, said Aclima chief scientist Melissa Lunden in an interview with VentureBeat.
“I was seeing that by four or five o’clock that the levels were falling to a level where I could open the windows and cool the house down,” said Lunden, who has her own measurement device in her home. “We also have regular afternoon winds, and that blows it all inland.” For this analysis, Aclima’s scientists focused on the daily average levels of fine particulate matter (PM2.5), which is a harmful pollutant at least 50 times smaller than a grain of sand and typically invisible to the eye. Even if you can’t see or smell smoke, you may be breathing air with unhealthy levels of particles generated by the wildfires.
Above: The air quality in the Bay Area changes during the day.
To produce the embedded video, Aclima scientists analyzed regulatory air quality data from the state’s stationary monitors positioned at sea level throughout California and calculated changing daily average levels of fine particulate matter on a county-by-county basis throughout the state from August 15-25. The scientists then overlaid satellite-detected VIIRS fire hotspot data from NASA’s Fire Information for Resource Management System (FIRMS) to illustrate the changing air quality in relation to the locations of fire hotspots as seen from space.
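As a rough illustration of that aggregation step (not Aclima's actual code), computing county-level daily means and maximums from hourly monitor readings could look like the pandas sketch below; the file name and column names are assumptions.

```python
import pandas as pd

# Hypothetical schema: one row per hourly PM2.5 reading from a stationary monitor,
# with "timestamp", "county", and "pm25" columns (names assumed for illustration).
readings = pd.read_csv("pm25_hourly.csv", parse_dates=["timestamp"])

daily = (
    readings.assign(date=readings["timestamp"].dt.date)
            .groupby(["county", "date"])["pm25"]
            .agg(["mean", "max"])
            .reset_index()
)
# One row per county per day: the daily average and daily maximum PM2.5 that a
# county-by-county animation like the one described above would be built from.
```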
As you can see, on many days the wildfires appear to impact daily average PM2.5 levels in inland counties more than Bay Area counties as the wind blew the smoke well beyond the fires.
In addition to analyzing daily air quality in California following the lightning complex fires, Aclima scientists analyzed data generated from the company’s mobile sensor network, which measures air pollution and greenhouse gases block by block across the Bay Area, day and night, weekdays and weekends.
Above: On August 22, the daily average PM2.5 levels in the Bay Area were lower than inland, but the daily maximums were at least as high in the Bay Area as inland.
In Bay Area counties, a diurnal or day-to-night pattern showed cleaner air at the ground level in the evenings and early mornings, with the highest levels of PM2.5 at midday. Unlike regulatory monitors, the Aclima mobile sensor network takes measurements at various elevations on all publicly accessible streets.
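A day-versus-night comparison like this one boils down to bucketing readings by hour of day. Here is a minimal sketch with assumed column names and with daytime/overnight windows of my own choosing; it is illustrative only.

```python
import pandas as pd

# Same assumed schema as before: hourly readings with timestamp, county, and pm25.
readings = pd.read_csv("pm25_hourly.csv", parse_dates=["timestamp"])
readings["hour"] = readings["timestamp"].dt.hour

by_hour = readings.groupby(["county", "hour"])["pm25"].mean().unstack("hour")
midday = by_hour.loc[:, 10:16].mean(axis=1)                  # crude daytime window
overnight = by_hour.loc[:, [22, 23, 0, 1, 2, 3, 4, 5]].mean(axis=1)

# Positive values mean dirtier midday air relative to overnight air in that county.
print((midday - overnight).sort_values(ascending=False).head())
```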
Why is air better at night? Lunden said on summer evenings an inversion layer of cool marine air is trapped beneath a layer of warmer air. The marine layer is stable, meaning there is no exchange of air between this lower level and the air above it. As the sun comes out and heats the ground, the height of this layer increases and there is more mixing of air both far above and near ground level. The difference in atmospheric pressure between the cool Pacific and the heating inland regions results in an onshore wind that starts to build mid-morning to a strong flow by late afternoon. As the sun sets, the evening inversion layer forms again.
Emissions from the fires are often found at higher elevations in the hills, and smoke rises high into the atmosphere. In the evening, this smoky layer is above the inversion layer and does not descend to the ground level. As this boundary layer grows during the morning, however, the smoke that has been held high above the ground mixes into this layer and the concentrations on the ground increase. Direct smoke emissions are also more likely to mix into this layer during the day. As the winds pick up, this smoky air is blown out of the Bay Area toward the east. And as the sun goes down, the reformation of the evening inversion layer results in clearer air being closer to the ground and where we breathe.
This isn’t to say that air quality at night has been good, or better everywhere, but it has shown a strong tendency to be measurably better on a county level throughout the Bay Area. For communities directly impacted by fire, unhealthy levels of smoke — not to mention the danger from the fire itself — have occurred at any time of day or night. And it’s important to note that these observed patterns hold true for what happened, but if the winds change then the patterns will too. A good resource for air quality is here.
People have tragically lost their lives to these historic fires, and many others have lost their homes. That’s not to mention the animals and habitats lost and the tens of thousands of people displaced due to evacuations. Meanwhile, millions of people are being exposed to unhealthy air quality across and well beyond California, Aclima said.
Air pollution is not confined to county, state, or country borders, and it is harming human and planetary health everywhere. By better understanding the impacts of climate change events — like lightning complex fires — Aclima said we can make informed decisions to protect ourselves and build a more resilient and equitable future.
Measurement challenges You can check the health level of the air by zip code using Air Now , but you may not be able to entirely trust that estimate. That’s because the data is based on the government’s regulatory sensors operating in stationary places, and that data is then extrapolated to cover a much larger region. It’s not based on the fine-grained data Aclima collects with its cars, but it’s pretty much the best measure available at the moment.
Lunden said you can’t judge whether the air is safe enough to take a walk based on what you smell.
“What you smell from a fire is like the organic olfactory compounds that get emitted, and that stuff is pretty reactive in the atmosphere and disappears after some number of hours, but the smoke is still there,” Lunden said. “It just doesn’t smell like smoke anymore. It could still have pretty high concentrations.” On top of that, in the Bay Area you can’t judge air quality by how blue the sky is. The inversion layer may or may not be in place, and you can’t see it. In other words, there isn’t a perfect way to know whether your air is clean or not. As for Aclima’s data from its cars, it isn’t analyzed in real time at the moment, so the company has to use it to analyze long-term trends, not the hourly changes in pollution levels you would need in order to understand whether it’s safe to go outside.
"The real strength and power in our data comes from the persistent differences we see," Lunden said. "And those persistent differences come from repeat measurements over time. So we'll be in your zip code on any given day, and then we'll be there another two weeks later, and so on. And as we continue to do that mapping, you get those persistent differences."
" |
16,239 | 2,014 | "Wasteland 2 blends apocalyptic clichés, bad graphics, and unforgiving play into a surprise masterpiece (review) | VentureBeat" | "https://venturebeat.com/2014/09/25/wasteland-2-review" | "Wasteland 2 blends apocalyptic clichés, bad graphics, and unforgiving play into a surprise masterpiece (review)
A typical combat setup in Wasteland 2
The world is finished.
Cold War politicians emptied their missile silos after World War II, and only a tiny fraction of the population survived. Food and water are scarce. You are surrounded by pockets of radiation and goons that want to drink from your skull. The Internet never existed. It’s time to take stock of who your friends and enemies are and ultimately decide who lives and who dies on your road to survival.
This cheery world is Wasteland 2’s reality. Brian Fargo, creator of the original Wasteland and executive producer of its spiritual successor Fallout, released the game after several years of crowd-funded development. Wasteland 2 comes 26 years after its predecessor’s release.
I followed what I believe to be the main or major storyline for the game. Wasteland 2's story is not at all linear, and players can do pretty much whatever they want from day one. I progressed through the Desert Ranger's primary quest line, which is about 60 hours long depending on side quest choices, for the purposes of this review.
What you’ll like Above: Just a public execution on the streets of Santa Monica. Nothing to see here.
Post-war life is shit in Wasteland 2 Wasteland 2’s shining achievement is creating a nuclear holocaust-ridden dystopia that truly sucks.
Too many post-apocalyptic worlds found in games are more akin to 24-hour summer camps than Hell on Earth. Titles like Fallout: New Vegas and Dead Rising portray the end of the world as a party where people enjoy the lack of rules and structure. Sure, people die every now and then, but we are all having fun in a life with no rules.
Wasteland 2 does not. Everyone you encounter, even the people who appear to be partying, hates their life. They’ve all lost family and friends. They need supplies and have turned to eating dogs and sometimes people to survive. Many of them wish for death and eventually kill themselves in front of you. Desperation surrounds you at all times.
Players command a squad of Desert Rangers, law enforcement units spawned from a corps of army engineers sent to Arizona to build bridges after World War II. After the bombs went off, they turned a prison into a base and began to re-establish order in the region. Your first mission is to investigate the killing of Ace, a high-ranking ranger torn to pieces by an unknown foe.
That is, unless you don't want to do that.
Unique open-ended gameplay with tons of replay value Above: A well-placed grenade would make me a very upset squad leader.
Wasteland 2 offers players a level of freedom they probably never thought possible.
Here’s an example: The game picks up with a gorgeous live-action movie narrated by General Vargas, the Patton-like leader of the Desert Rangers. He gave me a history lesson on the nuclear war itself before explaining the current situation surrounding Ace’s death.
After the movie, I found myself at a stereotypical post-apocalyptic desert base, a la the “Mad Max” film series. Vargas gave me a cookie-cutter starting mission: Prove your worth recruits, and I will let you into my super exclusive club. We have outfits and tell ghost stories at night. You are going to want in on that.
The conversation ends, but I wasn’t convinced by his sales pitch. I selected the red targeting reticle next to the picture of my submachine gun, hovered it over Vargas, saw that it had a 50 percent chance of hitting him, and fired. It hit him, and he promptly turned around and curb-stomped my entire squad. I was sent back to the main menu to play again.
Every single interaction that I encountered in Wasteland 2 can be handled in this way. Players can murder, rob, or help anyone they want. It’s possible to wage war against the desert rangers and end the game by wiping them out. In this scenario, players never take the ranger helicopter to Los Angeles. The game simply ends at the halfway point of the traditional storyline.
Above: One of many, MANY important dialogue windows in Wasteland 2.
Even positive interactions have their consequences. As Fargo noted in an earlier interview with GamesBeat , Wasteland 2 has more text written into its vast conversation bank than all three “Lord of the Rings” novels. I suggest reading most of these word blocks carefully, as they can make your miserable life a little better.
Another example of Wasteland’s unorthodox gameplay comes fairly early in the Arizona section. I was tasked with saving a farming community and a dam. One provides the rangers with their food, and the other provides their water. Vargas told me to hurry, and my radio exploded with cries for help from both sites. I saved the dam, but it took me a while. By the time I got out, I heard bloodcurdling screams over the radio. I ran to the farm, but everyone was dead or worse — transformed into poisonous pod people bent on killing my squad. I emptied the area of danger, but the farm was too far gone and had to be abandoned.
Could I have saved both? I don't really know. I expected the typical video game experience where I would save one then arrive just in the nick of time to save the other. That's not how Wasteland 2 works. Player choices mean more here than in any game I have ever seen. And this is just one of many, many such scenarios.
Given all of the decisions you can make, Wasteland 2’s replay value is vast. I would guess that it features more than 100 total hours of possible story arcs and questing content. The lengthy main storyline is just the beginning of Wasteland 2’s gameplay possibilities.
" |
16,240 | 2,018 | "The Bard's Tale IV: Barrows Deep -- Striking a powerful chord | VentureBeat" | "https://venturebeat.com/2018/09/18/the-bards-tale-iv-barrows-deep-review" | "The Bard's Tale IV: Barrows Deep — Striking a powerful chord
It’s been generations since a band of adventurers rescued Skara Brae from the vile clutches of Mangar and saved time itself from the mad god, Tarjan. But the world of Caith finds itself in peril again in The Bard’s Tale IV: Barrows Deep, and it turns to another band of wizards and warriors ( and bards ) for deliverance from a grand evil.
And this time, you’ll have to think harder than in any previous Bard’s Tale to get the job done.
The Bard’s Tale IV resurrects a long-dead franchise, picking up a century after The Bard’s Tale III (which itself came out 30 years ago in 1988). The Interplay series is known for its emphasis on music, puzzles, and sometimes brutal combat.
InXile Entertainment (which Interplay founder Brian Fargo started in 2002) turned to those fans in 2015 with a Kickstarter for The Bard’s Tale IV, and three years later, the role-playing game is ready for its 33,741 backers — and anyone else who wants to buy it on Steam, GOG, and other platforms. What they’ll find is a game that makes music and puzzles an even bigger part of its soul, celebrating what many of us fell for back in the 1980s. Combat also becomes a puzzle.
I found that, despite some glitches, I deeply enjoy my return to Skara Brae, and more than 20 hours into my adventure, I'm pleased with the challenge of the puzzles, the bard's role in your party, and a turn-based combat system that feels different than what you find in most RPGs.
What you’ll like Above: Bards give you a big hand in combat.
The bard’s life My adventures in Caith started to sing (I’m not sorry for that) once I added a bard to my party. (Unless you create one at the start, you won’t find one until a few hours in.) In the first three Bard’s Tales, your bard could sing songs when exploring or during combat. In Barrows Deep, you belt out the beats just when you’re fighting, or you can play special songs of exploration when you’re cruising through town, tiptoeing through a forest, or delving into a dungeon. You don’t need a bard for these, and they’re essential for solving puzzles and learning more about the world.
Proper puzzles Far too many games have puzzles that are about finding objects or matching blocks (I see you, mobile gaming). The Bard’s Tale IV’s puzzles are intricate, and one sort of these is unlike anything I’ve seen in my near 40 years of playing video games.
The best — and most difficult — puzzles involve fairies. These little glowing sprites move in a straight line when pushed, and if they don't hit a special grouping of magical rocks, they poof out of existence. They reappear seconds later. The InXile team created an entire lore behind these little spiffy sparks, and it's a different treatment for faeries than you'll see in other games.
How do these work in puzzles? You have pillars with rotating pictures in them, and each face dictates where the sprite moves after you shoo it away — left, right, stay, or return the same direction. To find the solution, you must rotate these faces until you can get the sprite from the start to the magical rocks.
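To make the mechanic concrete, here is a toy simulation of those movement rules as I understand them from playing; this is my own simplification, not InXile's implementation, and the grid, coordinates, and face names are assumptions.

```python
# Toy model of the faerie-and-pillar puzzles (my own simplification, not the
# game's code). The sprite drifts in a straight line; each pillar face it hits
# turns it left, turns it right, lets it keep drifting ("stay"), or bounces it
# back the way it came ("return").
TURN_LEFT = {"N": "W", "W": "S", "S": "E", "E": "N"}
TURN_RIGHT = {"N": "E", "E": "S", "S": "W", "W": "N"}
REVERSE = {"N": "S", "S": "N", "E": "W", "W": "E"}
STEP = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def reaches_rocks(start, heading, pillars, rocks, max_steps=64):
    x, y = start
    for _ in range(max_steps):
        dx, dy = STEP[heading]
        x, y = x + dx, y + dy
        if (x, y) == rocks:
            return True                      # the sprite reaches the magical rocks
        face = pillars.get((x, y))
        if face == "left":
            heading = TURN_LEFT[heading]
        elif face == "right":
            heading = TURN_RIGHT[heading]
        elif face == "return":
            heading = REVERSE[heading]
        # "stay" or an empty tile keeps the current heading
    return False                             # the sprite poofs before arriving

# One rotated pillar face is the difference between success and a vanished faerie:
print(reaches_rocks((0, 0), "E", {(2, 0): "left"}, rocks=(2, 3)))   # True
print(reaches_rocks((0, 0), "E", {(2, 0): "right"}, rocks=(2, 3)))  # False
```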
Some of these are quite difficult. The best way to go about solving the puzzles is to shoo the faerie along as you experiment with which faces will get the sprite to the end of its journey. I rather enjoyed these, though I did find that, like the bard, I did better after drinking a beer.
Above: I’ve got an “inkling” about this puzzle.
Other puzzles include panels of gears that require you to move them until you find the right configuration to open a door or deactivate a trap; moving blocks to specific spaces; offering pots with clues on what items to sacrifice; and figuring out inks in one of my favorite puzzles that didn't involve faeries.
Every copy of The Bard’s Tale IV comes with a strategy guide, in case the puzzles prove too difficult. I applaud this — it helps the player, and it continues the tradition of the series, which had some fantastic guides during its heyday.
At times, The Bard’s Tale IV feels more like a mix of an RPG with Myst, and this is a good thing.
A mean drunk Booze plays a crucial role in The Bard’s Tale IV. Your bard must wet their whistle in order for their magical songs to work, and drinking up also bestows action points to your troubadour so they can make merry with a tune or swing with their weapon. But by far, my favorite bard’s ability is mean drunk. It’s a passive skill, and whenever you take a swig from your booze, you throw out a tankard that damages a foe during combat. It’s hilarious, especially when you hear a party member asking why you’re drinking during combat.
Thinking combat A good number of The Bard’s Tale IV’s fights require you to think who is going to attack and when. Your party, which starts with two and grows to six (or more with summoned monsters), must consider positioning and timing on the battlefield, which is a grid two squares deep by four squares across. Some of your attacks only hit the squares right in front of you. Others might cover a 2-by-2 area. One tactic you may try — using a fighter’s or practitioner’s abilities — is to move an enemy spellcaster into range of your crushing melee attack, or take an armor-wearing tank and move them where you can smash through their armor so you can deal out physical damage.
Damage types are another part of the combat puzzle. You can do physical, mental, or true damage, and armor and other abilities can protect against each type. You’ll have to batter down those protections before hurting those foes. I enjoyed this, because it meant that even easy combats required some thinking if I had to deal with damage types.
Above: The autumn leaves are just one taste of the color that livens up The Bard’s Tale IV: Barrows Deep.
The colors and music Far too many fantasy RPGs just put you into places that look like our world, with a supernatural flourish here-and-there to remind you it’s a magical realm. The Bard’s Tale IV steeps a number of its areas in fantastical looks, and where it doesn’t, InXile remembers that nature is full of different colors.
The art team shows this off in the first major dungeon, the cellars of Kylearan's Tower (one of the romps from the original Bard's Tale). Here, an underground grove helps bring bright colors, with glowing mushrooms and vibrant vegetation. Statues and even walls and floors get color, too. A forest deeper into the story shows off the reds and oranges of a glorious autumn.
With many RPGs, I tend to play as I listen to podcasts. I didn’t do that with The Bard’s Tale IV. Music bubbles up in many different areas as you explore, and you’ll never know where you may find a Gaelic tune.
What you won’t like Puzzling pace I adore the puzzles, but I wish the designers gave us a break to step back and appreciate them better. In some dungeons, you go from solving one puzzle right into facing another. I would’ve preferred to have a little more space between some of them. Or maybe InXile should showcase the puzzle design in one dungeon that’s a gauntlet of such tests.
Performance problems Granted, this is review code and not the final game, and since receiving it midweek, InXile has sent out a number of patches. However, the deeper I get into The Bard’s Tale IV, the more hiccups and pauses I encounter during exploration and combat. A patch improved some of the performance issues, but it still has some slow loading and other quality-of-life issues.
Inventory management It didn’t take long for my inventory to become cluttered with food, drink, crafting components, useful items, and weapons and armor that I was stashing for other characters. But I could never find a way to clean it up or organize items by type. Finding stuff in the pages of the inventory screen (it goes up to five) became messier than searching for, well, anything in my 8 year old’s bedroom. I hope this is fixed upon release, otherwise, it’s going to annoy some players (note: It still hasn’t as of October 8). I annoyed me.
Conclusion Above: The Bard’s Tale IV: Barrows Deep has a hairy sense of humor.
The Bard’s Tale IV: Barrows Deep delivers on the faith its Kickstarter backers put into the project. It weaves combat, exploration, music, and puzzles into a game that stands out in a crowded market. It’s unlike any other RPG, and with other old-school RPGs finding success these days — Pillars of Eternity , Octopath Traveler , and Dragon Quest XI — I hope InXile is able to come back to this fantastical world, just like it’s doing with Wasteland.
InXile’s approach to music and puzzle design make The Bard’s Tale IV a standout RPG. At times feeling Myst-esque, the puzzles are some of the most challenging and satisfying in today’s market. Their only flaw is pacing — sometimes, you don’t get a chance to take a second to appreciate your cleverness for solving a tough one. Music matters, and the designers weave it into your exploration, puzzles, and combat. In some ways, music is The Bard’s Tale, as it should be.
I look forward to cracking open a beer and taking a drink every time my bard does as I finish this unique game from InXile Entertainment.
Score: 82/100 The Bard's Tale IV: Barrows Deep is out now for PC. The development studio sent GamesBeat a Steam code for the purposes of this review.
Disclaimer: The reviewer backed The Bard’s Tale IV on Kickstarter.
" |
16,241 | 2,019 | "Wasteland 3: Even the apocalypse can't kill the One Percent | VentureBeat" | "https://venturebeat.com/2019/09/02/wasteland-3-even-the-apocalypse-cant-kill-the-one-percent" | "Wasteland 3: Even the apocalypse can't kill the One Percent
Jeremy Kopman joined InXile Entertainment during what I consider the most exciting time in its history — the crowdfunding chapter. He signed on shortly after the Kickstarter campaign for Wasteland 2 , working on another game that the indie RPG publisher would ask its players to help fund — Torment: Tides of Numenera.
He’s now working on the series that showed InXile (and a host of indie developers) that crowdfunding could indeed be a workable path. He’s the lead level designer for Wasteland 3, the postapocalyptic RPG that InXile is releasing in early 2020 for PC, PlayStation 4, and Xbox One.
And just as he joined at the beginning of a new era for InXile, he’s helping finish it. Wasteland 3 will be the studio’s final game from before 2018’s Microsoft acquisition. Deep Silver is still publishing it. But Wasteland 3 is also the first to benefit from the increased support from one of the biggest game companies in the world.
I spoke with Kopman a couple of weeks ago when he brought a demo of Wasteland 3 to San Francisco. This is an edited transcript of our chat before the demo.
Small studio, big stories GamesBeat: How long have you been at inXile? Jeremy Kopman: I started right after the Wasteland 2 crowdfunding campaign. I've been here for six and a half, almost seven years.
GamesBeat: What do you like about working there? Kopman: The freedom that each–even when I was just starting out as a regular level designer, I still had a ton of input on what the levels I was assigned would–the content of those levels, the characters, the writing. I got to contribute to all of that. That's not something you necessarily have at other studios.
GamesBeat: Especially at bigger studios. Level designers rarely get to do narrative.
Kopman: Right. The benefit of a small studio is you get that many-hats situation. There can be times when it gets busy and stressful, but it’s worth it for the opportunity to really put your mark on the game.
GamesBeat: Did you work on The Bard’s Tale IV? Kopman: I didn’t. I worked on Torment. Torment — I went from Torment to Wasteland, and Bard’s Tale was kind of bridged between those.
GamesBeat: There’s a little crossover with the themes of Wasteland and the themes of Torment.
Kopman: Somewhat. Although the narrative team was not–there wasn’t as much crossover. Torment, we were starting to work on it well before Wasteland 2 released. There are certain questions, big themes that are attractive because of where we are in the world and all that stuff.
Entering the wastes Above: Just ask the Separatists how well crab droids did for them.
GamesBeat: Bring us up to speed a bit on the story. Where is it after the events of Wasteland 2? Jeremy Kopman: Wasteland 2 ends with a dire situation for the Rangers, where their base is gone. They blew it up in order to destroy the big bad from the last game.
GamesBeat: Which was an AI system.
Kopman: Correct. They’re scraping by in Arizona, just barely surviving, trying to rebuild, when they get a call from someone who calls themself the Patriarch of Colorado. This is a guy who offers money, supplies, weapons, what they need to get back on their feet in Arizona, if they can send a contingent and help him maintain control over this small bastion of civilization that’s been built up.
GamesBeat: Have they gone from Rangers to mercs now? Kopman: No, it’s not — the proposition isn’t so much, I’ll just pay you to do my fighting for me. It’s, we are of a like mind. We want to maintain order in this wasteland and try to rebuild civilization. I can help you. You help me and we’ll both benefit.
GamesBeat: One of my main takeaways from Wasteland 2 was the dangers of AI. Does that carry over at all in Wasteland 3, considering how AI has become a much bigger part of our lives these days? Kopman: I think we’re not speaking too much to the broader story of the game yet, but I actually think it’s threaded through the story, and it pops up in interesting places throughout. It’s not a straightforward “AI bad” setup. It is an important part of the world, but the story is more deeply about the Patriarch and these rich survivalist doomsday preppers of Colorado, who put away stores of weapons and food and all this stuff, and when they came out of their bunkers after the nuclear fallout from the war, they were the only people with any resources. They immediately were able to take over.
GamesBeat: I didn’t know doomsday preppers in Colorado were rich.
Kopman: Some small numbers of them are. Those are the ones who had enough resources. They may not have been super-wealthy before the apocalypse, but once they came out, they were the only ones with tons and tons of canned food and huge stockpiles of weapons and that kind of thing. He comes from the Buchanan family, which is one of the 100 families that rules the Colorado Springs area.
GamesBeat: Did you choose that name for any particular reason? Kopman: I don’t know offhand. It almost certainly has some sort of allusion to the President. That might be–one of the narrative design people might be able to answer that question directly.
GamesBeat: In many RPGs, names are important. They have some significance. Sometimes they don’t, but especially when you draw on names in a game that takes place in an alternate U.S., that’s tied to a name in U.S. history.
[ The Wasteland 3 narrative team had this to say about the Saul Buchanan, the Patriarch: No AI needed GamesBeat: When it comes to AI, this is a very different question. More game studios use AI to help make their games, whether it’s the placement of elements, checking on bugs, and so on. Does inXile use AI in any way to help make their games? Kopman: Not really? We’re trying to build a visually very polished, modern-looking game, but the way we make a game is still very old-school, I would say. All that narrative reactivity that’s part of Wasteland 2 and will be a big part of Wasteland 3, it’s all handled by designers manually figuring out — OK, we need to set this variable here, this variable there, and then check it over in this spot. It’s a lot of work, but it’s the way that we can get the most natural–that human work is still the best way to get that kind of reactivity.
GamesBeat: Is it hard to use AI generation with a structured RPG? Kopman: There could be ways to do it to help with some of the systems side of things, but that’s not really the inXile style. We like to have a hand in everything and make sure it’s what we want.
Post-Microsoft GamesBeat: How has the development of Wasteland 3 changed after the Microsoft deal? Kopman: We got some more time and a bunch more resources. We’ve been able to step up the visual quality. We were able to fully lock in on full voiceover for all dialogue in the game. It’s not a small amount of V/O. Those things were not opportunities we had before the Microsoft acquisition. It’s really opened up some of those avenues.
Above: When in doubt, drink! GamesBeat: As a design department lead, when you first heard of that, what did you think? Kopman: We were pretty well into development on Wasteland 3. The structure of the story, the structure of the levels, was relatively well set. We were still iterating on everything. As we speak there’s designers still coming up with stuff and adding things to the game. But it definitely made it more comfortable. We knew the stuff we’d been planning in our heads — we could actually pull it off.
GamesBeat: Does the addition of more sets of eyes also help — here’s something you missed, you should fix this or do it this other way? Kopman: Yeah, it gives us the opportunity to do some stuff — bringing in people who have a lot of experience. Microsoft has UX experts that can look at the game and say, here, these are some ways to tweak things to make sure that the experience flows better. That kind of insight is something that would have been harder. You can hire consultants for that stuff, but inXile wasn’t in a position to do that, necessarily. Now we have quick access to that stuff.
GamesBeat: Now that Obsidian is also under the Microsoft umbrella, can you two cooperate and say, you know, we’re having trouble here, how have you guys done with this in the past? Kopman: I don’t think that’s come up yet. I think because they’re working on — they were well into Outer Worlds and we were well into Wasteland 3. We’re all head-down. I’m not sure if that’s going to happen in the future. I can’t really speak to that. But I know that — the ability to get — Microsoft has been able to offer us lots of resources and information-sharing and stuff. I think we’re all hopeful that that kind of thing can happen, but we haven’t experienced yet. It hasn’t been relevant to the projects yet.
" |
16,242 | 2,020 | "Frostpoint is a VR multiplayer shooter and InXile's last non-Microsoft project | VentureBeat" | "https://venturebeat.com/2020/07/29/frostpoint-is-a-vr-multiplayer-shooter-and-inxiles-last-non-microsoft-project" | "Frostpoint is a VR multiplayer shooter and InXile's last non-Microsoft project
InXile Entertainment is announcing Frostpoint VR: Proving Grounds, a multiplayer team shooter in virtual reality. It’s a bit of an unexpected announcement, as InXile is finishing up Wasteland 3 for release this fall, along with another unannounced role-playing game. It’s also the Microsoft studio’s final project for another publisher. The game is coming this year for Oculus Rift, HTC Vive VR, and Valve Index.
Frostpoint VR’s publisher is Japan-based Thirdverse.
The company published Swords of Gargantua , a multiplayer sword-fighting game that came out in 2019, under the moniker Yomuneco but rebranded to Thirdverse earlier this year “to better align with our mission and focus on the metaverse,” cofounder Kiyoshi Shin said over email. Multiplayer team shooters are a big market, as Blizzard Entertainment’s Overwatch and Riot Games’ Valorant show. But the genre, like many others, is small in virtual reality. So InXile and Frostpoint see an opportunity there, even with VR’s smaller (but engaged) player base.
This isn’t InXile’s first foray into VR. In 2018, it launched The Mage’s Tale, an RPG set between The Bard’s Tale III and The Bard’s Tale IV: Barrows Deep.
It’s on Oculus and Steam VR.
“We’d been working on it for a few years before the acquisition, and like Wasteland 3, Microsoft has been great in allowing our existing partnerships and deals to remain,” InXile boss Brian Fargo said over email, confirming that “yes, after this title and Wasteland 3, we’re a fully focused Xbox Game Studio.” Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Frostpoint features two sides fighting against each other, and its setting is a run-down military training base in Antarctica. This blizzardy battleground has a wildcard — biomechanical beasts that may attack both sides as you’re trying to blast your foes. You get more than a dozen weapons.
“[The biomechanicals are] essentially a wildcard faction that appear wherever players are, and more of them will show up if you’re doing something like capturing a point. So it creates a lot of tension and variety in the game modes as you’re dealing with the enemy players, but also this third and unpredictable force,” Fargo said.
The description brought The Thing to mind, as both have monsters in a chilly setting (though that horror movie has a much more slithery angle). Frostpoint does have a bit of a media tie-in — the world and story are from Daniel Wilson, the author of New York Times best sellers such as Robopocalypse and How to Survive a Robot Uprising.
(He also has a Ph.D. in robotics from Carnegie Mellon.) “Over the years, we’ve adapted that, and yes, we were fairly inspired by The Thing as we went. In fact, we were actually talking to John Carpenter at one point to create the soundtrack for us. Unfortunately, that didn’t pan out,” Fargo said.
My question: Microsoft does have Windows Mixed Reality and the HoloLens. I wonder if this is the end of InXile’s VR development … or just a new beginning.
“Anything’s possible. We don’t have any other projects in mind, but things can change,” Fargo said.
" |
16,243 | 2,013 | "Paradox still the grand master of grand strategy with Europa Universalis IV (review) | VentureBeat" | "https://venturebeat.com/2013/08/13/paradox-still-the-grand-master-of-grand-strategy-with-europa-universalis-iv-review" | "Paradox still the grand master of grand strategy with Europa Universalis IV (review)
A hit! A fine hit! This is one of my few crushing victories over the York rebels in Europa Universalis IV.
The Doge of Venice sat in his hall, contemplating his next move. Enemies crouched all around Venice. With the backing of the Austrians and the Holy Roman Empire, the Doge found himself having to give up conquests made over the Milanese, even though Milan started the war in the first place. The Austrians, in alliance with a number of smaller, but still Germanic, kingdoms, coveted the Doge's territories on the northern border. And his merchants — the backbone of Venice's strength — were struggling abroad.
How the Doge handles each of these choices will decide whether Venice remains a free mercantile republic or becomes a vassal of a larger empire. These choices define Europa Universalis IV, giving you plenty of freedom to shape the destiny or doom of your empire. You make your decisions in real time, managing the administration, culture, diplomacy, expansion, military, and trade of your kingdom, and in the fourth edition of the grand-strategy series from publisher Paradox Interactive and developer Paradox Development Studio, out now for the PC and Mac, you have more streamlined tools to help you do so.
Even with the added help, Europa Universalis will confound — but in the “What the bloody hell just happened?!” way in which you’re a little confused but also a little impressed over what has just happened.
Above: Advisers boost your administrative, diplomatic, and military point pools.
What you’ll like Points in its favor As time passes in Europa Universalis IV, you gain points — monarch power — that you can use to advance your administrative, diplomatic, and military goals. This may sound like something standard for a strategy game, something that you’d take for granted would be there, but players of the series won’t, because it’s the first time monarch power has appeared in the franchise (it’s a mechanic brought over from Paradox’s 2012 grand strategy hit, Crusader Kings II). And boy, do they help you manage your empire.
You can use your administrative points to boost your stability, reducing the odds of a rebellion. Those diplomatic points come in handy when you're trying to convert a conquered province to your empire. They can also change the culture of provinces, making them more suitable for your kingdom's values (though I can't help but think of "ethnic cleansing" in the back of my mind; I hope this is achieved in-game via intermarriage and the absorption of culture, not the forcible removal of it). And you use all of these points to unlock National Ideas (the Europa Universalis tech tree) and the technologies that lie within them.
These points are also tied to your advisers — the more skilled your adviser (administratively, diplomatically, and militarily), the more bonuses you receive to the growth of these points. Highly skilled advisers cost more in coin and maintenance, but they give you better benefits. (And they die as well as time passes.) The capabilities you unlock through National Ideas also help determine the growth of these point pools. And this is also where some of the most interesting choices come in Europa Universalis IV: Do you hoard your administrative points to access better National Ideas, or do you spend your bureaucratic capital on boosting your realm’s stability? Your military points help in repressing rebellion, but they also help you unlock better units. This give-and-take is one of the most fascinating aspects of Europa Universalis IV, and I look forward to many hours of finding which National Ideas and advisers best fit my playstyle.
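As a toy illustration of that trade-off (the rates and costs below are invented for the example, not Europa Universalis IV's real formulas), higher adviser skill shortens how long you have to bank points before a big administrative purchase:

```python
import math

# Invented toy rate: one monarch point per skill level per month. Not the game's math.
def months_to_afford(cost, ruler_skill, adviser_skill):
    income = ruler_skill + adviser_skill
    return math.ceil(cost / income)

# A hypothetical 400-point administrative unlock under a skill-3 ruler:
print(months_to_afford(400, ruler_skill=3, adviser_skill=1))  # 100 months with a cheap adviser
print(months_to_afford(400, ruler_skill=3, adviser_skill=3))  # 67 months with a pricier one
```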
I also like how you receive missions; it's nice to have some more concrete goals than just, "Hey, let's dominate the world." Some are pretty easy — like beating up on a neighboring state — but others can be a challenge. Hungary is a natural rival of Venice, but one mission I received asked that I boost my relations with the Hungarians to "100" — and I was at a level below zero. This gave me one more goal to strive toward as I sought to dominate trade in the Mediterranean, and the rewards from missions helped me accomplish my other, more long-term goals.
Above: Look at the ducats flow into Venice.
Trade nodes
In the past, trade felt neglected in Europa Universalis. But the changes to the trade system make playing merchant powers like Venice not only a viable strategy but fun as well.
Trade flows through "trade nodes" located in important areas of commerce (think places like Venice or Alexandria). With the trade nodes, you want to send as much trade as possible flowing toward you. You can even see arrows on the economic map flowing out of trade nodes to the realms that control these key commerce locations. Merchants help you steer trade to your kingdom, and now your navy helps you protect this trade.
While I wasn't able to win with the "trade first" strategy — the thrice-damned Austrians, Swiss, and Milanese ganged up on my Venice — I had a great deal of fun seeing if I could bend the trade winds of Europe (and ultimately the rest of the world) to my will.
Above: The (Dominican) Inquisition … not such a show. The Inquisition … here we go?
Living history
One of my favorite aspects of historical strategy games is, well, the history. I started my first run as England, and it didn't take long before one of the most important events of that period kicked off: the War of the Roses. In that classic conflict, I played the Lancasters holding off what turned out to be the rebellion of Richard III. But the War of the Roses did little to cement my control over England — I continually faced rebel opposition for four decades, either from York holdouts or from Welsh loyalists who seized on the chaos to declare their independence. And as I dealt with these numerous rebellions, I didn't have the power to keep my holdings across the Channel on the French coast, losing several provinces and barely holding on to Calais.
What’s great about Europa Universalis is that while history may be the backdrop, what comes from the events is your story. We know history worked out differently for England — the kingdom went on to become the strongest power in the world. But embroiled in rebellions and losing prestige on the continent, my England would be just a footnote on the world stage in this playthrough.
Above: The World Trade map can be a little … daunting.
" |
16,244 | 2,017 | "PC Gaming Weekly: Strategy's resurgence is no Paradox | VentureBeat" | "https://venturebeat.com/2017/07/06/pc-gaming-weekly-strategys-resurgence-is-no-paradox" | "PC Gaming Weekly: Strategy's resurgence is no Paradox
Crusader Kings II is one of Paradox's breakout hits.
You won't find any paradox afoot when you examine why Sweden's top game publisher is becoming the leader in the strategy game market.
Last week, Paradox Interactive acquired Triumph Studios, the designers of the Age of Wonders series. Already a top-tier maker of strategy games, Paradox now has a fourth studio to join an impressive lineup that includes games like Crusader Kings, Europa Universalis, and Hearts of Iron. And while I enjoy the Warlock series, it's an average fantasy-strategy series at best. Adding the folks behind Age of Wonders makes for an intriguing future for Warlock and fantasy at Paradox.
The strategy market is getting stronger as more publishers add to their stables. Last year, Sega acquired Amplitude Studios, the makers of the well-regarded Endless strategy series. Sega also has Total War designer The Creative Assembly, making it a formidable publisher on PC.
You've also got Stardock, and its small fleet of studios, making some great strategy games such as Ashes of the Singularity, Galactic Civilizations III, and Offworld Trading Company.
It's a bright time in PC gaming. We might be on the cusp of a radical shift in shooters thanks to Playerunknown's Battlegrounds, one we haven't seen since 2007's Call of Duty 4: Modern Warfare. Indie designers continue to push the boundaries of what games are and what they do.
And the continued rise of the esports industry keeps games such as Overwatch, League of Legends, and Hearthstone in the spotlight.
For PC gaming coverage, send news tips to Jeff Grubb and guest post submissions to Rowan Kaiser.
Please be sure to visit our PC Gaming Channel.
Just as we’ll keep spotlighting the PC here, too, at GamesBeat.
—Jason Wilson, GamesBeat managing editor
Farmer: It's hard to raise a family these days. Necromancer: Unless they're buried together. …
From GamesBeat
PlayerUnknown's Battlegrounds is the most important shooter since Call of Duty 4: Modern Warfare
PlayerUnknown's Battlegrounds represents a paradigm shift in the shooter genre. Its thrilling last-man-standing multiplayer action has led to more than 4 million in sales, but it's the breadth of its appeal that makes it so important to the future of gaming. Everyone is playing Battlegrounds. That's hyperbole, but it doesn't feel too far from the […]
RuneScape developer reflects on 15 years of making games
Few games from the turn of the century are still around. Sure, the Mario and Halo franchises are doing just fine, but they have done so through multiple sequels and expensive marketing budgets. The massively multiplayer online role-playing game RuneScape, however, has survived and thrived by helping to define the free-to-play business model and through […]
Blizzard has squandered the trust of Overwatch's competitive teams
From a distance, it seems Blizzard has everything figured out with Overwatch. $1 billion in sales and 30 million players is a massive commercial and cultural success. Taking out the magnifying glass to examine Overwatch from a competitive gaming perspective tells a different story, however. Blizzard wants Overwatch to be a major esport, but so […]
Indie dev turns Wikipedia into a text adventure game
Like a lot of people, I've spent hours clicking from link to link, sucked into Wikipedia's endless void of information. Indie dev Kevan Davis apparently has as well — except he came out on the other side with a game. Wikipedia: The Text Adventure is a free-to-play piece of interactive fiction that uses real entries […]
Get ready for serious games that improve your judgment
For years, video games have provided useful imitations of real-world scenarios. From flight simulations to military training, video games offer a low-risk environment to develop necessary experience. Now, a recent government intelligence program has taken that a step further, creating video games to improve cognitive skills. The recurrent errors in our decision-making In psychology, heuristics […]
Paradox Interactive acquires Age of Wonders dev Triumph Studios
Paradox Interactive is adding more talent to its stable of strategy developers. The company revealed today that it has acquired Triumph Studios, which is the team responsible for the turn-based strategy game Age of Wonders. Paradox also acquired all of Triumph's assets in a deal worth $4.5 million (4 million euros) with the […]
Beyond GamesBeat
Brexit Britain: League of Legends in-game currency gets UK price hike
League of Legends players in the UK will soon have to pay more for their Riot Points as a result of Brexit. Developer Riot Games yesterday revealed it's been forced to raise the price by 20 percent due to Britain's decision to leave the European Union.
(via Gamasutra)
How THQ Nordic will build a successful brand out of a failed one
When small but feisty publisher Nordic Games rebranded as THQ Nordic, many an industry eyebrow was raised. On the one hand, the firm had grabbed the world's attention by acquiring the rights to franchises such as Darksiders and Red Faction, as well as countless others, and even picked up the THQ trademark. But why would you want to associate an up-and-coming publisher with those properties' former owner, perhaps the most famous story of bankruptcy the games sector has seen in more than a decade? (via gamesindustry.biz)
Take beautiful screenshots with PC's best built-in photo modes
There are countless ways to take beautiful screenshots of PC games, whether it's typing in console commands, editing .ini files, using Cheat Engine tables created by virtual photographers, or injectors like Matti Hietanen's superb Cinematic Tools. But if, for whatever reason, you can't or don't want to use external software like this, there are a few games with powerful photo modes conveniently built in. Here are some of our favourites.
(via PC Gamer)
Metro Exodus and the developer that won't stop fighting
On the early morning of May 22, 2017, Ubisoft released its first teaser trailer for Far Cry 5, an open world game set in a fictionalized version of Montana. Halfway around the world, in an office in Malta, a tired Ukrainian man watched the video as he neared the end of his work day. As it played out, he got increasingly angry. Swearing up a storm, he called for another man in the building to come to his desk and watch.
(via Polygon)
" |
16,245 | 2,020 | "Ratchet & Clank: Rift Apart will release in PS5's 'launch window' | VentureBeat" | "https://venturebeat.com/2020/08/27/ratchet-clank-rift-apart-will-launch-in-ps5s-launch-window" | "Ratchet & Clank: Rift Apart will release in PS5's 'launch window'
Ratchet & Clank: Rift Apart is going to define the PS5 generation.
We got to see more of Ratchet & Clank: Rift Apart for PlayStation 5 during Gamescom's Opening Night Live event.
Developer Insomniac revealed that it is coming out during the PlayStation 5’s launch window. That usually means at some point during the console’s first few months of release.
Rift Apart is the latest in Insomniac's 3D action-platformer series. The franchise mixes traditional platforming puzzles with shooter aspects and wacky guns. The first Ratchet & Clank came out for PlayStation 2 in 2002, and the series has had over a dozen sequels. The last one came out for PlayStation 4 in 2016. It was also called Ratchet & Clank, and it was both a remake of the original and a tie-in with the Ratchet & Clank movie.
Of all the next-gen games we’ve seen so far, Rift Apart is among the most impressive. It looks like a CG movie come to life, and its titular heroes travel through portals to other worlds in the middle of intense action sequences, showing off the loading prowess of the PlayStation 5’s SSD.
The new gameplay demo shows recurring villain Dr. Nefarious. Ratchet pulls rifts toward himself to cross a destroyed bridge. He also uses a weapon that looks like a lawn sprinkler that freezes enemies. Overall, it's an extended, uninterrupted version of what we saw before. You can watch the demo below.
In an interview after the trailer, Insomniac said that the game will have no load screens whatsoever. It will also use the PS5 DualSense controller's haptic vibrations and adaptive triggers to make each weapon feel unique. For a shotgun-style weapon, the trigger will offer resistance about halfway through, representing the use of a single shell. Pulling the trigger all the way down will fire two.
Insomniac also confirmed that this game does chronologically follow Into the Nexus, which came out for PlayStation 3 in 2013.
" |
16,246 | 2,020 | "Ratchet & Clank return on PlayStation 5 in Rift Apart | VentureBeat" | "https://venturebeat.com/2020/06/11/ratchet-clank-return-on-playstation-5-in-rift-apart" | "Ratchet & Clank return on PlayStation 5 in Rift Apart
Sony revealed today that Ratchet & Clank are coming back, and the duo is heading to PlayStation 5.
It’s called Ratchet & Clank: Rift Apart.
This new game shows Ratchet and Clank travelling through dimensions via portals. They also get to ride a dragon! Neat! The end of the trailer also had a big surprise, showing Ratchet replaced with a female lookalike.
Ratchet & Clank is a series of action-based 3D platformers. The games specialize in mixing traditional platforming puzzles with shooter aspects, and throwing in some wacky guns makes it even more fun. The franchise helped developer Insomniac Games build its reputation as a top-flight studio in the game industry.
The first Ratchet & Clank came out for PlayStation 2 in 2002, and the series has had over a dozen sequels. The last one, also called Ratchet & Clank, was both a remake of the original and a tie-in with the Ratchet & Clank movie. It came out in 2016 for PlayStation 4.
The franchise has slowed down in recent years. PlayStation 2 had four entries in the series. PlayStation 3 had six Ratchet & Clank games. That aforementioned remake was the only title in the franchise released on PlayStation 4.
But Insomniac was still busy on PlayStation 4, notably and most recently with Marvel's Spider-Man, which came out in 2018 and became a massive hit, selling over 13.2 million copies.
" |
16,247 | 2,020 | "The RetroBeat: Sony recommits to 3D platformers with PlayStation 5 | VentureBeat" | "https://venturebeat.com/2020/06/12/the-retrobeat-sony-recommits-to-3d-platformers-with-playstation-5" | "The RetroBeat: Sony recommits to 3D platformers with PlayStation 5
Ratchet & Clank: Rift Apart is going to define the PS5 generation.
The PlayStation 5 isn't retro. Heck, it's the opposite. The system isn't even out yet. Still, this week's PS5 reveal was giving me some nostalgic vibes, and it's all because of the focus on a specific genre: 3D platformers.
Games that focus on cute characters jumping their way through colorful levels in 3D spaces used to be a big part of the PlayStation platform. Series like Crash Bandicoot and Spyro the Dragon helped to establish the PlayStation brand. 3D platformers were even more important on PlayStation 2 thanks to franchises like Jak & Daxter, Sly Cooper, and Ratchet & Clank.
Ratchet & Clank still had a strong presence on PS3, but it was clear that we were out of the golden age of 3D platformers. Jak & Daxter developer Naughty Dog turned its attention to third-person action games like Uncharted and The Last of Us. Sly Cooper studio Sucker Punch created the open-world franchise Infamous. Even Insomniac, which still made plenty of Ratchet & Clank games for PS3, split its focus with a new first-person shooter series, Resistance.
Things felt more dire on PS4. Sure, we got one more Ratchet & Clank game and the excellent Astro Bot: Rescue Mission (which you need VR to experience), but most of Sony’s big exclusives were open-world and third-person action games. Except for Knack, which is kind of a platformer … I think.
Cute characters welcome
That's why I'm so happy that Sony showed off so many 3D platformers during its PS5 event. The new Ratchet & Clank, Rift Apart, looks fantastic. It's the strongest showpiece for next-gen hardware yet, with its colorful world filled with movement and action. Seeing Ratchet travel through different worlds via portals with just a second or two of travelling time also shows how the faster loading of the solid-state drive can actually make games more interesting.
While many expected a new Ratchet & Clank, I don’t think folks were planning on an Astro Bot sequel. And unlike the original, Astro’s Playroom won’t require VR. I’m so excited for more people to get to try out the fun and joyful gameplay that I experienced in Rescue Mission.
And that wasn't the only surprise. LittleBigPlanet's Sackboy is getting his own game. Instead of a 2D platformer with an emphasis on making your own levels, Sackboy: A Big Adventure looks to focus on 3D platforming (alone or with up to three friends). As someone who never liked LittleBigPlanet's creation tools but found the Sackboy character endearing, I think that this is great.
I’m hoping that this is just the start of 3D platforming on PS5. The successful Crash Bandicoot and Spyro remakes opened everyone’s eyes to how great (and profitable) these games can be. I want Sony to keep investing in new games, and I also hope that we see some other classics get fancy remakes (that Ape Escape trilogy is just waiting).
The RetroBeat is a weekly column that looks at gaming’s past, diving into classics, new retro titles, or looking at how old favorites — and their design techniques — inspire today’s market and experiences. If you have any retro-themed projects or scoops you’d like to send my way, please contact me.
" |
16,248 | 2,019 | "Planet Zoo is Frontier Developments' next management sim | VentureBeat" | "https://venturebeat.com/2019/06/10/planet-zoo-is-frontier-developments-next-management-sim" | "Planet Zoo is Frontier Developments' next management sim
Frontier Developments has excellent sims with Planet Coaster and Jurassic World Evolution, and now it's creating a modern zoo with Planet Zoo. Frontier made the announcement today at the PC Gaming Show during the Electronic Entertainment Expo (E3).
You make and manage a zoo in this new game from the makers of Elite Dangerous.
In the trailer, we saw animals such as hippos (and yes, one pooped). You’ll manage the animals and exhibits along with other ways to bring people into your zoo.
" |
16,249 | 2,016 | "The EU Privacy Shield one week in: A privacy exec's perspective | VentureBeat" | "https://venturebeat.com/2016/08/10/the-eu-privacy-shield-one-week-in-a-privacy-execs-perspective" | "The EU Privacy Shield one week in: A privacy exec's perspective
The privacy community was abuzz this past week, as the new Privacy Shield Framework opened for business. On August 1, companies could begin self-certifying under this new program, which replaced the 15-year-old EU Safe Harbor Framework governing transfer of personal data between the EU and U.S.
We’re only a few days into the new program, but several companies have already certified and many others have begun the process, although the U.S. Department of Commerce has yet to list any certified companies. Through corporate blog posts, however, we have learned that two of the first applications came from Microsoft and Workday, which moved quickly to submit their Privacy Shield certifications. In the coming weeks, we will see more and more companies submitting for self-certification as they review not only their internal policies but also those of their vendors and partners.
For the unfamiliar, the concept behind the now-invalidated Safe Harbor, and the new Privacy Shield, is to ensure "adequate" treatment of EU citizen data when transferred to the U.S. The concern is that there are disparate levels of data protection between the EU and U.S., and thus once the EU data is in the U.S., it would not be entitled to the safeguards offered by EU data protection law. These frameworks were designed to ensure that if the data travels to the U.S., it is sheltered within a governance mechanism that provides equivalent protections.
Safe Harbor endured criticism but, all things considered, functioned well in its time. The evolution of Internet technology, the explosion of personal data created in social media and other sites and stored on servers in non-EU geographies, and in particular the revelations surrounding U.S. government surveillance techniques escalated the pressure on Safe Harbor, leading the European Court of Justice to ultimately invalidate the framework in the fall of 2015.
Since then, the U.S. Department of Commerce and the European Commission have collaborated to develop a new mechanism that gives companies on both sides of the Atlantic a means to comply with EU data protection requirements when transferring personal data from the EU to the U.S. The final result is the EU-US Privacy Shield.
Several key aspects of Privacy Shield are worth mentioning. In particular, Privacy Shield requires greater disclosure and stricter opt-out requirements than Safe Harbor did. Another key feature of Privacy Shield is that it requires that companies provide EU citizens with easier access to data about them and make it easier to change that data. But perhaps the most important aspect of Privacy Shield is that data controllers are now accountable for the actions of third parties with whom they share data. Data controllers will now have to exercise effective oversight over the third parties to ensure they use the data only for limited and specified purposes.
One week in, however, there remain legitimate questions about the durability of the Privacy Shield framework. The future of the new data transfer framework may depend on how well it functions over the next year. On July 26, the Article 29 Working Party (WP29), the committee tasked with advising the EU on personal data protection, responded to the formal approval of Privacy Shield by reiterating a number of its misgivings but vowed to wait to raise objections until a review of the Shield's first year of performance. The WP29's statement means European privacy regulators will be watching to see if U.S. companies live up to their Privacy Shield commitments. Further — although Europe's privacy regulators may have collectively agreed to Privacy Shield, at least one data protection authority in Germany is signaling its intent to challenge the adequacy of the Privacy Shield even before the one-year watch-and-see is up.
These are pretty strong “uncertainty” signals – something businesses don’t like.
There is also the post-Brexit effect to consider. Although Britain is not expected to initiate the two-year exit process until 2017, at some point Privacy Shield will no longer be able to serve as the basis for cross-border personal data flows out of this key European market. While this historic decision occurred just weeks before the Privacy Shield Framework was approved, making it difficult for the program architects to draft an appropriate response, there is a strategic imperative to address this elephant in the room and quell the additional uncertainty in the marketplace.
Using data ethically builds trust. If brands can build trust with the consumer around the use, protection, and stewardship of data, strong ethics becomes a strategy where all stakeholders – brand and consumer – benefit from the value created through the data exchange. As the privacy debate continues to unfold throughout the world, Privacy Shield underscores the concerted effort to strengthen consumer data protections and ensure brands and marketers are accountable with the personal data entrusted to them.
Sheila Colclasure is VP global executive for privacy and public policy at Acxiom and LiveRamp.
" |
16,250 | 2,020 | "The DeanBeat: Esports pivots to digital because of the coronavirus | VentureBeat" | "https://venturebeat.com/2020/03/20/the-deanbeat-esports-pivots-to-digital-because-of-the-coronavirus" | "The DeanBeat: Esports pivots to digital because of the coronavirus
The Vancouver Titans Overwatch team is part of Enthusiast Gaming's Luminosity organization.
As the coronavirus epidemic accelerated, esports had to change. The first sign came when Blizzard canceled its events in China on January 29 to protect the safety of the fans who gather at stadiums to cheer on their favorite esports stars.
And now, all of the esports events are being canceled at physical venues, whether they’re at small internet cafes, movie theaters, or large stadiums. Fortunately, unlike sporting leagues like the NBA and the NHL, esports teams have the option of pivoting to digital competitions.
The change is wrenching for the people working at the venues and supporting physical events, but esports can permanently benefit and grow from this as well, according to interviews I did with esports company CEOs and other experts this week.
All of life is moving to digital, with people working from isolated homes and sheltering in place. It's tragic on a global scale, and no one wants to be perceived as taking advantage of this, said Ann Hand, CEO of Super League Gaming, in an interview. But in the name of both preserving and creating new jobs and meaningful new work for people, the esports industry is pivoting. (We'll be talking about this change at our online-only GamesBeat Summit 2020 event on April 28-29.) Above: Ann Hand, CEO of Super League Gaming.
Like many other esports companies, Super League Gaming is not making money on its combined physical and digital businesses, which include holding amateur esports tournaments in movie theaters and running a Minehut community for kids to play Minecraft together.
"You never feel good about talking about bright spots of opportunity when the world is in such a dire place," Hand said. "But I do take comfort that gaming is bigger than TV and three times the size of the box office, and it's an important way that people are going to stay socially connected during this time." Super League Gaming's Minehut online-only community has grown from 3,000 concurrent (simultaneous) users during the week to 13,800 users for the past three days. That's a small crowd, but it's up 360% in the last two weeks. Viewers for the community's pages are above 500,000 a day in the last four days, and that's compared to 150,000 per day in February.
Across the world, esports is growing
Berlin-based G2 Esports was founded in 2014, and it built its business around online infrastructure. Adapting to online isn't as hard, as most of the playing for qualifiers takes place online, with only the finals occurring in person. To deal with the loss of physical events, the company is coming up with different twists to make digital events more interesting.
“Our viewership numbers are going up 30%,” said Carlos Rodriguez, CEO of G2 Esports, in an interview. “It’s unfortunate, as the events are taking place in such a bad time. But it is welcome and unexpected.” For instance, Spanish pro soccer players Sergio Reguilon and Borja Iglesias played out the canceled real-world Seville derby as a digital event on FIFA 20 (the real match was scheduled to be played on Sunday). G2 Esports content creator, Ibai “Ibai” Llanos, hosted the match and had the opportunity to teach the football stars how to play League of Legends.
At its peak, 62,000 people watched a stream of the game and the streaming numbers of the Seville derby were two or three times greater than usual.
“It’s honestly a nice moment fo companies that rely on content and social media to reinvent themselves and be able to show their community, which happens to be in the real world, and stay relevant to them in the digital world,” said Rodriguez. “Technology becomes more relevant to people, the more time people spend at home. The more time at home, the more entertainment becomes relevant to them. And it’s no accident that Netflix said Fortnite is its biggest competitor.” The bigger picture for esports Above: Enthusiast Gaming’s Vancouver Titans On the macro side, it’s clear people need entertainment at home, whether it’s Netflix, porn, TikTok, or video games. A lot of the negative stigma around games as greater slices of the population play.
And so people are turning to games, as Verizon said its online gaming activity is in the U.S. since the coronavirus quarantines went into effect last week. Last weekend, Steam surpassed a record 20 millon concurrent players. And Call of Duty: Warzone, a new battle royale game in the combat series, grew to 16 million players in four days.
Viewership on Twitch is up 10%, and 15% on YouTube Gaming compared to a week ago, according to Doron Nir, CEO of livestreaming tool and service provider SteamElements.
“This past week we saw an increase in livestreaming viewership in Italy and Australia where different approaches have been taken to prevent the spread of COVID-19 (coronavirus),” Nir said in an email. “Based on data from our analytics partner Arsenal.gg, we now have a global snapshot of viewership growth. With more stay-at-home mandates being issued around the world and the entertainment industry finding new ways to migrate their offerings to livestreaming platforms, we expect to see these numbers rise.” Before the virus hit, M&A advisors at Quantum Tech Partners estimated that esports could hit $4 billion in revenue by 2022, and market researcher Newzoo estimated the total esports audience number would grow to 495 million people in 2020.
The joke is that antisocial gamers have been preparing for this day all of their lives. But the truth is that “games are the new social network,” said Adrian Montgomery, CEO of Enthusiast Gaming, which has a collection of game and esports properties with 150 million users a month. Enthusiast Gaming owns the Luminosity esports team, as well as properties like The Escapist, Sims Resource, and Pocket Gamer.
Above: Super League Gaming “People think that kids in the basement are being antisocial and cooped up in their rooms,” Montgomery said, in an interview. “But the reality is they’re getting online, forging new relationships, making friends with people all across the world, and it’s a social network for them. So in a world that we live in now with social distancing becoming a reality, gaming allows people the opportunity to be social.” Montgomery said his company’s Sims Resource site doubled in page views from 6 million views to 12 million views in the past week.
In this sense, the coronavirus is accelerating trends that were already pushing gaming and esports forward. And one of the things that is pushing it dramatically now is the absence of traditional sports programming.
“In some ways, this is the opportunity for esports, though I’m not sure it’s the one anybody wanted,” said Kevin Klowden, managing economist and executive director at the Milken Institute’s Center for Regional Economics, in an interview. “Suddenly, the major sports networks have no programming. People actually want to watch games. Nobody is in a better position to take advantage of this, in terms of having content, than esports. Productions are suspended in entertainment for a while.” And eventually, people are going to get tired of reruns. By contrast, esports fans love to watch the esports pros play games over and over again. The big question is whether the excitement of in-person physical events, with thousands of people roaring at a championship esports match, carry over into a digital-only event without a studio audience? “Stadiums were an add-on,” Klowden said. “They were nice to have, but they weren’t core to the business model.” The birth of live esports Above: The ESL esports finals for Counter-Strike: Global Offensive in Katowice, Poland, in 2017.
In some ways, esports is returning to its roots in that way, said Craig Levine, chief strategy officer at ESL North America, in an interview. Esports started out as online competitions, and people only started showing up at stadiums starting in 2013. That year, Katowice, Poland’s Spodok arena was home to Intel and ESL’s world championship for esports events for League of Legends and StarCraft II: Heart of the Swarm. 50,000 people turned out, and more than 500,000 watched.
Now the event draws more than 100,000 people in person. And now the live events are canceled.
“This is Back to the Future for us,” Levine said. “Now that there are travel bans, self-quarantining, and social distancing, it was fairly easy for us to be agile and adapt back to online. We created a dynamic and still exciting product.” But it was still painful. Eleven hours before the Katowice event was supposed to open on February 28, the local government ordered its closure for a live audience. The competition still took place — without 100,000 screaming people in the audience. Still, the Counter-Strike competition for the Intel Extreme Masters World Championship was the most-watched non-major event ever.
“There was definitely a pivot, but with our 20 years of experience, we understood how our products could change and how we could ensure their integrity,” Levine said. “We didn’t miss one day of broadcasts. But I’ll be honest with you. This was uncharted territory. There wasn’t a playbook for” the coronavirus.
Racing to broadcast TV Above: Darren Cox of Torque Esports, and race car driver Rubens Barrichello at Esports BAR Miami.
Over in Miami, Torque Esports has a physical business with race car events and a digital business with racing car simulator esports events. And now the company’s All-Star Esports Battle race set a record, and a second one is planned for Saturday with drivers from real Formula 1 races participating in esports competitions.
“Eight days ago, 92% of our business was physical racing, and 8% was esports,” said Darren Cox, CEO of Torque Esports, in an interview. “Now, 87% is esports and 13% is physical. Our digital event took off with a half million views. It was a massive win.” Cox said his company is holding talks with big media companies about turning the content into television shows on traditional sports networks.
“You have to be respectful of the situation we’re in, but we’re having those conversations,” Cox said. “The revenue model has gone upside down. I never expected to have a revenue line that was called ‘broadcast rights.’ Suddenly, I have to expand my Excel spreadsheet and put a big fat line there. We have people knocking on our door. 10 days ago, that was nonsense.” Now the question is whether digital-only events will continue for an extended period of time. Without knowing that, it’s hard for anyone to plan in advance. And what happens when traditional sports comes back? Will it supplant the new esports content on broadcast TV? “Now that we’ve stabilized the ship a little bit, we’re having interesting conversations about how we can create new inventory and new experiences around different game types,” Levine at ESL said. “A trend that is worth watching is the convergence of competitive gaming with more established forms of entertainment. You see labor mobility from film and content, streaming lifestyle content, sports, and gaming.” Changes for deal makers Above: Immortals Gaming Club “This is a global pandemic, and there is tons of bad news,” said Ari Segal, CEO of the Immortals esports company. “The reality is there are more people at home, fewer content options, and entertainment has a role to play. Gaming can fill that void. I heard that all Los Angeles Best Buys were out of Xbox Ones. We launched our new Counter-Strike League and the top match had peak concurrent viewership of 100,000.” That was far higher than other esports events in the past. Sponsorship budgets are being shifted on the fly from traditional sports to esports, Segal said, and media rights for broadcasting are accelerating, as Fox Sports can only show so many reruns before people demand live content.
Segal also believes that esports deal activity will change.
Quantum Tech Partners estimated that $1 billion worth of deals were done in 2019, with 33 transactions during the year, with 13 of those involving esports teams.
“For 30 to 90 days, the deal-making will slow,” Segal said. “That said, there will be consolidation. Certain businesses won’t have access to private capital. Their strategic combinations will put businesses in a better position to ride out the storm.” “We would expect competitive gaming as an industry outperforms other industries in the next several months,” Segal said. “There’s a lot of conversation about gaming. We knew this was an inevitability, but it feels like it has spend up even more than we thought.” Getting accustomed to digital fans Above: Complexity Gaming’s new headquarters.
“When you look at this extraordinary circumstance that we have, with traditional sports on hiatus, esports stands alone as being able to continue,” said Jason Lake, CEO of the Complexity esports organization, in an interview. “It’s tough on the event side, with the brick-and-mortar event cancellations. But we can go online and play against each other.” He added, “We’ve seen a huge uptick in viewership across games. The chosen pastime of this generation is gaming. There’s a unique chance to show a lot of new viewers the product that we have, and that it is compelling.” Sadly, Complexity built a brand new headquarters last year in Dallas, but its players can no longer use that state-of-the-art facility. But the players can still play their games. But the team has to pay attention to the isolation of players who are normally given an emotional boost by fans who cheer them on in person.
“Still, they can still go online and have 250,000 people watching them now,” Lake said. “Nothing can replace being in a stadium with 20,000 screaming people. But they can go on Twitch and see 100,000 people in the chat session. For now, that will have to suffice.” GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
Discover our Briefings.
Join the GamesBeat community! Enjoy access to special events, private newsletters and more.
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
" |
16,251 | 2,020 | "Call of Duty League resumes online-only esports tournaments starting April 10 | VentureBeat" | "https://venturebeat.com/2020/04/06/call-of-duty-league-resumes-online-only-esports-tournaments-starting-april-10" | "Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Call of Duty League resumes online-only esports tournaments starting April 10 Share on Facebook Share on X Share on LinkedIn Call of Duty League resumes online events on April 10.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
The Call of Duty League has shifted away from live events, but it will resume online-only esports tournaments starting April 10.
Activision said that the online-only approach will deliver competitive entertainment for fans in a way that’s safe for its global community. The league announced on March 12 that it would shift online, and now the new schedule has been cemented. You can read it here.
Starting Friday, April 10, the regular season will continue in a fully online production and competition format, where fans can still follow their favorite teams broadcast live on YouTube.com/CODLeague.
Above: Call of Duty League 2020 season opener.
“I spent many years at the NFL, and saw firsthand how sports can lift the human spirit,” said commissioner Johanna Faries in a statement. “No one wants to be in this situation, but we are, and we’re thankful that Call of Duty League can forge ahead and deliver live competition to fans when it’s probably needed most.” Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Earlier this year the Call of Duty League debuted 12 city-based teams across four countries, with Call of Duty athletes competing live before thousands of fans. But since the pandemic hit, live esports events have been banned like other large gatherings in most countries. But players can still fight each other online, and audiences can view those events.
On April 10, YouTube will begin airing live broadcasts in events dubbed Home Series Weekends. Competition will span three days, kicking off Friday (April 10) with group stage play, followed by knockout and semifinals matches on Saturday (April 11), ending with marquee championship matchups on Sunday (April 12) for the ultimate prize of the first Call of Duty League Champion.
GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
Discover our Briefings.
Join the GamesBeat community! Enjoy access to special events, private newsletters and more.
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
" |
16,252 | 2,018 | "Qualcomm shows world's first 5G mmWave and sub-6GHz smartphone modules | VentureBeat" | "https://venturebeat.com/2018/07/23/qualcomm-shows-worlds-first-5g-mmwave-and-sub-6ghz-smartphone-modules" | "Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Qualcomm shows world’s first 5G mmWave and sub-6GHz smartphone modules Share on Facebook Share on X Share on LinkedIn Qualcomm's new QTM052 millimeter wave 5G antenna module.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
With the launch of next-generation 5G cellular networks now only months away in the United States , South Korea , and possibly other countries, Qualcomm is today debuting two critically important new “modem-to-antenna” components: the world’s first fully integrated, mobile device-ready 5G millimeter wave (mmWave) antenna modules and sub-6 GHz RF modules. The parts will enable even smartphones to connect to upcoming 5G mmWave networks, an engineering feat that was once considered impossible.
Qualcomm’s QTM052 mmWave antenna modules and QPM56xx sub-6 GHz RF modules are each designed to work with Qualcomm’s Snapdragon X50 5G modem, addressing different radio frequencies. The mmWave antenna can be used on 26.5-29.5 GHz, 27.5-28.35 GHz, or 37-40 GHz bands, while the sub-6 GHz module works on 3.3-4.2 GHz, 3.3-3.8 GHz, or 4.4-5.0 GHz bands. Each country’s regulators are currently in the process of determining which mmWave and sub-6 GHz bands will be used for 5G services within their borders.
Above: Qualcomm’s mmWave antenna and X50 modem enable pocket devices to achieve ultra-fast 5G speeds, achieving once-inconceivable miniaturization.
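Those frequency plans are easier to keep straight in code form. The short Python sketch below simply restates the ranges quoted above and checks which module family would cover a given carrier frequency; the dictionary keys and the helper function are our own illustrative names, not anything from Qualcomm's documentation.

# Frequency ranges (GHz) for the two module families, as listed above
MODULE_BANDS = {
    "QTM052 (mmWave)": [(26.5, 29.5), (27.5, 28.35), (37.0, 40.0)],
    "QPM56xx (sub-6 GHz)": [(3.3, 4.2), (3.3, 3.8), (4.4, 5.0)],
}

def modules_covering(freq_ghz):
    """Return the module families whose listed ranges include freq_ghz."""
    return [name for name, ranges in MODULE_BANDS.items()
            if any(low <= freq_ghz <= high for low, high in ranges)]

print(modules_covering(28.0))  # ['QTM052 (mmWave)'] -- an early mmWave deployment frequency
print(modules_covering(3.5))   # ['QPM56xx (sub-6 GHz)'] -- a common mid-band 5G frequency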
While the sub-6 GHz module uses radio frequencies similar to existing wireless phones, the mmWave module is a significant breakthrough. Qualcomm notes that mmWave was once written off as too difficult to engineer into mobile devices, due to challenges with “materials, form-factor, industrial design, thermals, and regulatory requirements for radiated power.” But engineers saw its potential to speed up 5G networks, and persisted until they solved the engineering issues. As Qualcomm president Cristiano Amon explains: “Today’s announcement of the first commercial 5G NR mmWave antenna modules and sub-6 GHz RF modules for smartphones and other mobile devices represents a major milestone for the mobile industry … These type of modem-to-antenna solutions, spanning both mmWave and sub-6 spectrum bands, make mobile 5G networks and devices, especially smartphones, ready for large scale commercialization. With 5G, consumers can expect gigabit-class Internet speeds with unprecedented responsiveness in the palm of their hands, which stand to revolutionize the mobile experience.”
To deliver high throughput in dense urban areas and crowded indoor environments, QTM052 supports up to 800MHz of bandwidth, using advanced beam forming, beam steering, and beam tracking technologies to improve mmWave signaling. The module includes a 5G radio transceiver, power management IC, RF front-end, and phased antenna array, working with the Snapdragon X50 modem as a complete system.
Interestingly, up to four QTM052 modules can be placed in a single smartphone to improve the device’s resistance to signal attenuation and other interference. This provides OEMs with a brute-force alternative to get their first high-speed 5G devices on the market by early 2019, while letting engineers continue to work on more streamlined second-generation models.
Above: Qualcomm expects that some manufacturers will place four mmWave antenna modules in one housing to avoid signal loss.
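The value of packing up to four modules into one handset is easiest to see with a toy model: if a hand grip or an obstruction blocks one module, the modem simply leans on whichever module still reports a usable signal. The sketch below only illustrates that diversity idea, with made-up module names and signal readings; it is not Qualcomm's actual selection logic.

# Hypothetical per-module signal readings in dBm; None means the module is blocked
readings = {"top": -75.0, "left": None, "right": -92.0, "bottom": -101.0}

def pick_module(readings, floor_dbm=-100.0):
    """Choose the unblocked module with the strongest signal above a usable floor."""
    usable = {m: s for m, s in readings.items() if s is not None and s > floor_dbm}
    return max(usable, key=usable.get) if usable else None

print(pick_module(readings))  # 'top' -- the strongest of the modules that aren't blocked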
By contrast, the sub-6 GHz module family — spanning the QPM5650, QPM5651, QDM5650, and QDM5652 — will enable devices to access 5G networks in less densely populated, non-urban areas. While all four of these modules support the same sub-6 GHz bands, the P versions contain power amplifiers, while the D versions offer diversity support. They’re all designed to support massive MIMO transmissions, which use multiple antennas to achieve high data rates.
Qualcomm says that all of the new components are currently sampling to customers. They are expected to appear in the first 5G smartphones early next year, though Qualcomm has previously said that it’s working to assist some customers with device launches before then.
" |
16,253 | 2,018 | "Keysight spurs 5G and 6G research with 'largest' NYU Wireless donation | VentureBeat" | "https://venturebeat.com/2018/10/03/keysight-spurs-5g-and-6g-research-with-largest-nyu-wireless-donation" | "Keysight spurs 5G and 6G research with 'largest' NYU Wireless donation
NYU Wireless announced it has received the largest donation in its history, courtesy of Keysight Technologies. The organizations have a clear purpose in mind: to create next-generation 6G cellular technologies and further development of 5G.
As New York University’s wireless research arm, NYU Wireless helped pioneer the use of millimeter wave spectrum in upcoming cellular networks, eventually leading to pocket-sized devices that transmit on the same 20+ gigahertz frequencies as gigantic satellite dishes. Keysight’s donation is intended to enable future telecommunications devices to use even higher frequencies — 100 gigahertz, and eventually the terahertz range. Speeds with 6G are expected to be 1000 times faster than with 5G.
The donation includes equipment that can measure up to 110-gigahertz frequencies, enabling NYU Wireless researchers to develop uses of millimeter wave and terahertz electromagnetic spectra. Described as “cutting-edge,” the hardware includes millimeter wave and broadband signal analysis and generation tools, advanced time-domain analysis, and RF/millimeter wave power measurement capabilities. No dollar value was given for the donation, but it is the largest NYU Wireless has received and the largest in-kind donation ever made to NYU’s Tandon School of Engineering.
Teams of NYU Wireless researchers are already working on terahertz research, quantum devices, and “post-massive MIMO antennas,” innovations that will likely come into play a decade from now. Terahertz frequencies promise everything from high bandwidth to strong building penetrability and good directionality, including the ability to pass safely through people and non-conducting objects.
Engineers at Finland’s University of Oulu have been working on 6G for some time, and U.S. FCC Commissioner Jessica Rosenworcel recently discussed the contours of the future technology, noting that beyond its speed, it is expected to rely on mesh-style densified networks — base stations embedded in every piece of technology people use, potentially employing blockchain to facilitate dynamic spectrum sharing.
Like 5G, 6G faces a fundamental engineering challenge: miniaturizing presently large terahertz transmitters and receivers. Qualcomm and others spent years reducing millimeter wave components from the size of satellite dishes and airport body scanners to fingertip-sized modems and antennas.
There was also the matter of developing an international 5G standard , which was only recently finalized. The same processes will need to take place again for 6G, albeit with the benefit of lessons already learned.
Keysight and NYU Wireless expect their new research to impact communications, medical imaging, pharmaceutical monitoring, semiconductors, spectroscopy, and “synchronized clouds of ‘smart dust’ detectors.” In addition to the equipment donation, Keysight’s 5G program manager, Roger Nichols, and his team will serve as mentors to NYU Wireless’ student researchers, joining 5G pioneer Dr. Ted Rappaport as he returns to direct the unit’s research initiatives.
" |
16,254 | 2,019 | "FCC opens 95GHz to 3THz spectrum for '6G, 7G, or whatever' is next | VentureBeat" | "https://venturebeat.com/2019/03/15/fcc-opens-95ghz-to-3thz-spectrum-for-6g-7g-or-whatever-is-next" | "FCC opens 95GHz to 3THz spectrum for '6G, 7G, or whatever' is next Above: FCC Chair Ajit Pai.
Commercial 5G networks are barely operational in the United States right now, but that hasn’t stopped engineers from thinking ahead to 6G — and the U.S. government wants to facilitate their experiments over the next decade. After a unanimous vote , the FCC is opening “terahertz wave” spectrum for experimental purposes, creating legal ways for companies to test and sell post-5G wireless equipment.
The FCC’s Spectrum Horizons First Report and Order deals specifically with the 95 gigahertz (GHz) to 3 terahertz (THz) range — a collection of frequencies that aren’t currently being used in consumer devices, and have wide bandwidth with vast potential for data streaming. In addition to issuing 10-year licenses to experiment in that range, the FCC will offer a full 21.2GHz of spectrum for testing of unlicensed devices.
Collectively, that 95GHz to 3THz spectrum extends a little beyond the 300GHz to 3THz range defined as “tremendously high frequency.” At the lower end of the FCC’s range, 95GHz to 300GHz signals are technically still millimeter waves, as they’re at or over 1 millimeter in wavelength. But 300GHz to 3THz signals are at or under 1 millimeter in wavelength, and for that reason called “submillimeter waves.” Even by comparison with the 24GHz to 28GHz millimeter wave spectrum that’s currently being auctioned off by the FCC, the terahertz spectrum is considered bleeding-edge enough to be nearly science fiction. FCC Commissioner Michael O’Rielly said the nascence of terahertz technology made the vote feel “like designating zoning laws for the moon,” and noted his hesitance to “create a class of incumbents, who then have to be moved or protected in the future when this spectrum becomes of greater interest for 6G, 7G, or whatever the next-next-generation wonder technology may be.” But other commissioners, including Jessica Rosenworcel, were largely optimistic on the plans for the spectrum. “There is something undeniably cool about putting these stratospheric frequencies to use and converting their propagation challenges into opportunity,” she said. “This rulemaking gets that effort underway, so it has my support.” The “propagation challenges” she references are non-trivial. Like millimeter waves, submillimeter waves face issues such as limited transmitting distances and an inability to penetrate buildings. However, NYU Wireless Professor Ted Rappaport (via FierceWireless) said that the higher-frequency signals will perform better with directional antennas, and claimed that with the new spectrum, “you can start having data rates that approach the bandwidth needed to provide wireless cognition, where the computations of the human brain at those data rates could actually be sent on the fly over wireless. As such, you could have drones or robotics receive in real time the kind of perception and cognition that the human brain could do.”
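The millimeter/submillimeter terminology follows directly from the relationship between wavelength and frequency (wavelength = speed of light / frequency). A quick calculation, using the boundary frequencies named above, shows where the 1 millimeter line falls; the snippet is just arithmetic, not anything from the FCC order.

C = 299_792_458  # speed of light in meters per second

def wavelength_mm(freq_hz):
    """Wavelength in millimeters for a given frequency in hertz."""
    return C / freq_hz * 1_000

for label, freq in [("95 GHz", 95e9), ("300 GHz", 300e9), ("3 THz", 3e12)]:
    print(f"{label}: {wavelength_mm(freq):.2f} mm")
# 95 GHz:  3.16 mm (still a millimeter wave)
# 300 GHz: 1.00 mm (the millimeter/submillimeter boundary)
# 3 THz:   0.10 mm (a submillimeter wave)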
Actual applications of the technology are yet to be demonstrated, and in most cases even imagined. But the dreaming of what’s to come after 5G has already begun.
Keysight made a sizable donation to NYU last October to spur 6G development. Later this month, scientists and engineers will gather in Levi, Finland for a 6G Wireless Summit, in hopes of further defining what the standard could look like — when it comes together in or around the year 2030. Thanks to the FCC’s vote, it looks like the United States is ready to welcome the next generation of wireless experiments with open arms.
" |
16,255 | 2,019 | "Why 6G research is starting before we have 5G | VentureBeat" | "https://venturebeat.com/2019/03/21/6g-research-starting-before-5g" | "Why 6G research is starting before we have 5G
While much of the world is still wondering how long it will take to get 5G networks, and what it could mean to their lives and economies, a group of telecommunications researchers is looking further ahead to what comes after that: 6G.
Next week in Levi, Finland, a group of 250 researchers will gather for one of the first global summits on the 6G Wireless standard to begin asking the most basic of questions: What is it and why does the world need it? “I don’t know what 6G is,” said Dr. Ari Pouttu, a professor at the University of Oulu in Finland. “Nobody does.” That’s a blunt assessment from the man who is also vice-director of Finland’s 6G Flagship program.
During a recent visit to Oulu as part of a media tour of the region’s tech ecosystem, Pouttu gave a presentation to our group of journalists.
The country designated Oulu, located on the edge of the Baltic Sea about five hours north of Helsinki, as the center of its 6G efforts due to its historic connections to Nokia and its concentration of researchers, such as Pouttu, who were instrumental in developing the 5G standard.
The program runs over the next eight years, and is valued at about $285 million, with about half coming from public funding and the other half to be raised from industry partners.
Above: Dr. Ari Pouttu, professor at the University of Oulu
These efforts are just barely in an embryonic stage, and as in the past, would likely have stayed the stuff of obscure academic chatter were it not for a sudden spotlight thrown on them by the most unlikely of 6G boosters, Donald J. Trump (@realDonaldTrump), who tweeted on February 21, 2019: “I want 5G, and even 6G, technology in the United States as soon as possible. It is far more powerful, faster, and smarter than the current standard. American companies must step up their efforts, or get left behind. There is no reason that we should be lagging behind on………” While Trump was roundly mocked, and 6G remains undefined and at least 10 years or more in the future, it is also not just science fiction.
Today, 5G networks are just starting to roll out. The current 4G LTE standard will dominate for several more years, as telecom carriers seek to recoup their massive investments on that infrastructure. Pouttu projects current 4G networks won’t really be used to their full potential until about 2025.
Meanwhile, carriers are proceeding cautiously with 5G.
On one hand, Pouttu says the research community was surprised because some of the basic standards were settled much sooner than predicted. On the other hand, the rollout of 5G is going to be far more costly than 4G due to the short distances the signals can travel and the need for a greater density of equipment to transmit the signals. The capital costs are astronomically high, and the business models that would justify these investments are still fuzzy.
When 5G does become the dominant network, Pouttu says he expects it to be the most transformational leap since the evolution from 2G to 3G networks. Not only does 5G promise theoretical speeds of 20 Gbps compared to the max theoretical 1 Gbps for 4G, but latency drops to nearly nothing, and it supports a greater density of connections in a smaller area.
Coupled with advances in so-called “edge computing” that will push more intelligence toward end devices, the 5G era is being hyped for its ability to enable smart cities, smart factories, autonomous vehicles, untethered VR streaming , and more.
Pouttu says the next question he gets is a version of the same question he’s gotten for decades: “Why do we need 4G as we have 3G? Why do we need 5G as we have 4G?” And so the research for the next standard proceeds, he says, by trying to map out that question: Why do we need 6G? “We want to see what’s leftover from 5G, what things did 5G not address,” he says.
The most obvious starting points are speed and spectrum. The initial thinking is that 6G will target speeds of 1 terabyte per second. Yes, terabyte. To get those speeds, signals will need to be transmitted above 1 terahertz, compared to the measly gigahertz range where 5G operates.
But operating at that range in the spectrum may require breakthroughs in material research, new computing architectures, chip designs, and new ways of coupling that with energy sources, Pouttu says.
The sooner those experiments start, the better. The group hopes to produce a white paper by this summer following the Levi conference that will start to define critical areas of research.
Power generation and power consumption loom as massive hurdles, both in terms of the environment and cost. How can we move to a world where nearly every single object produced is constantly collecting, analyzing, and transmitting data without cost efficient, renewable power sources to ensure we don’t burn down the planet in the process? At the same time, the research group wants to start outlining possible use cases and future scenarios for the technology. While the 5G era is expected to make the smartphone less of a centerpiece of our lives than it is today, Pouttu speculates that 6G will be a post-smartphone era.
With everything capable of being connected, almost every object will be data driven, with true artificial intelligence capabilities a standard feature and augmented reality interfaces that pop up when needed and then disappear. The ability of all objects to capture and process visual data will be immense, and continue to accelerate automation and the evolution of AI.
The notion that we once had to carry a gadget to control other objects or communicate will seem quaint to the 6G generation, he says.
“The way we consume data will be changing,” Pouttu says. “Today, most of our data is consumed using a smartphone. But if we have virtual or augmented reality glasses with 5G, it could be these or other devices that are consuming that data. And with printable electronic devices, there are new machine-human interfaces coming really fast. So maybe let’s assume we toss away our phones and see what happens.” In that scenario, our relationship with our carrier is no longer buying a smartphone, but perhaps buying a base station and allowing each home or office building to in essence become its own communications operator for the massive number of devices and data flowing through this next-next-generation connectivity. Those purchases could be the way the network rollout for 6G is funded, with enough intelligence to share and buy and sell spectrum on a neighborhood level, Pouttu says.
Or not. For now, this is all just academic speculation.
Pouttu says each standard roughly takes a decade to develop, and so the formalizing of 6G standards is being targeted for 2029-2030. His research group projects the world will max out the use of 5G around 2035. And so with 6G standards and enabled devices rolling out around 2030, the timing for this transition should be just about right.
While that seems years away, there have been signs here and there that momentum around research is starting to build. Just last week, the U.S. Federal Communications Commission announced it was opening “terahertz wave” spectrum for experiments on next-generation standards.
And late last year, China’s government announced it was intensifying work on 6G, with a goal of dominating the industry by 2030. In January, LG announced the creation of a 6G research center in South Korea.
Those developments have helped break down some of the resistance to talking about 6G by carriers, Pouttu says. With carriers sinking huge sums of money into their 5G rollouts, they would prefer from a marketing message standpoint that the benefits not get muddled by talk of future standards.
“The industry doesn’t want to talk about 6G because it is diluting their message about 5G and their ability to make money from 5G,” Pouttu says. “We heard a lot of ironic comments about our efforts one year ago, because everyone thought it was too early. And then we heard China was going to launch a 6G program, and then Korea. Now attitudes are changing because no one wants to get left behind.” (Disclosure: VentureBeat’s travel to Oulu and accommodations were paid by Business Oulu as part of a media tour of the region’s tech ecosystem.)
" |
16,256 | 2,019 | "Researchers say 6G will stream human brain-caliber AI to wireless devices | VentureBeat" | "https://venturebeat.com/2019/06/14/researchers-say-6g-will-stream-human-brain-caliber-ai-to-wireless-devices" | "Researchers say 6G will stream human brain-caliber AI to wireless devices
As 5G networks continue to expand in cities and countries across the globe , key researchers have already started to lay the foundation for 6G deployments roughly a decade from now. This time, they say, the key selling point won’t be faster phones or wireless home internet service , but rather a range of advanced industrial and scientific applications — including wireless, real-time remote access to human brain-level AI computing.
That’s one of the more interesting takeaways from a new IEEE paper published by NYU Wireless’s pioneering researcher Dr. Ted Rappaport and colleagues, focused on applications for 100 gigahertz (GHz) to 3 terahertz (THz) wireless spectrum.
As prior cellular generations have continually expanded the use of radio spectrum from microwave frequencies up to millimeter wave frequencies, that “ submillimeter wave ” range is the last collection of seemingly safe, non-ionizing frequencies that can be used for communications before hitting optical, x-ray, gamma ray, and cosmic ray wavelengths.
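The "seemingly safe, non-ionizing" framing can be sanity-checked with the photon energy relation E = h × f: even at 3THz, each photon carries a tiny fraction of the roughly 10 electron-volts typically needed to ionize atoms and molecules. The threshold figure and the microwave-oven comparison below are our own illustrative choices, not figures from the IEEE paper.

H = 6.626e-34    # Planck's constant, joule-seconds
EV = 1.602e-19   # joules per electron-volt

def photon_energy_ev(freq_hz):
    """Photon energy in electron-volts at a given frequency."""
    return H * freq_hz / EV

print(f"3 THz photon: {photon_energy_ev(3e12):.4f} eV")        # ~0.0124 eV
print(f"2.45 GHz photon: {photon_energy_ev(2.45e9):.2e} eV")   # ~1e-05 eV (microwave oven band)
# Both sit several orders of magnitude below the ~10 eV scale where ionization begins.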
Dr. Rappaport’s team says that while 5G networks should eventually be able to deliver 100Gbps speeds , signal densification technology doesn’t yet exist to eclipse that rate — even on today’s millimeter wave bands, one of which offers access to bandwidth that’s akin to a 500-lane highway. Consequently, opening up the terahertz frequencies will provide gigantic swaths of new bandwidth for wireless use, enabling unthinkable quantities and types of data to be transferred in only a second.
The most relatable application would enable wireless devices to remotely transfer quantities of computational data comparable to a human brain in real time. As the researchers explain it, “terahertz frequencies will likely be the first wireless spectrum that can provide the real time computations needed for wireless remoting of human cognition.” Put another way, a wireless drone with limited on-board computing could be remotely guided by a server-sized AI as capable as a top human pilot, or a building could be assembled by machinery directed by computers far from the construction site.
Some of that might sound familiar, as similar remote control concepts are already in the works for 5G — but with human operators. The key with 6G is that all this computational heavy lifting would be done by human-class artificial intelligence, pushing vast amounts of observational and response data back and forth. By 2036, the researchers note, Moore’s law suggests that a computer with human brain-class computational power will be purchasable by end users for $1,000, the cost of a premium smartphone today; 6G would enable earlier access to this class of computer from anywhere.
Dr. Rappaport’s team also expects that the submillimeter wave spectra will enable enhancements of existing technologies, such as see-in-the-dark millimeter wave cameras, high-definition radar, and terahertz (rather than millimeter wave) security body scanning.
The incredibly high bandwidth will also enable a transition from reliance on fiber cable infrastructure to “wireless fiber” for network backhaul and datacenter connectivity.
There are, of course, significant practical challenges to overcome before 6G can move from theoretical to real, including miniaturization of the core technologies and health studies to confirm that terahertz frequencies are as safe as currently believed. Additionally, like millimeter wave transmissions, submillimeter wave frequencies will require highly directional antennas, in part because they’re highly susceptible to interference from the atmosphere, particularly above 800 GHz.
But the researchers note that overcoming those challenges, as was successfully accomplished with millimeter wave over the past decade, will lead to great benefits for users. Data transmissions will consume far less energy, and ultra-high gain antennas will be able to be made “extremely small.” That will pave the way for tinier devices, including military-grade secure communications links that are “exceedingly difficult” to intercept or eavesdrop upon.
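The "extremely small" antenna claim follows from the standard aperture-gain relation G = 4πA/λ²: for a fixed gain, the required antenna area shrinks with the square of the wavelength. The back-of-the-envelope comparison below, with a gain figure and frequencies of our own choosing, shows the scale of the effect.

import math

C = 299_792_458  # speed of light, m/s

def aperture_cm2(gain_dbi, freq_hz):
    """Effective aperture (cm^2) implied by G = 4*pi*A / lambda^2 for a given gain."""
    gain_linear = 10 ** (gain_dbi / 10)
    wavelength_m = C / freq_hz
    return gain_linear * wavelength_m**2 / (4 * math.pi) * 1e4

for freq in (28e9, 300e9):
    print(f"{freq / 1e9:.0f} GHz: {aperture_cm2(30, freq):.2f} cm^2 for a 30 dBi antenna")
# 28 GHz:  ~91 cm^2
# 300 GHz: ~0.8 cm^2 -- roughly 115 times smaller for the same gain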
In March, the FCC unanimously voted to open the 95GHz to 3THz range for “6G, 7G, or whatever is next,” though commissioners suggested the speculative uses of the frequencies at that point made the vote akin to “designating zoning laws for the moon.” Based on past history, Dr. Rappaport and others will be at the forefront of transitioning these concepts from science fiction to science fact — in the foreseeable if not immediate future.
" |
16,257 | 2,020 | "Docomo says 6G will bring AI everywhere, deliver 'extreme' performance | VentureBeat" | "https://venturebeat.com/2020/01/24/docomo-says-6g-will-bring-ai-everywhere-deliver-extreme-performance" | "Docomo says 6G will bring AI everywhere, deliver 'extreme' performance
While most of the world focuses on deploying and improving the performance of early 5G networks, small groups of telecom engineers are working to establish what 6G networks will bring a decade from now. As one of the earliest cellular pioneers, Japanese carrier NTT Docomo today weighed in with predictions on what 6G will look like in 2030, and they’re intriguing: 6G will bring AI everywhere, and transform 5G’s “best effort” performance standards into guarantees of hyper-fast speeds.
Docomo has an expansive view of AI’s role in the 5G and 6G eras, suggesting that AI will empower “cyber-physical fusion” by ingesting real-world data into digital networks, then processing it for human-benefiting outputs such as virtual people, transportation, and manufacturing. Rather than viewing AI as merely a technology to solve specific issues, Docomo equates it to the brain in a human body, with 5G or 6G serving as the nervous system that transmits data between the artificial brain and sensory organs — albeit on a much larger scale. “AI reproduces the real world in cyberspace and emulates it beyond the constraints of the real world,” explains Docomo, and over time, will enable computers to both predict the future and discover new knowledge.
The key to improving AI’s performance will be ultra-low latency, Docomo says, as AI will need to rapidly ingest real-world information and output it into feedback of value to users. Among other applications, Docomo expects that AI will enable communication between humans and things, such as a holographic virtual doctor talking directly with a patient. Reducing network latency from 5G’s sub-5-millisecond hopes and 1-millisecond target to below 1 millisecond everywhere will guarantee that such interactions feel seamless to people, no matter where they may be using wireless services.
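A rough speed-of-light calculation shows why sub-millisecond latency "everywhere" is as much a question of where servers sit as of radio design: ignoring all processing time, a round trip to a server more than about 150 kilometers away already consumes a full millisecond, which is one reason edge computing keeps appearing in these discussions. The distances below are our own illustration, not Docomo's.

LIGHT_KM_PER_MS = 299_792.458 / 1000  # light covers roughly 300 km per millisecond

def round_trip_ms(distance_km):
    """Pure propagation delay for a round trip over a given one-way distance."""
    return 2 * distance_km / LIGHT_KM_PER_MS

for km in (10, 150, 1000):
    print(f"{km} km one-way: {round_trip_ms(km):.3f} ms of propagation delay alone")
# 10 km:   0.067 ms
# 150 km:  1.001 ms -- the sub-millisecond budget is already gone
# 1000 km: 6.671 ms
# Signals in fiber travel about a third slower than this, so real budgets are tighter still.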
Docomo isn’t the only organization suggesting that AI will have a major role in the 6G era or forecasting that AI will be pervasive a decade from now. Last June, the IEEE suggested that 6G will give wireless devices access to human brain-level AI capabilities, with the heavy processing taking place on networked servers. As just one example, the IEEE suggested that drones could be remotely piloted by human-caliber AI, and building assembly coordinated by machines located far away from construction sites.
Above: Docomo’s graph of 1G through 6G technology shows the key technology underpinnings of each current standard and goals for future 6G networks. (NR = new radio; RAT = radio access technology; FFS = for future study) Some of Docomo’s other predictions similarly dovetail with what early 6G researchers elsewhere have suggested, including expansion of frequency support from 5G’s centimeter and millimeter wave frequencies into terahertz spectrum, plus related changes to the way networks are built and augmented with massive MIMO antenna systems. One goal, Docomo suggests, will be to go beyond 5G’s focus on improving download speeds at whatever the network’s best capabilities are at a given point, and deliver guaranteed multi-Gbps performance and ultra low latency for both downloads and uploads.
This change, Docomo suggests, will enable 6G to support not only current 5G use cases such as enhanced mobile broadband for consumers, massive machine communications for industry, and ultra-reliable low latency communications for transportation, but “extreme requirements for specific use cases” that need currently implausible combinations of bandwidth and latency. Moreover, Docomo expects 6G to deliver gigabit coverage everywhere, with peak speeds in excess of 100Gbps, while expanding cellular coverage in the sky, sea, and space.
Early 5G is currently delivering sub-gigabit speeds in many places , but with the goal of improvement over time.
One of the carrier’s interesting forecasts is that 6G will include support for radio frequency-based wireless charging, freeing battery-powered devices from the need to be manually charged. While Docomo doesn’t provide much detail on how this will be accomplished, several companies are already working on wireless charging solutions using low-frequency and millimeter wave signals , seemingly without the expectation of incorporating their innovations into future cellular standards.
NTT Docomo’s full set of predictions for 6G is included in this white paper.
You can compare and contrast the carrier’s views with those of early 6G researchers in Finland here.
" |
16,258 | 2,020 | "FCC unlocks 3.5GHz CBRS band, enables OnGo in Apple and Android phones | VentureBeat" | "https://venturebeat.com/2020/01/27/fcc-unlocks-3-5ghz-cbrs-band-enables-ongo-in-apple-and-android-phones" | "FCC unlocks 3.5GHz CBRS band, enables OnGo in Apple and Android phones
Following six years of private and public collaboration to open up some of the United States’ most valuable wireless spectrum, the Federal Communications Commission (FCC) today authorized full commercialization of OnGo services using the 3.5GHz CBRS band , a development that will enable the latest iPhones and Android phones to achieve faster data speeds in many parts of the country. The move will initially benefit 4G communications, but is expected to enhance 5G later this year.
Spectrum in the 3.5GHz band has been selected across the world as ideal for next-generation cellular services, thanks to its combination of reasonably long-distance range and solid chunks of available bandwidth. Within the “low,” “mid,” and “high band” ranges of radio frequencies , 3.5GHz is mid-band spectrum and is already being used for 5G in China , Europe , and South Korea , while the U.S. has focused until now on low and high band 5G frequencies.
Led by the 159-member CBRS Alliance and Wireless Innovation Forum , efforts to open 3.5GHz spectrum for public access required development of a sharing arrangement between existing users — most notably the U.S. Department of Defense (DoD) for naval purposes near the country’s coasts — and the broader public, including millions of potential users in both landlocked and coastal areas. Using special “spectrum access systems” operated by CommScope, Federated Wireless, Google, and Sony, radio frequencies from 3.55GHz to 3.7GHz can now be used by members of the public, subject to whatever priority access the government may demand on an as-needed basis.
In the event that the DoD needs access to the spectrum in a specific coastal area, a protection zone is activated, dynamically reassigning access system users within the zone to other frequencies. This lets prior government users immediately access the 3.5GHz band as needed, while making the previously locked-up spectrum generally available for consumer devices and future private networks. Supported phones will enjoy faster speeds and/or stronger signals on the 3.5GHz band, falling back to other spectrum in the event of DoD use.
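In code form, the reassignment behavior described above reduces to something like the toy model below: when a protection zone is activated, any user sitting on a protected channel inside that zone gets moved to whatever channels remain free. The real spectrum access systems run by CommScope, Federated Wireless, Google, and Sony involve propagation modeling, sensing, and tiered priorities, so treat this strictly as a simplified sketch with invented zone and device names.

# Toy model: per-zone channel assignments (MHz) within the 3,550-3,700 MHz band
assignments = {
    "zone_A": {"cbsd_1": 3560, "cbsd_2": 3580},
    "zone_B": {"cbsd_3": 3560},
}
ALL_CHANNELS = [3560, 3580, 3600, 3620, 3640, 3660, 3680]

def activate_protection(zone, protected_channels, assignments):
    """Move users in `zone` off protected channels and onto free ones (None = suspended)."""
    in_use = {ch for users in assignments.values() for ch in users.values()}
    free = [ch for ch in ALL_CHANNELS if ch not in in_use and ch not in protected_channels]
    for user, ch in assignments[zone].items():
        if ch in protected_channels:
            assignments[zone][user] = free.pop(0) if free else None
    return assignments

print(activate_protection("zone_A", {3560, 3580}, assignments))
# cbsd_1 and cbsd_2 shift to 3600 and 3620; zone_B, outside the activated zone, is untouched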
Current-generation smartphones including Apple’s iPhone 11s , Google’s Pixel 4 phones , LG’s G8 ThinQ , Motorola phones with the 5G Moto Mod , the OnePlus 7 Pro , and Samsung Galaxy S10 series all include support for the new band, which is being branded as OnGo. As additional devices are OnGo-certified, the CBRS Alliance expects that the band will also be used outside of the smartphone market, assisting rural broadband, enterprise IT, hospitality, retail, real estate, industrial IoT, and transportation initiatives.
" |
16,259 | 2,020 | "Samsung: Expect 6G in 2028, enabling mobile holograms and digital twins | VentureBeat" | "https://venturebeat.com/2020/07/14/samsung-expect-6g-in-2028-enabling-mobile-holograms-and-digital-twins" | "Samsung: Expect 6G in 2028, enabling mobile holograms and digital twins
Just as the earliest 5G networks began to go live two years ago, a handful of scientists were eager to publicize their initial work on the next-generation 6G standard , which was at best theoretical back then, and at worst an ill-timed distraction. But as 5G continues to roll out, 6G research continues, and today top mobile hardware developer Samsung is weighing in with predictions of what’s to come. Surprisingly, the South Korean company is preparing for early 6G to launch two years ahead of the commonly predicted 2030 timeframe , even though both the proposed use cases and the underlying technology are currently very shaky.
Given that the 5G standard already enabled massive boosts in data bandwidth and reductions in latency over 4G, the questions of what more 6G could offer — and why — are key to establishing the need for a new standard. On the “what” side, Samsung expects 6G to offer 50 times higher peak data rates than 5G, or 1,000Gbps, with a “user experienced data rate” of 1Gbps, plus support for 10 times more connected devices in a square kilometer. Additionally, Samsung is targeting air latency reductions from 5G’s under 1 millisecond to under 100 microseconds, a 100 times improvement in error-free reliability, and twice the energy efficiency of 5G.
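Working backward from the multipliers Samsung quotes gives a rough sense of the gap between the two standards; the 5G figures below are simply the 6G targets divided (or multiplied) by the stated improvement factors, and they happen to line up with the 20Gbps and 1-millisecond 5G targets cited elsewhere, so read this as arithmetic rather than an official comparison.

# Samsung's stated 6G targets and the improvement factors quoted above
peak_6g_gbps = 1000   # "1,000Gbps"
peak_factor = 50      # "50 times higher peak data rates than 5G"
latency_6g_us = 100   # "under 100 microseconds"
latency_factor = 10   # under 1 millisecond -> under 100 microseconds

print(f"Implied 5G peak rate: {peak_6g_gbps / peak_factor:.0f} Gbps")          # 20 Gbps
print(f"Implied 5G air latency: {latency_6g_us * latency_factor} us (1 ms)")   # 1000 us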
The obvious question is “why,” and it’s here that Samsung is either grasping or visionary, depending on your perspective. Some of 6G’s potential applications are clearly iterative, it notes, including faster broadband for mobile devices , ultra-reliable low latency communications for autonomous vehicles , and factory-scale automation.
Better performance of 5G’s key applications will appeal to some businesses and consumers, as will support for next-generation computer vision technologies that well exceed human perception: Samsung suggests that while the “human eye is limited to a maximum resolution of 1/150° and view angle of 200° in azimuth and 130° in zenith,” multi-camera machines will process data at resolutions, angles, wavelengths, and speeds that people can’t match, eating untold quantities of bandwidth as a result.
On the human side, Samsung also suggests that 6G will be needed to enable “truly immersive” extended reality, as next-generation XR headsets will need around 0.44Gbps throughput to power human retina-matching 16 million pixel displays; that’s more individual bandwidth than what 5G networks can guarantee. Similarly, Samsung expects that mobile and larger displays will begin to display actual volumetric holograms , requiring “at least” 580Gbps for a phone-sized 6.7-inch display, and “human-sized” holograms at several terabits per second.
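As a sanity check on that 0.44Gbps figure: a 16-million-pixel stream at 24 bits per pixel and 60 frames per second comes to roughly 23Gbps uncompressed, so Samsung's number implicitly assumes video compression on the order of 50:1. The bit depth and frame rate are our own assumptions, since the article does not spell out the white paper's exact parameters.

pixels = 16_000_000     # "human retina-matching 16 million pixel displays"
bits_per_pixel = 24     # assumed color depth
frames_per_second = 60  # assumed frame rate

raw_gbps = pixels * bits_per_pixel * frames_per_second / 1e9
print(f"Uncompressed stream: {raw_gbps:.1f} Gbps")                       # 23.0 Gbps
print(f"Implied compression to hit 0.44 Gbps: {raw_gbps / 0.44:.0f}:1")  # ~52:1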
To the extent that holographic displays are a known concept to many people, another “key” 6G application — “digital twins” or “digital replicas” — isn’t. Going forward, Samsung expects that people, objects, and places will be fully replicated digitally, enabling users “to explore and monitor the reality in a virtual world, without temporal or spatial constraints,” including one-way or two-way interactions between physical and digital twins. A human might use a digital twin to visit their office, seeing everything in digital form while relying on a robot for physical interactions. Duplicating a one-square-meter area in real time would require 800Gbps throughput, again well beyond 5G’s capacity.
Like other 6G researchers, Samsung is pinning its hopes for the massive necessary bandwidth on terahertz radio spectrum , which has a wide array of technical challenges ahead of commercialization, just as millimeter wave did before 5G launched. It’s also expecting AI to take on a more significant, system-level role in 6G, becoming explicitly required for every 6G component to enable distributed AI and split computing efficiency throughout every piece of the network.
At this stage, Samsung expects international standardization of 6G to begin in 2021, with the earliest commercialization happening “as early as 2028,” followed by “massive commercialization” around 2030. That would roughly parallel the accelerated timetable that saw 5G take eight years to go from concept to reality, as compared with 3G’s 15 years of development time. Between now and then, however, engineers will need to figure out ways to create even more massively dense antennas, improve radio spectral efficiency, and handle other novel or semi-novel issues introduced with terahertz waves. Only time will tell whether those challenges will be summarily overcome, as with 5G, or will hold 6G back as an engineering pipe dream until later in the next decade.
" |
16,260 | 2,021 | "U.S. will reallocate military 3.5GHz spectrum for consumer 5G in 2021 | VentureBeat" | "https://venturebeat.com/2020/08/10/u-s-will-reallocate-military-3-5ghz-spectrum-for-consumer-5g-in-2021" | "U.S. will reallocate military 3.5GHz spectrum for consumer 5G in 2021
Over the past three years, engineers and carriers across the world have largely agreed that 3.5GHz-adjacent radio spectrum is ideal for 5G deployments — a “mid band” compromise between the lower frequencies used by older cellular standards and the higher, shorter-distance millimeter wave frequencies.
But in the United States, the Department of Defense (DoD) has controlled much of the mid band spectrum, creating a tension between military and potential consumer applications. Today, the White House announced that the DoD has agreed to relinquish 100MHz of 3.5GHz spectrum for commercial use, a process that will augment U.S. 5G networks over the next two years.
The newly available DoD-reserved frequencies range from 3.45GHz to 3.55GHz, a sizable spectrum block that alone would let 5G users enjoy fast speeds without connecting to millimeter wave small cells. According to United States CTO Michael Kratsios, this 100MHz block was chosen because it can be “made available without sacrificing our nation’s great military and national security capabilities,” and will support towers and devices operating at “full commercial power levels” from coast to coast. Some mid band frequencies have been subject to regional power and priority restrictions based on existing military applications.
Importantly, however, the new block is adjacent to 430MHz of mid band spectrum that has previously been freed up in small parcels from various users. This means that U.S. 5G will be able to work on 3.45GHz through 3.98GHz mid band frequencies, as well as lower and higher band spectrum that’s available , putting 5G devices and users here on par with those in foreign countries.
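The arithmetic behind that contiguous block is simple but worth spelling out: the newly released 100MHz sits directly below the previously freed 430MHz, yielding 530MHz of continuous mid band spectrum.

new_block_ghz = (3.45, 3.55)       # released by the DoD
existing_block_ghz = (3.55, 3.98)  # previously freed in smaller parcels

total_mhz = (existing_block_ghz[1] - new_block_ghz[0]) * 1000
print(f"Contiguous mid band: {new_block_ghz[0]}-{existing_block_ghz[1]} GHz ({total_mhz:.0f} MHz)")
# Contiguous mid band: 3.45-3.98 GHz (530 MHz)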
A senior administration official explained that this block was designed to rapidly broaden the range of existing mid band frequencies for consumer use, turning what might have been a six-to-eight-year DoD/FCC approval slog into an 18-month turnaround. On a positive note, the FCC is apparently ready to follow the legally required timeline and process to auction the spectrum as quickly as possible. The bad news: That will result in a December 2021 auction, followed by mid-2022 consumer deployment of the spectrum.
Although the 3.45GHz to 3.55GHz frequencies represent a welcome step forward for mid band consumer 5G, the DoD still holds other frequencies — including a huge block of 3.1GHz to 3.45GHz spectrum — which are arguably better suited to consumer applications than to military use. The DoD’s release of the 100MHz block doesn’t mean that these other frequencies won’t eventually become available for consumer use, but rather that they weren’t prioritized for the fast-track approval process. Despite the uncertainty over future spectrum additions, it’s comforting to know that the U.S. shortage of consumer mid band 5G frequencies is being addressed, and that the military won’t just hoard spectrum forever.
" |
16,261 | 2,019 | "Qualcomm debuts Snapdragon X55 modem for slimmer, faster 5G smartphones | VentureBeat" | "https://venturebeat.com/2019/02/19/qualcomm-debuts-snapdragon-x55-modem-for-slimmer-faster-5g-smartphones" | "Qualcomm debuts Snapdragon X55 modem for slimmer, faster 5G smartphones Above: Qualcomm's new QTM525 5G millimeter wave antenna can fit inside slim phones with under 8mm thickness.
Backed by early 5G networks in the United States and South Korea, the first 5G devices are already in customers’ homes and hands — home routers and hotspots powered by Qualcomm’s first-generation 5G modem, the Snapdragon X50.
With roughly 30 devices slated to use the X50 this year, Qualcomm today announced its sequel, Snapdragon X55, to kick off a “second wave” of faster, slimmer, and more capable 5G products around the end of 2019.
When Qualcomm announced Snapdragon X50 back in 2017, 5G was still theoretical — standards weren’t finalized, pocket-sized devices hadn’t been tested, and regulators didn’t know which radio frequencies they would allocate to 5G services. Since then, all of those pieces have fallen substantially into place, revealing a need for smaller and more power-efficient chips, as well as broad support for many different radio frequencies.
Snapdragon X55 addresses those issues. Unlike the X50, which required a separate LTE modem, the X55 uses a single chip to support every cellular generation from 2G through 5G, as well as “virtually any” radio frequency in “any region” of the world. It uses the latest 7-nanometer manufacturing process, shrinking from 10-nanometer scale, which combined with other component improvements will cut energy consumption, allowing second-wave 5G devices to have smaller batteries.
The X55 will also be noticeably faster than its predecessor. In 5G mode, it offers a top download speed of 7Gbps, up from the X50’s roughly 5Gbps peak, plus a 3Gbps top upload speed. When on 4G networks, it can download at up to 2.5Gbps, which is 25 percent faster than the company’s standalone X24 LTE modem.
In addition to supporting both millimeter wave and sub-6GHz frequencies, it can operate in a 5G/4G spectrum-sharing mode so carriers can offer both types of service on the same radio frequencies.
Above: Qualcomm’s Snapdragon X55 5G modem.
Simultaneously, Qualcomm is rolling out its third-generation 5G antenna, the QTM525, specifically designed to fit inside smartphones thinner than 8mm. QTM525 is millimeter wave-specific, but goes beyond its predecessor by adding 26GHz support to the prior 28GHz and 39GHz bands. That will enable it to work on the fastest 5G networks currently planned for North America, South Korea, Japan, Europe, and Australia — a wide swath of territories.
Two related new components may sound even more obscure, but they’re equally important: the QET6100 5G Envelope Tracker and QAT3555 5G Adaptive Antenna Tuning Solution. In short, the QET6100 enables a 5G device to efficiently use a 100MHz chunk of radio spectrum at once, offering faster speeds and longer battery life, while the QAT3555 lets the device work on radio frequencies all the way from 600MHz to 6GHz, with greater power efficiency and a 25 percent smaller package. They enable the Snapdragon X55 to be the first modem capable of using these new radio features.
Collectively, the new parts will enable the second wave of consumer and enterprise 5G products to “meet the expectations that people have with regards to their smartphones and devices today,” Qualcomm said, which effectively means smaller, faster, and longer-lasting. The company expects that the new parts will be used in everything from phones to hotspots, fixed wireless routers, laptops, tablets, automotive applications, and enterprise devices.
Snapdragon X55 will be shown off from February 25-28 at the Mobile World Congress, alongside numerous 5G demos, including “boundless XR,” an enhanced ultra-reliable low-latency communication (eURLLC) test with 99.9999 percent reliability, and 5G cellular vehicle-to-everything (C-V2X) demos. The new modem is sampling to customers now, with the other components sampling in the first half of 2019. Initial products based on the second wave parts are expected to become available by the end of the year.
" |
16,262 | 2,020 | "Qualcomm’s 5nm Snapdragon X60 modem can use mmWave and sub-6GHz 5G together | VentureBeat" | "https://venturebeat.com/2020/02/18/qualcomms-5nm-snapdragon-x60-modem-can-use-mmwave-and-sub-6ghz-5g-together" | "
When historians look back on 5G devices, the first wave will likely be a footnote, the second wave will dovetail with the spread of the technology, and the third will be where 5G really shines. This roadmap became clearer today when Qualcomm unveiled its third-generation 5G modem, Snapdragon X60 , which will enable the next wave of premium phones to make considerably better use of 5G networks than earlier devices.
One of the X60’s multiple advantages over prior alternatives comes down to size: It’s the first 5G modem system with 5-nanometer baseband , shrinking below its predecessor X55’s 7-nanometer manufacturing process and the prior X50’s 10-nanometer design.
Each nanometer scale reduction enables a chip to be more energy-efficient than its predecessors, as well as allowing it to occupy less physical space, freeing phone designers to make smaller devices or include larger components, such as batteries.
Less physically obvious are two killer Snapdragon X60 features that promise to speed up third-wave phones: aggregation of millimeter wave and sub-6GHz signals and aggregation of multiple types of sub-6GHz signals. While first-wave 5G phones could connect to either millimeter wave or sub-6GHz 5G networks, and second-wave phones could connect to both — but not at once — the X60 can combine data from different frequency bands, resulting in additive speeds.
In other words, if your cellular carrier has nearby millimeter wave and sub-6GHz towers with available bandwidth, the Snapdragon X60 will be able to receive data from both at once. It will also be able to take advantage of your provider’s separate FDD and TDD sub-6GHz streams, aggregating their data. So while the X60’s peak 7.5Gbps download and 3Gbps upload speeds aren’t meaningfully higher than the X55’s, users will actually come closer to those peaks, thanks to mix-and-match aggregation. Qualcomm expects the X60 to double standalone 5G speeds when using sub-6GHz networks and help carriers move their early non-standalone 5G towers to standalone mode.
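To make the idea of additive speeds concrete, here is a minimal Python sketch of how aggregated throughput stacks up across component carriers. The per-band peak rates are illustrative assumptions for a hypothetical network, not Qualcomm or carrier figures.

```python
# Illustrative carrier aggregation math: the per-band peaks below are
# assumed example values, not Qualcomm specifications.
component_carriers = {
    "mmWave (e.g., 28GHz, 400MHz)": 2.0,          # Gbps, assumed
    "sub-6GHz TDD (e.g., 3.5GHz, 100MHz)": 0.9,   # Gbps, assumed
    "sub-6GHz FDD (e.g., 2.1GHz, 20MHz)": 0.15,   # Gbps, assumed
}

aggregate_gbps = sum(component_carriers.values())
for band, rate in component_carriers.items():
    print(f"{band}: {rate} Gbps")
print(f"Aggregated downlink under ideal conditions: {aggregate_gbps:.2f} Gbps")
```

Real-world throughput falls well short of such sums because of signal conditions and network load, but the aggregation principle is the same: each added band contributes its own capacity.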
The Snapdragon X60 also includes Voice over NR (VoNR) support to enable phone calls to take place over the 5G network rather than falling back to 4G or 3G.
And it will be compatible with a third-generation 5G antenna module, the QTM535, which is narrower than its QTM525 predecessor , enabling even sleeker premium phone designs. OEMs using the X60 will be able to choose between either antenna module, but the newer one promises improved millimeter wave performance, including global mmWave support for 26GHz, 28GHz, and 39GHz bands that will be used in Australia, Europe, Japan, North America, and South Korea.
Despite a looming EU investigation over the tying of its 5G modems to its RF front-end solutions, Qualcomm is again marketing the Snapdragon X60 as a “5G modem-RF system,” including multiple components that it says work best together, rather than having parts cobbled together from competing vendors. The performance benefits are expected to be particularly noticeable where mmWave support and energy efficiency are concerned.
One question mark at this point is whether the Snapdragon X60 will be backward-compatible with earlier Snapdragon system-on-chip solutions. While the recently announced Snapdragon 865 conspicuously lacked direct integration with a 5G modem, instead relying on a separate X55 modem for cellular connectivity, the X60 is not yet guaranteed to be an option for that “premium” SoC — or less expensive ones. Qualcomm says only that the X60 will become available to developers for initial testing during this quarter and will start appearing in devices in “early 2021,” which will likely dovetail with the release of a Snapdragon 865 sequel.
" |
16,263 | 2,020 | "Samsung demos 'full potential' of 5G mmWave with 8.5Gbps small cell | VentureBeat" | "https://venturebeat.com/2020/04/14/samsung-demos-full-potential-of-5g-mmwave-with-8-5gbps-small-cell" | "
Most 5G networks currently use the same low and mid band radio frequencies that were common in the 4G era, but going forward, high band millimeter wave (mmWave) radios promise to deliver much faster speeds — assuming carriers deploy the thousands of “small cells” required to cover large areas with the short-distance radio signals. Today, Samsung is quantifying 5G mmWave’s incredible performance with the results of a new test: A small cell delivered a whopping 8.5Gbps throughput, comparable to fiber cable speeds.
While Samsung’s result comes with caveats, foremost that the 8.5Gbps number was split across two devices that each peaked at roughly 4.3Gbps, the key is that a single 5G mmWave small cell will let carriers transfer vast quantities of data to multiple client devices at once. To achieve the new throughput milestone, Samsung used the 28GHz-capable 5G NR Radio Access Unit announced at MWC Los Angeles in October , with a 1,024-antenna MU-MIMO system and 800MHz of mmWave spectrum.
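As a rough back-of-the-envelope check (the math below is my own, not Samsung's published analysis), dividing the reported aggregate throughput by the occupied spectrum gives the cell's effective spectral efficiency:

```python
# Rough spectral-efficiency check using only the figures reported above.
aggregate_throughput_bps = 8.5e9   # 8.5 Gbps combined across both devices
occupied_spectrum_hz = 800e6       # 800 MHz of mmWave spectrum
per_device_bps = 4.3e9             # ~4.3 Gbps peak per device

efficiency = aggregate_throughput_bps / occupied_spectrum_hz
print(f"Aggregate cell spectral efficiency: {efficiency:.1f} bit/s/Hz")
print(f"Per-device share of the cell: {per_device_bps / 1e9:.1f} Gbps")
```

That works out to roughly 10.6 bits per second per hertz across the cell, which is why a single small cell can plausibly serve fiber-class speeds to multiple users at once.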
The 8.5Gbps number is good news for carriers, enterprises, and consumers, as it heralds an age where relatively compact cellular radios will provide ultra-high speed wireless service in urban environments and business campuses — potentially including private corporate 5G networks.
Some carriers also plan to use mmWave in large gathering places, such as sports stadiums and concert venues, splitting high-bandwidth connections across hundreds of people.
Major U.S. carriers are all in the process of bulking up their mmWave spectrum holdings to take advantage of 5G small cells. Verizon is already using 400MHz of spectrum in some markets to deliver real-world speeds of up to 1.5Gbps to early 5G smartphone and home broadband users. AT&T and T-Mobile have been less aggressive than Verizon in deploying consumer mmWave services, but all three carriers expect to use mmWave in highly dense settings.
Absent mmWave small cells , 5G carriers rely upon low and mid band frequencies, the same general ranges used by 4G devices and Wi-Fi routers, where real-world peak speeds tend to be in the sub-300Mbps to sub-1Gbps range. Engineers have long maintained that the biggest speed gains will come from mmWave radios, specifically because they can rely upon large chunks of radio spectrum that were previously considered unusable due to their one mile or shorter typical transmission ranges.
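A simple free-space path loss comparison helps explain those short ranges. The sketch below ignores beamforming gain, rain, foliage, and body blockage (all of which matter in practice), so treat it as an order-of-magnitude illustration rather than a link budget:

```python
import math

def free_space_path_loss_db(distance_m: float, frequency_hz: float) -> float:
    """Free-space path loss in dB; the constant 20*log10(4*pi/c) is about -147.55."""
    return 20 * math.log10(distance_m) + 20 * math.log10(frequency_hz) - 147.55

for freq_ghz in (0.6, 3.5, 28.0):
    loss = free_space_path_loss_db(100, freq_ghz * 1e9)
    print(f"{freq_ghz:>5.1f} GHz at 100 m: {loss:.1f} dB")

# 28 GHz suffers about 33 dB more free-space loss than 600 MHz over the
# same distance, before any blockage is considered, hence the need for small cells.
print(f"Extra loss at 28 GHz vs. 600 MHz: {20 * math.log10(28 / 0.6):.1f} dB")
```

Highly directional antennas claw back some of that deficit, but the physics is why millimeter wave coverage is measured in blocks rather than miles.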
Samsung has already sold over 6.7 million 5G phones , and is generally considered to be the world’s leader in consumer 5G devices. Its 5G networking hardware is being used by some carriers in South Korea, the United States, and Japan, alongside competing offerings from Ericsson, Nokia, and Huawei.
" |
16,264 | 2,020 | "BBB blasts Verizon for 5G ads, says coverage claims mislead customers | VentureBeat" | "https://venturebeat.com/2020/05/14/bbb-blasts-verizon-for-5g-ads-says-coverage-claims-mislead-customers" | "Verizon shows off its 5G network in Los Angeles, California at Mobile World Congress L.A.
While rivals AT&T and T-Mobile were racing to launch nearly national 5G networks last year, Verizon oddly focused on rollouts at sports stadiums and arenas — places where it could show off its high-speed, short-distance millimeter wave technology. Now the Better Business Bureau’s (BBB) advertising arm is taking Verizon to task for using those stadium installations as proxies to overhype its modest 5G expansion elsewhere.
This morning, the BBB’s National Advertising Division (NAD) told Verizon to stop claiming that it was “building the most powerful 5G experience for America” and recommended that the carrier make clear and conspicuous disclosures to consumers about the limited actual availability of its 5G network. While Verizon has agreed to update its disclosures, it will appeal the NAD’s objection on its network construction claims to the National Advertising Review Board.
The NAD action comes at a critical juncture for U.S. 5G deployments, which have thus far failed to deliver on the 5G standard’s bold promises. Consumers and businesses were promised 10 to 100 times faster mobile and fixed broadband speeds, as well as single-digit millisecond latency and remote industrial benefits, sparking widespread interest in the new technology.
But early U.S. 5G networks have ranged from tiny (Verizon) to fuzzy (AT&T) to large and slow (T-Mobile), depriving most potential customers of those expected benefits, despite general eagerness to market “5G” as a reason to invest in new devices and services. Meanwhile, similar networks elsewhere in the world are delivering much faster average speeds, in some cases covering much larger percentages of the population.
Verizon’s rollout has been particularly unusual. As one of the two largest cellular carriers in the United States, it has over 100 million wireless customers and has aggressively promoted 5G for two years. Yet Verizon’s 5G rollouts have been almost comically spotty , covering only “parts” of over 30 cities , with outside estimates of roughly 3% coverage in some supposedly served areas. In some cases, Verizon’s 5G is available largely in stadiums or airports — retrospectively bad bets in 2020 — but the ads cited by NAD implied that the carrier’s “powerful” 5G performance was widely available elsewhere and would be coming wherever the ads were being shown. To the extent Verizon offered disclaimers in ads, they were obscured by text-neutralizing colors and fast-moving video.
Somewhat surprisingly, the initial complainant on the ads was chief rival AT&T, which brought the claims to NAD — a division of the BBB’s industry self-regulating organization BBB National Programs — for resolution. AT&T has created its own 5G-related headaches, underdelivering on a consumer 5G network last year , and mis-marketing its 4G LTE Advanced network to consumers as “5G Evolution” in the lead-up to offering actual 5G network access. Sprint, now owned by T-Mobile, took AT&T to court over 5GE “deception,” ultimately settling.
It remains to be seen how Verizon will actually improve its 5G network this year. The carrier currently offers super simplified “5G coverage maps” for 34 cities, with only one — San Diego — identified as “coming soon,” though Verizon is promising some coverage in 60 U.S. cities by year’s end. CEO Hans Vestberg said this week that expansion of Verizon’s initially hyped 5G Home broadband service will take place this fall, pending delivery of 5G modems with longer-range Qualcomm millimeter wave hardware.
The carrier has also committed to rolling out a slower version of 5G that will share its existing 4G spectrum with 5G devices , though the timing of that is still ambiguous.
" |
16,265 | 2,019 | "Medal of Honor: Above and Beyond hands-on -- Advancing VR combat on the Oculus Rift | VentureBeat" | "https://venturebeat.com/2019/09/25/medal-of-honor-above-and-beyond-hands-on-with-vr-combat-on-the-oculus-rift" | "Medal of Honor: Above and Beyond is coming to VR.
Oculus and Respawn Entertainment have finally revealed their secret project as Medal of Honor: Above and Beyond, a big-budget virtual reality game coming in 2020 for the Oculus Rift and Rift S VR headsets.
The first-person shooter game revealed today at Oculus Connect 6 in San Jose, California, is perhaps the biggest effort underway to prove the fledgling interactive medium of VR, with more than 180 people working on it for as much as three years. The game is highly anticipated in part because it comes from the respected studio Respawn, maker of Apex Legends and Titanfall , and it is the first installment in the Medal of Honor series since 2012’s Medal of Honor: Warfighter.
I played a preview of the World War II game for an hour or so, and it was a sweaty affair. Playing in VR is a lot different from sitting on a couch with a controller. It’s more of a physical game, said Peter Hirschmann, Above and Beyond’s director, in a press interview. The game will have 50 levels in single-player mode and it will also have multiplayer (to be described later).
“With Medal of Honor: Above and Beyond, I think we’re finally able to, to fully realize that vision from 20 years ago with putting you in the boots and allowing you to see through the eyes of someone who was actually there,” Hirschmann said. “And that’s been the most exciting and fulfilling part of the project.” Above: Michael Doren of Oculus and Peter Hirschmann of Respawn Entertainment.
You can catch a grenade thrown by a Nazi soldier, but you do so by putting your hand in the air and clicking on a button on a motion controller. And you can look around a corner by moving your body and peeking around it. If you want to load your gun, you grab for a cartridge on your belt with your hand and then click it into place in the gun being held in the other hand.
“This is all about analog warfare,” he said. “Bullets and health really matter. The adrenaline kicks in and you have to make some very serious decisions in the moment. It’s up close and personal. We certainly have moments of spectacle in the game. But when you can see the faces of the enemy and the insignia of their uniforms, that’s where VR really pays off.” Hirschmann said the project started nearly three years ago, when Respawn was still an independent company and was talking with partner Electronic Arts (which eventually bought the studio) about working on a Medal of Honor game. Many of Respawn’s people, like CEO Vince Zampella, had worked on the original Medal of Honor two decades earlier.
The team began the work, but it also got a pitch from Michael Doran, a producer at Oculus, about making a VR game. The Respawn team liked that idea so much that they decided to work on Medal of Honor: Above and Beyond, a VR title. The partners announced the title two years ago at OC4.
Above: Medal of Honor: Above and Beyond In the game, you play an operative in the Office of Strategic Services, the WWII forerunner of the Central Intelligence Agency. As such, you can travel to a lot of different hot spots in the war, from the German heavy water research facility in Norway to a French Resistance operation in Paris. You get to take part in the Omaha Beach landings and the biggest moments of the war.
The game is mostly about action, without a moment to relax in the middle of various war zones. It starts in a village in occupied France, where the Resistance plays records to send coded messages. I donned the headset and hand controllers and dove in.
I had to move around using the thumbstick. You won’t see any teleporting in the game, as the team found that running the graphics at 90 hertz helped with motion sickness. Instead of climbing stairs, however, you just go up to the stairway and the scene shifts to the next floor.
I searched for documents and picked them up with my hand, pulling a trigger or pushing a button to make it happen. I then went to a window, picked up a record, put it in the phonograph, picked up a gun and started shooting at the Nazis outside the window.
Above: Medal of Honor: Above and Beyond The shooting with the motion vibration in the controllers felt pretty good, but it took some time to get used to the reloading process. That took a lot longer than it did in a normal video game, so it left me exposed and not moving out in the open sometimes. That made me a target, and the enemies would take me out.
So that taught me to seek out cover, especially when I was reloading. But I could also more easily fight by peeking around the corners. I could also do blind fire by holding my arm up high and twisting my wrist so that I could fire over barriers, albeit with less accurate aim. Sometimes, when I popped my head over the barrier, I would see a body and realize the blind fire did the job.
Hirschmann said that players will be able to discover this level of immersion and possibilities for unexpected gameplay over time. The combat is like other Medal of Honor games in that it is about moment-to-moment encounters, pitting you in tactical situations against one or more enemies at close quarters.
Above: Yes, you can throw a pot at a Nazi in Medal of Honor: Above and Beyond.
And it’s pretty well done. With force-feedback in your hands, the guns feel realistic when they’re blazing. I played another level where I had to clear an ornate French palace full of Nazis, room by room, in the middle of Paris. From the kitchens to the commander’s office, I had to fight every step of the way there.
The goal was to get to a safe, find a list of Resistance members that the Nazis captured, and burn it. When I finally got there, I had to go through the motions of striking a lighter and setting the paper on fire. A cute touch: My name was on the list.
Is it too real? Above: The opening mission of Medal of Honor: Above and Beyond. You are in the French Resistance.
Some people have expressed the fear that shooting games in VR would seem too realistic and frightening to play, a la the old stereotype that games would become “murder simulators.” I didn’t feel that was the case. In part, the physics of the game felt a bit antiquated or exaggerated.
If you tossed a grenade at a Nazi, it would explode and send the body flying in the air, as if they were shot out of a cannon. That deflated the sense of realism. And while you see blood and some tense combat like knifing enemies, it didn’t seem excessively grisly. On top of that, the enemies were Nazis. They weren’t civilians, and you are mostly firing in self-defense.
The tone of the game is respectful, and the violence doesn’t seem over-the-top or needlessly celebratory. You see just enough violence and intensity to give you a real sense of the gravity of combat and the need for action. The graphics are not like Battlefield V. It’s a bit more cartoony than that, if that’s a word.
360-degree VR veteran videos Above: Oculus and Respawn interviewed veterans like Frank for Medal of Honor: Above and Beyond’s bonus content.
Some things that lent authenticity to the game were the bonus VR videos that you get once you reach a certain threshold of completion. The team connected with WWII veterans, including some who landed on D-Day in Normandy. A nonprofit dubbed Honor Flight, which flies veterans back to the combat zones, helped make the connections.
One of the veterans, Frank, now in his 90s, traveled with the Respawn team to the 82nd Airborne Division’s paratrooper landing zones. He found the exact field in France where his glider crash-landed in 1944. And the veteran had a tearful reunion with a French farmer who had been forced to bury the dead Americans in the field. Eventually, locals buried 6,000 bodies in the makeshift cemetery. That was really moving, and it did more to establish the game’s authentic feel.
Living up to a legacy Above: You play an OSS operative in Medal of Honor: Above and Beyond.
While that is no doubt a part of the marketing of a World War II game (we’ve seen so many of them by now), Hirschmann is right about one thing. And that is that the modern video game has become the way that people learn about their history. Steven Spielberg’s Saving Private Ryan delivered the shock of realism to a generation that did not remember the war. But that film came out way back in 1998.
Now it’s the responsibility of the game makers, not the filmmakers, to deliver the knowledge and learning of history in a memorable way. In one way, the game also tells a more complete version of history, as it represents female fighters, particularly in the French Resistance, as well as people of color, in a way that we have not seen before.
Above: You will feel what it’s like to be in a Sherman tank in Medal of Honor: Above and Beyond.
To educate a new generation about World War II is a heavy burden for a game that is all about fun. When fun and history collide in such a game, fun wins out, Hirschmann said. But it will be good to see this game come out and make an impression on young people who don’t know the sacrifices made by an older generation.
The game could use some improvements. I think it will be good and more polished by the time it comes out in 2020. It will have some of the drawbacks of VR, such as animations that are good but not necessarily as good as those in the best high-end PC games. The headset will get hot and sweaty if you play for a long time. And the physics and animations may not seem as realistic.
But I have no doubt the immersion of VR will be able to make up for a lot of that. The market will decide. But it is gratifying that a couple of the biggest names in gaming are collaborating on this project with the hope of moving VR closer to a mass market.
" |
16,266 | 2,020 | "Microsoft launches Project Bonsai, an AI development platform for industrial systems | VentureBeat" | "https://venturebeat.com/2020/05/19/microsoft-launches-project-bonsai-an-ai-development-platform-for-industrial-systems" | "
Microsoft announced the public preview of Project Bonsai , a platform for building autonomous industrial control systems, during its Build 2020 online conference. The company also debuted an experimental platform called Project Moab that’s designed to familiarize engineers and developers with Bonsai’s functionality.
Project Bonsai is a “machine teaching” service that combines machine learning, calibration, and optimization to bring autonomy to the control systems at the heart of robotic arms, bulldozer blades, forklifts, underground drills, rescue vehicles, wind and solar farms, and more. Control systems form a core component of machinery across sectors like manufacturing, chemical processing, construction, energy, and mining, helping manage everything from electrical substations and HVAC installations to fleets of factory floor robots. But developing AI and machine learning algorithms atop them — algorithms that could tackle processes previously too challenging to automate — requires expertise.
Project Bonsai attempts to marry this expertise with a powerful simulation toolkit hosted on Microsoft Azure.
Ramping up industries At a high level, Project Bonsai’s aim is to hasten the arrival of “Industry 4.0,” an industrial transformation Microsoft defines as the infusion of intelligence, connectivity, and automation into the physical world. Beyond new technology, Industry 4.0 entails new ecosystems and strategies that leverage AI to great gain. Microsoft cites a World Economic Forum study that found 50% of organizations embracing AI within the next seven years might double their cash flow.
For manufacturers in the transitional phase, often the end goal is to attain “prescriptive” intelligence, where adaptive, self-optimizing technology and processes help equipment and machinery adjust to changing inputs and conditions. Existing control systems have a limitation in that they operate on a set of deterministic instructions within predictable, unchanging environments. Next-generation control systems tap AI to go beyond basic automation, adjusting in real time to changing environments or inputs and even optimizing toward multiple goals.
Project Bonsai is designed to create these systems, which also adopt a combination of digital feedback loops and human experience to inform actions and recommendations. Historical data drives particular operations and product improvements, enabling systems to complete tasks like calibration more quickly and precisely than human operators.
Machine teaching and simulation Project Bonsai is an outgrowth of Microsoft’s 2018 acquisition of Berkeley, California-based Bonsai, which previously received funding from the company’s venture capital arm M12.
Bonsai is the brainchild of former Microsoft engineers Keen Browne and Mark Hammond, who’s now the general manager of business AI at Microsoft. The pair developed an approach on Google’s TensorFlow framework that abstracts low-level AI mechanics, enabling subject-matter experts to train autonomous systems to achieve goals — regardless of AI aptitude.
In September 2017, Bonsai established a new benchmark for autonomous industrial control systems, successfully training a robot arm to grasp and stack blocks in simulation. It performed a claimed 45 times faster than a comparable approach from Alphabet’s DeepMind.
Microsoft refers to the abstraction process as machine teaching. Its central tenet is problem-solving by breaking down workloads into simpler concepts (or subconcepts) and then individually training them before combining them. This technique is also known as hierarchical deep reinforcement learning, in which an AI learns by executing decisions and receiving rewards for actions that bring it closer to a goal. The company claims this technique can decrease training time while allowing developers to reuse concepts.
For example, in a warehouse and logistics scenario, an engineering team could use machine teaching to train autonomous forklifts. Engineers would start with simpler skills like aligning with a pallet, and building on that they’d teach the forklift to drive toward the pallet, pick it up, and set it down. Ultimately, the autonomous forklift would learn to detect other people and equipment and return to its charging station.
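The decomposition can be pictured in ordinary code. The sketch below is a loose Python illustration of the concept-composition idea only; the state keys, thresholds, and rules are invented for this example, and in the actual platform each concept would be a model trained in simulation rather than a hand-written function.

```python
# Hypothetical illustration of machine-teaching-style decomposition
# (not Inkling and not Microsoft's API): each subconcept handles one skill,
# and a simple selector composes them into the full forklift task.

def align_with_pallet(state):      # subconcept 1: line up with the pallet
    return {"steer": -state["pallet_offset"]}

def drive_to_pallet(state):        # subconcept 2: approach once aligned
    return {"throttle": min(1.0, state["pallet_distance"] / 5.0)}

def lift_pallet(state):            # subconcept 3: pick up when close enough
    return {"fork": "raise"}

def avoid_people(state):           # safety concept that overrides the rest
    return {"throttle": 0.0} if state["person_nearby"] else None

def forklift_policy(state):
    override = avoid_people(state)
    if override is not None:
        return override
    if abs(state["pallet_offset"]) > 0.1:
        return align_with_pallet(state)
    if state["pallet_distance"] >= 0.2:
        return drive_to_pallet(state)
    return lift_pallet(state)

print(forklift_policy({"pallet_offset": 0.5, "pallet_distance": 3.0, "person_nearby": False}))
```

The appeal of training each concept separately is that a skill like pallet alignment can be validated, and reused, on its own before it is composed into the larger behavior.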
“There’s a joke in reinforcement learning among researchers that goes something like this: If you have a problem and you model it like a reinforcement learning problem, now you have two problems,” Microsoft CVP Gurdeep Pall told VentureBeat in a phone interview. “It’s a very complex field. It’s not just about selecting the right algorithm — continuous versus discrete, on-policy versus off-policy, model-based versus model-free, and hybrid models — but rewards.” As Pall explained, rewards in reinforcement learning describe every correct step that an AI tries. Crafting these rewards — which must be expressed mathematically — is difficult because they have to capture every nuance of multistep tasks. And improperly crafted rewards can result in catastrophic forgetting, where a model completely and abruptly forgets the information it previously learned.
“What machine teaching does is that it takes a lot of these hard problems and really puts the problem on rails. It constrains how you specify the problem,” added Pall. “The [Bonsai platform] automatically selects the algorithm and [parameters] … from a whole suite of options, and it has abstraction goals, which rather than requiring a user to specify a reward, instead has them specify the outcome they want to achieve. Given a state space and this outcome, we automatically figure out a reward function against which we train the reinforcement learning algorithm.” Project Bonsai’s general-purpose reinforcement learning platform orchestrates AI model development. It provides access to algorithms and infrastructure both for model deployment and training, and it allows models to be deployed on-premises, on-device, or in the cloud with support for simulators like MATLAB Simulink, Transys, Gazebo, and AnyLogic. (On-premises deployments require a controller companion to interface with the controller computer in real time.) From a dashboard, Bonsai customers can view all active jobs — called BRAINs — as well as their training status and ways to debug, inspect, and refine models. And they can collaborate with colleagues to build and deploy new models.
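The difference between hand-crafting a reward and stating an outcome can be sketched in a few lines of Python. Everything below is an invented illustration of the general idea; it is not Inkling syntax and not how Bonsai derives rewards internally.

```python
# 1) Classic reinforcement learning: the engineer hand-writes a scalar reward
#    and must get every weight right, or training optimizes the wrong thing.
def hand_crafted_reward(state):
    return (-2.0 * abs(state["offset"])         # penalize misalignment
            - 0.5 * abs(state["speed_error"])   # penalize wrong speed
            + (10.0 if state["task_done"] else 0.0))

# 2) Goal-style specification: the user only states what to reach and what to
#    avoid; the platform turns that into a reward and termination scheme.
GOAL = {"reach": lambda s: abs(s["offset"]) < 0.05 and s["task_done"],
        "avoid": lambda s: s["collision"]}

def derived_reward(state, goal=GOAL):
    if goal["avoid"](state):
        return -10.0   # terminal penalty for violating an "avoid" condition
    if goal["reach"](state):
        return 1.0     # terminal bonus for achieving the stated outcome
    return -0.01       # small per-step cost nudges the agent to finish quickly

state = {"offset": 0.2, "speed_error": 0.1, "task_done": False, "collision": False}
print(hand_crafted_reward(state), derived_reward(state))
```

The second style is what the quote describes: the user supplies the state space and the desired outcome, and the reward bookkeeping is generated rather than hand-tuned.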
It’s a largely hands-off process. After concepts are programmed into a model using Project Bonsai’s special-purpose programming language, Inkling, the code is combined with a simulation of a real-world system and fed into the Bonsai AI Engine for training. The engine automatically selects the best algorithm to train a model, laying out the neural networks and tuning their parameters. And the platform runs multiple simulations in parallel to reduce training time, streaming predictions from trained models to software or hardware through Bonsai-provided libraries.
Bonsai adopts a “digital twin” approach to simulation — an approach that has gained currency in other domains. For instance, London-based SenSat helps clients in construction, mining, energy, and other industries create models of locations relevant to projects they’re working on, translating the real world into a version that can be understood by machines. GE offers technology that allows companies to model digital twins of actual machines, whose performance is closely tracked. Oracle has services that rely on virtual representations of objects, equipment, and work environments. And Microsoft itself provides Azure Digital Twins , which models the relationships and interactions between people, places, and devices in simulated environments.
Within Project Bonsai’s platform, a model learning to control a bulldozer, for instance, would receive information about the variables in the simulated environment — like the type of dirt or proximity of people walking nearby — before deciding on actions. These decisions would improve over time to maximize the reward, and domain experts could tweak the system to arrive at a solution that works.
It’s akin to — albeit ostensibly easier to use than — Microsoft’s AirSim framework for Unity, which taps machine learning to simulate environments with realistic physics for systems-testing drones, cars, and more. Like the Project Bonsai platform, it’s intended to be used as a safe, repeatable proving ground for autonomous machines — in other words, a means of collecting data prior to real-world prototyping. In a recent technical paper, Microsoft researchers demonstrated how AirSim could be used to train and transfer drone-controlling AI from simulation to the real world, bridging the simulation-reality gap.
Microsoft says that Bonsai simulations — which are hosted on Azure — can replicate millions of different real-world scenarios that a system might encounter, including edge cases like a sensor and component failure. Post-training, models can be deployed either in a decision support capacity, in which they integrate with existing monitoring software to provide recommendations and predictions, or with direct decision authority, such that the models develop solutions to challenging situations.
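One common way to generate that breadth of scenarios is to randomize the simulation's parameters, including rare failure modes, on every training episode. The snippet below is a generic illustration of that idea with invented parameter names and ranges; it is not drawn from Bonsai's configuration format.

```python
import random

# Generic scenario randomization: each training episode samples environment
# parameters, including low-probability failures, so the learned policy has
# seen edge cases long before it touches real hardware.
def sample_scenario(rng=random):
    return {
        "payload_kg": rng.uniform(0, 1500),
        "floor_friction": rng.uniform(0.4, 0.9),
        "sensor_noise_std": rng.uniform(0.0, 0.05),
        "lidar_failed": rng.random() < 0.02,   # rare component failure
        "people_in_aisle": rng.random() < 0.10,
    }

for _ in range(3):
    print(sample_scenario())
```

Because millions of such draws are cheap in the cloud, the simulator can cover combinations of conditions that might take years to encounter in the field.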
Project Moab To onboard engineers and developers keen to begin experimenting with the Bonsai, Microsoft created Project Moab, a new hardware kit that’s available as a simulator in MathWorks and soon a physical kit for 3D printers. (Developers who don’t wish to print it themselves will be able to purchase fully assembled units later in the year. ) It’s a three-armed robot with a joystick controller that attempts to keep a ball balanced on a magnet-attached transparent plate, and it’s intended to give users an environment in which they can learn and experiment with simulations.
Ball balancing is a classic mechanical engineering challenge that’s known as a regulator-type control problem. Given any condition, a self-balancing system must learn a control signal to produce the desired final state — i.e., a ball brought to rest at the center of the platform. Most classical ways of solving it involve differential equations, which represent physical quantities and their rates of change. But Project Moab seeks to tease out machine learning solutions to the problem.
It’s more challenging than it might sound, because any ball-balancing system must be able to generalize — that is, construct a robust control law on the basis of training data. Achieving good generalization requires generating a sufficiently rich set of inputs during the training phase. Failing to generate a diversity of inputs will result in poor performance.
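Before reaching for machine learning, it helps to see what the classical baseline looks like. The following sketch simulates one axis of the ball-on-plate problem with a PD (proportional-derivative) controller under simplified dynamics: a solid ball rolling without slipping, small plate angles, and no friction. The gains and limits are illustrative values, not Moab's.

```python
import math

G = 9.81            # gravity, m/s^2
DT = 0.02           # control period, s
KP, KD = 8.0, 4.0   # PD gains (assumed, untuned values)
MAX_TILT = math.radians(15)

def step(pos, vel, tilt):
    """Advance the ball one tick: a = (5/7) * g * sin(tilt) for a rolling solid ball."""
    acc = (5.0 / 7.0) * G * math.sin(tilt)
    vel += acc * DT
    pos += vel * DT
    return pos, vel

pos, vel = 0.08, 0.0   # ball starts 8 cm off-center, at rest
for _ in range(200):   # 4 seconds of simulated control
    # PD law: tilt the plate against the position error and the velocity.
    tilt = max(-MAX_TILT, min(MAX_TILT, -(KP * pos + KD * vel)))
    pos, vel = step(pos, vel, tilt)

print(f"position after 4 s: {pos * 100:.2f} cm, velocity: {vel * 100:.2f} cm/s")
```

A reinforcement learning agent trained on the same simulation has to rediscover this stabilizing behavior from reward signals alone, which is exactly the exercise Moab is meant to teach.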
Why build a kit around this problem as opposed to another? According to Hammond, the Project Moab team wanted to pick a device engineers and developers could use to learn the steps they’d have to accomplish if they were to build an autonomous system. With Moab, developers have to employ simulators to model physical systems and incorporate those into a training regime. As for engineers, many of whom are likely familiar with classical solutions to the ball-balancing problem, they have to learn to solve it with AI.
“We’re giving people more tools in their tool chest that they can use to expand the spectrum of problems they can solve,” said Hammond. “You can very quickly take it into areas where doing it in traditional ways would not be easy, such as balancing an egg instead. The point of the Project Moab system is to provide that playground where engineers tackling various problems can learn how to use the tooling and simulation models. Once they understand the concepts, they can apply it to their novel use case.” Project Moab’s tutorials delve into more than balancing balls. Moab can be taught to catch balls thrown toward it after they bounce on a table, and to rebalance balls disturbed after an object like a pencil pokes at them. It can also learn to balance objects while ensuring they don’t come into contact with obstacles placed on the plate, sort of like a self-contained game of labyrinth.
Most of Moab’s components — including the plate and arm-controlling actuators — are interchangeable. Developers can install more powerful actuators to have Moab throw things as well as catch them, for instance. And with the software development kit, other simulation products and custom simulations can be used to train Moab to accomplish more challenging tasks.
Hammond wouldn’t rule out future robotics kits for Bonsai, but he said it would largely depend on the community and their response to Moab. “We want the community to have the ability to experiment and do all sorts of fun, novel things that people hadn’t thought of before,” said Hammond. “Making [a project like this] open source makes that possible.” Project Bonsai in the real world SCG is among the companies that tapped Project Bonsai to imbue their industrial control systems with machine learning. SCG’s chemical division created a simulation within the Bonsai platform to speed up the process of optimizing petrochemical sequences, to the tune of 100,000 simulations per day, each modeling millions of scenarios. Microsoft claims the fully trained model is able to develop a sequence in a week, whereas it previously required several months for a group of experienced engineers.
“Polymers are designed with a particular application in mind. In order to figure out the stages of manufacturing, you need to know the mixing, temperature, and other factors,” said Pall. “The process of coming up with a plan of how a polymer can be manufactured takes six months, traditionally, because it’s done inside a simulator with a human expert guiding the simulator and trying a step, eventually getting it right, and then moving on to the next step. Bonsai found a BRAIN that surfaces solutions to the manufacturability of a given polymer and then controls machines to produce it.” SCG has the distinction of being the first to deploy a Bonsai-trained model into production, according to Microsoft. With respect to pricing, the machine teaching component of Bonsai is available at no cost to customers, but the simulations performed in Azure are billed according to usage. Companies must purchase a commercial license if they decide to use their models in the real world.
Siemens tapped Project Bonsai for another purpose: calibrating its CNC machines. Previously, this was a manual process that required an average of 20 to 25 iterative steps over more than two hours, typically under the supervision of third-party experts. By contrast, the Project Bonsai solution is designed to automate the machine calibration in seconds or minutes. Siemens says that by training a model with Bonsai, it was able to attain two-micron precision at an average of four to five iterative steps over 13 seconds, and less than one-micron precision in about 10 iterative steps.
“[Project Bonsai’s] approach bridges AI science and software to the traditional engineering world,” said Hammond. “[It enables fields] such as chemical and mechanical engineering to build smarter, more capable, and more efficient systems by augmenting their own expertise with AI capabilities.”
" |