source (string) | author (string) | title (string) | description (string) | url (string) | urlToImage (string) | publishedAt (string) | content (string) | category_nist (string) | category (string) | id (string) | subreddit (string) | score (int64) | num_comments (int64) | created_time (timestamp[ns]) | top_comments (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
news | Apundir | 101° - Machine Learning The complete Math Guide to Master Data Science with Python & Developing Artificial Intelligence Kindle Edition free Amazon | Machine Learning is rapidly changing the world and is the way of the future of human technology. Art, information, processes, calculations, and emotions will be rapidly learned and discovered by machines. Do you want to learn more about the world of Machine Learning and its applications? Would you like to improve and refine your Python skills? Would you like to become computer savvy? If the answer is "YES", then keep reading. In this complete and exhaustive collection of two books you will discover: what Machine Learning and Artificial Intelligence mean; the evolution of Machine Learning; how to automate Machine Learning effectively; Python programming and advanced programming techniques; everything you need to know about neural networks and data pipelines; the connection between Machine Learning and Big Data; the steps of data analysis; predictive analysis with data science and data analysis; the best libraries for Machine Learning in Python… & much more! | https://www.hotukdeals.com/deals/kindle-edition-machine-learning-the-complete-math-guide-to-master-data-science-with-python-and-developing-artificial-intelligence-3782412 | 2021-08-23T12:48:28Z | hotukdeals.com - The Largest Deal Community in the UK. hotukdeals is a community for deal seekers. Find and share the best deals, promotional codes and vouchers from on and off the web. *So you can easily identify outgoing links on our site, we've marked them with an "*" symbol. Links on our site are monetised; this means we can keep our platform free to use without compromising on impartiality. Whether or not we're paid doesn't change how hot a deal can potentially get - that's only up to hotukdeals members. We believe it should always be up to our community to decide which deals fly and which fall. Read more about how hotukdeals makes money. Copyright © 2004-2021 hotukdeals. 
All rights reserved. hotukdeals is a credit broker, not a lender. This applies to some products falling under Broadband and Phone Contracts, Finance & Insurance, and Car & Motorcycle. hotukdeals is a trading name of Pepper Deals Ltd (FRN 798319), an Appointed Representative of Funding Falcon (FRN: 743100), which is authorised and regulated by the Financial Conduct Authority (FCA). Pepper Deals Ltd is registered in England and Wales, number 9729292. Registered office: First Floor, 2-4 Holywell Lane, Hackney, London EC2A 3ET. Note: products under Home & Living, Electronics, Culture & Leisure, Fashion & Accessories, Family & Kids, Sports & Outdoors, Groceries, Gaming, Travel, Health & Beauty, Services, and Garden & DIY are not regulated by the FCA. If you wish to make a complaint or contact us, you can use the address above or follow our complaints procedure. | Content Synthesis/Digital Assistance/Information Retrieval Or Search | Computer and Mathematical/Education, Training, and Library | null | null | null | null | null | null |
|
news | Wall Street Reporter | Next Super Stocks on the Move: Logiq, Reliq Health Tech, AI/ML Innovations and ESE Entertainment. Emerging Leaders in E-Sports, HealthTech, E-Commerce and AdTech | NEW YORK, Sept. 02, 2021 (GLOBE NEWSWIRE) -- Wall Street Reporter, the trusted name in financial news since 1843, is highlighting the latest CEO comments and... | https://finance.yahoo.com/news/next-super-stocks-move-logiq-132700800.html | 2021-09-02T13:27:00Z | NEW YORK, Sept. 02, 2021 (GLOBE NEWSWIRE) -- Wall Street Reporter, the trusted name in financial news since 1843, is highlighting the latest CEO comments and news from companies recently presenting at its highly acclaimed NEXT SUPER STOCK livestream investor conferences, and investor LiveChats on social media streams. Over 170,000 investors have participated in Wall Street Reporter's livestream events in the past 90 days. AI/ML Innovations (OTC: AIMLF) (CSE: AIML) Chairman, Tim Daniels: "Mental Health App Expands AI/ML Digital Health Ecosystem - Targeting Multi-Billion Dollar Market Opportunities." NEXT SUPER STOCK conference presenter AI/ML Innovations (OTC: AIMLF) (CSE: AIML) is rapidly expanding its portfolio of HealthTech assets. AIMLF chairman Tim Daniels updated investors on the company's latest digital healthcare growth initiatives, which now include Tech2Health, a European mental health app innovator. Tech2Health is positioned for explosive revenue growth as European healthcare mandates now provide about 2,500 Euro per patient annually for mental wellness. 
Tech2Health has just signed with a French multinational manufacturer to provide mental wellness support to its 170,000 employees globally, and additional Enterprise contracts are in the pipeline. Watch AI/ML Innovations (OTC: AIMLF) (CSE: AIML) NEXT SUPER STOCK Video: https://bit.ly/3dAI6k9 AIMLF Chairman Tim Daniels shared with investors how AIMLF is expanding its global digital healthcare footprint with synergistic acquisitions of innovative HealthTech companies. Tim also updated investors on progress at AIMLF's HealthGauge platform, which uses AI and machine learning for applications ranging from remote patient monitoring to fitness/health tracking and more. AIMLF's focus is on scaling revenue growth by offering its services to enterprises and consumers via a SaaS recurring revenue subscription model. Tim Daniels also updated investors on AIML's growing pipeline of M&A opportunities in the HealthTech space, which could have a positive impact on maximizing shareholder value in coming months. Watch AI/ML Innovations (OTC: AIMLF) (CSE: AIML) NEXT SUPER STOCK Video: https://bit.ly/3dAI6k9 Logiq, Inc. (OTC: LGIQ) (NEO: LGIQ) President, Brent Suen: On Path to $100 Million Revenues. NEXT SUPER STOCK conference presenter Logiq, Inc. (OTC: LGIQ) (NEO: LGIQ) President Brent Suen recently shared with investors how LGIQ is now positioned to more than double revenues - to a potential $100 million run rate - within the next 18 months, fueled by M&A, organic growth and increasing profit margins. LGIQ enables global ecommerce and fintech services for small to medium size businesses worldwide. LGIQ's DataLogiq AI-driven adtech business is expected to be a major driver of revenue growth and profit margin expansion in the next 12 months, as more digital marketing agencies are joining the platform. September 2: LGIQ is presenting at Wall Street Reporter's NEXT SUPER STOCK livestream at 1:00PM EST. 
Join here: https://bit.ly/2PX0SpH Watch (OTC: LGIQ) (NEO: LGIQ) NEXT SUPER STOCK video: https://bit.ly/3kafujX Brent Suen articulated how LGIQ has compelling upside, based on valuation comparables to its peers in the e-commerce/fintech space. While LGIQ trades at about 2X revenues, its peers such as SHOP, SE, STNE, JMIA and others are often trading at 20-30X revenues. An additional upside catalyst for investors is the potential spinout of LGIQ's Indonesia fintech and ecommerce business as a stand-alone public entity. September 2: LGIQ is presenting at Wall Street Reporter's NEXT SUPER STOCK livestream at 1:00PM EST. Join here: https://bit.ly/2PX0SpH Watch (OTC: LGIQ) (NEO: LGIQ) NEXT SUPER STOCK video: https://bit.ly/3kafujX ESE Entertainment (TSX.V: ESE) (OTC: ENTEF) CEO Konrad Wasiela: On Track for $100 Million E-Sports Revenues. ESE Entertainment (TSX.V: ESE) (OTC: ENTEF) CEO Konrad Wasiela, a featured presenter at Wall Street Reporter's NEXT SUPER STOCK investor livestream conference, recently updated investors on his goal of building ESE into a billion dollar global enterprise. Wasiela shared that ESE now has a growing M&A pipeline with over $100 million in annual revenues and expects to close a significant number of these potential transactions in the coming months. ENTEF just announced the acquisition of e-sports company Auto Simulation Limited T/A Digital Motorsports, an Ireland-based provider of advanced simulation racing (sim racing) infrastructure, technology, and support. Sim racing is one of the hottest growth categories in the multi-billion dollar global e-sports market. ENTEF recently closed the acquisition of e-sports and gaming infrastructure company WPG. In 2020, WPG's assets generated revenue in excess of C$14,000,000. 
This transaction is anticipated to make ENTEF one of the largest esports infrastructure companies in the world, bridging esports companies with their fans and customers. Watch ESE (OTC: ENTEF) (TSX.V: ESE) Next Super Stock livestream video: https://bit.ly/3tdhcVV In his interview with Wall Street Reporter, ESE CEO Konrad Wasiela says the company is now ready to scale - expanding its global footprint with new partnerships with global brands like Porsche, driving revenue growth with an aggressive focus on top line sales and margin expansion, and pursuing M&A opportunities. ESE is now rapidly expanding, with multiple revenue streams including E-Sports infrastructure software powering global tournaments, exclusive digital media distribution, broadcast rights, and ownership of world-class leagues and teams, including its K1CK global E-Sports franchise. Watch ESE (OTC: ENTEF) (TSX.V: ESE) Next Super Stock livestream video: https://bit.ly/3tdhcVV Reliq Health Technologies (OTC: RQHTF) (TSX.V: RHT) CEO Lisa Crossley: 2021 is Breakout Year for Reliq Telehealth Platform. Reliq Health Technologies (OTC: RQHTF) is now at an inflection point for explosive revenue growth and profitability, shared CEO Lisa Crossley during a recent presentation at Wall Street Reporter's NEXT SUPER STOCK livestream. RQHTF's iUGO telehealth remote patient monitoring platform has gained significant traction over the past 6 months, and now has 200,000 patients under contract to be onboarded over the next 18-24 months - which represents over $120 million in recurring annual revenue at full deployment. Watch Reliq Health Tech (OTC: RQHTF) (TSX.V: RHT) NEXT SUPER STOCK Video: https://bit.ly/3BcFkLi RQHTF has just turned the corner to profitability, and revenues are expected to reach $2 million per month - a $24 million run rate - by the end of December, and to keep increasing as more contracted patients are onboarded. 
Lisa added that RQHTF is now starting to throw off significant cash flow, enabling the company to fund growth internally without the need for capital raises in the near future. A NASDAQ uplisting remains a possibility for 2022. Lisa explained how new patient contract growth is now snowballing - powered by expanded Medicare and Medicaid coverage and reimbursement amounts for virtual care services like those RQHTF provides. RQHTF's iUGO telemedicine platform supports care coordination and community-based virtual healthcare, allows complex patients to receive high quality care at home, improves health outcomes, and reduces the cost of care delivery. iUGO Care provides real-time access to remote patient monitoring data, allowing for timely interventions by the care team to prevent costly hospital readmissions and ER visits. Watch Reliq Health Tech (OTC: RQHTF) (TSX.V: RHT) NEXT SUPER STOCK Video: https://bit.ly/3BcFkLi WALL STREET REPORTER: Wall Street Reporter (Est. 1843) is a leading financial news provider, focused on giving investors direct access to CEOs of promising, publicly-traded companies, and market experts. www.WallStreetReporter.com. Nothing in this news summary shall be construed as investment advice. Quotes/content may be edited for brevity and context. Full disclaimer and relevant SEC 17B disclosures here: http://bit.ly/39kkE7K About Wall Street Reporter's Next Super Stock conference: Wall Street Reporter's NEXT SUPER STOCK Live! conference is dedicated to featuring select companies that have near-term catalysts in place which can drive transformational growth (and stock appreciation) in the months ahead. 
Click here to join the next livestream event: https://www.wallstreetreporter.com/next-superstock-online-investor-conference/ CONTACT: WALL STREET REPORTER, (212) 871-2057 ext 7, www.WallStreetReporter.com | Content Synthesis/Decision Making/Recommendation | Healthcare Practitioners and Support/Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | How, where and why telco is using enterprise open source | According to "The State of Enterprise Open Source" report Red Hat published earlier this year, open source will continue to play an important role in the future of telecommunications. Let's see how it's positioning telecommunications providers to keep up with their technology revolution. | https://www.redhat.com/en/blog/how-where-and-why-telco-using-enterprise-open-source | 2021-08-23T04:00:00Z | While the telecommunications industry is familiar with enterprise open source (95% of our survey respondents are already using it), it also stands at an inflection point with the rise of edge computing, artificial intelligence and machine learning (AI/ML) and the rapid deployment of 5G. According to "The State of Enterprise Open Source" report we published earlier this year, open source will continue to play an important role in the future of telecommunications. With data collected from 13 countries, the report shows a picture of how, where, and why IT leaders across the globe and a range of sectors use enterprise open source. Let's see how it's positioning telecommunications providers to keep up with their technology revolution. Security is a big factor in choosing enterprise open source. Across the industry, enterprise open source is being used in application development (66%), IT infrastructure modernization (63%) and digital transformation (59%). As service providers modernize their infrastructure to provide connectivity from the core all the way out to the edge, they need secure pathways to scale efficiently. Telco IT leaders cited "better security" and the "ability to safely leverage open source technologies" as some of their top reasons for choosing enterprise open source, and 74% say enterprise open source is a key part of their organization's security strategy. 
Open source's expanded use for emerging technologies. Many respondents expect the use of enterprise open source for emerging technology to increase in the next two years, with two-thirds of IT leaders identifying AI/ML as a key growth area, and edge computing / Internet of Things (IoT) to an even greater extent. Kubernetes' role in telco's cloud-native strategy. The industry is shifting to containers and capitalizing on virtualized RAN in the process of revolutionizing radio access networks (RAN). In our survey, an overwhelming 94% of respondents say Kubernetes is important to their cloud-native application strategy. Additionally, a majority of telco leaders indicate they prefer to use multiple cloud vendors. Red Hat's partner ecosystem and our work to drive open source innovation can help communications service providers find the flexibility they're looking for. Not only is flexibility an important factor for telco leaders, so is open source participation: 85% say they are more likely to select a vendor who contributes to the open source community. Want more insights from the telecommunications industry? Get highlights in The State of Enterprise Open Source: Telecommunications infographic. | Unknown | Management/Computer and Mathematical | null | null | null | null | null | null |
||
news | Dario D'Amico | Visualize and animate flow in MapView with a custom WebGL layer | Learn how to animate streamlines using WebGL and a custom layer. | https://www.esri.com/arcgis-blog/products/js-api-arcgis/developers/visualize-and-animate-flow-in-mapview-with-a-custom-webgl-layer/ | 2021-09-01T00:30:08Z | Introduction. This article discusses visualizing wind and water currents through animated streamlines; the out-of-the-box capabilities of the ArcGIS API for JavaScript (ArcGIS JS API) are combined with custom WebGL code to create compelling animated visualizations of real-world atmospheric and marine data. See it live or check out the source code on GitHub. Custom layers are an advanced topic; familiarity with WebGL and custom WebGL layer views is recommended. A good place to get started with extending the ArcGIS JS API with custom WebGL layer views is the official SDK sample. And remember that your fellow developers at community.esri.com are always happy to help! With that said… everyone deserves amazing maps! Streamlines are a natural drawing style for flow datasets, and our team is considering adding them to the ArcGIS JS API. Join the discussion on community.esri.com and share your ideas on how to bring this awesomeness to a larger audience! The power of animations. Awesome datasets need awesome visualizations. The ArcGIS JS API ships with 2D and 3D support and a variety of layer types, renderers, effects, and blend modes that should cover the requirements of most users most of the time. In this blog post, we focus on animated visualizations; animations can capture and communicate the dynamicity of certain datasets more effectively than static graphics. The ArcGIS JS API supports several forms of animation out of the box, and my colleague Anne Fitz penned a great writeup covering several useful techniques that are applicable in a broad range of situations. 
With that said, certain applications call for a more customized experience; this is where custom WebGL layers come into play. The images below show the same area of the map rendered in three different ways. For this article, we are focusing on an imagery tile layer containing wind magnitude and direction for the continental United States. Predefined ImageryTileLayer with "raster-stretch" renderer (left): this option does a good job of visualizing wind speed, but the direction information is lost. Predefined ImageryTileLayer with "vector-field" renderer (center): using arrow symbols and size and rotation visual variables, this renderer can visualize both aspects of the wind. ImageryTileLayer support for this renderer ships with version 4.21 of the ArcGIS JS API; before that, this new functionality is available in the next build. Custom WebGL layer displaying animated flow lines, as described in this article (right): our custom visualization provides a more intuitive representation of wind currents; the different animation speeds of the lines map to the underlying concept of speed magnitude, and the continuous nature of the visualization makes it easier to spot patterns in the data, like that rotational flow near the Canadian border. Also, it looks pretty cool. This article describes in depth the implementation of a custom WebGL layer that displays animated streamlines. A streamline is the path that a massless particle would take when immersed in the fluid. 
Please note that the focus of the article is on the flow visualization algorithm and its integration with the ArcGIS JS API; whether this particular technique is suitable for a given dataset, audience, or application needs to be evaluated by a domain expert, and the implementation modified as needed. Load, transform and render. Like anything that shows up on a computer screen, GIS visualizations are the result of: loading the data; optionally, transforming/processing/preparing the data; and rendering the data! Each of the predefined layer types that ship with the ArcGIS JS API is a software component that bundles together two capabilities: the ability to access data, either from local memory or from a remote source, and the ability to render the retrieved data, often in both the 2D MapView and the 3D SceneView. In the ArcGIS JS API, the implementation of a layer type consists of a layer class and one or two layer view classes. In relation to the three phases discussed above, the layer is mostly responsible for accessing the data (1), while 2D layer views and 3D layer views take care of the rendering (3). Data transformation (2) can be required for different reasons and, to some degree, it is carried out both by layers and layer views. Polylines with timestamps. At the core of a visualization such as the one we are setting out to build is the concept of an m-aware polyline feature. This is a polyline where each vertex, in addition to its coordinates, carries one or more m-values. 
An m-value can be thought of as a position-dependent attribute of the polyline that varies smoothly along its path; for vertices the m-value is given explicitly, while for any point between two vertices it can be obtained by linearly interpolating the values at the vertices. A common application of m-aware polylines is representing paths or, as in this article, streamlines; in this case the m-values are the timestamps at which each vertex is visited by the particle. If your data is already stored in an m-aware polyline feature layer, FeatureLayer.queryFeatures() can be used to retrieve the polylines and access the m-value information. Catching wind. In this article we will not ingest polyline features; we will build the polylines starting from flow data contained in an ImageryTileLayer. This layer type is similar to ImageryLayer in the sense that it contains raster LERC2D data, but the way the data is stored on the server is different; imagery tile layers store data in static files called tiles, which are cloud- and cache-friendly. Each tile contains many pixels, and each pixel contains one or many bands of data. In the case of an imagery tile layer that stores wind data, there are two bands that together describe wind direction and speed. As is the case with any other predefined layer, an ImageryTileLayer can be added to a map and displayed by a MapView. The simplest way to visualize an imagery tile layer is with a "raster-stretch" renderer. The default settings are usually sensible, and the resulting visuals are easy to interpret with the help of a legend. See the "raster-stretch" renderer in CodePen. We will focus our attention on imagery tile layers that store flow information, such as wind and marine currents. For these kinds of layers, the "vector-field" renderer is supported and used as the default renderer. 
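As a small aside, the linear interpolation of m-values described above can be sketched in a few lines; this is an illustrative helper, not part of the ArcGIS JS API, and the vertex shape `{ x, y, m }` is an assumption:

```javascript
// Minimal sketch: recover the m-value (here, a timestamp) at a point
// assumed to lie on the segment between two m-aware polyline vertices.
function interpolateM(v0, v1, x, y) {
  const segmentLength = Math.hypot(v1.x - v0.x, v1.y - v0.y);
  if (segmentLength === 0) {
    return v0.m;
  }
  // Fraction of the way from v0 to v1 along the segment.
  const s = Math.hypot(x - v0.x, y - v0.y) / segmentLength;
  return v0.m + s * (v1.m - v0.m);
}
```

For example, halfway along a segment whose endpoints carry timestamps 0 and 4 seconds, the interpolated timestamp is 2 seconds.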
See the "vector-field" renderer in CodePen. Watch this space! The input to a rendering algorithm is visual in nature, or at the very least can be interpreted as visual. For instance, positions are expressed as numbers, and the rendering algorithm is responsible for drawing geographic entities in the right places according to those numbers. However, numbers by themselves are meaningless; they only get their positional semantics from being associated with particular spaces. In the course of this article we are going to refer to three spaces; each space plays its own role in the implementation of the streamlines visualization. Map space: used to locate the data on the server. In the case of the wind imagery tile layer considered in this article, it is given by the EPSG:4326 reference system. Map space is right-handed and its coordinates are expressed in map units. Screen space: the left-handed frame where +X is right, +Y is down, the origin is in the upper-left corner of the MapView, and the lower-right corner has as coordinates the width and height of the MapView. Screen space coordinates are expressed in pixels. Particle space (aka model space, object space, or local space): the animated lines are the result of a particle simulation. The position of a particle as it is swept by the simulated wind traces the polyline that will later be animated. The position of a particle is expressed in particle space. The units of particle space are called cells and are related to pixels by a multiplicative constant, called cellSize. 
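The screen/particle relationship is just a uniform scale by cellSize; a toy sketch (these helpers are illustrative, not part of the API):

```javascript
// Minimal sketch: convert between screen space (pixels) and particle
// space (cells); the two are related by the multiplicative cellSize.
function screenToParticle(px, py, cellSize) {
  return [px / cellSize, py / cellSize];
}

function particleToScreen(cx, cy, cellSize) {
  return [cx * cellSize, cy * cellSize];
}
```

With a cellSize of 10, for instance, a 1920x1080 screen corresponds to a 192x108 particle space.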
In the figure above, we assume a cellSize of 10, so that a 1920x1080-pixel MapView results in a 192x108 particle space. We will see that streamlines ultimately will be rendered using triangle meshes; the coordinates of mesh vertices will be expressed in particle space. In computer graphics, this space is often called model space, object space, or sometimes even local space. Vector fields. A field is a position-dependent property of a space. Temperature, pressure, and humidity are all examples of scalar fields common in meteorology. In other words, a scalar field maps a point to a numeric value. In GIS systems scalar fields are often represented using one band of an imagery tile layer. Some properties are multidimensional in nature; these are called vector fields. This article focuses on a specific type of vector field, useful for representing wind data, called a velocity field; a velocity field associates a velocity vector with each point of a space. A velocity is a directional speed and can be represented using two bands of an imagery tile layer. There are two different ways to encode 2D velocities. Magnitude and direction: reported as dataType: esriImageServiceDataTypeVector-MagDir in the raster info section of the layer. The direction of the wind is measured in degrees, with 0 being North and 90 being East. The magnitude is a speed. UV: reported as dataType: esriImageServiceDataTypeVector-UV in the raster info section of the layer. U is the west-east component of the speed and V is the south-north component. Fetching the LERC2D wind data. In the ArcGIS JS API, layers do not have to be added to a map to be useful; several layer types offer data-fetching capabilities. This is especially remarkable for ImageryTileLayer.fetchPixels(), which enables querying arbitrary extents in spite of the fact that the queried data is broken into tiles on the CDN. 
The implementation of ImageryTileLayer takes care of downloading the (possibly cached) tiles and stitching them together into a single image that spans the required extent. An instance of ImageryTileLayer can be used to fetch the data that a custom WebGL layer view needs. Every time the MapView becomes stationary, the custom WebGL layer view triggers a refresh of the visualization. The current extent of the visualization is passed to ImageryTileLayer.fetchPixels() together with the size of the desired output LERC2D image. Instead of querying an image of the same size as the MapView, we ask for an image downsampled by a factor of cellSize, which is fixed at 5 in the current implementation. As an example, if the MapView was 1280×720, the code would fetch a 256×144 image from the image server. We do this to save bandwidth, reduce processing time, and regularize the data; full-resolution wind data may have high-frequency components that are likely to destabilize the simulator. The downsampled image represents the wind in particle space, and each data element is called a cell. The rawCellData stores the two bands of the wind separately, in two distinct, row-major, equally sized matrices. Whether these two bands are MagDir or UV can be determined by examining the rasterInfo.dataType property. We chose to normalize the values to particle-space UV, which is more convenient for the simulation, and interleave the two bands into a single matrix. Note that the original values would probably have been expressed in knots or meters per second, but from now on we will assume that all velocities are in particle space. There are a couple of things worth mentioning about the conversion formulas. In the "MagDir to UV" case, the magnitude is multiplied by sine and cosine factors to get the U and V components respectively. 
Note how the direction is first converted to radians and then delayed by Math.PI / 2; this is needed because in particle space positive angles rotate the +X axis toward the +Y axis, while in map space positive angles rotate the +Y axis (the "North") toward the +X axis (the "East"). These two conventions are both clockwise, but angles in map space are a quarter of a full circle late. In the "UV to UV" case, the U value can be copied over, but the V value needs to be flipped, because in particle space +Y points down while in map space it points up. The output of the fragment of code above is the data variable, which is essentially a discretized velocity field in particle space; i.e., each cell in particle space is mapped to a horizontal velocity (+X is right) and a vertical velocity (+Y is down), also in particle space. In the rest of the article, FlowData denotes a type that holds the data variable, together with the grid dimensions and the cellSize factor. FlowData is the input to the particle simulation code that traces the streamlines in particle space. Pro tip: smooth the data. We already smoothed the flow data by requesting from the image server a lower-resolution image than the MapView itself. In addition to this, for certain datasets an explicit smoothing using a separable Gaussian kernel can help obtain more robust and aesthetically pleasing results. Turning the particle-space velocity field into a function. For ease of use by the particle simulator, we wrap the flowData.data typed array in a closure; the closure takes a point (x, y) in particle space and returns a velocity, again in particle space. Pro tip: adopt bilinear interpolation. A simulated particle will, more often than not, have a fractional position, e.g., (42.23, 89.54). With such input, the field closure defined above would take the velocity in cell (43, 90) and return that. 
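The two conversions and the closure wrapper just described might look like the following sketch. This is an illustration based on the article's description, not the actual source: the FlowData shape (`data`, `columns`, `rows`) and the function names are assumptions, and the closure shown is the simple nearest-cell version:

```javascript
// Convert one cell's raw MagDir bands to particle-space UV (+X right,
// +Y down). Direction is degrees clockwise from North; delaying the
// angle by PI / 2 aligns the map-space convention with particle space.
function magDirToParticleUV(mag, dirDegrees) {
  const a = (Math.PI / 180) * dirDegrees - Math.PI / 2;
  return [mag * Math.cos(a), mag * Math.sin(a)];
}

// UV case: U is unchanged; V flips sign because +Y points down in
// particle space but north (up) in map space.
function uvToParticleUV(u, v) {
  return [u, -v];
}

// Wrap the interleaved [u, v] grid in a closure that samples the
// nearest cell (clamped to the grid).
function createField(flowData) {
  const { data, columns, rows } = flowData;
  return (x, y) => {
    const cx = Math.min(columns - 1, Math.max(0, Math.round(x)));
    const cy = Math.min(rows - 1, Math.max(0, Math.round(y)));
    const i = 2 * (cy * columns + cx);
    return [data[i], data[i + 1]];
  };
}
```

Sanity check: a direction of 90 (East) yields a pure +X velocity, and a direction of 0 (North) yields a pure -Y velocity, i.e. "up" in the Y-down particle space.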
A better way of coming up with a value for fractional positions is to use bilinear interpolation; this allows the velocity field to vary smoothly across a single cell and could be required to get acceptable results from the particle simulation when the wind data is low resolution. Bilinear interpolation works by selecting the four closest cells; first, two adjacent pairs of cells are interpolated vertically, based on the distance of the desired point from the cells' centers, and then the results of the first interpolation are interpolated again horizontally. See bilinear interpolation for more details. Particle simulation. Particles are immersed in the closure-wrapped FlowData field at pseudo-random (but repeatable, using a seeded generator) positions and simulated using the trace() function. Simulating the movement of a particle in a velocity field vf is an iterative process, for which there are a couple of different approaches. The simplest one uses a fixed time step and increments the position of the particle by an amount proportional to the velocity at that point and the time step. Each iteration produces a new vertex of the polyline and, together with the previous vertex, a new segment. Fixing the time step is a perfectly sound approach, but for the demo app we opted for a fixed segment length instead. This approach causes line lengths to vary less and also guarantees that adjacent vertices are not too close to each other; we think both these properties are desirable for this kind of visualization, but the reader is encouraged to experiment with different iteration strategies. In the demo app, each particle is updated 30 times using a for loop and the segment length is 1 cell; together with a cellSize of 5 pixels, this leads to each individual polyline having a length of about 150 pixels. 
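The fixed-segment-length iteration just described might be sketched as follows. This is a simplified stand-in for the demo's trace() function, under assumptions: the seeded generator shown is mulberry32 (one common choice, not necessarily the one the demo uses), and the thresholds and option names are illustrative:

```javascript
// Seeded pseudo-random generator (mulberry32) so particle seeding is
// repeatable from frame to frame, as the article requires.
function mulberry32(seed) {
  return function () {
    let t = (seed += 0x6d2b79f5);
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Trace one streamline with a fixed segment length; field(x, y) returns
// a particle-space [u, v] velocity.
function trace(field, x0, y0, { steps = 30, segmentLength = 1, minSpeed = 1e-3 } = {}) {
  const vertices = [{ x: x0, y: y0, t: 0 }];
  let x = x0, y = y0, t = 0;
  for (let i = 0; i < steps; i++) {
    const [u, v] = field(x, y);
    const speed = Math.hypot(u, v);
    if (speed < minSpeed) break; // wind too weak: terminate this particle
    // Move a fixed distance along the local flow direction; time advances
    // by distance / speed, so the timestamps encode how fast the flow is.
    x += (u / speed) * segmentLength;
    y += (v / speed) * segmentLength;
    t += segmentLength / speed;
    vertices.push({ x, y, t });
  }
  return vertices;
}
```

Note how the fixed segment length makes the geometry uniform while the per-vertex timestamps still carry the speed information that the animation will later exploit.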
At each iteration the speed of the particle is tested and, if found to be too low (because the particle entered a cell where the wind is weak), the simulation for that particle is terminated. Building the triangle mesh. WebGL has poor built-in support for line drawing; all the line rendering capabilities that ship with the ArcGIS JS API are based on triangle meshes, and for this custom flow WebGL layer we take a similar approach. For more information, check out the SDK sample about animated lines. Also, the reader should be aware that the ArcGIS JS API ships with tessellation helper methods that can convert arbitrary geometries into triangle meshes. For representing streamlines our visual requirements are quite basic, and we are going to stick with a very simple method: each segment of a polyline is represented by a thin rectangle, at most 2-3 pixels wide, lying along the original polyline; the original polyline becomes the centerline of this "rectangle chain". Each rectangle is made of 4 vertices and 6 indices, and renders as a pair of right triangles (gl.TRIANGLES) sharing their hypotenuse. This approach leads to gaps and regions of overlap, but it is a very fast and robust algorithm, and for thin lines the artifacts are not noticeable. In the next paragraphs, we call a vertex of the original polyline, as produced by the particle simulation algorithm, a polyline vertex. Such a vertex is extruded in directions perpendicular to the centerline to produce mesh vertices; these are the vertices that are written to the WebGL vertex buffer. The most powerful feature of low-level, GPU-accelerated graphics APIs like WebGL, which really sets them apart from Canvas2D, GDI+ and all the other higher-level solutions, is the ability to define custom visual properties and use them in custom programs called shaders. This enables applications to describe complex scenes, with many different, possibly animated, objects, and render them with little assistance from the CPU. 
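The 4-vertices/6-indices quad layout described above has a fixed index pattern per segment; a minimal sketch (the helper name and winding order are illustrative):

```javascript
// Two triangles sharing their hypotenuse form one segment quad.
// `base` is the index of the quad's first mesh vertex in the vertex
// buffer; the quad's vertices are assumed to be laid out consecutively.
function quadIndices(base) {
  return [base, base + 1, base + 2, base + 1, base + 3, base + 2];
}
```

Emitting these 6 indices per segment into a shared index buffer is what lets all segments of all streamlines be drawn with gl.TRIANGLES in a single call.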
Offloading work to shaders in this way greatly reduces the load on the CPU, and the GPU is free to run at full speed.

There are going to be about a thousand streamlines in a typical flow visualization; we want to render all of them using a single draw call. To achieve this, we need to store all triangle meshes for all polylines in the same vertex and index buffers. The code snippet below does just that. Note that since this could be a long-running process, the algorithm is designed to transfer control back to the event loop once in a while so that the UI thread does not freeze (lines 12-15).

The custom properties that we need to define fall into 4 categories.

Per-frame

These properties are shared by all polylines and are updated every frame. They are passed to the shaders using uniform values. Since they do not need to be stored in the vertex buffer, we will talk about them when discussing rendering.

Per-feature

These properties are associated with the polyline itself. The only way to implement this kind of property in WebGL 1, while at the same time maintaining the ability to draw all polylines with a single draw call, is to actually make them vertex attributes, and repeat the same values for all vertices of the same feature.

There are 2 per-feature properties:
– totalTime is the total runtime in seconds for the polyline. The fragment shader needs this value at every fragment in order to properly loop the animation.
– random is a pseudo-random value, again needed in the fragment shader to introduce some temporal differences in the animation, so that streamlines with the same length do not end up synchronized.
They are highlighted in orange in the code snippet above.

Per-polyline-vertex

These properties are associated with the polyline vertex itself.
There are 2 per-polyline-vertex properties:
– x, y, the vertex position, in particle space units, i.e., cells.
– t, the vertex timestamp.
They are marked in green in the code snippet.

Per-mesh-vertex

Each polyline vertex is extruded into two mesh vertices, one per side of the polyline. There are 2 per-mesh-vertex properties:
– ex, ey, the extrusion vector. This is computed at lines 32-34 by normalizing the segment vector and rotating it 90 degrees. Being normalized, its magnitude is meaningless, but you can imagine it being expressed in particle space units, i.e., cells.
– a +1/-1 constant that we call side, which identifies an extruded vertex as lying on the right side or the left side of the centerline.
They are marked in blue in the code snippet.

For more information about constructing and rendering line meshes, see the SDK sample about animated lines.

Rendering

Now it is time to put the GPU to work and render our beautiful streamlines!

Before discussing rendering there is one additional space that needs to be introduced; it is called clip space and, just like screen space, it describes positions on the screen, but using a different coordinate system. In clip space, the origin is in the center and the drawing area is seen as a 2x2 rectangle, where +X is right and +Y is up. This space is the space of the gl_Position variable.

The rendering process takes the particle space mesh and draws it to the screen. As the user pans and zooms, the visualization needs to be repositioned to reflect the change in viewpoint, until a new mesh is ready that can again be rendered full screen.

Vertex shader

The vertex shader converts the coordinates of mesh vertices from particle space to screen space using the u_ScreenFromLocal matrix, extrudes them according to the extrusion vector, and then transforms everything to clip space using the u_ClipFromScreen matrix.
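Pulling the last few sections together, the construction of the rectangle chain (one quad of 4 vertices and 6 indices per segment, with the extrusion vector obtained by normalizing the segment and rotating it 90 degrees) can be sketched as follows. The vertex layout [x, y, t, ex, ey, side] mirrors the attributes above; the function name and the omission of the per-feature attributes are simplifications of ours:

```typescript
// Sketch: turn one polyline into a "rectangle chain" mesh where each
// segment becomes an independent quad (4 vertices, 6 indices).
interface TraceVertex { x: number; y: number; t: number; }

function buildMesh(line: TraceVertex[]): {
  vertices: number[];
  indices: number[];
} {
  const vertices: number[] = [];
  const indices: number[] = [];

  for (let i = 0; i < line.length - 1; i++) {
    const a = line[i];
    const b = line[i + 1];

    // Normalize the segment vector and rotate it 90 degrees to get
    // the extrusion direction, perpendicular to the centerline.
    const dx = b.x - a.x;
    const dy = b.y - a.y;
    const len = Math.hypot(dx, dy) || 1;
    const ex = -dy / len;
    const ey = dx / len;

    // Extrude both endpoints to both sides of the centerline.
    // Vertex layout: [x, y, t, ex, ey, side].
    const base = vertices.length / 6;
    for (const [v, side] of [
      [a, +1], [a, -1], [b, +1], [b, -1],
    ] as const) {
      vertices.push(v.x, v.y, v.t, ex, ey, side);
    }

    // Two right triangles sharing their hypotenuse.
    indices.push(base, base + 1, base + 2, base + 1, base + 3, base + 2);
  }
  return { vertices, indices };
}
```

In the real codebase the per-feature values (totalTime, random) would also be repeated into every vertex, and the loop would be spread across event-loop turns as discussed above.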
Note how the extrusion vectors are rotated by the u_Rotation matrix and scaled by half the line width, but are not subject to the same transformation that particle space coordinates go through, which also includes a scale factor when zooming in and out; this separation between positions and extrusion vectors is responsible for the anti-zoom behavior of lines, which always exhibit the same width. The value u_PixelRatio is used to always display lines at the desired width, even when the DPI of the screen is very high or very low.

All other attributes are passed verbatim to the fragment shader using varying variables.

Fragment shader

The fragment shader creates and animates the trail effect. It does so by taking advantage of the fact that each fragment is associated with a timestamp, computed automatically by the GPU interpolator based on the timestamps at the two closest vertices. The next snippet shows the fragment shader source.

The base color of the streamlines is taken from uniform u_Color. The fragment shader modifies the opacity of the fragments to implement the animated trail effect. A fragment tends to be opaque when its timestamp is close to, but not greater than, the current time, which is passed to the shader as the uniform u_Time; this is done at lines 16-23 using an exponential function applied to a periodized time-dependent signal. A fragment is also more opaque on the centerline than near the edges of the rectangle; this effect is applied at line 14 by taking advantage of the fact that the a_Side attribute has an absolute value of 1 near the edges, and 0 on the centerline. Finally, at line 25 the output color is premultiplied because MapView will not composite the layer correctly otherwise.

Configuring WebGL and issuing the draw call

We are ready for rendering!
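Before looking at the draw call, here is a plain-TypeScript recap of the fragment shader's trail opacity from the previous section. The falloff constant and the way time is periodized are assumptions of ours for illustration; the actual GLSL source differs:

```typescript
// Illustrative re-expression of the trail opacity logic: opaque just
// behind the "head" of the trail, fading with age, and attenuated
// towards the edges of the rectangle chain.
function trailOpacity(
  fragmentTime: number, // interpolated timestamp t of the fragment
  currentTime: number,  // uniform u_Time
  totalTime: number,    // per-feature total runtime of the polyline
  side: number          // interpolated a_Side, in [-1, +1]
): number {
  // Periodize the fragment age so the animation loops over the
  // polyline's total runtime.
  const period = totalTime;
  let age = (currentTime - fragmentTime) % period;
  if (age < 0) age += period;

  // Exponential falloff: fragments whose timestamp is close to (but
  // not greater than) the current time are opaque; older ones fade.
  const trail = Math.exp(-3 * age);

  // More opaque on the centerline (|side| = 0) than near the edges
  // (|side| = 1).
  const edge = 1 - Math.abs(side);

  return trail * edge;
}
```

The GPU evaluates the equivalent of this function once per fragment, with fragmentTime and side supplied by the interpolator.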
The rendering algorithm consists of about 80 lines of WebGL state setting and a single draw call.
– Bind the mesh (lines 7-23).
– Build the u_ClipFromScreen matrix, which transforms from screen space to clip space (lines 27-33).
– Build the u_ScreenFromLocal matrix, which transforms from particle space to screen space (lines 38-49).
– Build the u_Rotation matrix, used to rotate extrusion vectors (lines 35-36).
– Configure the shader program (lines 51-76).
– Enable premultiplied alpha (lines 78-79).
– Draw all the streamlines at once (line 81).

Putting everything together

In the source repository the code is chiefly organized around two packages: core and flow.

core

The core package contains generic classes and utilities that are useful to create custom visualizations; it provides a simpler abstraction on top of BaseLayerViewGL2D and implements the resource loading/unloading lifecycle. It contains the following modules.
– types. Type definitions and interfaces used by the entire codebase. Most importantly, it introduces the concept of global resources and local resources. Global resources are loaded at startup and do not need to be updated when the extent changes; local resources are tied to a particular extent and need to be destroyed and reloaded as the user navigates the map.
– settings. Constants that define some of the built-in behaviors of the core package; in a real-world app these should probably be configurable at runtime.
– util. Functions of general utility.
– rendering. Defines the abstract class VisualizationStyle, a class that defines how to load global resources, how to load local resources for a given extent, and how to render a visualization. It is an abstraction over the attach()/render()/detach() contract offered by BaseLayerViewGL2D, and its concrete implementations can be run and tested without a running MapView.
You can even create a static image out of a visualization style, for instance to be used as a thumbnail, by calling the method VisualizationStyle.createImage().
– view. Defines the abstract class VisualizationLayerView2D, a subclass of BaseLayerViewGL2D. It offers a simplified programming model and resource management scheme designed around the concept of visualizations, which are basically graphic objects that cover the current extent. To implement a new custom layer, inherit from VisualizationLayerView2D and override the method createVisualizationStyle(). If the custom visualization is animated, set the animate flag to true.

flow

The flow package depends on core and implements the streamline rendering logic. In relation to the architecture of a geographic visualization app, the flow package provides an implementation for each of the 3 required steps:
1. Loading the data;
2. Transforming/processing/preparing the data;
3. Rendering the data.

The flow package contains the following modules.
– types. Type definitions and interfaces used by the flow package.
– settings. Constants that define some of the built-in behaviors of the flow package; in a real-world app these should probably be configurable at runtime.
– sources. Defines different strategies for loading (1) flow data; two strategies are supported at present: ImageryTileLayerFlowSource, which fetches LERC2D datasets from an imagery tile layer, and VectorFieldFlowSource, which supports the analytic definition of a global velocity field in map units.
– processors. Defines the entire data transformation (2) pipeline, going from flow data through particle simulation, conversion to streamlines, and generation of the triangle mesh. The class MainFlowProcessor uses the main process, while WorkerFlowProcessor uses the workers framework.
– shared. Particle simulation and mesh generation code that can be invoked both by the main process, when useWebWorkers is false, and by the worker when it is true.
– layer.
Defines the FlowLayer class by inheriting it from esri/layers/Layer. This class overrides the method createLayerView() to return an instance of FlowLayerView2D.
– rendering. Defines three classes: FlowGlobalResources, FlowLocalResources and FlowVisualizationStyle. These are concrete implementations of the abstract concepts defined in the same-name module in the core package.
– view. Defines the FlowLayerView2D class by inheriting it from VisualizationLayerView2D. This class overrides the method createVisualizationStyle() to return an instance of FlowVisualizationStyle.

The codebase contains a couple more interesting things that we have not been able to cover in this blog post, due to space constraints. First, FlowLayer defines a way to specify client-side flow data, which can be very useful for education and what-if scenarios where real data is not available. Finally, FlowLayer supports running the particle simulation and mesh generation on workers, to reduce the load on the main thread, which could otherwise lead to UI freezes. Workers are enabled by default and are controlled by the useWebWorkers flag.

The main application file

The main application file declares a VectorTileLayer to be used as a basemap, an ImageryTileLayer that will be displayed by the standard vector field renderer, and our brand-new FlowLayer pointed to the same imagery tile layer URL.

And… we are done!

It is quite remarkable that a bunch of blue rectangles and less than 60 lines of shading code can look so pretty. The secret is that there is more shading going on behind the scenes; the FlowLayer that we just created is compatible with blend modes and layer effects. A large share of the visual appeal comes from specifying effect: "bloom(1.5, 0.5px, 0.2)" when creating the FlowLayer instance. The image below shows the positive influence of the bloom effect on our custom visualization.
We encourage you to try other effects and blend modes, as well as stacking other predefined operational layers on top of or below FlowLayer.

Conclusion

We hope you enjoyed this deep dive into flow visualization and animation using the ArcGIS JS API and custom WebGL layers. Check out the source repository and try to modify FlowVisualizationStyle to create your own dream layer. On behalf of the ArcGIS JS API team, we thank you for your interest in flow visualization; we think this drawing style is important enough that it should become a native capability of the ArcGIS JS API. We would love for you to join the discussion on community.esri.com and share your use case, workflow, or requirements with us.

Happy coding! | Content Creation/Content Synthesis | Computer and Mathematical/Life, Physical, and Social Science | null | null | null | null | null | null
|
news | Michelle Horton | Deciphering Ancient Texts with AI | Using traditional machine learning methods and visual psychophysics, Notre Dame researchers are developing AI models capable of transcribing ancient manuscripts. | https://developer.nvidia.com/blog/deciphering-ancient-texts-with-ai/ | 2021-08-12T17:57:09Z | Using traditional machine learning methods and visual psychophysics, Notre Dame researchers are developing AI models capable of transcribing ancient manuscripts. Looking to reveal secrets of days past, historical scholars across the globe spend their life’s work translating ancient manuscripts. A team at the University of Notre Dame looks to help in this quest, with a newly developed machine learning model for translating and recording handwritten documents centuries old. Using digitized manuscripts from the Abbey Library of Saint Gall, and a machine learning model that takes into account human perception, the study offers a notable improvement in the capabilities of deep learning transcription. “We’re dealing with historical documents written in styles that have long fallen out of fashion, going back many centuries, and in languages like Latin, which are rarely ever used anymore. You can get beautiful photos of these materials, but what we’ve set out to do is automate transcription in a way that mimics the perception of the page through the eyes of the expert reader and provides a quick, searchable reading of the text,” Walter Scheirer, senior author and an associate professor at Notre Dame, said in a press release. Founded in 719, the Abbey Library of Saint Gall holds one of the oldest and richest library collections in the world. The library houses approximately 160,000 volumes and 2,000 manuscripts, dating back to the eighth century.
Hand-written on parchment paper in languages rarely used today, many of these materials have yet to be read—a potential fortune of historical archives waiting to be unearthed. Machine learning methods capable of automatically transcribing these types of historical documents have been in the works; however, challenges remain. Up until now, large datasets have been necessary to boost the performance of these language models. With the vast number of volumes available, the work takes time, and relies on a relatively small number of expert scholars for annotation. Missing knowledge, such as a Medieval Latin dictionary that has never been compiled, poses even greater obstacles. The team combined traditional machine learning methods with the science of visual psychophysics, which studies the relationship between the physical world and human behavior, to create more information-rich annotations. In this case, they incorporated the measurements of human vision into the training process of the neural networks when processing the ancient texts. “It’s a strategy not typically used in machine learning. We’re labeling the data through these psychophysical measurements, which comes directly from psychological studies of perception—by taking behavioral measurements. We then inform the network of common difficulties in the perception of these characters and can make corrections based on those measurements,” Scheirer said. To train, validate, and test the models, the researchers used a set of digitized handwritten Latin manuscripts from St. Gall dating back to the ninth century. They asked experts to read and enter manual transcriptions from lines of text into custom-designed software. Measuring the time for each transcription gives insight into the difficulty of words, characters, or passages. According to the authors, this data helps reduce errors in the algorithm and provides more realistic readings.
All of the experiments were run using the cuDNN-accelerated PyTorch deep learning framework and GPUs. “We definitely could not have accomplished what we did without NVIDIA hardware and software,” said Scheirer.

The research introduces a novel loss formulation for deep learning that incorporates measurements of human vision, which can be applied to different processing pipelines for handwritten document transcription. Credit: Scheirer et al./IEEE

There are still areas the team is working to improve. Damaged and incomplete documents, along with illustrations and abbreviations, pose a special challenge for the models. “The inflection point AI reached thanks to Internet-scale data and GPU hardware is going to benefit cultural heritage and the humanities just as much as other fields. We’re just scratching the surface of what we can do with this project,” said Scheirer. Read the full article in IEEE Transactions on Pattern Analysis and Machine Intelligence >> Read more >> | Process Automation/Content Creation | Unknown | null | null | null | null | null | null
|
news | AnChain.AI Raises $10 Million and Wins SEC Contract to Monitor Crypto and Digital Assets | AnChain.AI, a blockchain security company that specializes in AI-powered platforms, today announced that it raised a $10 Million funding round led by SIG Asia Investments, LLLP, an affiliate of Susquehanna International Group (SIG) in an oversubscribed Series A. Fin VC, Nima Capital, Hard Yaka and Amino Capital also participated in the round. | https://www.globalsecuritymag.com/AnChain-AI-Raises-10-Million-and,20210902,115662.html | http://www.globalsecuritymag.com/squelettes/media/logo.png | 2021-09-02T19:28:00Z | AnChain.AI Raises $10 Million and Wins SEC Contract to Monitor Crypto and Digital Assets AnChain.AI, a blockchain security company that specializes in AI-powered platforms, today announced that it raised a $10 Million funding round led by SIG Asia Investments, LLLP, an affiliate of Susquehanna International Group (SIG) in an oversubscribed Series A. Fin VC, Nima Capital, Hard Yaka and Amino Capital also participated in the round. AnChain.AI also announced today that it has been awarded the multi-year SEC contract to provide a platform for deep analysis and tracing on smart contracts to support the SEC's ongoing efforts to monitor risk, improve compliance and inform commission policy on digital assets and cryptocurrencies. Founded in 2018 by CEO Dr. Victor Fang and COO Ben Wu, AnChain.AI provides blockchain security and regulatory compliance solutions to secure leading crypto exchanges, protocols, and DeFi worldwide for $81 Billion in daily transaction volume. It serves clients in over 10 countries across the financial and enterprise industry, blockchain Virtual Asset Service Providers (VASP), public sector companies and governments. Its technology establishes transparency, trust, and legitimacy to allow all stakeholders to interact confidently and securely with the developing digital economy and the next iteration of global technological infrastructure.
The AnChain.AI platform proactively protects crypto assets by providing proprietary AI, knowledge graphs, threat intelligence and data analytics on blockchain transactions, and in-depth cryptocurrency transaction monitoring on a wide variety of public and private chains. With the market exceeding $1 Trillion in 2021, and a looming billion-dollar crypto AML problem, demand for revamped regulatory frameworks and technologies has never been more critical. AnChain.AI's machine-learning-powered forensic capabilities, which detected the first Blockchain APT (BAPT) hack in history, are now helping international law enforcement and emerging government regulatory efforts with both preventive screening and post-incident investigation. Speaking on the announcement, Ye Li, Investment Manager at SIG, said, "AnChain.AI has made great progress in developing its market-leading crypto security technology to meet its customers' broad demand in regulatory compliance and transaction intelligence." The past year has seen 400% growth in revenue for AnChain.AI, and its solutions now capture over 98.5% of the cryptocurrency market share, securing billions of dollars in daily transactions across many of the major exchanges by delivering real-time alerts and detecting the precursors of criminal activity well before conventional forms of identification. Managing General Partner and Founder of Fin VC, Logan Allin, said, "We are at an inflection point, and a crossing-the-chasm moment, as it relates to institutional and government adoption of digital assets. The only way to bridge that gap is through robust solutions like AnChain.AI.
We look forward to bringing these capabilities to the Financial Services and FinTech industries and continuing to safely and securely democratize access to these innovative technologies and asset classes." AnChain.AI will use the capital to accelerate product development and recruitment across research and development, engineering, customer success and sales. AnChain.AI is an AI-powered cybersecurity company enhancing blockchain security, risk, and compliance strategies. AnChain.AI, headquartered in San Jose, California, was founded in 2018 by cybersecurity and enterprise software veterans from FireEye and Mandiant. Backed by both Silicon Valley and Wall Street VCs, and selected for the Berkeley Blockchain Xcelerator, the company is trusted by 100+ customers from 10+ countries in these sectors: VASP, financial institutions and government, including the SEC (Securities and Exchange Commission). Featured by CBS News, MIT Tech Review, Coindesk and DEFCON, AnChain.AI's AML engine screens over $1 billion in daily crypto transactions.
|
news | avastblog@avast.com (Emma McGowan) | Q&A With Andrew Gardner | Avast | Andrew Gardner, Ph.D., has collected comic books since he was a kid. Back then, his favorite character was Iron Man because — unlike other superheroes — Iron Man created his special abilities: he designed and built his suit from scratch, and then he used it to explore and protect the world. And that, Gardner says, is the ideal artificial intelligence (AI) embodiment to have. | https://blog.avast.com/qa-with-andrew-gardner-avast | 2021-08-26T05:54:02Z | Gardner explains how AI can rebalance the computer security game and teach us about human identity. Andrew Gardner, Ph.D., has collected comic books since he was a kid. Back then, his favorite character was Iron Man because — unlike other superheroes — Iron Man created his special abilities: he designed and built his suit from scratch, and then he used it to explore and protect the world. And that, Gardner says, is the ideal artificial intelligence (AI) embodiment to have. Andrew recently joined Avast as its new VP of Research and AI. He's fascinated by cool technology, both fictional and real, and is a leading researcher in the AI and machine learning (ML) communities. At Avast, he hopes to move the industry forward by helping shape our future conception of computer security, moving beyond the traditional idea of protection from file, script and email threats, to systems which protect transactions, interactions, conversations and attention. And he'd also really love a garbage can that emptied itself. A conversation with Gardner reveals, however, that while the tech is fascinating, it's the people he's really interested in. Keep reading to learn more about what one AI expert is excited about, what keeps him up at night, and where he thinks all of this is ultimately headed.

There's a lot of hype around AI, but it's a poorly understood field.
What are three things you wish the general public knew about artificial intelligence?

Well, firstly, there's no universal definition for AI. That is one contributor to hype, because so many things ranging from mundane to fictional get lumped into the AI umbrella and create confusion, and ultimately disappointment. So starting with a good definition helps with hype. I think a good, general definition for AI is an intelligent system or program that's doing things that humans would do. The system has to be able to sense its environment or collect data in some way and process that data. AI then makes decisions based on that data. The decision-making bit is the real hallmark of AI. For example, a self-driving car is heading to an intersection. It records and processes video and sensor data, computes its velocity and checks fuel levels. This is all amazing, and highly technical, but the AI aspect is bringing it all together into moving from point A to point B. Safely. For that, the car has to make choices. Does it turn left? Right? Stop? Go forward? What if there's a pedestrian? How does it prioritize decisions? The decision-making is really important and historically under-emphasized. Without decisions you are probably talking about machine learning, or something simpler. Decision-making is hard and, frankly, we really don't understand well enough how humans do it to be successful at mimicking them. This understanding gap mirrors where we see the biggest struggle with AI and society. The AI community is aware of ethical challenges, for example, but not formally set up to tackle these. We're still in very nascent stages of addressing this scientifically. Originally, researchers and developers just focused on functionality, not bigger ethical questions. The impact of AI on society is complex and we need to have lots of stakeholders participating. Second, I'd love for people to have some perspective on AI.
On the one hand, a lot of what people think of as AI isn't what a researcher would consider AI. It's not SkyNet, Terminator stuff. And the media presents it two ways. On the one hand, it's magic and could be the end of the world. On the other hand, it's not magic because my smart toaster still doesn't make my toast right. It can be really hard to determine which is which, and the average person gets confused. And third, AI is for everyone, not just practitioners. It is going to change our world for decades to come and everyone will interact with it in some form, in a way that is similar to how electrification changed the world over the course of a century. We're just at the beginning of that change with AI.

What are you most excited about when it comes to the future of AI?

It's all kind of exciting! But the most interesting thing about AI, for me, is that it teaches me about humanity. And that's a really cool thing. AI reproduces how people behave and act or how we should behave and act. It makes you think a lot about what makes us human. Humans are very, very complex machines. We far eclipse what we're currently dreaming about for AI. I want AI to change my world, but in ways that I can touch and see and feel. There's a real trend right now of merging robotics and AI to create consumer products. How cool would it be if you had a litter box that scoops itself or trash that takes itself out? We're even seeing delivery bots and drones. It really gets interesting when AI starts interacting with the real world. What does a control system for power and traffic look like when the roads are full of self-driving cars? When will we have robots assembling the next generation of robots? I'm excited for us to move faster towards things that benefit society and help us. In my big vision of the longer term, robotics would help us innovate and invent as a species. I'm thinking of things like powerful AI that could do drug discovery or medical discovery.
Things that would augment our human efforts in a synergistic way, instead of today where we give them specific tasks to complete. I'd like AI to be more of a partner than a very, very junior lab assistant. Iron Man instead of Microsoft Clippy.

What are you most worried about?

Not specific to AI, but to science and technology in general, I'm worried that people don't give enough consideration to "What if?" We can build self-driving cars, but we start by solving technical problems. People raise potential ethical issues and the community will give a nod to that. But I don't think they put enough effort into thinking about outlier events. For example, imagine there's a self-driving car economy that flips on overnight. What if 10 million jobs are replaced with that flip? What if the cost of self-driving car rides increases the gap between poor and middle class, or across different countries? What if new crimes are enabled or committed with self-driving cars? We don't always think about what the cost of success could be; we just want to win the race and get there as fast as we can. Then there are the ways that bad guys can exploit AI, which is where Avast sits. Historically, there have been real deterrents to exploiting security gaps at scale. Things like access to technology and knowledge. These acted as a gating mechanism which made the white hat vs. black hat war somewhat balanced. That's all changed now. With AI, bad actors can target and automate to create exploits at scale and at machine speeds. Their ability to search for new vulnerabilities has grown exponentially. If it's been a cat and mouse game in computer security, it's now tilting toward the cat, if the cats are the bad guys. We need AI to help rebalance the game: cat vs. mouse becomes cat vs. robo-dog.

What are your hopes for the future of AI in computer security?

We have to be really disruptive in how we even think about security. We need to think differently: How do we go from a box to a sphere?
How do we even change the idea of security? What even is security? Computer security used to mean, and probably to a lot of people still does mean, antivirus on the computer. But these days we use phones, IoT, tablets, and so on. Our interactions with other devices and other people are amplified by social media, ecommerce and digital transformation in our daily lives. So computer security now is more about making sense of how we, as humans, interact, where we place trust, where we spend our attention. I think of the future of security as a guardian angel that sits on our shoulder and protects us from both clear threats and less clear threats across these new interactions, without requiring a lot of explicit direction. At the same time, if we really do our jobs well, traditional security products are designed to be forgotten: the user doesn't hear from us, unless we are alerting them, which is rare. The user experience for security products in this model is atrocious: we basically make a grudge purchase, buying insurance in the form of security software. Can we change this? We need to change this! We have to be able to interact with the user and engage more meaningfully, consistently and usefully. If I could set a goal for this industry, it would be to revise how people view security products. I want them to be something more like a personal assistant or advisor that users trust and are actually interested in engaging with.

Other than AI and machine learning, what's the topic you can nerd out about for hours?

My favorite thing about AI, and what I dream and aspire to do, is AI for storytelling. It's a really hard problem. You have to study how authors or creators go about setting out a story, how it's organized, even sentence planning. So far, AI doesn't come close to touching what humans can do.
But imagine, though, if you could have a quick conversation with an AI that could generate entirely new books, movie or game worlds with compelling and realistic characters and plot development in the style you like… That dream is a way off. Today there's not really much intelligence in AI, at least not in the general intelligence sense one ascribes to people. Typical systems work like this: you give the AI a prompt like, "it was a bright and sunny day", and it starts completing the text, maybe a few sentences, for example. If you don't like the completion you try again and get a new result. The remarkable thing about it to lay people is that the generated text will usually have no grammatical or syntactical errors. But that doesn't mean it's sensible. It will generate correct, complete sentences, but they don't really all hang together. Still, there are some fun examples out there. AI Dungeon, for example, is a neat mobile game that uses state-of-the-art AI for an interesting choose-your-own-story approach. Hollywood is interested in AI, too. I'm a big fan of sci-fi and I enjoyed the television show Stargate SG-1. I learned recently that the cast and producer are doing an experiment where they're having an AI generate a screenplay for an episode and then the cast is going to act it out. My expectations are low, but it should be fun. Just to circle back to storytelling and AI for a moment: I love this marriage because it really makes you ask, how do people think and how do they reason? How do humans think? How should (or do) AI systems think? Storytelling is so fundamental to our human existence and identity. That's an area where I'd like to see AI really bloom.
|
news | BS Reporter | Saas startup NeuroPixel.AI raises seed fund of $825,000 led by IPV | NeuroPixel.AI was founded in late 2020 by Arvind Nair (CEO) and Amritendu Mukherjee (CTO) | https://www.business-standard.com/article/companies/saas-startup-neuropixel-ai-raises-seed-fund-of-825-000-led-by-ipv-121090600522_1.html | 2021-09-06T07:37:00Z | Deeptech SaaS startup NeuroPixel.AI has raised $825,000 in a seed round led by Inflection Point Ventures (IPV). Other investors in the round included Entrepreneur First, Huddle, Dexter Angels, and Rishaad Currimjee.The startup will use the funds raised for scaling up the R&D team to accelerate the transition of its product from beta to production, and for expanding its ‘training set’, a crucial piece of the puzzle for every machine learning algorithm.Ankur Mittal, co-founder, Inflection Point Ventures, said, “As e-commerce will expand, so will the need to put up quality and realistic product pictures online. In fashion commerce it is a big part of the buyer’s purchase decision. However, it is not a seamless process and is both time-consuming and expensive, especially for SMEs and social sellers, two segments which are growing exponentially. NeuroPixel is trying to solve this problem by building a product that can transform online fashion storefronts through catalog image-based personalisation and virtual try-ons, helping the average consumer make a far more informed purchase decision.”NeuroPixel.AI was founded in late 2020 by Arvind Nair (CEO) and Amritendu Mukherjee (CTO). The venture originated at Entrepreneur First, a leading international talent investor, which helps aspiring entrepreneurs find co-founders and supports them in building technology companies.NeuroPixel.AI’s first product – an AI powered cataloguing tool – will enable users to shoot any apparel on just a mannequin, and their technology will render the apparel on models of different sizes in different poses. 
In the near term, they will reduce cataloguing spends by 30% and process times by 90%, claims the company. “What Arvind and Amritendu are building today is a world-class, innovative, technology-led startup that can change the way consumers shop online. I’m excited to see how NeuroPixel.AI evolves and disrupts the online fashion ecosystem to pave the way for more intuitive solutions and much-needed disruption of the online customer experience,” said Esha Tiwary, general manager at EF India. NeuroPixel.AI was also among the six startups selected for investment by the ISB D-Labs incubator, under their seed support programme in collaboration with the Department of Science and Technology. The startup has also been selected into the Huddle accelerator, which will commence from the closure of this round of funding. With global spends on apparel cataloguing estimated to be roughly $7 billion today and growing at 16 per cent CAGR, and the virtual fitting room market valued at approximately $2.5 billion today and growing at 25 per cent CAGR, NeuroPixel.AI is confident of tapping into a large, high-value international market with their technology soon.
Digital Editor | Personalization/Image Analysis | Unknown | null | null | null | null | null | null |
|
news | PR Newswire | Remark Holdings Announces Fiscal Second Quarter 2021 Financial Results | Remark Holdings, Inc. (NASDAQ: MARK), a diversified global technology company with leading artificial intelligence ("AI") solutions and digital media... | https://finance.yahoo.com/news/remark-holdings-announces-fiscal-second-201500892.html | https://s.yimg.com/uu/api/res/1.2/1jV7ZZ5DeelaaVA94B9EmQ--~B/aD01NDt3PTQwMDthcHBpZD15dGFjaHlvbg--/https://media.zenfs.com/en/prnewswire.com/b608ab19025c31f98cea7d4f278a61f8 | 2021-08-23T20:15:00Z | Second Quarter 2021 Revenue Increased 75% to $4.0 Million Compared to Second Quarter 2020. LAS VEGAS, Aug. 23, 2021 /PRNewswire/ -- Remark Holdings, Inc. (NASDAQ: MARK), a diversified global technology company with leading artificial intelligence ("AI") solutions and digital media properties, today announced financial results for its fiscal second quarter ended June 30, 2021. Management Commentary: "Our second quarter was highlighted by a near doubling of revenue coming from the United States, driven by our AI data intelligence platform," noted Kai-Shing Tao, Chairman and Chief Executive Officer of Remark Holdings. "Momentum from our Chinese operations continued despite periodic regional lockdowns associated with COVID-19 and a slowdown in business activities due to the 100th Anniversary of the CCP, growing quarterly revenue in 2021 by more than 40% compared with the same period of last year. In the first six months of 2021, we have almost achieved our full-year 2020 revenue, and we expect additional significant growth in the second half of the year." Second Quarter 2021 Business Highlights: During the second quarter, Remark continued to build its data intelligence business using its AI Data Intelligence Platform.
Based on initial success, the company is looking forward to the start of the fall sports season and to additional growth opportunities with other online sports gaming and iGaming businesses. China Mobile continues to implement Remark's KanKan AI Platform and Smart Queueing System throughout their retail locations. Additionally, Remark is developing an Artificial Intelligence of Things project to intelligently manage in-store ambient environmental equipment. Remark is also preparing to bid on the second phase of China Mobile's Smart Community business. The company would provide its AI solution to enforce COVID-19 protection rules for communities by enforcing health codes, conducting real-time temperature checks, ensuring mask wearing, allowing access only to residents or authorized persons, controlling vehicle access, and helping to protect the elderly and children. Remark's Digital Marketing Platform ("DMP") was deployed with Bank of China at their Guangzhou branch and China Construction Bank's Yunnan branch during the second quarter, providing additional large opportunities across multiple banks and other retailers. The design phase for Lotus Supermarket's DMP in the Changhe shopping center, Xi'an City, has been completed and is expected to be deployed later this year. During the second quarter, Smart Campus solutions were deployed across more than three dozen campuses, bringing total installations to more than 300 campuses. Sales efforts and new partnerships are targeting continued expansion of the Smart Campus solution to several new provinces. Fiscal Second Quarter 2021 Financial Results: Revenue for the fiscal second quarter of 2021 totaled $4.0 million, up from $2.3 million during the fiscal second quarter of 2020. Gross profit improved to $1.8 million in the second quarter of 2021 from $1.1 million in the second quarter of 2020, commensurate with increased revenue.
The overall gross profit margin for the second quarter of 2021 was 43.9%. The company incurred an operating loss of $2.5 million in the second quarter of 2021, compared to an operating loss of $2.8 million in the comparable quarter of 2020. An increase in general and administrative expense of $0.6 million, netted against $0.3 million of decreases in other operating expense categories, partially offset the improved gross profit and was the primary reason for the operating loss. Net loss totaled $1.6 million, or $0.02 per diluted share, in the second quarter ended June 30, 2021, compared to a net loss of $9.8 million, or $0.11 per diluted share, in the second quarter ended June 30, 2020. The decrease in the company's stock price between December 31, 2020 and June 30, 2021 led to a $1.3 million gain on the change in liability associated with certain outstanding warrants. In the second quarter of 2020, the company recorded a $6.3 million loss on the change in the fair value of warrant liability due to stock price changes during that period of the prior year. At June 30, 2021, the cash balance totaled $0.1 million, compared to a cash balance of $0.9 million at December 31, 2020. Proceeds of $4.8 million from a short-term debt issuance and $0.8 million from stock option exercises were offset by $6.3 million of cash used in operations. "Finally, subsequent to June 30, 2021, Sharecare, Inc. completed its merger with Falcon Acquisition, providing us with initial liquidity of $2.3 million plus approximately 9.4 million shares of Sharecare, Inc. We anticipate that monetizing our position will fund our balance sheet while simultaneously supporting working capital needs to meet our growth goals and new initiatives," concluded Mr. Tao. Sharecare, Inc. trades on The Nasdaq Stock Market (SHCR - $7.40). Conference Call Information: Management will hold a conference call this afternoon at 4:30 p.m. Eastern Time (1:30 p.m.
Pacific Time) to discuss the Company's financial results and provide an update on recent business developments. A question-and-answer session will follow management's presentation. The live conference may be accessed via telephone or online webcast. Toll-Free Number: 888.394.8218; International Number: 323.701.0225; Conference ID: 3005370; Online Webcast: http://public.viavid.com/index.php?id=146197. Participants are advised to log in for the live webcast 10 minutes prior to the scheduled start time. A replay of the call will be available after 7:30 p.m. Eastern time on the same day through August 28, 2021. Toll-Free Replay Number: 844.512.2921; International Replay Number: 412.317.6671; Replay ID: 3005370. About Remark Holdings, Inc.: Remark Holdings, Inc. (NASDAQ: MARK) delivers an integrated suite of AI solutions that enable businesses and organizations to solve problems, reduce risk and deliver positive outcomes. The company's easy-to-install AI products are being rolled out in a wide range of applications within the retail, public safety and workplace arenas. The company also owns and operates an e-commerce digital media property focused on a luxury beach lifestyle. The company is headquartered in Las Vegas, Nevada, with additional operations in Los Angeles, California and in Beijing, Shanghai, Chengdu and Hangzhou, China. For more information, please visit the company's website at http://www.remarkholdings.com/. Forward-Looking Statements: This press release may contain forward-looking statements, including information relating to future events, future financial performance, strategies, expectations, competitive environment and regulation. Words such as "may," "should," "could," "would," "predicts," "potential," "continue," "expects," "anticipates," "future," "intends," "plans," "believes," "estimates," and similar expressions, as well as statements in future tense, identify forward-looking statements.
These statements involve known and unknown risks, uncertainties and other factors, including those discussed in Part I, Item 1A. Risk Factors in Remark Holdings' Annual Report on Form 10-K and Remark Holdings' other filings with the SEC. Any forward-looking statements reflect Remark Holdings' current views with respect to future events, are based on assumptions and are subject to risks and uncertainties. Given such uncertainties, you should not place undue reliance on any forward-looking statements, which represent Remark Holdings' estimates and assumptions only as of the date hereof. Except as required by law, Remark Holdings undertakes no obligation to update or revise publicly any forward-looking statements after the date hereof, whether as a result of new information, future events or otherwise. Company Contacts: E. Brian Harvey, Senior Vice President of Capital Markets and Investor Relations, Remark Holdings, Inc., ebharvey@remarkholdings.com, 702.701.9514; Fay Tian, Vice President of Investor Relations, F.Tian@remarkholdings.com, (+1) 626.623.2000, (+86) 13702108000. REMARK HOLDINGS, INC.
AND SUBSIDIARIES
Consolidated Balance Sheets (dollars in thousands, except share and per share amounts)
Columns: June 30, 2021 (Unaudited) | December 31, 2020

Assets
Cash (includes VIE $60 and $278, respectively): $122 | $854
Trade accounts receivable, net (includes VIE $7,788 and $4,850, respectively): 8,045 | 5,027
Inventory, net (includes VIE $58 and $112, respectively): 1,925 | 874
Prepaid expense and other current assets (includes VIE $819 and $248, respectively): 1,436 | 2,043
Total current assets: 11,528 | 8,798
Property and equipment, net (includes VIE $— and $43, respectively): 264 | 321
Operating lease assets (includes VIE $173 and $281, respectively): 330 | 492
Investment in unconsolidated affiliate: 1,030 | 1,030
Other long-term assets (includes VIE $29 and $68, respectively): 581 | 670
Total assets: $13,733 | $11,311

Liabilities and Stockholders' Deficit
Accounts payable (includes VIE $5,631 and $3,655, respectively): $11,112 | $8,589
Accrued expense and other current liabilities (includes VIE $3,386 and $3,782, respectively): 7,539 | 6,660
Contract liability (includes VIE $187 and $147, respectively): 590 | 310
Notes payable, net of unamortized discount and debt issuance cost: 6,167 | 1,500
Total current liabilities: 25,408 | 17,059
Loans payable: 1,425 | 1,425
Operating lease liabilities, long-term (includes VIE $26 and $79, respectively): 98 | 194
Warrant liability: 2,013 | 1,725
Total liabilities: 28,944 | 20,403
Commitments and contingencies
Preferred stock, $0.001 par value; 1,000,000 shares authorized; zero issued: — | —
Common stock, $0.001 par value; 100,000,000 shares authorized; 99,918,941 and 99,505,041 shares issued and outstanding at June 30, 2021 and December 31, 2020, respectively: 100 | 100
Additional paid-in capital: 352,394 | 351,546
Accumulated other comprehensive income: (171) | (226)
Accumulated deficit: (367,534) | (360,512)
Total stockholders' deficit: (15,211) | (9,092)
Total liabilities and stockholders' deficit: $13,733 | $11,311
REMARK HOLDINGS, INC.
AND SUBSIDIARIES
Consolidated Statements of Operations and Comprehensive Loss (dollars in thousands, except per share amounts)
Columns: Three Months Ended June 30, 2021 | 2020 || Six Months Ended June 30, 2021 | 2020

Revenue: $4,016 | $2,299 || $8,422 | $2,730
Cost and expense
Cost of revenue (excluding depreciation and amortization): 2,252 | 1,210 || 5,004 | 1,231
Sales and marketing: 398 | 486 || 1,399 | 902
Technology and development: 1,305 | 1,477 || 2,855 | 2,125
General and administrative: 2,482 | 1,898 || 5,179 | 4,638
Depreciation and amortization: 49 | 66 || 115 | 156
Total cost and expense: 6,486 | 5,137 || 14,552 | 9,052
Operating loss: (2,470) | (2,838) || (6,130) | (6,322)
Other income (expense)
Interest expense: (380) | (775) || (615) | (1,236)
Other income, net: 6 | 57 || 7 | 57
Change in fair value of warrant liability: 1,322 | (6,260) || (288) | (6,203)
Gain on lease termination: — | — || — | 1,538
Other income (loss), net: (30) | — || 13 | (73)
Total other income (expense), net: 918 | (6,978) || (883) | (5,917)
Loss from operations: $(1,552) | $(9,816) || $(7,013) | $(12,239)
Provision for income taxes: (9) | — || (9) | —
Net loss: $(1,561) | $(9,816) || $(7,022) | $(12,239)
Other comprehensive loss
Foreign currency translation adjustments: 13 | 156 || 55 | 338
Comprehensive loss: $(1,548) | $(9,660) || $(6,967) | $(11,901)
Weighted-average shares outstanding, basic and diluted: 99,917 | 89,264 || 99,838 | 71,527
Net loss per share, basic and diluted: $(0.02) | $(0.11) || $(0.07) | $(0.17)

View original content to download multimedia: https://www.prnewswire.com/news-releases/remark-holdings-announces-fiscal-second-quarter-2021-financial-results-301360922.html
SOURCE Remark Holdings, Inc. | Decision Making/Process Automation | Management/Business and Financial Operations | null | null | null | null | null | null
news | Pasha Finkelshteyn | Data Engineering Annotated Monthly – August 2021 | August is usually a quiet month, with vacations taking their toll. But data engineering never stops. I’m Pasha Finkelshteyn and I will be your guide through this month’s news, my impressions of the developments, and ideas from the wider community. If you think I missed something worthwhile, ping me on Twitter and suggest a topic, […] | https://blog.jetbrains.com/big-data-tools/2021/09/06/data-engineering-annotated-monthly-august-2021/ | 2021-09-06T12:29:26Z | August is usually a quiet month, with vacations taking their toll. But data engineering never stops. I'm Pasha Finkelshteyn and I will be your guide through this month's news, my impressions of the developments, and ideas from the wider community. If you think I missed something worthwhile, ping me on Twitter and suggest a topic, link, or anything else. News: A lot of engineering is about learning new things and keeping a finger on the pulse of new technologies. Here's what's happening in data engineering right now. Fairlens 0.1.0: Ethical ML is huge right now. But it is incredibly hard to determine manually whether a dataset is ethical, unbiased, and not skewed. Given this is a hot topic and there's a boatload of money in it, you would expect there to be a wealth of tools to verify data ethics, but you'd be wrong. At least until Fairlens came on the scene. It hasn't had its first release yet, but the promise is that it will un-bias your data for you! How cool is that? Kafka 3.0.0-rc0: If you like to try new releases of popular products, the time has come to test Kafka 3 and report any issues you find on your staging environment! Support for Scala 2.12 and Java 8 still exists but is deprecated. There are also several changes in KRaft (namely "Revise KRaft Metadata Records" and "Producer ID generation in KRaft mode"), along with many other changes.
Unfortunately, the feature that was most awaited (at least by me), tiered storage, has been postponed for a subsequent release. ClickHouse v21.8: This release of ClickHouse is massive. For fans of open-source instruments, the most interesting change is support for the MaterializedPostgreSQL table engine, which lets you copy a whole Postgres table/database to ClickHouse with ease. MLflow 1.12.0: This minor release of a popular MLOps framework allows you to store and serve ML models. One of the changes that looks exciting to me is "Add pip_requirements and extra_pip_requirements to mlflow.*.log_model and mlflow.*.save_model for directly specifying the pip requirements of the model to log / save." Apache Pinot 0.8.0: Apache Pinot is a real-time distributed OLAP datastore, designed to answer OLAP queries with low latency. In some sense, it competes with ClickHouse, as both target the same workflow. There are multiple differences, of course; for example, Pinot is intended to work in big clusters. There are a couple of comparisons on the internet, like this one, but it's worth mentioning that they are quite old and both systems have changed a lot, so if you're aware of more recent comparisons, please let me know! One of the interesting changes here is support for Bloom filters for IN predicates. LakeFS 0.48.0: We described LakeFS in the July issue of our Annotated. Now it has added support for having multiple AWS regions for underlying buckets. While this may be more expensive in terms of both money and performance, it still sounds like a nice disaster-recovery option. Even if a meteorite hits your data center, your big data is still going to be safe! Future improvements: Data engineering technologies are evolving every day. This section is about what's in the works for technologies that you may want to keep on your radar. Cache for ORC metadata in Spark: ORC is one of the most popular binary formats for data storage, featuring awesome compression and encoding capabilities.
But what if we need to query the same dataset multiple times? Reading file metadata is costly because it is an IO operation, which is slow. And more files means more time. With caching, though, execution times may be decreased dramatically (on some workloads). Custom netty HTTP request inbound/outbound handlers in Flink: Sometimes we need to perform HTTP requests while processing with Flink. But sometimes we need to do more than just make an HTTP request; sometimes we need to customize it, for example, by adding authentication or custom headers, which may be especially helpful in strict corporate environments. It looks like this will be available soon in Flink! Cassandra Paxos improvements: Cassandra's Paxos implementation is known to be good, but not perfect. For example, Lightweight Transactions (LWT) are known to suffer from poor performance. Don't take it from me; this comes from the Cassandra developers themselves. So, they've decided to improve this in the foreseeable future, and the work is already underway, which I think is awesome. Articles: This section is about inspiration. We'll try to list some great articles and posts that can help us all learn from the experience of other people, teams, and companies dealing with data engineering. Change Data Capture at DeviantArt: I think we all know what Debezium is. But while it is a tool for streaming data from DBs to Kafka, it cannot cover all CDC needs or scenarios. In this article, the folks from DeviantArt describe the whole architecture of their CDC solution, with concrete recipes and tips. How Uber Achieves Operational Excellence in the Data Quality Experience: Uber is known for having a huge Hadoop installation in Kubernetes. This blog post is more about data quality, though, describing how they built their data quality platform. Who would have thought that building a data quality platform could be this challenging and exciting?
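As an aside, the core of a batch data-quality check in the spirit of that post can be tiny. Here is a generic, stand-alone Python sketch (not Uber's platform; the field names and bounds are made up for illustration):

```python
def check_quality(rows, required, ranges):
    """Return human-readable violations for a batch of records.

    rows:     list of dicts, one per record
    required: fields that must be present and non-None
    ranges:   {field: (lo, hi)} inclusive bounds for numeric fields
    """
    violations = []
    for i, row in enumerate(rows):
        for field in required:
            if row.get(field) is None:
                violations.append(f"row {i}: missing {field}")
        for field, (lo, hi) in ranges.items():
            value = row.get(field)
            if value is not None and not (lo <= value <= hi):
                violations.append(f"row {i}: {field}={value} outside [{lo}, {hi}]")
    return violations

batch = [
    {"user_id": 1, "fare": 12.5},
    {"user_id": None, "fare": 250.0},  # two problems: null id, fare out of range
]
print(check_quality(batch, required=["user_id"], ranges={"fare": (0, 100)}))
```

A real platform adds scheduling, anomaly detection, and alerting on top of checks like these, which is exactly where the hard engineering lives.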
100% test coverage sounds amazing, too, so good job! Apache Hudi: The Data Lake Platform: Quasi-mutable data storage formats are not only trending, but also mysterious. How do they really work under the hood? At what cost do we get this mutability? In this detailed post, Hudi developers meticulously describe how Apache Hudi works and why it's good for streaming. Hive Metastore: It didn't age well: The folks from LakeFS continue to delight us with interesting articles about data engineering. This time they describe what is wrong with the popular Hive Metastore and explain how it works in detail. Tools: sqlglot: I often found myself digging the web for specific SQL dialect details. Should I backtick the identifiers here? Should I use double quotes or single ones? And don't get me started on formatting. Sometimes I just didn't want to launch my favorite DataGrip to format a single SQL statement. Then I discovered sqlglot, a tool that can transpile my syntax from one dialect to another in an instant. That's one less headache for me! Conferences: SmartData 2021: This international conference on data engineering is organized by a Russian company, but it aims to have at least 30% of the talks in English. Most of the topics, from data quality to DWH architecture, are hot! Speakers from Databricks, Microsoft, Netflix, and other huge companies are going! That wraps up August's Annotated. Follow JetBrains Big Data Tools on Twitter and subscribe to our blog for more news! You can always reach me, Pasha Finkelshteyn, at asm0dey@jetbrains.com or send a DM to my personal Twitter, or you can get in touch with our team at big-data-tools@jetbrains.com. We'd love to know about any other interesting data engineering articles you come across! | Content Synthesis/Discovery | Computer and Mathematical | null | null | null | null | null | null
|
news | j@falei.ro | quantlet added to PyPI | QuantLET - an event-driven framework for large scale real-time analytics | https://pypi.org/project/quantlet/ | 2021-08-30T03:25:53Z | QuantLET - an event-driven framework for large-scale real-time analytics. Copyright (C) 2006 Jorge M. Faleiro Jr. QuantLET is an open source, event-driven framework for rapid development and deployment of real-time analytical models intended to execute at large scale, in terms of data intensiveness or computing power (your spreadsheet can't do that). You can see a few examples of the framework outlining the use of signals in a moving average cross-over strategy or how to define and use 'infinite spreadsheets'. There is also a large number of examples produced during my doctorate research and sprinkled across many articles. The Black Magic paper describes an end-to-end investigation of the use of data to detect profit opportunities in equities using price momentum. The financial language SIGMA, also part of the same research, borrowed some ideas from QuantLET, and vice-versa. The nature of any quantitative framework requires a number of quite heavy auxiliary libraries and resources. QuantLET is no exception. You can pick and choose specific extensions (as Python extras) based on what you intend to do with the framework. Development: If you intend to try out the source code, please make yourself aware of the license. The use of containers and cloud services is recommended. At the time of this writing I used VSCode and Remote Containers. You will also need poetry and pre-commit.

git clone git@gitlab.com:jfaleiro/quantlet.git
cd quantlet
poetry install

All code check and quality procedures are done as part of pre-commit. These checks are mandatory and are a condition for automatic build and release.

poetry shell
pre-commit install

Git pre-commit hooks are installed, and from this point on all checks are done locally as a condition for a git commit to succeed. CI-CD is done by gitlab.
You can find the spec for each component in the source tree. Use: Typical setuptools use through pip. You can use the bare-bones version:

pip install quantlet

Or any of the extensions (extras). If you need one single extension, say strats:

pip install quantlet[strats]

If you want multiple extensions, like reactives and deep learning for example, you add each extension separated by a comma:

pip install quantlet[reactives,dl]

You don't want to use the wildcard quantlet[*] and install all extras. Python is not really an environment geared toward large-scale software development, and this would bring in all dependencies across all extensions. In pip and poetry, for example, this might lead to a few hours of dependency resolution alone. There are way more uses and features in QuantLET than we would like to admit, and than you can possibly need for one application, so be parsimonious. Each extension is defined in a project named quantlet-[extension]. Dependencies on QuantLET's pyproject.toml are defined like this:

"quantlet.reactives" = {git = "https://gitlab.com/jfaleiro/quantlet-reactives.git", rev = "release/0.0.1", develop = true, optional = true}

This type of dependency is resolved through git. In each case you might need read access to the specific gitlab repository. Feel free to investigate and get in touch if you need access or details. quantlet-streams: QuantLET elements of stream processing (filtering, grouping, selection, functional operations) on canonical and data-frame formats.

([1, 3, 4, 7, 8] >> apply(lambda x: dict(x=x))) == [{'x': 1}, {'x': 3}, {'x': 4}, {'x': 7}, {'x': 8}]

This is the streaming facet defined as part of the financial language SIGMA. quantlet-reactives: Fast and simple framework for reactive programming.
A declarative paradigm that allows the definition of what has to be done through reactive relationships, letting the computational representation automatically take care of when to do it and which results are produced, similar to cells in an electronic spreadsheet representing values and a formula.

v = [R(i) for _ in range(10000)]
c = sum(*v)
for i in v:
    i.v = normal()
print(c.v)
>> 0.0035

This is the reactives facet defined as part of the financial language SIGMA. quantlet-big-reactives: Support for reactive use cases that must rely on very large data: infinite reactive graphs (infinite spreadsheets) associated with non-structured repositories. Reactives are organized in distributed nodes, allowing for automatic persistence and in-memory allocation beyond the limits of one single computer. quantlet-timeseries: Fast timeseries functions and transformations. Large stores and retrievals of sequential datasets in fastparquet through tsstore. quantlet-agents: Synchronous and asynchronous agents for discrete-event simulation. This is related to the distribution and simulation facets defined as part of the financial language SIGMA. quantlet-strats: Financial strategies and analytics. Elements of numeric processing, data analysis, plotting, and tabular transformations.
Basically, strats are classified in bands, filters, financial engineering, seeding, and stats.

Bands: Define upper and lower limits around an ongoing signal, e.g., Bollinger and fixed bands:

# Bollinger bands
a = (simple_dataframe >> std(price_tag='price') >> bollinger(ma_tag='price'))
assert round(a.upper.mean(), 2) == 1.94
assert round(a.lower.mean(), 2) == -2.02

# Fixed bands
a = (simple_dataframe >> fixed(ma_tag='price'))
assert round(a.upper.mean(), 2) == -0.05
assert round(a.lower.mean(), 2) == -0.03

Filters: Derive a new sequence based on an original signal, e.g.:

# RMA, recursive moving average
assert list(map(lambda x: dict(y=x), [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]) >> rma(m=3)) == [
    {'y': 1.0, 'rma': 1.0}, {'y': 2.0, 'rma': 1.5}, {'y': 3.0, 'rma': 2.0},
    {'y': 4.0, 'rma': 3.0}, {'y': 5.0, 'rma': 4.0}, {'y': 6.0, 'rma': 5.0}]

# EWMA, exponentially weighted moving average
assert list(list(map(lambda x: dict(y=x), [1.0, 2.0, 3.0, 4.0, 5.0, 6.0])) >> ewma(input_tag='y')) == [
    {'y': 1.0, 'ewma': 1.0}, {'y': 2.0, 'ewma': 1.1}, {'y': 3.0, 'ewma': 1.29},
    {'y': 4.0, 'ewma': 1.561}, {'y': 5.0, 'ewma': 1.9049}, {'y': 6.0, 'ewma': 2.31441}]

Financial engineering: Common financial calculation QLets. Returns and cash flow streams: absolute, single and multiple periods, continuous and discrete compounding. Options: binomial lattice, single and multiple period binomial reactive option pricing, Black-Scholes model, put-call parity pricing, Greeks. Hedging: delta hedging, stop-price hedging. Seeding: Generators of financial sequences. Timeseries seeding: random walk and Brownian motions, random uniform seeding. Stats: Statistical transformations: uniform distribution, autocorrelation metrics, inflection points. quantlet-ml: Operations related to machine learning transformations: feature engineering, interpolations, incremental and batch learning.
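For reference, the EWMA values in the filter example above follow the standard recurrence e_t = (1 - alpha) * e_(t-1) + alpha * y_t, seeded with the first observation; the numbers shown correspond to alpha = 0.1. A stand-alone sketch (an illustration only, independent of the QuantLET ewma QLet):

```python
def ewma(values, alpha=0.1):
    """Exponentially weighted moving average, seeded with the first value."""
    if not values:
        return []
    e = values[0]
    out = [e]
    for y in values[1:]:
        e = (1 - alpha) * e + alpha * y  # equivalently: e += alpha * (y - e)
        out.append(e)
    return out

print(ewma([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]))
# approximately [1.0, 1.1, 1.29, 1.561, 1.9049, 2.31441], the values shown above
```

A smaller alpha weights history more heavily; alpha = 1 reduces the recurrence to the raw signal itself.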
This article is an example of nowcasting (https://en.wikipedia.org/wiki/Nowcasting_(economics)) of trading signals using a robot trader with incremental learning in quantlet-ml:

(retrieve('XXXX', start='2013-01-01', end='2017-12-31')[['Adj.Close', 'Adj.Volume']]
 >> apply(adjust_columns)
 >> scale(['adj_price', 'adj_volume'], scalers=[price_scaler, volume_scaler])
 >> one_hot(["dow", "dom", "month"])
 >> window_shift(['adj_price', 'adj_volume'], 5, separator='-')
 >> online_fit_predict(model, 'predicted_adj_price', error_type='squared',
                       response_variable_tag='adj_price', ignore_tags=['Date'])
 >> ewma('error', alpha=.2, output_tag='ewme')
 >> unscale(['adj_price', 'predicted_adj_price', 'adj_price-1', 'adj_price-2',
             'adj_price-3', 'adj_price-4', 'adj_price-5'],
            scalers=[price_scaler] * 7, index_column='Date'))

It uses QLets for basic operations of window shifting, scaling, one-hot encoding, and online fit-and-predict in one step for streams. quantlet-dl: Extension of quantlet-ml to support deep-learning libraries and algorithms, currently Keras and TensorFlow. quantlet-scratchpad: Support for interactive use and visualization of resources in Jupyter notebooks. Final Notes: QuantLET is an open source project that I put together and have been using for a very long time to test ideas, hold discussions with fellow practitioners, and extend my doctorate research in scientific crowds and the theory of enablers. The doctorate thesis was finished many years ago, in 2018, and is available online if you are curious and want to learn more about the subject. Bear in mind that the materialization of QuantLET was a result of volunteering my time in one of my many passions: investigations in technology, engineering, humans, and the incentives that make humans do what they do.
Nevertheless, unless I feel a compelling reason for a change, QuantLET is basically unsupported.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details. The license file is also shipped as part of the source code.

Last, but not least, it is important to note that QuantLET was the entry point to a number of successful commercial frameworks, such as Platform and Hydra. If you have an idea on how to leverage these frameworks, or extend QuantLET, with the power of large scale computing, AI, and crowds, feel free to get in touch. | Content Creation/Process Automation | Business and Financial Operations/Computer and Mathematical | null | null | null | null | null | null |
|
news | eWEEK EDITORS | Four Key Steps to Simplify Enterprise AI Success | IT plays a critical role in setting up companies for success in artificial intelligence. Learning from early adopters’ best practices can help enterprises sidestep common pitfalls when starting new AI projects. A few predictable issues are often at play when new AI initiatives stall out. Some of the most common challenges are hitting snags that […] The post Four Key Steps to Simplify Enterprise AI Success appeared first on eWEEK. | https://www.eweek.com/big-data-and-analytics/four-key-steps-to-simplify-enterprise-ai-success/ | 2021-08-24T15:11:45Z | IT plays a critical role in setting up companies for success in artificial intelligence. Learning from early adopters' best practices can help enterprises sidestep common pitfalls when starting new AI projects. A few predictable issues are often at play when new AI initiatives stall out. Some of the most common challenges are hitting snags that delay projects from getting started, not having the right AI infrastructure and tools, workflow bottlenecks that stifle data scientist productivity, and failing to control costs. Companies seeing the most value from AI have implemented a number of best practices across systems, software, and trusted advisors. These lessons can speed AI deployments across a broad range of use cases, such as computer vision to enhance safety and manufacturing uptime with predictive maintenance, recommender systems to help grow sales, and conversational AI services to boost customer satisfaction. Here are four things innovators are doing to succeed and boost the bottom-line impact of AI.

1) Don't reinvent the wheel: Use proven tools to save developer cycles and kickstart projects

AI model prototyping, development, and testing can be very time- and resource-intensive. Starting from scratch when building a new model can add months to the project timeline.
Leveraging proven tools can enhance productivity while speeding ROI. Ready-made AI software, including pretrained models and scripts for popular use cases such as speech recognition, computer vision, and recommender systems, reduces the amount of software engineering required, so projects can be ready for production faster. Additionally, purpose-built AI infrastructure ensures that IT can offer the resources needed for AI development, supporting the unique demands of AI workloads. Unlike legacy infrastructure, purpose-built AI infrastructure achieves the optimal design balance of compute, networking, and storage to speed AI model training, while ensuring data scientists don't waste valuable cycles moonlighting as systems integrators, software engineers, and tech support.

2) Tap proven expertise and platforms that can grow with you

AI systems are built by training increasingly complex models on large datasets that tend to grow exponentially. This means that enterprise AI requires powerful infrastructure to deliver the fastest model training and real-time inference once AI is running in production applications. To ensure AI-infused businesses can grow, IT needs a path to scale – and expert assistance along the way. While AI development requires advanced computing infrastructure, not every organization has access to an AI-ready data center or the facilities to support scaled infrastructure. There are now many options that help enterprises test projects before making a big commitment, as well as partners who can offer permanent infrastructure hosting to power your enterprise. Colocation providers who are certified in running AI infrastructure are ideal for those who don't have an AI-ready data center of their own.
Some even provide infrastructure on a rental basis to help companies experience high-performance AI development infrastructure before making a big investment. Expertise is also essential, especially as questions arise related to use cases, models, frameworks, libraries, and more. Having direct access to experts in full-stack AI can ensure the fastest path to getting answers that keep your project moving forward. Qualified AI companies and solution delivery partners can help enterprises right-size their system requirements to help them get started. Look for vendors who work with other trusted technology providers to make sure your needs will be met across the entire spectrum of high performance computing, networking, and storage.

3) Own the base, rent the spike to avoid blowing the budget

Given that AI is powered by data, it's critical to consider where that data is stored when developing your platform and infrastructure strategy. Not only is having large amounts of data the fuel for AI model development, the process of model training and retraining never truly ends, since production applications can drift and lose accuracy over time. Therefore, IT teams need to consider the data pipeline and the amount of time and effort that is continually spent moving large datasets from where they're created to where compute resources reside. Data gravity (data's ability to attract additional applications and services) comes into play here. As models become more complex and data scientists iterate more on their models, enterprises hit an inflection point where moving data around starts to significantly drive up costs. This is especially true if the organization is cloud-first or cloud-only in its approach.
Organizations can keep costs in check by training where their data lives to achieve the lowest cost per training run. When the need arises, such as when the development cycle moves from productive experimentation into scaled, ongoing training, a hybrid model that can straddle both cloud and on-premises resources may make sense. In hybrid architectures, an organization will size its own on-prem infrastructure according to the steady-state demand from the business, and additionally procure cloud resources to support temporary demands that exceed that capacity. This "own the base, rent the spike" approach offers the best of both worlds: lowest fixed-cost infrastructure for day-to-day demands, and on-demand scalability in the cloud for temporary or seasonal surges.

4) Build an AI center of excellence, and make AI a team sport

AI is a rapidly growing field, but it can still be tough to source professionals who already have deep domain expertise. In fact, a recent Deloitte study found that 68 percent of surveyed executives described their organization's skills gap as moderate to extreme, with 27 percent rating it as major or extreme. The reality is, the experts who can build your best AI applications are already working for you. They're inside your business units, and they know your problems and data better than anyone.
Many of them want to evolve into data scientists, but need mentoring and an environment where they can learn valuable skills while shadowing other experts in your organization. Establishing an AI center of excellence creates an environment in which your organization can consolidate people, processes, and platforms, enabling you to groom and scale data science expertise from within, saving a lot of money compared to bringing in new hires. Organizations that have successfully adopted AI are distinguished by their ability to de-risk their AI projects with the right partners, tools, software, and AI infrastructure from the start. With this solid foundation in place, companies can make their data scientists and AI developers productive immediately, enabling them to innovate without worrying about costs or resource availability. Adopting these four best practices will help IT lead their companies to uncover insights faster and speed the success of their AI initiatives.

About the Author: Tony Paikeday, senior director of AI Systems at NVIDIA | Unknown | Computer and Mathematical/Business and Financial Operations | null | null | null | null | null | null |
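The "own the base, rent the spike" sizing in step 3 of the article above comes down to simple arithmetic: price steady-state demand at the owned (amortized) rate and only the excess at the on-demand cloud rate. A sketch with entirely hypothetical rates and demand figures (none of these numbers come from the article):

```python
# Hypothetical demand profile and prices, for illustration only.
base_gpu_hours_per_day = 400       # steady-state ("base") demand
spike_gpu_hours_per_day = 600      # extra demand on peak days
spike_days = 60                    # peak days per year

cloud_rate = 3.0                   # $/GPU-hour, on-demand cloud
owned_rate = 1.25                  # $/GPU-hour, amortized on-prem cost

# Option A: run everything in the cloud.
all_cloud = cloud_rate * (base_gpu_hours_per_day * 365
                          + spike_gpu_hours_per_day * spike_days)

# Option B: own capacity sized to the base, rent only the spike.
hybrid = (owned_rate * base_gpu_hours_per_day * 365
          + cloud_rate * spike_gpu_hours_per_day * spike_days)

assert all_cloud == 546000.0   # 3.0 * (146,000 + 36,000) GPU-hours
assert hybrid == 290500.0      # 182,500 owned + 108,000 rented
assert hybrid < all_cloud      # owning the steady base is the cheaper half
```

Sizing owned capacity above steady-state demand flips the comparison: idle on-prem hardware is paid for whether or not it is used, which is exactly the over-provisioning the hybrid approach avoids.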
|
news | Kyle Wiggers | Foundation models risk exacerbating ML’s ethical challenges | Foundation models, or models capable of generating a range of media, pose risks, according to a new report from Stanford. | https://venturebeat.com/2021/08/18/foundation-models-risk-exacerbating-mls-ethical-challenges/ | 2021-08-18T15:00:07Z | Machine learning is undergoing a paradigm shift with the rise of models trained at massive scale, including Google's BERT, OpenAI's DALL-E, and AI21 Labs' Jurassic-1 Jumbo. Their capabilities and dramatic performance improvements are leading to a new status quo: a single model trained on raw datasets that can be adapted for a wide range of applications. Indeed, OpenAI is reportedly developing a multimodal system trained on images, text, and other data using massive computational resources, which the company's leadership believes is the most promising path toward AGI: AI that can learn any task a human can. But while the emergence of these foundational models presents opportunities, it also poses risks, according to a new study released by the Stanford Human-Centered Artificial Intelligence (HAI) Center for Research on Foundation Models (CRFM). CRFM, a new initiative made up of an interdisciplinary team of roughly 160 students, faculty, and researchers, today published a deep dive into the legal ramifications, environmental and economic impact, and ethical issues surrounding foundation models.
The report, whose coauthors include HAI codirector and former Google Cloud AI chief Fei-Fei Li, examines existing challenges built into foundation models, the need for interdisciplinary collaboration, and why the industry should feel a grave sense of urgency. "Foundation models are an emerging paradigm for building AI systems that lead to an unprecedented level of homogenization: a single model serving as the basis for a wide range of downstream applications," Percy Liang, Stanford HAI faculty and computer science professor, told VentureBeat via email. "This homogenization generates enormous leverage for many new applications, but they also pose clear risks such as the exacerbation of historical inequities and centralization of power."

Foundation models

CRFM's report defines foundation models as models adaptable to applications that are trained in a task-agnostic way on raw data. Theoretically, foundation models can process different modalities (e.g., language and vision) to affect the physical world and perform reasoning, or even interact with humans. The word "foundation" specifies the role these models play: "A foundation model is itself unfinished but serves as the common basis from which many task-specific models are built via adaptation," the report reads. "We also chose the term foundation deliberately to communicate the gravity of these models: Poorly constructed foundations are a recipe for disaster, and well-executed foundations are reliable bedrock for future applications." From a technical point of view, foundation models aren't new. They're based on deep neural networks and self-supervised learning, both of which have existed for decades. Semi-supervised learning accepts data that's partially labeled or where the majority of the data lacks labels.
An algorithm determines the correlations between data points, using a small amount of labeled data to mark points and train based on the newly applied labels. The sheer scope of foundation models over the last few years stretches the boundaries of what's possible, however. For example, OpenAI's GPT-3 can do a passable and occasionally exceptional job on challenging natural language tasks that it hasn't seen before. At the same time, existing foundation models have the potential to inflict harm and their characteristics are, in general, poorly understood. "These models, which are trained at scale, result in emergent capabilities, making it difficult to understand what their biases and failure modes are. Yet the commercial incentives are for this technology to be deployed to society at large," Liang said.

Impacts

Foundation models are academically interesting, due to their stellar performance on popular benchmarks, but what makes them critical to study is the fact that they're being deployed with far-reaching consequences. For example, Google Search, which has 4 billion users, relies heavily on BERT. And GPT-3 is now being used in over 300 apps by tens of thousands of developers and producing 4.5 billion words per day. As AI systems become deeply embedded in society, there have been growing concerns about their potential negative effects. Machine learning can perpetuate inequality as the trained models amplify biases in datasets. (Last year, an algorithm the U.K. government had adopted downgraded hundreds of thousands of students' grades, disproportionately impacting those from tuition-free schools.) Another concern is foundation models' ability to generate realistic text, images, and videos, which has the potential to scale disinformation in already polluted social media networks. Foundation models could have other negative impacts, particularly from an environmental standpoint, the report's coauthors point out.
The effects of model training on the environment have been brought into relief in recent years. In June 2020, researchers at the University of Massachusetts at Amherst released a study estimating that the amount of power required for training and searching a certain model involves the emission of roughly 626,000 pounds of carbon dioxide, equivalent to nearly 5 times the lifetime emissions of the average U.S. car. OpenAI itself has conceded that models like GPT-3 require significant amounts of compute (on the order of hundreds of petaflops per day), which contributes to carbon emissions. Foundation models are also likely to have substantial labor market impacts and rest on tenuous legal footing. By 2022, an estimated 5 million jobs worldwide will be lost to automation technologies, with 47% of U.S. jobs at risk of being automated. Moreover, how the law bears on the development and deployment of foundational models remains unclear in the absence of unifying legal and regulatory frameworks. It should be noted that preliminary work to address the liability questions is underway. Amsterdam and Helsinki have launched AI registries to detail how each city uses algorithms to deliver services. And the EU recently released tough draft rules on the use of AI, including a ban on most surveillance and strict safeguards for algorithms employed in recruitment, critical infrastructure, credit scoring, migration, and law enforcement.

Research ecosystem

Beyond the societal implications, foundation models introduce new hurdles in research and development, owing to the strong economic incentives companies have to deploy models developed for science. As an example, the coauthors cite GPT-3, which began as a research vehicle for OpenAI but later became a product widely used by software developers. At the research community's peril, the distinction between theory and deployment is sometimes lost. Research models are "under construction" in the sense that they're often not extensively tested.
Unfortunately, companies don't always place warning labels indicating this on their prototypes. To ensure safety, many more precautions should be taken when in-development models are made available commercially, the coauthors argue. Taking the 10,000-foot view, the coauthors note that while trained models may be available, the actual training of foundation models is impossible for the vast majority of AI researchers, due to their high computational cost and engineering requirements. This lack of accessibility, and thus reproducibility, risks hindering innovation and impacting the health of AI as a scientific field. It could also lead to a centralization of power among wealthier organizations, the coauthors say, aside from community efforts like EleutherAI and Hugging Face's BigScience project. "While some meaningful research can still be done with training smaller models or studying preexisting large models, neither will be sufficient to make adequate progress on the difficult sociotechnical challenges," the report reads. "Due to the emergent nature of these models, some functionalities like in-context learning have only been demonstrated in models of sufficient size. Having access to existing models can be useful for powering downstream applications or to identify problems (e.g., bias), but this will not help us design better architectures or training objectives for foundation models." As an antidote to the many problematic aspects of foundation models, CRFM's report suggests building infrastructure for public AI projects like the Hubble Space Telescope and Large Hadron Collider. The coauthors point to the National Research Cloud, a nonprofit initiative to provide researchers with compute power and government datasets for education, as a step in the right direction.
But they say that much more investment will be needed to fulfill the vision of an open, community-based effort that shapes the future of foundation models. "Much still remains unclear in spite of our efforts, and we reiterate that this is just the beginning of a paradigm shift: Foundation models have only just begun to transform the way AI systems are built and deployed in the world," the report's coauthors wrote. "To ensure the responsible development and deployment of these models on durable foundations, we envision collaboration between different sectors, institutions, and disciplines from the onset to be especially critical." Liang added: "We're very much in the early days, so the professional norms are underdeveloped. It's therefore imperative that we, as a community, act now to ensure that this technology is developed and deployed in an ethically and socially responsible fashion." | Content Creation/Content Synthesis/Detection and Monitoring/Prediction/Recommendation/Process Automation | Unknown | null | null | null | null | null | null |
|
news | Symphony IndustrialAI Appoints Barry Johnson President of Digital Manufacturing | WOBURN, Mass., Aug. 11, 2021 /PRNewswire/ -- Symphony IndustrialAI announced today the appointment of Barry Johnson as president of Digital Manufacturing. The move supports Symphony IndustrialAI's rapid growth and its expansion into enterprise AI solutions for plant operations,... | https://www.prnewswire.com/news-releases/symphony-industrialai-appoints-barry-johnson-president-of-digital-manufacturing-301352860.html | 2021-08-11T10:00:00Z | WOBURN, Mass., Aug. 11, 2021 /PRNewswire/ -- Symphony IndustrialAI announced today the appointment of Barry Johnson as president of Digital Manufacturing. The move supports Symphony IndustrialAI's rapid growth and its expansion into enterprise AI solutions for plant operations, visibility, and performance. The announcement comes on the heels of two additional key hires for Symphony IndustrialAI's digital manufacturing division, Vice President of Products Prashant Jagarlapudi and Chief Revenue Officer Ron Posey. Johnson is an experienced senior executive with more than 25 years of demonstrated success in the industrial software sector, driving revenue growth and improving business performance internationally. Johnson previously served in multiple executive roles at Rockwell Automation, including global vice president of software sales. Before Rockwell, Barry held numerous software roles at GE, driving growth organically and inorganically. "Enterprise AI in industrial applications has reached an inflection point, and Barry and his team are on the leading edge," said Symphony IndustrialAI Chief Executive Officer Dominic Gallello. "As a transformational change leader, Barry plays a key role in fueling growth through close work with sales, product management, engineering, and professional services teams.
Prashant and Ron bring added power to our work in digital manufacturing and Enterprise AI." "With the launch of the EurekaAI industrial platform and Symphony IndustrialAI's digital manufacturing solutions, we are working with leaders across industrial applications to make strides in enterprise AI adoption," said Johnson. "2021 is a year of immense transformation as we lay the digital foundation for the success of tomorrow's industrial and manufacturing champions. There's no team better suited to accelerate this evolution than Symphony IndustrialAI." These appointments follow Symphony IndustrialAI's introduction of the end-to-end Eureka AI platform for manufacturing and the acquisition of Savigent.

About Symphony IndustrialAI

Symphony IndustrialAI is an innovator in industrial insight, accelerating autonomous plant operations. The industry-leading EurekaAI/IoT platform and industrial optimization solutions connect tens of thousands of assets and workflows in manufacturing plants globally and process billions of data points daily, pushing new plateaus in operational intelligence. Symphony IndustrialAI digital manufacturing solutions connect devices, processes, people, and systems, enabling harmonized plant automation and control. Symphony IndustrialAI plant performance applications span asset predictive maintenance and process health and optimization, maintaining high availability of equipment, extending the life of capital assets, and reducing process variability. Symphony IndustrialAI solutions provide high value to their users by driving variability out of processes and optimizing operations for throughput, yield, energy efficiency, and sustainability.

About SymphonyAI

SymphonyAI is building the leading enterprise AI company for digital transformation across the most important and resilient growth verticals, including life sciences, healthcare, retail, consumer packaged goods, financial services, manufacturing, and media.
In each of these verticals, SAI businesses have many of the leading enterprises as clients. SAI is backed by a $1 billion commitment from Dr. Romesh Wadhwani, a successful entrepreneur and philanthropist. Since its founding in 2017, SymphonyAI has grown rapidly to a combined revenue run rate of more than $300 million and over 2,200 talented leaders, data scientists, and other professionals.

PR contact: Tylor Fenhaus, [email protected]

SOURCE Symphony IndustrialAI | Unknown | Management/Computer and Mathematical | null | null | null | null | null | null |
||
news | Ying Zeng | abstract,data,ying zeng,paper,problem,deep neural networks,communication,applications,deep learning models,adversarial examples,spiking neural network,analysis,wireless communications,existing,performance | https://arxiv.org/search/cs?searchtype=author&query=Zeng%2C+Y | 2021-09-07T04:17:00Z | Hybrid Contrastive Learning of Tri-Modal Representation for Multimodal Sentiment Analysis
Authors: Sijie Mai, Ying Zeng, Shuangjia Zheng, Haifeng Hu
Abstract: The wide application of smart devices enables the availability of multimodal data, which can be utilized in many tasks. In the field of multimodal sentiment analysis (MSA), most previous works focus on exploring intra- and inter-modal interactions. However, training a network with cross-modal information (language, visual, audio) is still challenging due to the modality gap, and existing methods still cannot ensure to sufficiently learn intra-/inter-modal dynamics. Besides, while learning dynamics within each sample draws great attention, the learning of inter-class relationships is neglected. Moreover, the size of datasets limits the generalization ability of existing methods. To address the aforementioned issues, we propose a novel framework, HyCon, for hybrid contrastive learning of tri-modal representation. Specifically, we simultaneously perform intra-/inter-modal contrastive learning and semi-contrastive learning (that is why we call it hybrid contrastive learning), with which the model can fully explore cross-modal interactions, preserve inter-class relationships and reduce the modality gap.
Besides, a refinement term is devised to prevent the model from falling into a sub-optimal solution. Moreover, HyCon can naturally generate a large amount of training pairs for better generalization and reduce the negative effect of limited datasets. Extensive experiments on public datasets demonstrate that our proposed method outperforms existing works.
Submitted 4 September, 2021; originally announced September 2021. Comments: Under Review.

Channel Knowledge Map for Environment-Aware Communications: EM Algorithm for Map Construction
Authors: Kun Li, Peiming Li, Yong Zeng, Jie Xu
Abstract: Channel knowledge map (CKM) is an emerging technique to enable environment-aware wireless communications, in which databases with location-specific channel knowledge are used to facilitate or even obviate real-time channel state information acquisition. One fundamental problem for CKM-enabled communication is how to efficiently construct the CKM based on finite measurement data points at limited user locations. Towards this end, this paper proposes a novel map construction method based on the expectation maximization (EM) algorithm, by utilizing the available measurement data jointly with the expert knowledge of well-established statistical channel models. The key idea is to partition the available data points into different groups, where each group shares the same modelling parameter values to be determined.
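(The partition-then-estimate idea just described is the classic latent-variable setting EM addresses: the E-step softly assigns each measurement to a group, and the M-step re-estimates each group's parameters. A generic 1-D two-component Gaussian-mixture sketch of that loop, illustrative only and unrelated to the paper's channel model or its closed-form CGM updates:)

```python
import math

def em_gmm_1d(xs, mu, iters=50, var=1.0):
    """Two-component 1-D Gaussian mixture with shared, fixed variance.
    E-step: responsibilities; M-step: re-estimate means and weights."""
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior probability that each point belongs to each component
        resp = []
        for x in xs:
            p = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var)) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: responsibility-weighted mean and mixing weight per component
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            w[k] = nk / len(xs)
    return mu, w

# Two well-separated clusters of "measurements" converge to their group means.
mu, w = em_gmm_1d([0.0, 0.5, 1.0, 9.0, 9.5, 10.0], [2.0, 8.0])
assert abs(mu[0] - 0.5) < 0.05 and abs(mu[1] - 9.5) < 0.05
assert abs(w[0] - 0.5) < 0.05
```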
We show that determining the modelling parameter values can be formulated as a maximum likelihood estimation problem with latent variables, which is then efficiently solved by the classic EM algorithm. Compared to pure data-driven methods such as nearest-neighbor interpolation, the proposed method is more efficient since only a small number of modelling parameters need to be determined and stored. Furthermore, the proposed method is extended for constructing a specific type of CKM, namely the channel gain map (CGM), where closed-form expressions are derived for the E-step and M-step of the EM algorithm. Numerical results are provided to show the effectiveness of the proposed map construction method as compared to the benchmark curve fitting method with one single model.
Submitted 16 August, 2021; originally announced August 2021. Comments: 7 pages, 7 figures.

Asteria: Deep Learning-based AST-Encoding for Cross-platform Binary Code Similarity Detection
Authors: Shouguo Yang, Long Cheng, Yicheng Zeng, Zhe Lang, Hongsong Zhu, Zhiqiang Shi
Abstract: Binary code similarity detection is a fundamental technique for many security applications such as vulnerability search, patch analysis, and malware detection. There is an increasing need to detect similar code for vulnerability search across architectures with the increase of critical vulnerabilities in IoT devices. The variety of IoT hardware architectures and software platforms requires capturing the semantic equivalence of code fragments in the similarity detection.
However, existing approaches are insufficient in capturing the semantic similarity. We notice that the abstract syntax tree (AST) of a function contains rich semantic information. Inspired by successful applications of natural language processing technologies in sentence semantic understanding, we propose a deep learning-based AST-encoding method, named ASTERIA, to measure the semantic equivalence of functions across different platforms. Our method leverages the Tree-LSTM network to learn the semantic representation of a function from its AST. Then the similarity detection can be conducted efficiently and accurately by measuring the similarity between two representation vectors. We have implemented an open-source prototype of ASTERIA. The Tree-LSTM model is trained on a dataset with 1,022,616 function pairs and evaluated on a dataset with 95,078 function pairs. Evaluation results show that our method outperforms the AST-based tool Diaphora and the state-of-the-art method Gemini by large margins with respect to binary similarity detection. Our method is also several orders of magnitude faster than Diaphora and Gemini for the similarity calculation. In the application of vulnerability search, our tool successfully identified 75 vulnerable functions in 5,979 IoT firmware images.
Submitted 13 August, 2021; originally announced August 2021. Journal ref: 2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN).

Domain Adaptation for Autoencoder-Based End-to-End Communication Over Wireless Channels
Authors: Jayaram Raghuram, Yijing Zeng, Dolores García Martí, Somesh Jha, Suman Banerjee, Joerg Widmer, Rafael Ruiz Ortiz
Abstract: The problem of domain adaptation conventionally considers the setting where a source domain has plenty of labeled data, and a target domain (with a different data distribution) has plenty of unlabeled data but none or very limited labeled data. In this paper, we address the setting where the target domain has only limited labeled data from a distribution that is expected to change frequently. We first propose a fast and light-weight method for adapting a Gaussian mixture density network (MDN) using only a small set of target domain samples. This method is well-suited for the setting where the distribution of target data changes rapidly (e.g., a wireless channel), making it challenging to collect a large number of samples and retrain. We then apply the proposed MDN adaptation method to the problem of end-to-end learning of a wireless communication autoencoder. A communication autoencoder models the encoder, decoder, and the channel using neural networks, and learns them jointly to minimize the overall decoding error rate. However, the error rate of an autoencoder trained on a particular (source) channel distribution can degrade as the channel distribution changes frequently, not allowing enough time for data collection and retraining of the autoencoder to the target channel distribution. We propose a method for adapting the autoencoder without modifying the encoder and decoder neural networks, adapting only the MDN model of the channel. The method utilizes feature transformations at the decoder to compensate for changes in the channel distribution, effectively presenting to the decoder samples close to the source distribution.
Experimental evaluation on simulated datasets and real mmWave wireless channels demonstrates that the proposed methods can quickly adapt the MDN model and improve or maintain the error rate of the autoencoder under changing channel conditions. Submitted 2 August, 2021; originally announced August 2021. Comments: Under review. 22 pages, 8 figures. Complexity-Free Generalization via Distributionally Robust Optimization. Authors: Henry Lam, Yibo Zeng. Abstract: Established approaches to obtain generalization bounds in data-driven optimization and machine learning mostly build on solutions from empirical risk minimization (ERM), which depend crucially on the functional complexity of the hypothesis class. In this paper, we present an alternate route to obtain these bounds on the solution from distributionally robust optimization (DRO), a recent data-driven optimization framework based on worst-case analysis and the notion of an ambiguity set to capture statistical uncertainty. In contrast to the hypothesis class complexity in ERM, our DRO bounds depend on the ambiguity set geometry and its compatibility with the true loss function. Notably, when using maximum mean discrepancy as the DRO distance metric, our analysis implies, to the best of our knowledge, the first generalization bound in the literature that depends solely on the true loss function, entirely free of any complexity measures or bounds on the hypothesis class.
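The worst-case expectation over an ambiguity set that DRO optimizes can be made concrete on a finite support. The sketch below assumes a total-variation ball as the ambiguity set, which is a toy choice for illustration (the paper works with metrics such as maximum mean discrepancy):

```python
def tv_worst_case_loss(losses, probs, eps):
    """Worst-case expected loss over all distributions within total-variation
    distance eps of a nominal distribution on finite support: shift up to eps
    of probability mass from the lowest-loss outcomes to the highest-loss one."""
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    p = list(probs)
    worst = order[-1]
    budget = eps
    for i in order[:-1]:
        take = min(p[i], budget)
        p[i] -= take
        p[worst] += take
        budget -= take
        if budget <= 0:
            break
    return sum(l * q for l, q in zip(losses, p))
```

At eps = 0 this recovers the nominal (empirical-risk) expectation, and the bound on the DRO solution grows with the radius eps rather than with any hypothesis-class complexity.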
Submitted 21 June, 2021; originally announced June 2021. Multi-User Communication with Extremely Large-Scale MIMO. Authors: Haiquan Lu, Yong Zeng. Abstract: Extremely large-scale multiple-input multiple-output (XL-MIMO) communication aims to boost the antenna size significantly beyond current massive MIMO systems, for which the conventional far-field assumption with the uniform plane wave (UPW) model may become invalid. This paper studies the modelling and performance analysis of multi-user XL-MIMO communication. With spherical wavefront phase modelling, and by taking into account the variations of signal amplitude and projected aperture across array elements, the performance of three typical beamforming schemes is analyzed, namely maximal-ratio combining (MRC), zero-forcing (ZF), and minimum mean-square error (MMSE) beamforming. For the special case of two users, we analytically show that the signal-to-interference-plus-noise ratio (SINR) of all three beamforming schemes increases as the channels' correlation coefficient decreases. Furthermore, compared to the existing UPW model, where inter-user interference (IUI) can only be suppressed in the angular domain, XL-MIMO enables a new degree-of-freedom (DoF) for IUI suppression by distance separation, even for users along the same direction. Simulation results are provided to validate the modelling and performance analysis of multi-user XL-MIMO communications.
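The two-user claim above, that SINR rises as channel correlation falls, can be checked numerically with a toy maximal-ratio-combining computation (the channel vectors, transmit power, and noise level are arbitrary illustrative values, not the paper's model):

```python
def hdot(a, b):
    """Hermitian inner product a^H b of two complex vectors."""
    return sum(x.conjugate() * y for x, y in zip(a, b))

def mrc_sinr(h_desired, h_interferer, power=1.0, noise=0.1):
    """SINR of the desired user under MRC combining (w = h_desired)
    with a single interfering user."""
    gain = abs(hdot(h_desired, h_desired))
    signal = power * gain ** 2
    interference = power * abs(hdot(h_desired, h_interferer)) ** 2
    return signal / (interference + noise * gain)

h1 = [1 + 0j, 0j]
orthogonal = mrc_sinr(h1, [0j, 1 + 0j])   # zero correlation between users
correlated = mrc_sinr(h1, [1 + 0j, 0j])   # fully correlated channels
```

With orthogonal channels the interference term vanishes and the SINR is limited only by noise, whereas fully correlated channels leave MRC unable to suppress the interferer.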
Submitted 12 June, 2021; originally announced June 2021. Comments: 5 pages, 8 figures. Cell-Free Symbiotic Radio: Channel Estimation Method and Achievable Rate Analysis. Authors: Zhuoyin Dai, Ruoguang Li, Jingran Xu, Yong Zeng, Shi Jin. Abstract: Cell-free massive MIMO and symbiotic radio are promising beyond 5G (B5G) networking architecture and transmission technology, respectively. This paper studies cell-free symbiotic radio systems, where a number of distributed access points (APs) cooperatively send primary information to a receiver and simultaneously support the backscattering communication of a secondary backscatter device (BD). An efficient two-phase uplink-training-based channel estimation method is proposed to estimate the direct-link channel and the cascaded backscatter channel, and the achievable primary and secondary communication rates, taking into account the channel estimation errors, are derived. Furthermore, to achieve a flexible trade-off between the primary and secondary communication rates, we propose a low-complexity weighted-maximal-ratio-transmission (weighted-MRT) beamforming scheme, which requires only local processing at each AP without having to exchange the estimated channel state information. Simulation results are provided to show the impact of the channel training lengths on the performance of cell-free symbiotic radio systems.
Submitted 10 June, 2021; originally announced June 2021. Comments: 6 pages, 3 figures, conference. Wireless Communication with Extremely Large-Scale Intelligent Reflecting Surface. Authors: Chao Feng, Haiquan Lu, Yong Zeng, Shi Jin, Rui Zhang. Abstract: Intelligent reflecting surface (IRS) is a promising technology for wireless communications, thanks to its potential capability to engineer the radio environment. However, in practice, such an envisaged benefit is attainable only when the passive IRS is of a sufficiently large size, for which the conventional uniform plane wave (UPW)-based channel model may become inaccurate. In this paper, we pursue new channel modelling and performance analysis for wireless communications with an extremely large-scale IRS (XL-IRS). By taking into account the variations in the signal's amplitude and projected aperture across different reflecting elements, we derive both lower and upper bounds of the received signal-to-noise ratio (SNR) for the general uniform planar array (UPA)-based XL-IRS. Our results reveal that, instead of scaling quadratically with the number of reflecting elements M as in the conventional UPW model, the SNR under the more practically applicable non-UPW model increases with M only with a diminishing return and eventually saturates. To gain more insights, we further study the special case of a uniform linear array (ULA)-based XL-IRS, for which a closed-form SNR expression in terms of the IRS size and transmitter/receiver location is derived.
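The diminishing-return behaviour of the SNR in M can be illustrated with a crude one-dimensional amplitude-sum model. The element spacing, user distance, and 1/distance amplitude law below are illustrative assumptions, not the paper's exact near-field model:

```python
import math

def irs_gain(num_elements, spacing=0.5, user_distance=5.0):
    """Toy received-power gain of a linear IRS: coherently sum per-element
    amplitudes that decay as 1/distance from each element to the user."""
    half = (num_elements - 1) / 2.0
    amp = 0.0
    for m in range(num_elements):
        x = (m - half) * spacing
        amp += 1.0 / math.sqrt(user_distance ** 2 + x ** 2)
    return amp ** 2

small, large = irs_gain(4), irs_gain(400)
```

For a small surface every element sits at roughly the same distance, so the gain grows quadratically with M; as the surface grows, the outer elements contribute ever less, and the growth falls well below quadratic, consistent with the saturation result above.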
This result shows that the SNR mainly depends on the two geometric angles formed by the transmitter/receiver locations with the IRS, as well as the boundary points of the IRS. Numerical results validate our analysis and demonstrate the importance of proper channel modelling for wireless communications aided by XL-IRS. Submitted 10 June, 2021; originally announced June 2021. Comments: 6 pages, 5 figures, conference. A Unified Framework for Task-Driven Data Quality Management. Authors: Tianhao Wang, Yi Zeng, Ming Jin, Ruoxi Jia. Abstract: High-quality data is critical to train performant Machine Learning (ML) models, highlighting the importance of Data Quality Management (DQM). Existing DQM schemes often cannot satisfactorily improve ML performance because, by design, they are oblivious to downstream ML tasks. Besides, they cannot handle various data quality issues (especially those caused by adversarial attacks) and have limited applications to only certain types of ML models. Recently, data valuation approaches (e.g., based on the Shapley value) have been leveraged to perform DQM; yet, empirical studies have observed that their performance varies considerably based on the underlying data and training process. In this paper, we propose a task-driven, multi-purpose, model-agnostic DQM framework, DataSifter, which is optimized towards a given downstream ML task, capable of effectively removing data points with various defects, and applicable to diverse models. Specifically, we formulate DQM as an optimization problem and devise a scalable algorithm to solve it.
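The Shapley-value data valuation that the paper compares against can be sketched by exact enumeration on a tiny dataset. The additive utility below is a toy assumption; real data-valuation utilities retrain a model on each subset:

```python
from itertools import permutations

def shapley_values(n, utility):
    """Exact Shapley value of each of n data points, computed by averaging
    each point's marginal contribution over all orderings (tiny n only)."""
    values = [0.0] * n
    perms = list(permutations(range(n)))
    for perm in perms:
        coalition = set()
        for i in perm:
            before = utility(coalition)
            coalition.add(i)
            values[i] += utility(coalition) - before
    return [v / len(perms) for v in values]

# Toy utility: each clean point adds 1.0, a mislabeled point adds -0.5.
quality = [1.0, -0.5, 1.0]
vals = shapley_values(3, lambda s: sum(quality[i] for i in s))
```

For an additive utility the Shapley value of each point equals its individual contribution; the paper's point is precisely that with realistic, non-additive utilities this strategy can behave far less predictably.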
Furthermore, we propose a theoretical framework for comparing the worst-case performance of different DQM strategies. Remarkably, our results show that the popular strategy based on the Shapley value may end up choosing the worst data subset in certain practical scenarios. Our evaluation shows that DataSifter matches and most often significantly improves the state-of-the-art performance over a wide range of DQM tasks, including backdoor, poison, and noisy/mislabeled data detection, data summarization, and data debiasing. Submitted 9 June, 2021; originally announced June 2021. BackEISNN: A Deep Spiking Neural Network with Adaptive Self-Feedback and Balanced Excitatory-Inhibitory Neurons. Authors: Dongcheng Zhao, Yi Zeng, Yang Li. Abstract: Spiking neural networks (SNNs) transmit information through discrete spikes, which perform well in processing spatial-temporal information. Due to their non-differentiable characteristic, there still exist difficulties in designing well-performing SNNs. Recently, SNNs trained with backpropagation have shown superior performance thanks to the proposal of gradient approximation. However, the performance on complex tasks is still far from that of deep neural networks. Taking inspiration from the autapse in the brain, which connects spiking neurons with a self-feedback connection, we apply an adaptive time-delayed self-feedback on the membrane potential to regulate spike precision. In addition, we apply a balanced excitatory and inhibitory neuron mechanism to dynamically control the spiking neurons' output.
With the combination of the two mechanisms, we propose a deep spiking neural network with adaptive self-feedback and balanced excitatory and inhibitory neurons (BackEISNN). Experimental results on several standard datasets show that the two modules not only accelerate the convergence of the network but also improve its accuracy. For the MNIST, FashionMNIST, and N-MNIST datasets, our model has achieved state-of-the-art performance. For the CIFAR10 dataset, our BackEISNN also achieves remarkable performance with a relatively light structure that competes against state-of-the-art SNNs. Submitted 27 May, 2021; originally announced May 2021. BSNN: Towards Faster and Better Conversion of Artificial Neural Networks to Spiking Neural Networks with Bistable Neurons. Authors: Yang Li, Yi Zeng, Dongcheng Zhao. Abstract: The spiking neural network (SNN) computes and communicates information through discrete binary events. It is considered more biologically plausible and more energy-efficient than artificial neural networks (ANNs) on emerging neuromorphic hardware. However, due to its discontinuous and non-differentiable characteristics, training an SNN is a relatively challenging task. Recent work has achieved essential progress and excellent performance by converting ANNs to SNNs. Due to the difference in information processing, however, the converted deep SNN usually suffers serious performance loss and large time delay.
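ANN-to-SNN conversion commonly relies on rate coding: an integrate-and-fire neuron's firing rate over T time steps approximates a ReLU activation, and more time steps give a better approximation, which is why reducing the required steps matters. A minimal sketch (the paper's bistable and synchronous neurons are not modeled here):

```python
def if_firing_rate(input_current, t_steps, v_thresh=1.0):
    """Integrate-and-fire neuron with soft reset: the firing rate over
    t_steps approximates max(0, input_current) / v_thresh."""
    v = 0.0
    spikes = 0
    for _ in range(t_steps):
        v += input_current
        if v >= v_thresh:
            spikes += 1
            v -= v_thresh  # soft reset keeps the residual potential
    return spikes / t_steps
```

A constant input of 0.3 yields a firing rate near 0.3, while a negative input never fires, reproducing the ReLU-like behaviour that conversion schemes exploit.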
In this paper, we analyze the reasons for the performance loss and propose a novel bistable spiking neural network (BSNN) that addresses the problem of spikes of inactivated neurons (SIN) caused by phase lead and phase lag. Also, when ResNet-structured ANNs are converted, the information of output neurons is incomplete due to the rapid transmission of the shortcut path. We design synchronous neurons (SN) to help efficiently improve performance. Experimental results show that the proposed method needs only 1/4-1/10 of the time steps compared to previous work to achieve nearly lossless conversion. We demonstrate state-of-the-art ANN-SNN conversion for VGG16, ResNet20, and ResNet34 on challenging datasets including CIFAR-10 (95.16% top-1), CIFAR-100 (78.12% top-1), and ImageNet (72.64% top-1). Submitted 26 May, 2021; originally announced May 2021. CARLS: Cross-platform Asynchronous Representation Learning System. Authors: Chun-Ta Lu, Yun Zeng, Da-Cheng Juan, Yicheng Fan, Zhe Li, Jan Dlabal, Yi-Ting Chen, Arjun Gopalan, Allan Heydon, Chun-Sung Ferng, Reah Miyara, Ariel Fuxman, Futang Peng, Zhen Li, Tom Duerig, Andrew Tomkins. Abstract: In this work, we propose CARLS, a novel framework for augmenting the capacity of existing deep learning frameworks by enabling multiple components -- model trainers, knowledge makers and knowledge banks -- to work together concertedly in an asynchronous fashion across hardware platforms.
The proposed CARLS is particularly suitable for learning paradigms where model training benefits from additional knowledge inferred or discovered during training, such as node embeddings for graph neural networks or reliable pseudo labels from model predictions. We also describe three learning paradigms -- semi-supervised learning, curriculum learning and multimodal learning -- as examples that can be scaled up efficiently by CARLS. One version of CARLS has been open-sourced and is available for download at: https://github.com/tensorflow/neural-structured-learning/tree/master/research/carls Submitted 26 May, 2021; originally announced May 2021. Communicating with Extremely Large-Scale Array/Surface: Unified Modelling and Performance Analysis. Authors: Haiquan Lu, Yong Zeng. Abstract: Wireless communications with extremely large-scale array (XL-array) correspond to systems whose antenna sizes are so large that conventional modelling assumptions, such as uniform plane wave (UPW) impingement, are no longer valid. This paper studies the mathematical modelling and performance analysis of XL-array communications. By deviating from the conventional modelling approach that treats the array elements as sizeless points, we explicitly model their physical area/aperture, which enables a unified modelling for classical discrete antenna arrays and the emerging continuous surfaces. As such, a generic array/surface model that accurately takes into account the variations of signal phase, amplitude and projected aperture across array elements is proposed.
Based on the proposed model, a closed-form expression of the resulting SNR with the optimal single-user MRC/MRT beamforming is derived. The expression reveals that, instead of scaling linearly with the antenna number M as in conventional UPW modelling, the SNR with the more generic model increases with M with a diminishing return, governed by the collective properties of the array, such as the array occupation ratio and the physical sizes of the array along each dimension, irrespective of the properties of the individual array elements. Additionally, we derive an alternative insightful expression for the optimal SNR in terms of the vertical and horizontal angular spans. Furthermore, we show that our derived results include the far-field UPW modelling as a special case. One important finding during the study of the far-field approximation is the necessity to introduce a new distance criterion to complement the classical Rayleigh distance, termed the uniform-power distance (UPD), which concerns the signal amplitude/power variations across array elements, instead of phase variations as for the Rayleigh distance. Submitted 27 April, 2021; originally announced April 2021. Comments: 15 pages, 13 figures. Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective. Authors: Yi Zeng, Won Park, Z. Morley Mao, Ruoxi Jia. Abstract: Backdoor attacks have been considered a severe security threat to deep learning. Such attacks can make models perform abnormally on inputs with predefined triggers and still retain state-of-the-art performance on clean data.
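The frequency perspective on triggers can be illustrated with a crude spatial proxy: the paper's analysis works in the frequency (DCT) domain, while the adjacent-pixel energy below is only an illustrative stand-in for detecting high-frequency content such as a checkerboard-like trigger patch:

```python
def high_freq_energy(img):
    """Sum of squared differences between adjacent pixels -- a crude
    spatial proxy for high-frequency spectral energy."""
    h, w = len(img), len(img[0])
    energy = 0.0
    for r in range(h):
        for c in range(w):
            if c + 1 < w:
                energy += (img[r][c] - img[r][c + 1]) ** 2
            if r + 1 < h:
                energy += (img[r][c] - img[r + 1][c]) ** 2
    return energy

# Smooth 8x8 gradient image vs. a copy with a 3x3 checkerboard "trigger".
clean = [[(r + c) / 14.0 for c in range(8)] for r in range(8)]
triggered = [row[:] for row in clean]
for r in range(3):
    for c in range(3):
        triggered[r][c] = float((r + c) % 2)
```

The triggered image's energy dwarfs the clean image's, which is the intuition behind detecting triggers by their high-frequency artifacts.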
While backdoor attacks have been thoroughly investigated in the image domain from both attackers' and defenders' sides, an analysis in the frequency domain has been missing thus far. This paper first revisits existing backdoor triggers from a frequency perspective and performs a comprehensive analysis. Our results show that many current backdoor attacks exhibit severe high-frequency artifacts, which persist across different datasets and resolutions. We further demonstrate that these high-frequency artifacts enable a simple way to detect existing backdoor triggers at a detection rate of 98.50% without prior knowledge of the attack details or the target model. Acknowledging previous attacks' weaknesses, we propose a practical way to create smooth backdoor triggers without high-frequency artifacts and study their detectability. We show that existing defense works can benefit by incorporating these smooth triggers into their design considerations. Moreover, we show that a detector tuned on stronger smooth triggers can generalize well to unseen weak smooth triggers. In short, our work emphasizes the importance of considering frequency analysis when designing both backdoor attacks and defenses in deep learning. Submitted 9 April, 2021; v1 submitted 7 April, 2021; originally announced April 2021. 3D Human Body Reshaping with Anthropometric Modeling. Authors: Yanhong Zeng, Jianlong Fu, Hongyang Chao. Abstract: Reshaping accurate and realistic 3D human bodies from anthropometric parameters (e.g., height, chest size, etc.)
poses a fundamental challenge for person identification, online shopping and virtual reality. Existing approaches for creating such 3D shapes often suffer from complex measurement by range cameras or high-end scanners, which either involves heavy cost or results in low quality. Moreover, this high-quality equipment limits existing approaches in real applications, because it is not easily accessible to common users. In this paper, we have designed a 3D human body reshaping system by proposing a novel feature-selection-based local mapping technique, which enables automatic anthropometric parameter modeling for each body facet. Note that the proposed approach can leverage limited anthropometric parameters (i.e., 3-5 measurements) as input, which avoids complex measurement, and thus a more user-friendly experience can be achieved in real scenarios. Specifically, the proposed reshaping model consists of three steps. First, we calculate full-body anthropometric parameters from limited user inputs by an imputation technique, so that the essential anthropometric parameters for 3D body reshaping can be obtained. Second, we select the most relevant anthropometric parameters for each facet by adopting relevance masks, which are learned offline by the proposed local mapping technique. Third, we generate the 3D body meshes via mapping matrices, which are learned by linear regression from the selected parameters to a mesh-based body representation. We conduct experiments with an anthropomorphic evaluation and a user study of 68 volunteers. Experiments show the superior results of the proposed system in terms of mean reconstruction error against state-of-the-art approaches. Submitted 5 April, 2021; originally announced April 2021. Comments: ICIMCS 2017 (oral). The final publication is available at Springer via https://doi.org/10.1007/978-981-10-8530-7_10 Journal ref: In International Conference on Internet Multimedia Computing and Service (pp. 96-107).
Springer, Singapore (2017). Aggregated Contextual Transformations for High-Resolution Image Inpainting. Authors: Yanhong Zeng, Jianlong Fu, Hongyang Chao, Baining Guo. Abstract: State-of-the-art image inpainting approaches can suffer from generating distorted structures and blurry textures in high-resolution images (e.g., 512x512). The challenges mainly derive from (1) image content reasoning from distant contexts, and (2) fine-grained texture synthesis for a large missing region. To overcome these two challenges, we propose an enhanced GAN-based model, named Aggregated CO…
news | pr@cmswire.com (Dom Nicastro) | Verint Acquires Conversocial, Salesforce Wants to Be Netflix for B2B & More CX News | Verint acquires Conversocial, NICE acquires GoMoxie, and more from the world of customer experience and digital marketing news. Continue reading... | https://www.cmswire.com/customer-experience/verint-acquires-conversocial-salesforce-wants-to-be-netflix-for-b2b-more-cx-news/ | https://www.cmswire.com/-/media/9f97b64dc9a94e6f8ecc51583b98e08a.ashx | 2021-08-13T15:24:41Z | PHOTO: New Africa. Verint, which provides customer engagement software, has acquired Conversocial for $50 million. Verint's support for digital customer engagement will be boosted by the acquisition, with connections to messaging channels including Apple Business Chat, Facebook Messenger, Twitter and WhatsApp. The Verint Cloud Platform features: conversational channels; conversational AI that automates personalized communications on the customer's channel of choice; orchestration of customer journeys across channels of choice; connections to AI-powered knowledge management across all channels; and capturing conversation, interaction and experience data from all channels and applying advanced analytics. Conversocial has approximately 80 employees, with offices in New York and London. The acquisition is expected to close in Verint's third fiscal quarter. In other customer experience and digital marketing software news... NICE Acquires GoMoxie. NICE, a provider of digital customer experience software, has announced the acquisition of GoMoxie, which offers digital assistance tools. With the addition of GoMoxie, NICE is expanding beyond the contact center and into smart conversational self-service. This move further extends NICE's set of digital CX assets, including CXone Expert, an AI-powered knowledge management solution for digital self-service, CXone SmartReach, a conversational AI solution, and CXone Omnichannel Routing, supporting experiences over 35 digital channels.
All are offered as part of CXone, a digital customer engagement platform powered by Enlighten AI, NICE's self-learning AI engine. Salesforce Offers Video Streaming Service. Netflix, Hulu and... Salesforce? Wait, what? Salesforce has announced Salesforce+, a streaming service with live and on-demand content. Salesforce+ includes live experiences, original series, podcasts and other programming. Salesforce officials say it will inspire users of its software to learn new skills, pursue new career opportunities and "drive change in the world." "Just as brands like Disney, Netflix and Peloton have done with streaming services for consumers, Salesforce+ is providing an always-on, business media platform that builds trusted relationships with customers and a sense of belonging for the business community," Sarah Franklin, president and chief marketing officer for Salesforce, said in a statement. The current Salesforce+ lineup features: Leading Through Change, launched in March 2020 as a weekly program focusing on how business leaders were dealing with the global pandemic; Connections, showcasing marketers from companies like IBM, Levi's and GoFundMe; and The Inflection Point, featuring CEOs from brands such as Coca-Cola, PayPal, Honeywell and Workday sharing how their personal backstories, professional influences and values inform their leadership. Salesforce+ will be available to a global audience just as Dreamforce arrives in September. Influitive Releases Multilingual Capabilities. Influitive Corporation, which provides customer advocacy, community and engagement software, has announced the full release of its multilingual capabilities.
Companies can have customer-facing touchpoints delivered in the language that best suits a company's global audience, from the invitation email and sign-up page to the homepage, content and rewards. Influitive supports eight languages out of the box: French, German, Spanish, Portuguese, Italian, Chinese (Simplified), Japanese and Korean. Leveraging Influitive's Profile Fields, program managers can edit and invite members in their preferred language or have members select a language during their gamified onboarding experience. It also includes previews of the member's experience in their preferred language through the Influitive feature, Lenses. Influitive's multilingual features are available immediately. Shutterstock Announces Integration With OpenText. Shutterstock has announced an API integration with OpenText. The integration will offer Shutterstock Enterprise and OpenText customers direct access to 380 million-plus Shutterstock images via OpenText Media Management. OpenText Media Management is a digital asset management (DAM) solution for brands and publishers. Aprimo Adds DAM Feature. Aprimo, a provider of digital asset management and work management solutions, has announced the addition of Content Return on Effort (ROE) to its SaaS content operations platform. Content ROE showcases how assets perform across campaigns, in the context of the effort to create and distribute them. Content Return on Effort gives content and creative teams a more complete picture of performance than ROI does on its own, according to Aprimo officials. Content Return on Effort is calculated for assets stored in Aprimo Digital Asset Management, natively capturing impressions that can be analyzed and viewed by source, medium or other tracking parameters. LogMeIn Names Bill Robinson as Chief Revenue Officer. LogMeIn, a provider of cloud-based SaaS solutions such as GoToConnect, GoToMeeting, LastPass and Rescue, has appointed software sales veteran Bill Robinson to its newly created Chief Revenue Officer (CRO) role.
At LogMeIn, Robinson will lead Global Sales, Customer Experience and Business Operations. Robinson joins LogMeIn from Contact Center as a Service (CCaaS) company NICE, where he served as executive vice president of sales, a role that included a strategic alliance with LogMeIn's Unified Communications as a Service (UCaaS) product GoToConnect.
news | Kerry O'Shea Gorgone | How to Implement Artificial Intelligence in Marketing: Rajkumar Venkatesan on Marketing Smarts [Podcast] | Rajkumar Venkatesan, co-author of "The AI Marketing Canvas: A Five-Stage Road Map to Implementing Artificial Intelligence in Marketing," gives us a sneak peek at the road map and offers insight into how marketers can get started using AI. Read the full article at MarketingProfs | https://www.marketingprofs.com/podcasts/2021/45523/artificial-intelligence-raj-venkatesan-marketing-smarts | 2021-08-19T14:00:00Z | Artificial intelligence (AI) and machine-learning (ML) have quickly grown beyond a few major tech companies and hardcore academic researchers. Every marketing organization can tap into the power of AI to streamline operations and grow the business.The new book The AI Marketing Canvas: A Five-Stage Road Map to Implementing Artificial Intelligence in Marketing provides a growth framework for business and marketing leaders to implement AI using a five-stage model called the “AI Marketing Canvas.” On this episode of Marketing Smarts, I speak with co-author Rajkumar Venkatesan about how he and his co-writer developed those stages by studying leading global brands. We cover examples of brands―including Google, Lyft and Coca-Cola―that have successfully woven AI into their marketing strategies.This is not a conversation about coding for AI models. Raj and I talk about how marketing leaders can go from “zero to hero” with AI in marketing, and what that means for your team and your company culture.Listen to the entire show now from the link above, or download the mp3 and listen at your convenience. Of course, you can also subscribe to the Marketing Smarts podcast in iTunes or via RSS and never miss an episode.This episode brought to you by PayPal. 
PayPal makes financial services and commerce more convenient, affordable, and secure."Marketing Smarts" theme music composed by Juanito Pascual of Signature Tones.Kerry O'Shea Gorgone: Welcome to the Marketing Smarts Podcast. I'm here with Raj Venkatesan, co-author of the book The AI Marketing Canvas: A Five Stage Roadmap for Implementing Artificial Intelligence in Marketing. He's the Ronald Trzcinski professor of business administration in the Darden Graduate School of Business Administration in Virginia. His writing has appeared in The Journal of Marketing and The Harvard Business Review. Let's just say he's very smart and knows a lot of things about a lot of things. How did you realize, Raj, that AI was going to be powerful for marketing specifically? When you learned it, did you learn it in the context of marketing, or did you learn it in some other context first?Rajkumar Venkatesan: Thank you, Kerry, for having me. It's a pleasure to be here. Thank you for your kind introduction. The story about how I started working on AI in marketing, it's a really interesting question you ask about how I learned about AI first, it goes back to the mid-90s when I was doing my undergrad in computer engineering. In my final year, I learned about neural networks, and my final year project was on genetic algorithms. Those are some of the things that are in the AI toolkit now. I started there. When I did my PhD, my advisor recognized that computers and technology are going to shape marketing in the future and asked if I wanted to work on that. I didn't know much about how things were going to turn out, and I said I'd do that. That's how I started. 
So, I came into this as a technology computer engineer, but as I started taking classes in economics, marketing, and consumer behavior, I started realizing it was very fascinating to hear about all of these new subjects and also it gave me a sense of where these tools that I was learning could be applied and how they can actually be valuable. I pursued and I started teaching, I came to UVA in 2006 after teaching in University of Connecticut. I started teaching marketing analytics. Darden is a special case where we value practice and connection with a practicing manager and we do the case method, which really teaches me about marketing and also understanding how managers are using data and what challenges they're solving every day. As I was teaching my course, and writing my case studies, and doing my research, I started seeing that more and more you now have data and technology influencing marketing. That's how the worlds kind of came together for me and I started working on this book.

Kerry: I think a lot of us as marketers probably see the real world results of AI kind of at work. Like Starbucks knows I always get this certain thing and it will show me the thing every time I open the app, and it will suggest other things based on my history and that kind of thing. I can see that in the B-to-C context really clearly. What does AI look like for B-to-B marketers?

Raj: Great question. In fact, when I started working on data in marketing and I did my work on customer lifetime value, it was in the context of B-to-B. I think one of the things with B-to-B which is really an advantage is that they have a direct relationship with the customers most of the time. Of course, they can sell through resellers, but to a large extent B-to-B firms know their customers. With Salesforce and other software today, there is a lot of information about how salespeople are connecting with their customers and what the customers are talking about. 
Of course, there are challenges, like salespeople actually putting the data in and how much is the coverage and all of that. I think where it can be really useful is the fact that you know about who your customers are and what they are buying, and you know a lot more detail about your customers in terms of how they're using your product because of your client relationship specialists and managers. You're really embedded with your customers, which really gives you information about how they are using your products. With Cloud and IoT, there's a lot more information coming these days about usage. 

Kerry: And lack of usage.

Raj: Or lack of usage, yes. I think the potential is there. A lot of potential is there for B-to-B in actually doing sales call planning, whys of customer, coming up with new product updates, enabling a robust customer community through educational videos, FAQs, or even understanding if a customer is coming to a webinar, or if they ask some questions in the webinar, or have been to your website, browsed on the website. Information about that is really useful for salespeople when they go and talk to the customer after that in a follow up call, they can start becoming relevant and personal. I think B-to-B is definitely transforming. There's a lot more interaction, like web 2.0 and whitepapers and content marketing, and all of that is rich with data and are ripe, fertile areas for B-to-B marketers to use data to better improve the effectiveness of their marketing activities.

Kerry: So, you could create a B-to-B content hub and then analyze the heck out of people's interaction with it? 

Raj: Yes, absolutely. Think about if you have a product, let's say you're Siemens and you're making parts, maybe you're making engines for trains or aircraft, as an example. There are many different parts that go into it and you're also supplying the parts. 
Knowing what kind of whitepapers your customer is downloading can tell you whether they're interested in maintenance, or whether they're interested in customizing your product, or understanding how your part fits within their project. It really helps your salespeople to then really hone in on that when they go on that follow up call. Also, on the emails you send them you can really start personalizing how you present your company to the customers.

Kerry: Can you talk to me about some of the research that went into the book?

Raj: With my co-author Jim Lecinski, I think we all came in with our own perspectives. He worked in Google for a while before he joined Northwestern. I was teaching marketing analytics and I started teaching digital marketing in a course called Marketing Technology Products where I would spend a week in San Francisco with my students talking, listening, and visiting companies. It gave me a look into how the world is evolving, what new technologies are coming. When we put together all of this, we started out with here's a dump of what we think we know, and then we started talking to brands who were really in this journey. All of the brands are featured in our book from Washington Post, Coca-Cola, Unilever, Ancestry.com, Carmax and several others. It started giving us a picture about in-depth interviews with managers who are really trying to bring data into their marketing. It gives us a consistent pattern on how they go about building this AI capability in their functions. That's kind of what led to us coming up with this canvas on where to begin and what the steps are to gain this transformative capability for their organizations.

Kerry: Not to give away the store, because we want everybody to buy the book, obviously, but can you talk about the five-stage roadmap at a high level? 

Raj: Absolutely. I'm happy to. The five steps are foundation, experimentation, expansion, transformation, and monetization. 
What we mean by that is for AI to work, the raw material is data. But it's not any data, it is data that is focused on the customers. There's an inventory database, there's a finance database, there's a procurement database, but all of that needs to connect, and that's a challenge. For marketers, what that really means is Customer A, how are they interacting with the firm? When did they first start buying? What are the installations? Which salespeople did they talk to? Having a foundation of that kind of first-party customer-focused data is important. Then we talk about letting a thousand flowers bloom. Trying different things but looking at one aspect of customer engagement, either acquisition, retention, growth, or advocacy. You're really trying different things, looking at ROI and seeing where the biggest bang for your buck is. Once you learn what was working for you, then you're slowly expanding into other aspects of customer engagement. Eventually, you will reach a place where so far you've used vendors and off-the-shelf products and you have to really invest, and you get into a board level position of investing in your own data science team, building it in-house or buying an AI company. The last stage is very fascinating, which we saw. Specifically for AI companies, they have started taking all of these capabilities they've built in-house and they've turned around and built it into a platform that they are now selling AI as a service for other companies and building a new revenue stream. That is something which was really interesting to see how every company who reached this transformation has turned around and done a services platform.

Kerry: What industries then lend themselves best to this, which can benefit the most from this? It definitely seems like data can benefit about any industry, but when I'm hearing you talk about services, that seems like an area really rich to benefit from AI.

Raj: Absolutely. Services definitely. 
I think data services especially. In the book, we talk about Washington Post and Starbucks in detail in terms of what they did. I think Caterpillar is definitely getting into that world in terms of its ability to fit the tractors with cameras that can understand and detect which crops need pesticide and can target the crops in terms of pesticide spraying. One is you can see it as a big savings for the farmers, but it's also less pesticides on crops and that's a good thing. Then you're building on top of that now other capabilities of how you can operate better, algorithms for big farms, how to optimize the way you use your spraying capability. These things can become services that they can then provide big industrial farms. In another example, there's a company where they have this IoT device that's a microphone, where people use it in oil rigs and really hazardous situations. Where it is really helpful, first, is it was done to ensure somebody can know where they are and they can communicate with each other. But what this company found is that based how these microphones are and people are hovering on their movements, you can see if some issue is really a dangerous situation or not, and whether you should react or not to actually then improve the safety of your employees in the oil rigs. Once they found that this data can be used by their clients, they're building a platform which then provides a data service on top of the microphones for improving worker safety.

Kerry: It's possible there are some people who have heard of the Internet-of-Things but don't know what it is. Internet connected things that are not computers, basically. Right? 

Raj: Yes.

Kerry: How far does that go and how weirded out do people get? I would be a little embarrassed if I was flipping the wrong switch on a thing 167 times. I guess it's valuable data for a product developer to know that I can't figure it out. 

Raj: I think that what you're saying is absolutely true. 
I think that is important information from the perspective of product developers to understand that. It's not just one Kerry doing it, if there are a lot of customers who are doing that, then that is certainly something your product developers need to pay attention to. That's the beauty of what I call extending the purchasing funnel. So far, we've only looked at awareness, interest, desire, and action, before the internet and before data and technology. Basically, the purchase funnel stops at action, when the customers buy the product. Then you don't know how they're using it. You just wait for them to come back and buy the next product. Once you know about how consumption is happening, how people are using the product, then it's an ongoing relationship of you know what needs to be fixed, what new things people want, where they spend the most time, what they find valuable. All of this insight is what is really allowing for new innovations to happen.

Kerry: Where are the missed opportunities? Are there areas now that are still relatively untapped that people could get into and be the early adopters and really benefit from that first-mover advantage?

Raj: Great question. Right now, I feel like what differentiates companies from doing this better than the others, the differentiation I feel is in the culture and the process of the company. I think data science as a service is getting commoditized, but the differentiation is how much you've invested in knowing about your customers, how much you've invested in collecting the data, and how much your process of marketing decision-making and marketing strategy is attuned to getting these customer insights and implementing personalized marketing actions based on those insights. 
Kerry: If you were to talk to someone who is really just starting out, they have a brand new business and they can build their systems and their technology stack any way they want, what would you tell them, to take the best advantage of AI what should they do?

Raj: Great question. I think the first thing I would say is good for you, because you don't have legacy issues. You are starting in a place where you can build it as you want, and you can really think about customer-first. I think that is number one. You can really start thinking about omnichannel. I think that's the first thing I would begin with. If you are from scratch building your systems, think about serving your customers across channels and being able to see them across those channels, because customers will engage with you in all the places and you need to be able to see them. The technology system that allows you to do that, whether they're online or in the store, or talking to your salesperson, you're able to track it. Even in a tradeshow, there's a good system for doing all of that. The second thing I would say is almost everybody is a web 2.0, web 3.0 company now, everybody has a website, everybody has either a LinkedIn or a forum. Tradeshows are all virtual now, and maybe there's a hybrid tradeshow that will happen in the future. You are tracking, even if you're a startup, all of your interactions with your customers. You will have Google Analytics, or you may have HubSpot or Adobe in the back end, giving you data on how your customers are engaging with you. The same vendors will also be happy to give you reports on any kind of analytics you want, basic analytics. That's where I would begin. 
Once I collect the data, I would begin with customizing an email campaign, or customizing your sales call plan, or customizing how your salesperson approaches your customers, arming your salesforce with information about your customers, and see where that goes.

Kerry: You mentioned a couple of times that data is the raw fuel for AI. How can people keep their data clean? I know that's one of the biggest problems. Sales isn't paid to input data, they're paid to make sales, so they're just getting it in as fast as they can. How do you keep your data clean?

Raj: We talk about 80% of the work is data and 20% is actually the estimation of the models, and that is true. The digitization helps, but there are always places where there is human input, that's where there are challenges. But human input is necessary because that also gives you qualitative nuanced information that can be a differentiator. It is something you have to struggle with and you cannot ignore. I wouldn't say only use data that is completely coming from digital sources, because the real rich information is actually in people's minds and that's what you want to tap into.

Kerry: How deep an understanding of the technology does a marketer need to have? I think they're a little scared. We can get a little scared.

Raj: Absolutely. That's a great question. This is something that is so relevant. I say in my class when I teach marketing analytics and digital marketing that I want you to be smart consumers of analytics. What I mean by that is it's not like you need to go and program and learn Python or R or any other new thing that comes up. You need to understand what data can do and what is possible. I think the biggest skill marketers can have is being open-minded and understanding about the power of data. And I think marketers are there and I think a lot of marketers understand it. 
I think the ones I see are successful are people who are collaborators and are people who are able to work across different functions. Marketers who are really successful form really good relationships with the technology folks, with the data science folks, with the operations folks, with the finance folks. The way I think about marketing, I think marketers need to think of themselves as the chief advocate of customers within an organization. They are the ones who are talking about and really focused on doing what is right for the customers across the organization. If an organization recognizes the marketer's role like that and if the marketer themselves recognizes that's their role, they will see that for them to be better, to be good at their job, they need the help across all divisions. That's what we talk about in the book, also, in the expansion stage when we talk about the marketing AI champion. You're really looking at the skills of a person who is a connector and a collaborator. Being able to put together teams and being able to understand and harness the knowledge of a team, I think, is the most important skill marketers need, more than coding or analytics. 

Kerry: Raj, where can people learn more and where can they get their copy of The AI Marketing Canvas? 

Raj: The AI Marketing Canvas is on Amazon. We would love for you to go check it out, give us your feedback, and send us your reviews. Jim and I are on LinkedIn. We'd love to hear from you. We continue a conversation there about new topics and this is an ongoing effort. This is just the beginning for marketing, so there are going to be more and more fabulous stories that are going to come about how brands have used AI.

Kerry: And there's a website for the book as well?

Raj: There is a website, thank you, at AIMCbook.com. 

Kerry: Great. Thank you so much for joining today. I learned a lot. I hope everybody buys the book through your site so that you can get all that rich data on what they do with it. 
Raj: Absolutely. That's right. Thank you, Kerry. Thank you for having me. It has been a pleasure. 

Kerry: Thank you for listening here to the very end. This has been the Marketing Smarts Podcast. Talk with you again soon. | Content Synthesis/Decision Making/Recommendation | Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | Quintin Pope | New GPT-3 competitor | Published on August 12, 2021 7:05 AM GMT

AI21 has trained a new language model, Jurassic-1, whose largest version has 178 billion parameters (GPT-3 had 175 billion). This paper gives limited technical details.

There already were several models that used far more parameters than GPT-3, but they were either mixture of expert models or only word embeddings. They required much less compute to train/use, but were less powerful than a dense transformer like GPT-3 or the new Jurassic-1. The interesting thing about Jurassic-1 is that it really doesn’t go much beyond GPT-3. It has a larger vocabulary and slightly optimized architecture. Jurassic-1 only has a bit more parameters than GPT-3, whereas prior trends indicated that any GPT-3 successor would use at least an order of magnitude more parameters. Since GPT-3, much work has gone towards improving transformer architecture (e.g., linear time self attention and neural architecture search), but little of that is visible in Jurassic-1. Maybe companies don’t think it’s economically viable to scale beyond GPT-3 or run many experiments with different architectures at that scale?

Also, Jurassic-1 is a unidirectional model, like GPT-3 (meaning it's forced to process text from left-to-right). This means GPT-3 can only process a given word using the context provided by the previous words. This causes unidirectional models problems for most tasks other than text generation. For example, other than GPT-3, all the top models in the SuperGLUE benchmark leaderboard are bidirectional models. 
It's interesting AI21 chose to compete with OpenAI using a model that provides the same class of service (text generation) as GPT-3, rather than specialize in, e.g., text classification, where a bidirectional model would be better. | https://www.lesswrong.com/posts/2BCpdyHzzw4BZeodR/new-gpt-3-competitor | 2021-08-12T07:05:49Z | AI21 has trained a new language model, Jurassic-1, whose largest version has 178 billion parameters (GPT-3 had 175 billion). This paper gives limited technical details.

There already were several models that used far more parameters than GPT-3, but they were either mixture of expert models or only word embeddings. They required much less compute to train/use, but were less powerful than a dense transformer like GPT-3 or the new Jurassic-1. The interesting thing about Jurassic-1 is that it really doesn't go much beyond GPT-3. It has a larger vocabulary and slightly optimized architecture. Jurassic-1 only has a bit more parameters than GPT-3, whereas prior trends indicated that any GPT-3 successor would use at least an order of magnitude more parameters. Since GPT-3, much work has gone towards improving transformer architecture (e.g., linear time self attention and neural architecture search), but little of that is visible in Jurassic-1. Maybe companies don't think it's economically viable to scale beyond GPT-3 or run many experiments with different architectures at that scale?

Also, Jurassic-1 is a unidirectional model, like GPT-3 (meaning it's forced to process text from left-to-right). This means GPT-3 can only process a given word using the context provided by the previous words. This causes unidirectional models problems for most tasks other than text generation. For example, other than GPT-3, all the top models in the SuperGLUE benchmark leaderboard are bidirectional models. 
It's interesting AI21 chose to compete with OpenAI using a model that provides the same class of service (text generation) as GPT-3, rather than specialize in, e.g., text classification, where a bidirectional model would be better. | Unknown | Unknown | null | null | null | null | null | null |
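The unidirectional constraint discussed above (each token may only condition on earlier positions) is typically implemented as a causal attention mask. Here is a minimal illustrative sketch of the difference between causal (GPT-style) and bidirectional (BERT-style) masking; this is not code from the Jurassic-1 paper, and the function names are my own:

```python
# Toy attention masks: mask[i][j] = 1 means token i is allowed to attend
# to token j. Unidirectional models use a causal (lower-triangular) mask;
# bidirectional models let every token see every other token.

def attention_mask(n_tokens, bidirectional):
    """Build an n_tokens x n_tokens attention mask as nested lists."""
    if bidirectional:
        return [[1] * n_tokens for _ in range(n_tokens)]
    # Causal mask: token i sees only positions j <= i (left-to-right).
    return [[1 if j <= i else 0 for j in range(n_tokens)]
            for i in range(n_tokens)]

causal = attention_mask(4, bidirectional=False)
# The first token sees only itself; the last token sees the full prefix:
# causal[0] == [1, 0, 0, 0] and causal[3] == [1, 1, 1, 1]
```

This is why, as the post notes, a unidirectional model struggles on classification-style benchmarks: it can never use a word's right-hand context when encoding it.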
|
news | Joanna Ossinger | Mark Cuban-Backed Firm Alethea Is Creating 'Intelligent' NFTs - Bloomberg | Non-fungible tokens are ready to get more interactive, according to one firm that’s working to create “intelligent” versions. | https://www.bloomberg.com/news/articles/2021-08-24/mark-cuban-backed-firm-alethea-is-creating-intelligent-nfts | 2021-08-24T12:00:00Z | Non-fungible tokens are ready to get more interactive, according to one firm that's working to create "intelligent" versions.

Alethea AI, which allows users to embed AI animation, interaction, and voice synthesis capabilities into NFTs, is looking to expand in a space with a lot of competition. But the firm has already had some commercial success, selling one intelligent NFT, or iNFT, for $478,000 via Sotheby's in June.

Alethea has gotten some big names interested. It has closed a $16 million strategic private and restricted token sale where the lead purchasers were Metapurse -- whose chief financier paid $69.3 million for Beeple's Everydays: the First 5,000 Days earlier this year -- and Crypto.com Capital. Other strategic purchasers included Mark Cuban, Multicoin, Alameda, Dapper Labs, Galaxy Interactive and Gemini Frontier Fund, according to a statement from the company.

"While NFTs have continued to be exciting for collectors, I always try to invest in what is coming next," Cuban said. "Alethea AI has managed to uniquely combine AI-powered Avatars that are secured on-chain as NFTs. The result is not only fun and entertaining but the foundation for a level of interactivity that is going to advance quickly using Alethea's technologies."

Alethea is planning to use the proceeds from the sale to maintain and upgrade the current services and protocol launching in the public domain, CEO Arif Khan said in emailed comments. 
Potential projects with its technology could include giving a cryptopunk the ability to participate in a digital rap battle, creating interactive gaming characters or formulating interactive real-time chatbot applications.

NFTs have surged in popularity this year along with cryptocurrencies, with creators attracted to a format that allows direct access to potential buyers around the globe, and customers finding appeal in owning works they might like or seek to collect. The rolling seven-day total of money spent on completed sales was $164.5 million on Aug. 18, compared with about $2.2 million at the end of last year, according to data from Nonfungible.com.

Read More: $500,000 for a Rock NFT Tells Where the Cycle Is
Read More: Visa Buys NFT of Digital Avatar With Mohawk for $150,000 | Content Creation/Content Synthesis/Digital Assistance | Unknown | null | null | null | null | null | null |
|
news | Editorial Team | Cambridge Judge Business School opens courses for startup funding and RegTech | Cambridge Judge Business School and Esme Learning, the AI-powered digital learning platform, have announced a collaboration to empower working professionals’ career development across frontier fields. | https://www.finextra.com/pressarticle/88866/cambridge-judge-business-school-opens-courses-for-startup-funding-and-regtech | 2021-08-10T09:06:13Z | Cambridge Judge Business School and Esme Learning, the AI-powered digital learning platform, have announced a collaboration to empower working professionals' career development across frontier fields.

The multi-year collaboration commences with two inaugural six-week online executive education programmes:

- Startup Funding: From Pre-Seed to Exit, which will help entrepreneurs overcome the trickiest funding challenges facing startups.
- RegTech: AI for Financial Regulation, Risk, and Compliance, which will prepare risk, compliance, innovation, and data sector business leaders to navigate the complexities of the industry, including technologies such as AI and machine learning that support automated regulation.

Successful programme participants will receive a certificate issued by the Cambridge Judge Business School. The first presentation of Startup Funding, run with the Entrepreneurship Centre, begins 13 October, 2021. The first presentation of RegTech, run with the Cambridge Centre for Alternative Finance (CCAF), begins 20 October, 2021. Registration for both programmes will close one week after they begin. Esme Learning will offer additional presentations for both programmes in 2022.

"We are delighted that the Cambridge Judge Business School has joined our growing family of university partners to deliver critical knowledge and skills to build the workforce of tomorrow," said David Shrier, CEO and co-founder at Esme Learning. 
"Our suite of programmes with Cambridge Judge Business School integrates Esme Learning's measurable, collaborative approach in online learning with the School's internationally renowned tradition of research and action. The result is two programmes offering actionable insights for business leaders that advance the state of the art of digital learning."

ABOUT THE REGTECH: AI FOR FINANCIAL REGULATION, RISK, AND COMPLIANCE PROGRAMME

The RegTech programme will provide critical insights and hands-on tools for working professionals in government and industry to address an increasingly complex regulatory environment. In this programme, professionals will:

- Identify new applications and revenue opportunities for RegTech solutions - including technologies such as big data, cloud computing, and AI
- Understand model-driven, machine-readable and executable regulations to scale a business
- Learn from leaders who created the Regulatory Genome Project, a transformational initiative launched in part by the CCAF to sequence an open-source repository of machine-readable regulatory information 
- Run RegSimple, a tool developed from the Regulatory Genome Project to simplify and accelerate the process of regulatory compliance for corporations and facilitating new regulation and policy for governments

Leading luminaries in regulation and risk from CCAF at the Cambridge Judge Business School will teach the programme, including: 

- Programme Director, Robert (Bob) Wardrop: Management Practice Professor and Director of CCAF
- Faculty, Emmanuel (Manos) Schizas: Research Associate & Lead in Regulation and RegTech
- Faculty, Simone di Castri: Senior Lecturer at Centre for Finance, Technology and Entrepreneurship

ABOUT THE STARTUP FUNDING: FROM PRE-SEED TO EXIT PROGRAMME

Cambridge Judge Business School's course Startup Funding: From Pre-Seed to Exit covers what everyone from first-time entrepreneurs to seasoned founders must understand when raising capital - especially at a venture's earliest funding stages (seed to Series B). 
The programme will cover exactly how to launch a venture and navigate critical inflection points along a company's growth trajectory to achieve either a successful exit, or long-term operational viability. University of Cambridge has been responsible for over 140 startups since 2011, and these companies have raised over £1.9b in equity investment across more than 400 rounds. Notable spin-outs include gene therapy company Quethera, voice-controlled AI platform VocalIQ, and AI cyber defense company Darktrace.

Designed to empower entrepreneurs, the Startup Funding programme provides the tools and insights to:

- Structure a new venture so it is more appealing to investors
- Effectively position companies and pitch potential investors, by communicating product value to drive investment
- Increase efficiency in the fundraising process
- Optimize the negotiated terms of the funding obtained
- Develop and harness a distinctive entrepreneurial mindset

In addition to leading the Cambridge RegTech programme, Robert Wardrop will also serve as Programme Director for the Cambridge Startup Funding programme. Participants will learn from other industry leaders, including: 

- Faculty, Stylianos (Stelios) Kavadias: Margaret Thatcher Professor of Enterprise Studies in Innovation and Growth; Co-Director of Entrepreneurship Centre, CJBS; Academic Director, Advanced Leadership Programme.
- Guide, Ariane de Bonvoisin: startup leader and advisor for Union Square Ventures, Twitter, and Google.

PROGRAMMES POWERED BY ESME LEARNING

Both programmes, produced by Esme Learning in collaboration with Cambridge Judge Business School faculty and staff, will feature high-quality video instruction; interactive, timely media such as podcasts and articles; and correlating formative assessments that test knowledge retention.

Programmes on Esme Learning are thoughtfully crafted with learning design principles that emphasise measurement and feedback. 
Learners have numerous opportunities to immediately apply programme knowledge via live simulations and small group exercises that emulate work scenarios. Esme Learning's AI coach leverages 15 years of cognitive science research to intelligently analyse small group peer interactions and suggest individual feedback per learner to ensure productive and rewarding group collaboration. | Personalization/Content Synthesis | Education, Training, and Library/Business and Financial Operations | null | null | null | null | null | null |
|
news | Jake Harfield | What is Natural Language Processing and How Does It Work? - MakeUseOf | How does Siri or Alexa understand what you're saying? How can the computer translate your voice perfectly? | https://www.makeuseof.com/what-is-natural-language-processing-and-how-does-it-work/ | 2021-09-03T12:30:00Z | Have you ever wondered how virtual assistants like Siri and Cortana work? How do they understand what you're saying?

Well, part of the answer is natural language processing. This interesting field of artificial intelligence has led to some huge breakthroughs over the last few years, but how exactly does it work?

Read on to learn more about natural language processing, how it works, and how it’s being used to make our lives more convenient.

What Is Natural Language Processing?

Natural Language Processing, or NLP, is how computers can understand human languages. For example, when you speak to voice-activated virtual assistants like Alexa or Siri, they listen, understand your speech, and perform an action based on what you’ve said.

Traditionally, humans could only communicate with computers via the programming language they were coded in, using particular commands. Code is inherently structured and logical, and the same commands will always produce the same output.

In contrast, human language is unstructured and much more complex. The same word or sentence can have multiple meanings based on inflections and context. And, there are many different languages.

So how is AI able to understand what we’re saying?

How Does NLP Work?

NLP is trained with machine learning. Machine learning is a branch of artificial intelligence that feeds large amounts of data into an algorithm that trains itself to produce accurate predictions. The more data and time the algorithm has, the better it gets. 
This is why NLP machines are so much better today than they were ten years ago. NLP works via preprocessing the text and then running it through the machine learning-trained algorithm. Preprocessing Steps: Here are four of the common preprocessing steps that an NLP machine will use. Tokenization: Tokenization is the process of breaking speech or text down into smaller units (called tokens). These are either individual words or clauses. Tokenization is important because it allows the software to determine which words are present, which leads to the next stages of NLP processing. Stemming and Lemmatization: Stemming and lemmatization are simplifying processes that reduce each word to its root word. For instance, “running” becomes “run.” This enables the NLP to process text faster. Stemming is a simpler process and involves removing any affixes from a word. Affixes are additions to the start and end of the word that give it a slightly different meaning. However, stemming can result in errors when similar words have different roots. Consider the words “camel” and “came.” Stemming may reduce “camel” to “came” despite the two words having completely different meanings. Lemmatization is much more complicated and accurate. It involves reducing a word to its lemma, which is the base form of a word (as found in the dictionary). Lemmatization takes into account the context and is based on vocabulary and morphological analysis of words. A good example is “caring.” Stemming may reduce “caring” to “car,” whereas lemmatization will accurately reduce it to “care.” Another technique, known as Stop Word Removal, works alongside both processes. This is the simple removal of words that add no relevant information to the meaning of the speech, such as “at” and “a.” Machine Learning Algorithm Tasks: Once the text has been preprocessed, an NLP machine is able to do several things depending on its intent. Sentiment Analysis: The process of classifying the sentiment of the text. 
For example, whether a product review is positive, neutral, or negative. Topic Classification: This is where the main topic of the text is identified. An NLP machine can tag documents, paragraphs, and sentences with the topic they concern. Intent Detection: This is the process of determining what the intent is behind a particular text. For example, it can help businesses determine whether customers want to unsubscribe or are interested in a product. Part-of-Speech Tagging: After tokenization, an NLP machine will tag each word with an identifier. These include marking words as nouns, verbs, adjectives, and so on. Speech Recognition: This is the task of converting speech to text and is particularly challenging because of differences in accent, intonation, grammar, and inflection between people. Named-Entity Recognition: The process of identifying useful names like “England” or “Google.” This is combined with coreference resolution, determining whether two words refer to the same thing, such as “Alice” and then “she.” Natural Language Generation: This is the opposite of speech-to-text and is how NLP machines can generate speech or text to communicate back. Why Is NLP So Important? Natural Language Processing is a huge and ever-growing field that encompasses many functions. Some of the major uses of NLP are: Analyzing Online Information: Businesses and researchers can use NLP to distill swathes of text-based data into usable information. For instance, social media comments, reviews, customer support tickets, and even articles. NLP can analyze these for trends and insights of value to the business. Language Translation: Apps such as Google Translate use NLP machines to convert one language into another. Spell and Grammar Check: Word processors and apps like Grammarly check your text for spelling and grammar mistakes, readability, passive voice, and so on, to improve your writing. 
Interactive Voice Response (IVR): Telephone bots allow humans to communicate with a computer-operated phone system to perform redirections and other tasks. Virtual Assistants: Personal assistants such as Siri, Cortana, Bixby, Google Assistant, and Alexa use NLP to listen to your queries and produce responses or perform actions based on what you say. Predictive Text: Your smartphone automatically provides predicted words based on a few letters or what you’ve already written in the sentence. The smartphone learns based on sentences you usually type and offers words you’re most likely to use. In fact, Microsoft Word is soon to implement this as a feature. Chat Bots: Many websites now have virtual customer service bots that will attempt to assist customers before they are referred to a human operator. Robot Conversationalists: Natural Language Processing is changing the way we communicate with robots and how they communicate with us. Bloomberg News uses an AI system called Cyborg to produce almost a third of its content. Meanwhile, Forbes, The Guardian, and The Washington Post all use AI to write news articles. And all of this is only possible thanks to NLP! About The Author: Jake Harfield (31 Articles Published) is a freelance writer based in Perth, Australia. When he's not writing, he's usually out in the bush photographing local wildlife. You can visit him at www.jakeharfield.com | Digital Assistance/Content Synthesis | Unknown | null | null | null | null | null | null |
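The preprocessing steps the article above walks through (tokenization, stemming, stop word removal) can be sketched in a few lines of Python. This is a deliberately tiny illustration, not the approach of any particular NLP library; the stop word list and suffix list are invented for the example, and real systems use tools such as NLTK or spaCy.

```python
# Toy sketch of the preprocessing pipeline described above:
# tokenization, stop word removal, and naive suffix stemming.

import re

STOP_WORDS = {"a", "an", "at", "the", "is", "and"}  # tiny sample list
SUFFIXES = ("ing", "ed", "s")  # naive stemming: strip common affixes

def tokenize(text):
    """Break text into lowercase word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def stem(token):
    """Crude stemmer: remove the first matching suffix.
    As the article notes, this can over-strip ("caring" -> "car")."""
    for suffix in SUFFIXES:
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text):
    tokens = tokenize(text)
    tokens = [t for t in tokens if t not in STOP_WORDS]
    return [stem(t) for t in tokens]

print(preprocess("The runner is running at the park"))
# -> ['runner', 'runn', 'park']  ("runn" shows the stemming errors discussed above)
```

Note how the crude stemmer produces "runn", a concrete instance of the stemming-versus-lemmatization trade-off the article describes.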
|
news | Tamal Das | 8 Social Media Content Calendar Tools for Scheduling Posts - MUO - MakeUseOf | If you manage multiple social media accounts, scheduling content ahead of time will save you a big headache. Here are some of the best tools for this. | https://www.makeuseof.com/social-media-content-calendar-scheduling-tools/ | 2021-08-14T14:00:00Z | If you’re active in multiple social media platforms, you must try post-scheduling calendar tools for publishing your content.The world of social media is expanding, and hence it becomes difficult for individual users to post on all the platforms regularly at the right time. In order to manage when and what to publish on social media, you need to stay organized. To make the whole process seamless and hassle-free, check out these eight social media calendar tools for scheduling posts.This comprehensive social media management tool not only schedules your content; it can also write posts on your behalf. Using artificial intelligence (AI) algorithms, it extracts the content that’s getting maximum engagement and suggests that to you.You won't have any trouble using this tool as it is easy to understand and use. It makes your task of managing multiple platforms easy by letting you handle all of them by logging through only one account. You can schedule posts on Pinterest, Instagram, Twitter, Facebook, and LinkedIn via this tool.This social media calendar tool allows you to schedule posts category-wise. Therefore, you can quickly categorize your post into infographics, blogs, promotional posts, webinars, etc. It also lets you highlight the categories in separate colors so that you can easily track them.Download: MeetEdgar for Android | iOS (Price starts from $19/month)You may have heard the name of Buffer, as it is a pioneer of social media management tools. It offers you powerful planning and scheduling features. 
If you manage more than one social media account, this app will be useful for you. You can create a separate publishing schedule for each account of yours on different social media platforms. The supported platforms of this tool are Facebook, Instagram, Twitter, LinkedIn, and Pinterest. You can draft your posts here and personalize them according to the style of each social network—all using a single dashboard and without any hassle. Related: Best Apps to Schedule Instagram Posts. The calendar feature of Buffer is designed for you to visualize the scheduled posts and optimize them as per your requirements. Download: Buffer for Android | iOS (Free, in-app purchases available). Though primarily a social analytics tool, Iconosquare can also schedule your posts on Instagram, Facebook, and Twitter. Moreover, you can monitor your mentions on Facebook and Instagram to manage your online reputation. Iconosquare’s social publishing tool comes with versatile functionalities. Thanks to its drag-and-drop feature, you can seamlessly order your content according to your choice. The tool allows you to schedule posts, carousels, and stories for Instagram, along with Facebook statuses and tweets. You can also check your Instagram feed preview to know how it looks before posting your content. It is also feasible to schedule your first comment along with the hashtags for better reach and engagement. You can effortlessly upload images from Dropbox or OneDrive to the media library of Iconosquare. The tool lets you segregate your content to make searching for photos easier. By tagging the used images, it makes sure that you don’t end up posting the same content twice. Download: Iconosquare for Android | iOS (Price starts from $49/month). If you are looking for an Instagram-friendly post scheduling tool, Hopper HQ is the right choice. If you have more than one Instagram account, you can manage them all through a single Hopper HQ login. 
However, it also supports Facebook, LinkedIn, and Twitter. This social media calendar planner comes with a neat interface, so you should not have difficulty navigating through this app. Its bulk uploading feature allows you to upload 50 posts at a time. You can also reschedule your posts on social media. Related: How to Schedule Posts on Your Facebook Page. The tool also includes some exciting image editing features. You can make your images catchy and appealing by adding filters, borders, and text overlays. It also lets you preview your post, so you know how it will look on your audience's mobile screens and can make changes if necessary. Download: Hopper HQ for iOS (Price starts from $19/month). This comprehensive app for social platform scheduling and posting comes with robust features and an easy-to-use layout. It allows you to create visual posts with multiple images and GIFs, add carousel posts, post native videos, and more. You can also leverage its Canva integration feature to make a new image. The supported social platforms are Facebook, Twitter, LinkedIn, Instagram, Pinterest, and Google My Business. The tool lets you reschedule your posts on another date or at a different time on the same day for improved engagement. You can also reshare your evergreen content with this app, which keeps bringing you more traffic and better user engagement. Besides utilizing the direct publishing feature on Instagram, you can also link your blog feeds. Thus, your blog will get shared on your social media accounts automatically after getting published. After scheduling posts for an extended period, you can get a bird’s-eye view of how the plan looks. Download: SocialPilot for Android | iOS (Price starts from $42.50/month). When it comes to managing your social media efforts on multiple platforms, Hootsuite is one of the most popular tools. 
Its interactive and media-rich planner offers a complete overview of your social media calendar. The app displays a small visual preview of upcoming posts with the image caption and post timing. You can edit them right from the Hootsuite platform. The supported platforms of this tool are Twitter, Facebook, Instagram, LinkedIn, and YouTube. After creating content, its Composer tool lets you preview your post according to the unique format of each social network. Apart from scheduling, it allows you to identify publishing gaps. Moreover, with its Chrome extension, you can add newly discovered content to your calendar automatically. Download: Hootsuite for Android | iOS (Price starts from $49/month). Besides social publishing tools, Agorapulse comes with a content calendar to make social media account management effortless. It lets you schedule and reschedule your social media posts. You can use post queueing and bulk post uploading features for planning your posts on a monthly or quarterly basis. Agorapulse supports the following social media platforms: Facebook, Twitter, Instagram, LinkedIn, and YouTube. Additionally, it comes with a social media inbox. It allows you to administer all the interactions of various platforms from a single place. This feature ensures better engagement with the audience. Download: Agorapulse for Android | iOS (Free, in-app purchases available). If you are tired of juggling between multiple social media accounts, try Falcon. This social media scheduling tool helps you plan and publish all the upcoming social posts from a single calendar. With this all-in-one app, you can edit, preview, schedule, and publish your social media posts without breaking a sweat. The app offers support for Facebook, Instagram, Twitter, LinkedIn, WhatsApp, and YouTube. 
It displays all your scheduled content according to date on different social platforms. Related: Free Twitter Tools to Make Twitter More Useful. Download: Hub by Falcon.io for Android | iOS (Price starts from $129/month). Your social media followers grow depending on the regular activities, like the posts and shares, from your profile. Use any of the above tools to make social media content posting seamless and convenient. When creating content on social media, you can also improve your workflow by using apps that help you create stunning visuals. You might also want to look at what's trending, and see how you can capitalize on that—along with analyzing your social media performance. About The Author: Tamal Das (81 Articles Published) is a freelance writer at MakeUseOf. After gaining substantial experience in technology, finance, and business processes in his previous job in an IT consulting company, he adopted writing as a full-time profession 3 years ago. While not writing about productivity and the latest tech news, he loves to play Splinter Cell and binge-watch Netflix/Prime Video. | Content Synthesis/Recommendation/Process Automation | Arts, Design, Entertainment, Sports, and Media/Business and Financial Operations | null | null | null | null | null | null
|
news | ScaleOut Software Announces Machine Learning Capabilities for its ScaleOut Digital Twin Streaming Service™ | BELLEVUE, Wash.--(BUSINESS WIRE)--ScaleOut Software today announced major extensions to its ScaleOut Digital Twin Streaming Service™ that enable real-time digital twin software to implement and host machine learning and statistical analysis algorithms that immediately identify unexpected behaviors exhibited by incoming telemetry. Real-time digital twins can now make extensive use of Microsoft’s ML.NET machine learning library to implement these groundbreaking capabilities for virtually any IoT | https://www.businesswire.com/news/home/20211005005300/en/ScaleOut-Software-Announces-Machine-Learning-Capabilities-for-its-ScaleOut-Digital-Twin-Streaming-Service%E2%84%A2 | 2021-10-05T14:06:21Z | BELLEVUE, Wash.--(BUSINESS WIRE)--ScaleOut Software today announced major extensions to its ScaleOut Digital Twin Streaming Service that enable real-time digital twin software to implement and host machine learning and statistical analysis algorithms that immediately identify unexpected behaviors exhibited by incoming telemetry. Real-time digital twins can now make extensive use of Microsoft’s ML.NET machine learning library to implement these groundbreaking capabilities for virtually any IoT device or source object. Integration of machine learning with real-time digital twins offers powerful new options for real-time monitoring across a wide variety of applications. For example, cloud-based real-time digital twins can track a fleet of trucks to identify subtle changes in key engine parameters with predictive analytics that avoid costly failures. 
Security monitors tracking perimeter entrances and sound sensors can use machine learning techniques to automatically identify unexpected behaviors and generate alerts. By harnessing the no-code ScaleOut Model Development Tool, a real-time digital twin can easily be enhanced to automatically analyze incoming telemetry messages using machine learning techniques. Machine learning provides important real-time insights that enhance situational awareness and enable fast, effective responses. The tool provides three configuration options for analyzing numeric parameters contained within incoming messages to spot issues as they arise: Spike Detection: Tracks a single parameter from a data source to identify a spike in its values over time using an adaptive kernel density estimation algorithm implemented by ML.NET. Trend Detection: Also tracks a single parameter to identify a trend change, such as an unexpected increase over time for a parameter that is normally stable, using a linear regression algorithm that detects inflection points. Multi-Variable Anomaly Detection: Tracks a set of related parameters in aggregate to identify anomalies using a user-selected machine-learning algorithm implemented by ML.NET that performs binary classification with supervised learning. Once configured through the ScaleOut Model Development Tool, the ML algorithms run automatically and independently for each data source within their corresponding real-time digital twins as incoming messages are received. Each real-time digital twin can automatically capture anomalous events for follow-up analysis and generate alerts to popular alerting providers, such as Splunk, Slack, and Pager Duty, to support remediation by service or security teams. “We are excited to offer powerful machine learning capabilities for real-time digital twins that will make it even easier to immediately spot issues or identify opportunities across a large population of data sources,” said Dr. William Bain, ScaleOut Software’s CEO and founder. “ScaleOut Software has built the next step in the evolution of the Microsoft Azure IoT and ML.NET ecosystem, and we look forward to helping our customers harness these technologies to enhance their real-time monitoring and streaming analytics.” Benefits of ScaleOut’s Real-Time Digital Twins with Machine Learning: Integrating machine learning into ScaleOut’s real-time digital twins offers these key benefits: Powerful New Capabilities for Tracking Data Sources: The use of machine learning dramatically enhances the ability of streaming analytics running in real-time digital twins to automatically predict and identify emerging issues, thereby boosting their effectiveness. Simultaneous Tracking for Thousands of Data Sources: The integration of machine learning with real-time digital twins using in-memory computing techniques enables thousands of data streams to be independently analyzed in real-time with fast, scalable performance. Fast, Easy Application Deployment: With the ScaleOut Model Development Tool, these new machine learning capabilities can be configured in minutes using an intuitive GUI. No code development or library integration is required. 
Applications can optionally take advantage of a fully integrated rules engine to enhance their real-time analytics. Seamless Use of Microsoft’s Powerful Machine Learning Library: Users can automatically take advantage of Microsoft’s industry-leading technology for machine learning (ML.NET) to enhance their real-time device tracking and streaming analytics. Virtually Unlimited Application: These new capabilities are useful across a wide variety of applications that track numeric telemetry, with use cases including telematics, logistics, security, healthcare, retail, financial services, and many others. For more information, please visit www.scaleoutsoftware.com and follow @ScaleOut_Inc on Twitter. Additional Resources: Machine Learning Supercharges Real-Time Digital Twins Blog Post; ScaleOut Digital Twin Streaming Service Product Page. About ScaleOut Software: Founded in 2003, ScaleOut Software develops leading-edge software that delivers scalable, highly available, in-memory computing and streaming analytics technologies to a wide range of industries. ScaleOut Software’s in-memory computing platform enables operational intelligence by storing, updating, and analyzing fast-changing, live data so that businesses can capture perishable opportunities before the moment is lost. It has offices in Bellevue, Washington and Beaverton, Oregon. | Detection and Monitoring/Prediction | Computer and Mathematical/Life, Physical, and Social Science | null | null | null | null | null | null
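The "Trend Detection" option described above fits a linear regression to a parameter's recent values. As a rough illustration of the idea only (this does not reproduce ML.NET's actual implementation, and the window and threshold below are invented for the example), a sliding-window least-squares slope check can be sketched as:

```python
# Toy sketch of sliding-window trend detection, loosely analogous to the
# "Trend Detection" configuration described above. It fits an ordinary
# least-squares line over a window of recent telemetry values and flags
# a slope above a threshold.

def ols_slope(values):
    """Least-squares slope of values regressed on their indices 0..n-1."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def trending_up(window, threshold=0.5):
    """Flag a trend change when the fitted slope exceeds the threshold."""
    return ols_slope(window) > threshold

stable = [10.0, 10.1, 9.9, 10.0, 10.1, 9.9]   # normally stable parameter
rising = [10.0, 11.0, 12.1, 13.0, 14.2, 15.1]  # unexpected increase
print(trending_up(stable), trending_up(rising))  # -> False True
```

A production version would additionally locate the inflection point where the slope changes, as the article notes the ML.NET-based algorithm does.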
||
news | Transfer Learning for Natural Language Processing | Training deep learning NLP models from scratch is costly, time-consuming, and requires massive amounts of data. In Transfer Learning for Natural Language Processing, DARPA researcher Paul Azunre reveals cutting-edge transfer learning techniques that apply customizable pretrained models to your own NLP architectures. You'll learn how to use transfer learning to deliver state-of-the-art results for language comprehension, even when working with limited label data. Best of all, you'll save on training time and computational costs.Build custom NLP models in record time, even with limited datasets! Transfer learning is a machine learning technique for adapting pretrained machine learning models to solve specialized problems. This powerful approach has revolutionized natural language processing, driving improvements in machine translation, business analytics, and natural language generation.Transfer Learning for Natural Language Processing teaches you to creat ... | https://it-ebooks.info/book/1631191676-transfer_learning_for_natural_language_processing/ | 2021-09-09T14:15:06Z | Training deep learning NLP models from scratch is costly, time-consuming, and requires massive amounts of data. In Transfer Learning for Natural Language Processing, DARPA researcher Paul Azunre reveals cutting-edge transfer learning techniques that apply customizable pretrained models to your own NLP architectures. You'll learn how to use transfer learning to deliver state-of-the-art results for language comprehension, even when working with limited label data. Best of all, you'll save on training time and computational costs.Build custom NLP models in record time, even with limited datasets! Transfer learning is a machine learning technique for adapting pretrained machine learning models to solve specialized problems. 
This powerful approach has revolutionized natural language processing, driving improvements in machine translation, business analytics, and natural language generation. Transfer Learning for Natural Language Processing teaches you to create powerful NLP solutions quickly by building on existing pretrained models. This instantly useful book provides crystal-clear explanations of the concepts you need to grok transfer learning along with hands-on examples so you can practice your new skills immediately. As you go, you'll apply state-of-the-art transfer learning methods to create a spam email classifier, a fact checker, and more real-world applications. | Content Synthesis/Decision Making/Process Automation | Computer and Mathematical/Business and Financial Operations | null | null | null | null | null | null
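The core idea the book builds on, keeping a pretrained component frozen and training only a small task-specific head on limited labeled data, can be illustrated with a deliberately tiny sketch. Everything here (the word scores standing in for a pretrained model, the example sentences) is invented for illustration; real transfer learning starts from models such as BERT rather than a hand-made lookup table.

```python
# Toy illustration of the transfer-learning recipe: a frozen "pretrained"
# feature extractor plus a small trainable task head (weight and bias).

import math

# Frozen "pretrained" knowledge: per-word scores (never updated below).
PRETRAINED = {"great": 1.0, "good": 0.6, "bad": -0.8, "awful": -1.0}

def features(text):
    """Frozen feature extractor: average word score for the text."""
    scores = [PRETRAINED.get(w, 0.0) for w in text.lower().split()]
    return sum(scores) / len(scores)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_head(examples, lr=1.0, epochs=200):
    """Train only the task head with stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for text, label in examples:
            p = sigmoid(w * features(text) + b)
            grad = p - label  # derivative of log loss w.r.t. the logit
            w -= lr * grad * features(text)
            b -= lr * grad
    return w, b

# A handful of labeled examples stands in for the "limited label data".
examples = [("great movie", 1), ("awful movie", 0),
            ("good plot", 1), ("bad acting", 0)]
w, b = train_head(examples)

def predict(text):
    return sigmoid(w * features(text) + b) > 0.5

print(predict("great good"), predict("bad awful"))
```

Because the pretrained scores already separate positive from negative words, only two scalar parameters need training here, which mirrors why transfer learning saves training time and works with little labeled data.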
||
news | Mike Wheatley | With new release, UiPath is using automation to fix application sprawl | Business process automation software company UiPath Inc. is launching a major update to its platform aimed at tackling issues around the sprawling and disparate nature of enterprise applications. The coming new release, announced at UiPath’s FORWARD IV event today, will enable companies to apply artificial intelligence to every facet of work, help information technology teams […]The post With new release, UiPath is using automation to fix application sprawl appeared first on SiliconANGLE. | https://siliconangle.com/2021/10/05/new-release-uipath-using-automation-fix-application-sprawl/ | 2021-10-05T12:00:04Z | Business process automation software company UiPath Inc. is launching a major update to its platform aimed at tackling issues around the sprawling and disparate nature of enterprise applications. The coming new release, announced at UiPath’s FORWARD IV event today, will enable companies to apply artificial intelligence to every facet of work, help information technology teams scale, maintain and secure automation, and make it easier for developers to build automations. UiPath is a leader in the robotic process automation market. It sells an RPA platform that's used by companies to reduce costs and operational errors by automating repetitive work. It relies on AI models that learn how employees perform common tasks in business applications. Then, it creates software robots that can replicate those workflows, thereby reducing the need to perform many of those tasks manually. The headline feature in the UiPath 2021.10 release is a new Integration Service that's designed to tackle some of the problems around application sprawl. The average enterprise typically uses more than 175 apps across cloud and on-premises environments, and UiPath says the result of this fragmentation is reduced productivity. 
So the idea with the UiPath Integration Service is to help companies optimize the applications they're using through automation of application programming interfaces. Developers will be able to use APIs to build new automations, similar to how they can already create software robots through user interface automation, UiPath says. As the company notes, API and UI processes often intersect, so it's a necessary capability for any enterprise that wants to ramp up automation. UiPath Integration Service will launch on Oct. 25 with more than 70 pre-built connectors expected to be available out-of-the-box by the year's end. On the security side, new features around endpoint protection will be available through UiPath's integration with CrowdStrike Inc.'s Falcon platform. There's also a new capability called Robot Auto-Healing that regularly assesses robots' runtime environments to determine and remediate any security issues without human intervention, UiPath said. Robot Auto-Healing will be paired with runtime governance to ensure workflows meet corporate security standards. The new UiPath Automation Suite, meanwhile, is a cloud-native deployment option that enables customers to deploy all UiPath servers in software containers that can run on any kind of computing platform without change. In addition, the company's software-as-a-service offering UiPath Automation Cloud is being enhanced with cloud services around process mining and test management. For developers, new features include an updated framework for easier process mining customization, while the new AI Solution Templates and Forms AI (in review) services will make it easier for developers to build AI capabilities into their automations. The release also provides new Windows and Office experiences. They include cross-platform support for Linux-based robots that run API automations with low-level resources. 
Moreover, there's a new integration between Assistant and UiPath Marketplace that makes it easier to adopt UiPath Marketplace automations. UiPath co-founder and Chief Executive Daniel Dines said the new release means UiPath is not just the most comprehensive automation platform of its kind, but also the foundation of automation in the future. “We're delivering on making automations easier to build, expanding automation use cases, and ensuring we provide reliability, security, and governance to support enterprise-level, mission-critical processes,” he said. “We are also laying the groundwork for semantic automation, in which robots understand how to use applications without being programmed by humans.” Photo: SiliconANGLE | Robotic Automation/Process Automation | Business and Financial Operations/Management | null | null | null | null | null | null
|
news | MartianMan | 107° - Free Udemy Courses: MS Excel, Web Development, Java, Network Ethical Hacking, SCRUM, Microsoft SQL & More | Have fun learning ! Credit Reddit [4.4] [ 3h 52m] 2021 Professional MS EXCEL Course with Real Applications Code=545083E734E18BCB1535 2 Days left at this price ![4.5] [1h 23m] 2021 TOP MS EXCEL Templates and Dashboards with applications Code=0DDB99705656C6C22996 2 Days left at this price ![4.3] [3h 46m] Selenium in C# - Setup Simple Test Automation Framework Code=SEL_C_FREE_SEP 2 Days left at this price ![New] [4h 49m] Master Multithreading - Concurrency with Java Examples Code=MULT_FREE_SEP 2 Days left at this price ![4.5] [3h 14m] GoF Design Patterns - Complete Course with Java Examples Code=GOF_FREE_SEP 2 Days left at this price ![4.4] [4h 28m] Webservices API Testing with Postman - Complete Guide Code=WEB_TEST_FREE_SEP 2 Days left at this price ![4.4] [2h 1m ] Functional Programming + Lambdas, Method References, Streams Code=FP_FREE_SEP 2 Days left at this price ![4.2] [5h 23m] Java Collections Framework + Generics, Lambdas & Stream API Code=JCF_FREE_SEP 2 Days left at this price ![4.5] [ 33h 18m] Java from Zero to First Job - Practical Guide, 700+ examples Code=JAVA_FREE_SEP 2 Days left at this price ![4.2] [45m] CSS Crash Course For Beginners Code=FREEBIESGLOBAL.COM 1 Day left at this price ![4.7] [3h 11m] Procreate: Getting Started With Procreate on the iPad (2021) Code=FREEBIESGLOBAL-2192 1 Day left at this price ![4.6] [3h 27m] How to Draw 101: BASIC DRAWING SKILLS & Sketching Course Code=FREEBIESGLOBAL-2192 1 Day left at this price ![4.5] [5h 35m ] Excel - Microsoft Excel Beginner to Advanced [2021] Code=FREEBIESGLOBAL-2192 1 Day left at this price ![New] [1h 47m] TikTok Marketing 2021 | Go Viral With Authentic Videos! 
Code=FREEBIESGLOBAL.COM 2 Days left at this price ![4.6] [54m] Pictures and Videos colorization by Artificial Intelligence Code=0DBE4FDEED4083E8157F 2 Days left at this price ![4.7] [1h 27m] Arduino Step by Step Guide : Basic Level Code=43FD6A5A265CF1819E67 1 Day left at this price ![4.4] [3h 39m ] Network Ethical Hacking for beginners (Kali - Hands-on) Code=0AA7415DB913672A1630 1 Day left at this price ![New] [ 9h 11m] SCRUM The Masterclass: Learn SCRUM Here & The PSM1 Is Yours! Code=4A1EB9395F3E60C7DE74 1 Day left at this price ![New] [3h 22m ] Build Full-Stack Projects with FARM stack Code=E44F8C58E0A2821036F2 1 Day left at this price ![4.8] [2h 50m] Money In Excel - Income Tax Sch. C & Personal Financials Code=7685194FFCE3B9828927 2 Days left at this price ![4.4] [6h 35m] Corporate Finance #5 Financing Decisions Code=E4845282F62261C574FD 2 Days left at this price ![4.3] [ 6h 35m ] Corporate Finance #7 Short Term Financing Code=9D0146C298F2E2997B64 2 Days left at this price ![New] [11h 6m] Corporate Finance #9 Valuation-Bond, Common /Preferred Stock Code=28638BCF6F928F275668 2 Days left at this price ![4.4] [147h 26m] QuickBooks Pro Desktop 2021, 2020, 2019, & 2018 Code=324F14F27B41EA8053A8 2 Days left at this price ![New] [1h 6m ] Tubebuddy Masterclass - Guide to YouTube Keyword Tool Code=C524A01F4E8455C86B1A 2 Days left at this price ![4.4] [ 33h 59m] SEO & Copywriting Course as PROJECT & FUN: 40 DAYS Challenge Code=SEPTGOODNESS999 2 Days left at this price ![New] [ 50m] Infographics Design for Free w/ Canva: Infographics,Lot More Code=4239996308E62F336E31 2 Days left at this price ![4.7] [7h 46m] Individual Income Tax Form 1040 Comprehensive Problem Code=02FFE2C1D2AC3EF33650 2 Days left at this price ![4.6] [4h 1m ] Personal Capital Free Financial Planner Application Code=F1576F3D358AE691707E 2 Days left at this price ![4.5] [6h 55m] Corporate Finance #6 Management of Current Assets Code=45F93784F5889FDA7BF3 2 Days left at this price ![4.6] [19h 45m] 
Corporate Finance #8 Time Value of Money (PV & FV) Code=6423DCCC88870CA232C2 2 Days left at this price ![4.3] [118h 56m] Accounting–Financial Accounting Total-Beginners to Advanced Code=8889761B8087282634E8 2 Days left at this price ![3.9] [2h 30m] Learn Basic Data science and Python Libraries Code=2CECBC92639A3131FB51 2 Days left at this price ![4.1] [2h 59m] SQL Bootcamp with MySQL, PHP & Python : 5 Courses in 1 Code=SQLSEP2021 2 Days left at this price ![New] [38m] API & Web Project Authentication with Azure Active Directory Code=B72B437C8B552356278F 2 Days left at this price ![4.2] [3h 25m] SQL- The Complete Introduction to SQL programming Code=3AB86A0176694F04F3E3 2 Days left at this price ![4.6] [12h 14m] Modern PHP Web Development w/ MySQL, GitHub & Heroku Code=PHP_FREE_SEP 2 Days left at this price ![4.3] [3h 45m ] Selenium in Java - Setup Simple Test Automation Framework Code=SEL_FREE_SEP 2 Days left at this price ![4.5] [5h 58m] Microsoft SQL Server Development for Everyone Code=MSSQL_FREE_SEP 2 Days left at this price ![4.5] [4h 19m] MySQL Database Development Mastery Code=MYSQL_FREE_SEP 2 Days left at this price ![Amazon] Practical C# Programming Practices (Free with Kindle Unlimited or $0.99) 4 Days left at this price ![Amazon] Public Speaking Strategies, Effective Time Management & More 3 Days left at this price !Popular Discounted Courses[Humble Bundle] eBooks Bundle: Python, Machine Learning and More $1 2 Days left at this price ![4.6] [46h 24m] BEST of SEO: #1 SEO Training & Content Marketing Course 2021 $10.99 Code=1SEPT999 1 Day left at this price ![4.2] [45h 40m] Python Hands-On 46 Hours, 210 Exercises, 5 Projects, 2 Exams $9.99 Code=PYTHON-91 2 Days left at this price ![Bestseller] [29h 7m] The Complete Storytelling Course for Speaking & Presenting $12.99 Code=THANKS05 2 Days left at this price ![Bestseller] [31h 20m] Soft
Skills: The 11 Essential Career Soft Skills $12.99 Code=THANKS05 2 Days left at this price ![4.4] [95h 12m] ULTIMATE SEO, SOCIAL MEDIA, & DIGITAL MARKETING COURSE 2021 $12.99 Code=1SEPT999 1 Day left at this price ! | https://www.hotukdeals.com/deals/free-udemy-courses-ms-excel-web-development-java-network-ethical-hacking-scrum-microsoft-sql-more-3792375 | 2021-09-12T17:34:20Z | hotukdeals.com - The Largest Deal Community in the UK
| Content Synthesis/Information Retrieval Or Search/Personalization | Education, Training, and Library/Business and Financial Operations/Computer and Mathematical | null | null | null | null | null | null |
|
news | Miranda Luna and Cyrielle Simeone | Databricks SQL: Delivering a Production SQL Development Experience on the Data Lake | Databricks SQL (DB SQL) is a simple and powerful SQL analytics platform for creating and sharing insights at a fraction of the cost of cloud data warehouses. Data analysts can either connect business intelligence (BI) tools of their choice to SQL endpoints, leverage the built-in analytics capabilities (SQL query editor, visualizations and dashboards), or some... | https://databricks.com/blog/2021/09/30/databricks-sql-delivering-a-production-sql-development-experience-on-the-data-lake.html | 2021-09-30T15:49:38Z | Databricks SQL (DB SQL) is a simple and powerful SQL analytics platform for creating and sharing insights at a fraction of the cost of cloud data warehouses. Data analysts can either connect business intelligence (BI) tools of their choice to SQL endpoints, leverage the built-in analytics capabilities (SQL query editor, visualizations and dashboards), or some combination of both. Today, many customers connect BI tools like Tableau or Microsoft PowerBI to DB SQL for analytics and reporting directly on the data lake. In addition to compute for 3rd-party BI tools, DB SQL also equips you with everything you need for quick exploration and lightweight reporting without any additional tools. This blog is part of a series on Databricks SQL that covers critical capabilities across performance, ease of use, and governance.
In today's blog, we highlight recent user experience enhancements for:
- Faster onboarding with sample data & dashboards
- Discovering data and managing access
- Increasing productivity in the SQL query editor
- Creating & collaborating on dashboards
- Subscribing to dashboards
- Triggering alerts based on query results

Faster onboarding with sample data & dashboards
Databricks SQL empowers everyone from analysts and data scientists to engineers and product managers with the tools to quickly derive essential insights, all without procuring any extra licenses. While many individuals in your organization may be familiar with SQL, they may not be as experienced with the ins and outs of visualizations and dashboards in DB SQL. To streamline onboarding of new users, Databricks SQL now ships with sample data, queries, visualizations and dashboards out of the box.
When you visit the Dashboard Gallery, you can review Databricks-supplied dashboards and import them into your own workspace. These samples demonstrate common visualization types and configurations, such as conditionally changing font colors based on different thresholds.

Discovering data and managing access
Today, it's not uncommon for an organization's data landscape to contain tens, hundreds and even thousands of data sets. Navigating that surface area to figure out what data is available and who has access to what databases and tables can quickly become overwhelming. Data Explorer is the one-stop shop to discover available databases, tables and views as well as manage data permissions.
In Databricks SQL, Data Explorer provides a clean, straightforward interface to browse available data and manage permissions. It's the first step in the journey of making trusted, high-quality data securely available within your organization.

Increasing productivity in the SQL query editor
Truly understanding your business goes so very much beyond a SELECT *.
In order to comprehensively understand the answer to a business question, you need to consider the problem from multiple angles and refine as you go. Databricks SQL provides you with the SQL development experience you need to productively iterate on multiple queries simultaneously:
- Multi-taskers rejoice – it's now easier than ever to switch back and forth between queries in the same screen by leveraging query tabs!
- Automatically save drafts. Now you can close your browser without losing your query.
- Leverage autocomplete for relevant suggestions.
- Pin your favorite tables for quick reference.
- Add and customize multiple visualizations for each query result. A picture is worth a thousand words and DB SQL provides an array of visualizations to help uncover key insights.
- View previous runs under Query History. Because sometimes you just need to jump back to a query you wrote a few days ago.

Subscribe to dashboards
Dashboards are a great way to persist insights. So we've made it easier than ever to open up access to those insights to anyone in your organization, whether or not they log in to Databricks every day.
Dashboard subscriptions now allow you to deliver a convenient email and a PDF on a schedule of your choice. You can subscribe users within the Databricks workspace directly or leverage alert destinations to reach distribution lists and non-Databricks users at your organization.

Triggering alerts based on query results
The most valuable data is actionable data. As you identify the most salient business metrics, it's important to know when a number spikes or dips outside of expected values. Once you have identified key inflection points for different data points, you can configure alerts to be notified when those thresholds are met.
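The threshold-based alerting described above can be sketched in a few lines of Python. The metric names, values, and ranges below are hypothetical and not part of any Databricks API; in Databricks SQL itself the check is configured in the UI and evaluated server-side on a schedule:

```python
# Minimal sketch of a threshold alert check, assuming query results arrive
# as rows of {"metric": ..., "value": ...}. All names here are illustrative.

def evaluate_alerts(rows, thresholds):
    """Return rows whose value falls outside the [low, high] range for their metric."""
    triggered = []
    for row in rows:
        low, high = thresholds[row["metric"]]
        if not (low <= row["value"] <= high):
            triggered.append(row)
    return triggered

rows = [
    {"metric": "daily_orders", "value": 180},
    {"metric": "error_rate", "value": 0.09},
]
thresholds = {"daily_orders": (100, 10_000), "error_rate": (0.0, 0.05)}

for alert in evaluate_alerts(rows, thresholds):
    print(f"ALERT: {alert['metric']} = {alert['value']} outside expected range")
```

A real deployment would route such notifications to email, Slack, or PagerDuty rather than printing them.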
By default, you can notify any user in your workspace, but you can also alert an email distribution list, message a Slack channel, or create a PagerDuty incident.

Next Steps
Databricks SQL is already changing the way analytics is done at modern businesses like Atlassian and Plume, and we can't wait to hear your feedback as well! We also encourage you to submit an idea for ways we can enhance Databricks SQL to better fit your needs.
If you're an existing Databricks user, you can start using Databricks SQL today by following our Get Started guide for Azure Databricks or AWS. If you're not yet a Databricks user, visit databricks.com/try to start a free trial.
Finally, to learn more, join us on October 6th for a free instructor-led workshop on Databricks SQL. We look forward to seeing you there!
Try Databricks for free. Get started today | Content Synthesis/Information Retrieval Or Search/Decision Making | Computer and Mathematical/Life, Physical, and Social Science | null | null | null | null | null | null
|
news | Ted Kitterman | How machine learning is making an impact on media analysis | Artificial intelligence promises to dramatically change the way PR pros measure their campaigns. Here's what the technology now allows—and where it is headed. There is hardly any industry that has not jumped on the artificial intelligence and "big data" analytics bandwagon yet, but one notably lags: public relations and communication. A possible reason for this […] | https://amecorg.com/2021/09/machine-learning-meets-media-analysis-from-hype-to-reality/ | 2021-09-24T13:57:28Z | There is hardly any industry that has not jumped on the artificial intelligence and "big data" analytics bandwagon yet, but one notably lags: public relations and communication.
A possible reason for this is that PR is an inherently difficult discipline to quantify. Another is that the fields of AI which can provide transformational value to PR & comms – natural language processing (NLP) and natural language understanding (NLU) in particular – are yet to reach their optimal productivity.
According to Gartner's Hype Cycle of Natural Language Technologies, they still have some five to ten years to go until finding mainstream adoption.
Source: https://behavioralsignals.com/behavioral-signals-a-sample-vendor-for-natural-language-technologies/
The value of NLU, the part of NLP that deals with computers' ability to comprehend the structure and meaning of human language, to PR pros lies in its application to automated reasoning, machine translation, news-gathering, text categorization, and large-scale content analysis.
Despite the challenges facing NLP, NLU presents significant opportunities for communication professionals and media analytics providers to achieve a granular analysis of textual data faster and more effectively.
For example, using machine learning for harnessing social media data can help identify unarticulated, unprompted, latent consumer needs, uncover platforms for category growth and integrate those with a digital activation strategy.

Decoding consumer language on social media
The key step to generating deep insights from consumer conversations on social media is leveraging NLP to create a business relevant taxonomy. Neither the software as a service (SaaS) NLP solutions nor the commercially available unsupervised deep learning algorithms can deliver this.
What we need is to create a much deeper taxonomy that can ladder-up to the main themes we have identified in the conversation. Take the example of sports drinks. At a first level, you will find topics related to consumption (such as health and taste). To uncover insights, we need to go deeper into a multi-layered NLP which is a combination of man and machine. What are the different sub-topics within health, what is it that consumers are talking about within each of these sub-subtopics? Are consumers talking about sustaining energy or sensory pleasure? What sports drinks are they using when talking of these actions? What are consumer emotional motivations?
What are the feelings (sentiment) associated with these actions and what are the moments consumers are talking of with these actions? What are the benefits and concerns consumers share about sports drinks and what are the behavioural changes they are doing as a result of these concerns? All these help to build a detailed taxonomy.
In typical studies like the sports drinks case, taxonomy comprises 300-500 variables. In broader category studies, the taxonomy size can go up to 3,000+ variables.
No matter the category, it is this multi-layered NLP that provides the taxonomy that can be analyzed, interpreted and reconstructed to uncover meaningful insights, such as identifying opportunities for growth in a category.
One of the key components of NLP for taxonomy creation is uncovering the sentiment at scale, uncovering the sentiment associated with each individual construct (and not individual conversations) within each conversation.
The process of creating such a granular taxonomy is a combination of supervised and unsupervised machine learning. Though the principles are no different from how survey data is analyzed and interpreted, the sheer volume of data (in the range of 200,000 to 60 million+ conversations) and variables (300-3,000+) requires a different skill-set, new tools and a new mind-set.
Once a granular taxonomy and associated sentiment rules have been created using a combination of man and machine, deep learning algorithms can be trained for allocation of future data to business relevant taxonomy and insights. As deep learning algorithms are only as good as the basis on which the deep learning is trained, their specificity makes them more relevant and useful.

NLP's top challenge and the road ahead
The biggest challenge in NLP is the shortage of training data. In projects like the one described above, it is rarely the case that more than a few thousand or a few hundred thousand human-labeled training examples can be used.
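The per-construct sentiment idea above (scoring sentiment around each taxonomy construct inside a conversation, rather than for the conversation as a whole) can be sketched in Python. The taxonomy, keywords, and lexicon below are toy inventions for illustration only; production systems build these with supervised and unsupervised machine learning at far larger scale:

```python
# Toy sketch: find taxonomy constructs in a conversation and score sentiment
# only in the words near each construct, not over the whole text.

TAXONOMY = {  # construct -> trigger keywords (a real taxonomy has 300-3,000+ variables)
    "health/energy": ["energy", "stamina"],
    "sensory/taste": ["taste", "flavour"],
}
LEXICON = {"love": 1, "great": 1, "hate": -1, "artificial": -1}  # tiny toy lexicon

def tag_constructs(text, window=3):
    words = text.lower().split()
    results = {}
    for construct, keywords in TAXONOMY.items():
        for i, word in enumerate(words):
            if word in keywords:
                # Score sentiment only within `window` words of the trigger.
                nearby = words[max(0, i - window): i + window + 1]
                results[construct] = sum(LEXICON.get(w, 0) for w in nearby)
    return results

print(tag_constructs("I love the energy boost but hate the artificial taste"))
# → {'health/energy': 0, 'sensory/taste': -2}
```

Note how the same message yields different scores for different constructs, which is exactly what conversation-level sentiment would miss.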
However, modern deep learning-based NLP models see benefits from much larger amounts of data, improving when trained on millions (or billions) of annotated training examples.
To overcome this challenge, researchers have developed a variety of techniques for training general purpose language representation models using the enormous amount of unannotated text on the web (known as pre-training). The pre-trained model can then be fine-tuned on small-data NLP tasks like question answering and sentiment analysis, resulting in substantial accuracy improvements compared to training on these datasets from scratch.
The year 2018 was an inflection point for machine learning models handling text because it marked the release of BERT, an event described as marking the beginning of a new era in NLP.
BERT, which stands for Bidirectional Encoder Representations from Transformers, is a deep neural network architecture built upon the idea of learning a representation of language, called a language model, to predict tokens or sequences given a context.
BERT is a model that broke several records for how well models can handle language-based tasks. Soon after the release of the paper describing the model, the research team at Google offered open-source code for the model, and made available for download versions of the model that were already pre-trained on massive datasets.
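The pre-train/fine-tune split described above can be illustrated with a tiny classification head trained on a handful of labeled examples while the representation stays fixed. The feature vectors below are made-up stand-ins for the output of a pre-trained encoder such as BERT; this is a sketch of the idea, not of BERT itself:

```python
import math

# Hypothetical frozen features, standing in for pre-trained sentence embeddings.
X = [[2.0, 1.5], [1.5, 2.0], [-2.0, -1.0], [-1.5, -2.0]]
y = [1, 1, 0, 0]  # e.g. positive vs. negative sentiment labels

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Only this small head (w, b) is trained; the "encoder" that produced X is frozen.
w, b = [0.0, 0.0], 0.0
lr = 0.5
for _ in range(200):  # stochastic gradient descent on the logistic loss
    for xi, yi in zip(X, y):
        p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
        err = p - yi
        w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
        b -= lr * err

def predict(xi):
    return int(sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5)

print([predict(xi) for xi in X])
```

Because the heavy lifting sits in the frozen representation, even four labeled examples suffice in this toy case; that is the sense in which fine-tuning needs far less annotated data than training from scratch.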
This is a momentous development since it enables anyone building a machine learning model involving language processing to use this powerhouse as a readily-available component, saving the time, energy, knowledge and resources that would have gone to training a language-processing model from scratch.
As a high-level illustration of the way BERT makes sense of human language, we can run a BERT model trained on Wikipedia pages on the first two paragraphs of a Guardian article on the meat industry:
"Both the planet and US politics have heated up in tandem over recent decades, but few sectors have stewed in controversy quite like America's beef industry. Four super-powered meatpackers control more than 80% of the US beef market, an extraordinary concentration of market power that the Biden administration is not happy about.
A recent executive action signed by the president aims to increase competition in the beef industry, with the White House noting that, over the past five years, farmers' share of the price of beef sales has dropped by more than a quarter from 51.5% to 37.3% while the price of beef has risen."
Below are the resulting IPTC Media Topics BERT identified in the snippet with their respective frequencies:
What is powerful about these pre-trained deep learning language models is that they provide a representation of how a language works and thus alleviate downstream tasks. For instance, we can train a classifier by adding a final layer to our pre-trained language model and fine-tune its last layer(s) to this specific task. Thus, we need less annotated data to achieve better results.
Whatever the opportunities, machine learning may not replace the art of communication and strategic media analytics anytime soon. In fact, an algorithm advanced enough to replace the instinct and creativity of humans may very well be the last thing mankind invents.
Georgi Ivanov is the director of global marketing at Commetric.
A version of this article first appeared on the AMEC website. | Content Synthesis/Process Automation/Decision Making | Management/Arts, Design, Entertainment, Sports, and Media | null | null | null | null | null | null
|
news | Indian health AI startup Endimension snaps up seed funding to develop new radiology solutions | The startup has developed an AI support tool that detects COVID-19 patterns from chest x-ray images. | https://www.mobihealthnews.com/news/asia/indian-health-ai-startup-endimension-snaps-seed-funding-develop-new-radiology-solutions | 2021-09-24T00:00:00Z | Endimension, a health AI startup from India, has raised 23 million rupees (over $300,000) in seed funding from one of the country's largest angel investors, Inflection Point Ventures.

WHAT IT DOES
The startup was incubated at the Indian Institute of Technology Bombay in 2018. It develops AI algorithms to automatically detect abnormalities and diseases from medical scans. Its AI platform called ENDIM-AI-CXR is a clinical support tool that spots COVID-19 patterns from chest radiographs, reducing instances of misdiagnoses and increasing radiologists' efficiency.
Endimension, a part of the NVIDIA AI Inception Program and the Microsoft for Startups Community, says its technology is now being used at around 100 hospitals and radiology labs across India.

WHY IT MATTERS
According to a press release, the seed funding will be used to employ new talent for its upcoming product development. The new funds will also be deployed for research and boosting sales. The World Health Organization recognises that occupational burnout intensifies the growing shortage of healthcare workers globally. As there are fewer radiologists against the volume of medical images being generated during the pandemic, Endimension says it intends to bridge that demand-supply gap through its AI technology.

MARKET SNAPSHOT
Endimension is entering a market where its fellow Indian startup Qure.ai has already made its splash. It recently secured a partnership with Fujifilm Corporation to equip the brand's smart x-ray system with its AI software called qXR.
The same AI diagnostic tool was also deployed by AstraZeneca Malaysia in some primary care clinics for the early detection of lung cancer. Moreover, Qure.ai received a US FDA 510(k) clearance for qER-Quant, a brain CT quantification tool, that is used to rapidly assess the severity of a patient's brain injury.
According to research firm MarketsandMarkets, the global market for AI in medical diagnostics is expected to reach $3.9 billion in value by 2025 from $505 million last year, growing at a 50.2% CAGR.

ON THE RECORD
"There is a global shortage of radiologists leading to radiologist burnout and patient misdiagnosis. The number of medical images is increasing exponentially whereas radiologists are only growing linearly. We wanted to bridge this demand-supply gap using our Radiology AI Platform. Our AI platform improves radiologist efficiency and reduces patient misdiagnosis," Endimension Founder and CEO Bharadwaj Kss said. | Detection and Monitoring/Image Analysis | Healthcare Practitioners and Support | null | null | null | null | null | null
||
news | PR Newswire | Beyond Codes signs 7 AI product companies | A global Lead Generation company, Beyond Codes is assisting some of the fastest growing IT companies to expand their market reach through B2B appointment... | https://finance.yahoo.com/news/beyond-codes-signs-7-ai-141000301.html | https://s.yimg.com/uu/api/res/1.2/Yqt6bAbreuTUpY_h8CAhoQ--~B/aD0xNjt3PTE2O2FwcGlkPXl0YWNoeW9u/https://media.zenfs.com/en/prnewswire.com/262c6f5c9fcbe2fa907157fb551d4b35 | 2021-09-29T14:10:00Z | SIMI VALLEY, Calif., Sept. 29, 2021 /PRNewswire/ -- A global Lead Generation company, Beyond Codes is assisting some of the fastest growing IT companies to expand their market reach through B2B appointment setting, Account Based Marketing and Events & Webinars.
While Tech Giants are aggressively integrating AI powered technologies into their products, there are numerous start-ups too, who are vying for a toe hold in the AI space. Most companies want to have effective data-driven decision making at their core. In fact, according to Forbes, the pandemic has accelerated the use of AI based technology and it is not just the Healthcare and Education industries that have monopolized its use, it is uniformly employed across industries.
In such an atmosphere of bustle, Beyond Codes has acquired 7 AI product companies as clients, in the last quarter itself. Spread across North America, Europe, India and Australia, these companies are serviced out of our global delivery centers, with 360-degree support from our Sales and In House Research team.
"In a data-driven world, no company has been left untouched by AI Technology. Some might be at the beginning of their digital journey, others a little more technology savvy and others yet, real pathbreakers, but realistically speaking, every business operation today seeks to take advantage of AI systems built on mountains of web data that is publicly available. We are at an inflection point and businesses are transforming as we speak.
Beyond Codes has a deep understanding of the technology and the wherewithal to help companies penetrate this market space," said Gaurav Kumar, CEO of Beyond Codes.
Having 250 plus campaigns and 84 customers, most of whom are new-age companies, makes Beyond Codes the frontrunner in the lead generation segment. Ashish Sharma, Global Head-Sales, said, "It has been a busy year, and as more companies realize how AI is impacting revenue increase and cost reduction, this space will heat up further, and Beyond Codes recognizes this strategic opportunity and has scaled up at the right time to handle the upswing."

About Beyond Codes
Beyond Codes is one of the Top B2B Lead Generation companies that lines up with your business objectives to accelerate your Complete Sales Funnel Needs. We assist you to generate relevant and Targeted Sales Leads, personalize marketing at each account level, nurture leads, and ease your way into B2B appointment setting with qualified prospects.
We specialize in curating demand generation strategies for your next-gen services including Machine Learning, Digital Transformation, Analytics, RPA, Artificial Intelligence and more.

Media Contact:
Shyamasree Banerjee
Email: Shyamasree.banerjee@beyondcodes.com
+91 8146275155
www.beyondcodes.com

View original content to download multimedia: https://www.prnewswire.com/news-releases/beyond-codes-signs-7-ai-product-companies-301387890.html
SOURCE Beyond Codes Inc | Decision Making/Recommendation | Management/Business and Financial Operations | null | null | null | null | null | null
news | Fobi AI Inc. | Fobi Signs $240,000 Data Consulting Agreement With Kiaro Holdings, A National Cannabis Retailer With 16 Locations And Forecasted Annual Revenue Of $42.7 Million, To Assist With Development Of Data Warehouse | Kiaro To Leverage The Power Of Fobi Real-time Artificial Intelligence To Transform Their Approach To Cannabis Retail And Support The Company's Rapid... | https://finance.yahoo.com/news/fobi-signs-240-000-data-100000286.html | https://s.yimg.com/uu/api/res/1.2/U4KyGWGMvt8arQLmm9rlDA--~B/aD03MDt3PTE4NDthcHBpZD15dGFjaHlvbg--/https://media.zenfs.com/en/globenewswire.com/60b54b9058d2acb0295edf35f484b1d6 | 2021-09-29T10:00:00Z | Kiaro To Leverage The Power Of Fobi Real-time Artificial Intelligence To Transform Their Approach To Cannabis Retail And Support The Company's Rapid Growth
VANCOUVER, British Columbia, Sept. 29, 2021 (GLOBE NEWSWIRE) -- Fobi AI Inc. (FOBI:TSXV FOBIF:OTCQB) (the "Company" or "Fobi"), a global leader in providing real-time data analytics through artificial intelligence to drive customer activation and engagement, is pleased to announce the Company has entered into a data consulting agreement with Kiaro Holdings Corp. (KO:TSXV) ("Kiaro") to consult on the development of Kiaro's data warehouse and real-time data analytics strategy. Kiaro is one of Canada's largest retailers of cannabis, with 16 locations and forecasted annual revenue of $42.7 million from these locations, with further growth anticipated. Fobi will assist Kiaro in the development of a big data and operational intelligence platform by introducing the use of artificial intelligence applications and data mining techniques.
Moreover, Fobi will architect and deploy its established AI & Big Data Operational Applications.
As a result of this Agreement, Kiaro will optimize and monetize its vast amounts of sales data by generating insights, analytics, and advanced audience segmentation capabilities, providing them with a market-leading competitive advantage in the fast-growing cannabis retail market, translating into maximum operational efficiency and profitability for Kiaro.
Kiaro CEO Daniel Petrov stated: "This partnership with Fobi will allow Kiaro to greatly improve its overall operational efficiencies and enable the company to benefit from enhanced data aggregation and segmentation capabilities, as Kiaro continues to execute on our scale and growth plans. With the sensitivity and importance of the large amounts of client data that can now be collected and optimized through Fobi's AI-driven insights solution, I look forward to working with the Fobi team and advancing our internal data strategy and roadmap. As a result of this agreement, we are now well-positioned to monetize our existing data streams across the industry."

KIARO DATA AGREEMENT SIGNIFIES INFLECTION POINT IN THE TRANSFORMATION OF THE $20.5B CANNABIS INDUSTRY WHICH IS PROJECTED TO GROW TO $90.4B BY 2026
The Cannabis Industry is ripe for disruption following a decade of siloed operational data management systems. This partnership with Kiaro will allow Fobi to demonstrate the financial value of its data analytics capabilities to the entire industry, leading to significant future opportunities within the massive and growing cannabis vertical.
Fobi CEO Rob Anson stated: I see the cannabis industry as a very lucrative opportunity for Fobi. This agreement with Kiaro will allow Fobi to demonstrate the power of our real-time data applications and analytics to provide a disjointed and fragmented market with the technologies and strategies necessary to improve existing siloed data systems.
The power of Fobi's real-time artificial intelligence and data capabilities will be key to driving profitability and supporting the growth and scale plans of Kiaro.

AGREEMENT REPRESENTS THE SECOND SIGNIFICANT AI DATA CONSULTING DEAL IN A MATTER OF ONE WEEK.
On September 24th, Fobi announced a $250,000 Data Consulting Agreement with Azincourt Energy Corp. to help transform legacy, antiquated but commonly used practices in the mining and exploration sector, with the goal of helping Azincourt achieve more impactful drill holes in less time and with less cost.
With today's Kiaro deal, Fobi's data-centric business model is now entering the full commercialization phase with significant revenues from verticals that provide significant growth potential.
Moreover, the industry diversity of its first two data consulting clients - from energy exploration to cannabis retailing - unequivocally demonstrates that Fobi AI Data Intelligence knows no bounds.
To this end, the Company is in multiple advanced stage discussions with potential clients and expects further such agreements in 2021 and beyond.
This Press Release Is Available on the Fobi Website and the FOBI Verified Forum On AGORACOM For Shareholder Discussion And Management Engagement.

About Kiaro
Based in Vancouver, B.C., Kiaro is an independent, omnichannel cannabis retailer and distributor. Through existing storefronts across British Columbia, Saskatchewan and Ontario, and with the completion of the recent acquisition of Hemisphere Cannabis from Aegis Brands, Kiaro has 16 stores in operation and another two expected in early 2022. This is in addition to its wholesale distribution division servicing Saskatchewan, and e-commerce sites in Canada, the U.S. and Australia. Kiaro is driven to introduce new and experienced consumers to a lifelong exploration of cannabis.
With more than 70 years of collective retail and wholesale focused experience, Kiaro's leadership team has a proven record of executing on acquisitions and financings, and moreover growing brands across North America. The company plans to continue its growth trajectory through its consumer-centric retail, e-commerce and wholesale distribution segments over the coming years.

About Fobi
Fobi is a cutting-edge data intelligence company that helps our clients turn real-time data into actionable insights and personalized customer engagement to generate increased profits. Fobi's unique IoT device has the ability to integrate seamlessly into existing infrastructure to enable data connectivity across online and on-premise platforms creating highly scalable solutions for our global clients. Fobi partners with some of the largest companies in the world to deliver best-in-class solutions and operates globally in the retail, telecom, sports & entertainment, casino gaming, and hospitality & tourism industries.

For more information, please contact:
Fobi AI Inc.
Fobi Website: www.fobi.ai
Rob Anson, CEO
Facebook: @Fobiinc
T: +1 877-754-5336 Ext. 3
Twitter: @Fobi_inc
E: ir@fobi.ai
LinkedIn: @Fobiinc

This news release contains certain statements that constitute forward-looking statements or information, including statements regarding Fobi's business and technology; the ability of Fobi to engage with industry participants to achieve its goals; the development of Fobi's technology; and the viability of Fobi's business model. Such forward-looking statements are subject to numerous risks and uncertainties, some of which are beyond Fobi's control, including the impact of general economic conditions, industry conditions, competition from other industry participants, stock market volatility, and the ability to access sufficient capital from internal and external sources.
Although Fobi believes that the expectations in its forward-looking statements are reasonable, they are based on factors and assumptions concerning future events which may prove to be inaccurate. Those factors and assumptions are based upon currently available information. Such forward-looking statements are subject to known and unknown risks, uncertainties, and other factors that could influence actual results or events and cause actual results or events to differ materially from those stated, anticipated, or implied in the forward-looking statements. As such, readers are cautioned not to place undue reliance on the forward-looking statements, as no assurance can be provided as to future results, levels of activity, or achievements. The forward-looking statements contained in this news release are made as of the date of this news release and, except as required by applicable law, Fobi does not undertake any obligation to publicly update or to revise any of the included forward-looking statements, whether as a result of new information, future events or otherwise. The forward-looking statements contained in this document are expressly qualified by this cautionary statement. Trading in the securities of Fobi should be considered highly speculative. There can be no assurance that Fobi will be able to achieve all or any of its proposed objectives. Neither the TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accept responsibility for the adequacy or accuracy of this release. | Prediction/Decision Making/Content Synthesis/Process Automation | Business and Financial Operations/Management | null | null | null | null | null | null
news | MartianMan | 110° - Free Udemy Courses: Next-Generation Memory, Ethical Hacking, Drawing Academy, Machine Learning, Microsoft Excel, Docker & More | Have fun learning [4.2] [2h 22m] Next-Generation Memory, Learning & Communication Improvement Code=FREEMEMORYCOURSE 2 Days left at this price ![New] [7h 4m] Ethical Hacking Masterclass : From Zero to Binary Deep Code=ETHICALHACKINGFREE 2 Days left at this price ![Hot & new] [8h 1m] Drawing Academy - Creative Drawing, Illustration & Sketching Code=4AE9F4135EDE40107811 2 Days left at this price ![Highest Rated] [8h 48m] Management Consulting Essential Training Code=81161F0FF19BD9DC57C2 1 Day left at this price ![4.5] [12h 51m] Complete Machine Learning with R Studio - ML for 2021 Code=MLRSEP1 2 Days left at this price ![4.4] [4h 4m] Microsoft Excel Masterclass for Business Managers Code=MEMBMSEP2 2 Days left at this price ![4.1] [1h 57m] Learn Machine learning & AI (Including Hands-on 3 Projects) Code=FREESEP7 2 Days left at this price ![4.1] [1h 27m] Docker Course for Beginners Code=FREESEP7 2 Days left at this price ![4.5] [1h 16m] Building Android Widgets from Scratch ( Learn 8 Widgets) Code=FREESEP7 2 Days left at this price ![4.2] [10h 57m] Artificial Neural Networks (ANN) with Keras in Python and R Code=ANNPRSEP2 2 Days left at this price ![4.1] [1h 28m ] Introduction to Finance Code=FINANCE 1 Day left at this price ![4.4] [2h 19m] Introduction to Art History [Audio Only Course] Code=F5A0CFC2397D0F77ADFB 1 Day left at this price ![3.9] [45m] Introduction to Microservices Code=FREESEP7 2 Days left at this price ![Bestseller] [8h 2m] SQL Masterclass: SQL for Data Analytics Code=SQLSEP3 2 Days left at this price ![Bestseller] [11h 14m] Google BigQuery & PostgreSQL : Big Query for Data Analysis Code=GBPSSEP2 2 Days left at this price ![4.4] [13h 19m] Time Series Analysis and Forecasting using Python Code=TSFPSEP2 2 Days left at this price ![Bestseller] [2h 25m] Agile Kanban: Kanban for Software Development Teams 
Code=KANBANSEP2 2 Days left at this price ![3.7] [34m] Google Calendar : Virtual Assistant Code=SEPPPPPPPP-2021 1 Day left at this price ![4.6] [26h 41m] Complete Italian Course: Learn Italian for Beginners Code=09069F59BAC3B8CB4DB4 2 Days left at this price ![4.0] [20h 58m] Hands on ASP.NET Core 5 production grade API Development Code=DURGAPUJA 2 Days left at this price ![3.6] [2h 17m] Learn HTML 5: The Complete Course HTML 5 And CSS3 tutorials Code=BDA1903448FFC1E41C88 2 Days left at this price ![3.9] [51m] Drug Design and Molecular Docking by using computation Tools Code=6D87C711D8C6C8AFC4A3 2 Days left at this price ![4.3] [33m] Learn DNA Primer Design for Polymerase Chain Reaction Code=929874B6FBA3D02E3685 2 Days left at this price ![Highest rated] [2h 17m] Complete Guide to InVideo and InVideo Video Creation Code=56E7E0C2BDB5BE0A2782 2 Days left at this price ![New] [1h 39m] How To Write A Simple And Successful Business Plan Code=3421C61B67739A044851 2 Days left at this price ![4.7] [4h 2m] Management Consulting Presentation Essential Training Code=B4480E3E98123D750063 1 Day left at this price ![New] [2h 53m] Learn IAC (Infrastructure As Code) Terraform Code=FREEFREEFREE 2 Days left at this price ![4.5] [7h 34m] Management Consulting Skills Mastery Code=2F281E5E770977FD821F 1 Day left at this price ![3.6] [31m] Fundamentals of Network Security Code=F302EEFE920263D61CFE 1 Day left at this price ![4.3] [4h 48m] Microsoft Excel - Learn MS EXCEL For DATA Analysis Code=420FF18DD0F843A60C58 2 Days left at this price ![New] [1h 31m] Xfer Serum - Beginner Comprehensive Masterclass Code=B5F4DAF93DE3448107C0 1 Day left at this price ![New] [19h 22m] Excel Microsoft Excel - 5 Levels Mastery Course 19+ hours Code=EXCELFREE 2 Days left at this price ![4.0] [ 5h 13m] Electronic Music Production with Ableton Live 10. 
Level 1 Code=87D8925A4B247BE08BAA 2 Days left at this price ![3.5] [3h 58m] Storytelling With Imagery For Persuasion & Personal Brand Code=E34CEE37A06B2EE28B84 2 Days left at this price ![3.2] [4h 38m] Airtable Ultimate Course Code=TRY10FREE92107 2 Days left at this price ![3.8] [34m] Google Calendar : Virtual Assistant Code=SEPPPPPPPP-2021 2 Days left at this price ![4.9] [6h 15m] Learn WordPress & Using WordPress to Make 6 Figures Online ! Hebrew Language Course from Scratch (Sıfırdan İbranice Dil Eğitimi) Code=12345A 2 Days left at this price ![New] [1h 3m] ADHD Mums! Discover 3 Secrets to Banish Burnout NOW! Code=MUMSBANISHBURNOUT 2 Days left at this price ![3.8] [ 45m] Enable your learning agility Code=3485D1267C6EADD0E3F1 2 Days left at this price ![3.9] [2h 33m ] Shopify guide: The complete shopify store creation course Code=3A39C913D833BFF38DDE 2 Days left at this price ![4.2] [ 3h 49m] Introduction to Forex- learn to trade forex by yourself Code=EBF395A6C9AF46F4E1FC 2 Days left at this price ![4.0] [ 3h 27m] Google Docs Ultimate Course 2021 Code=TRY10FREE92107 2 Days left at this price ![3.9] [6h 3m] Microsoft Power Automate (Flow) Ultimate Course 2021 Code=TRY10FREE92107 2 Days left at this price ![3.7] [6h 7m] JavaScript Ultimate Course Code=TRY10FREE92107 2 Days left at this price ![4.2] [4h 53m] jQuery Ultimate Course Code=TRY10FREE92107 2 Days left at this price ![4.3] [3h 52m] The Ultimate Beginners Guide to Python Recommender Systems Code=55F9D6FFB0844155BECA 2 Days left at this price ![New] [1h 39m] Sports Coaching Certificate Course (ALL SPORTS) - Foundation Code=2D91262A286C9BAE5DED 2 Days left at this price ![4.3] [45m ] Solve Problems by Rising above Problems For New Perspective Code=66C56B5D4D3FABC7E062 2 Days left at this price ![3.9] [ 3h 32m] Adobe Premiere Pro 2021 Ultimate Course Code=TRY10FREE92107 2 Days left at this price ![4.1] [ 3h 3m] Gmail Productivity Ultimate Course 2021 Code=TRY10FREE92107 2 Days left at this price ![4.6] [4h 55m] Natural Language Processing for Text 
Summarization Code=80A248C276D6679E2D37 2 Days left at this price ![4.2] [10h 29m] End to End React with Prisma 2 Code=ENDTOENDSEP22 2 Days left at this price ![4.4] [1h 23m] English grammar tenses & structures, the ultimate course Code=SEPTEMBER23 1 Day left at this price ![4.5] [3h 26m] The SQL Programming Essentials 2021 Immersive Training Code=7EDCCD907D57A597AA42 2 Days left at this price ![New] [4h 10m] The Complete GSAP Course for 2021 Code=E4A95344DA72285BB357 1 Day left at this price ![4.6] [24h 18m] Master Budgets – Managerial Accounting/Cost Accounting Code=6FC0523AC3B800EC1615 1 Day left at this price ![4.7] [22h 9m ] Cost Volume Profit Analysis (CVP) – Managerial Accounting Code=4B57452691FA046E8840 1 Day left at this price ![4.0] [3h 38m] Search Engine Optimization Complete Specialization Course Code=SEOSTARTNOW 1 Day left at this price ![4.4] [4h 53m] Persuasive Writing Ft. Two Forbes Contributors & Copywriters Code=YOUACCEL51818 1 Day left at this price ![4.1] [4h 48m] Build an Amazon Affiliate E-Commerce Store from Scratch Code=YOUACCEL51818 1 Day left at this price ![4.6] [3h 13m] Affinity Publisher Guide - Affinity Publisher for Beginners Code=554EEE8BC7B297E21413 1 Day left at this price ![Amazon UK] HTML (Free with Kindle Unlimited or $0.99) 4 Days left at this price ![Amazon UK] JavaScript (Free with Kindle Unlimited or $0.99) 2 Days left at this price !Popular Discounted Courses[4.6] [11h 23m] BEST of Google SEO 2021: SEO & Copywriting Made Simple & Fun $9.99 Code=1SEPT999 4 Days left at this price ![Bestseller] [88h 18m] BEST of Digital Marketing: #1 Digital Marketing Course 2021 $11.99 Code=3SEPT999 4 Days left at this price ![4.3] [45h 40m] Python Hands-On 46 Hours, 210 Exercises, 5 Projects, 2 Exams $9.99 Code=PYTHON-92 4 Days left at this price ![4.1] [10h 31m] The Complete NFT (Non-Fungible Tokens) Course: Art and Tech $9.99 Code=THANKS04 1 Day left at this price ![4.5] [26h 47m] The Complete English Language Course Improve Spoken 
English $11.99 Code=THANKS05 1 Day left at this price ![4.5] [12h 37m] Complete Organization Masterclass - Organize - Decluttering $9.99 Code=THANKS05 2 Days left at this price ![4.2] [1h 19m] The Complete Proofreading Course: Editing and Proofreading $11.99 Code=THANKS04 2 Days left at this price ![4.2] [30h 6m] The Human Resources Certification Program (HRCI - PHR/SPHR) $9.99 Code=YOURCOURSE 1 Day left at this price ![4.3] [42h 32m] Project Management Professional Certification Program (PMP) $11.99 Code=YOURCOURSE 1 Day left at this price ![4.4] [13h 39m] The Operations Management Training Program $11.99 Code=YOURCOURSE 1 Day left at this price ![4.1] [37h 7m] Risk Management for Business Analysts (PMI-RMP/IIBA-ECBA) $11.99 Code=YOURCOURSE 1 Day left at this price ! | https://www.hotukdeals.com/deals/free-udemy-courses-next-generation-memory-ethical-hacking-drawing-academy-machine-learning-microsoft-excel-docker-more-3798414 | 2021-09-24T19:53:54Z | hotukdeals.com - The Largest Deal Community in the UKhotukdeals is a community for deal seekers. Find and share the best deals, promotional codes and vouchers from on and off the web.*So you can easily identify outgoing links on our site, we've marked them with an "*" symbol. Links on our site are monetised, it means we can keep our platform free to use, without compromising on impartiality. Whether or not we're paid doesn't change how hot a deal can potentially get - that's only up to hotukdeals members. We believe it should always be up to our community to decide which deals fly and which fall. Read more about how hotukdeals makes money.hotukdeals is a credit broker, not a lender. This is in relation to some products falling under Broadband and Phone Contracts, Finance & Insurance and Car & Motorcycle.hotukdeals is a trading name of Pepper Deals Ltd (FRN 798319) an Appointed Representative of Funding Falcon (FRN: 743100) who is authorised and regulated by the Financial Conduct Authority (FCA). 
Pepper Deals Ltd is registered England and Wales. Number 9729292. Registered office: First Floor, 2-4 Holywell Lane, Hackney, London EC2A 3ET.Note: products under: Home & Living, Electronics, Culture & Leisure, Fashion & Accessories, Family & Kids, Sports & Outdoors, Groceries, Gaming, Travel, Health & Beauty, Services, Garden & DIY are not regulated by the FCA. If you wish to make a complaint or contact us, you can use the address above, or follow our complaints procedure. | Information Retrieval Or Search/Content Synthesis/Content Creation/Personalization | Education, Training, and Library/Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | MartianMan | 114° - Free Udemy Courses: Microsoft Windows Server 2019 - Hands-on Training, Local SEO 2021, Hatha Yoga, Microsoft Excel, LeetCode in Java & More | Have fun learning !![4.3] [5h 22m] Microsoft Windows Server 2019 - Hands-on Training Part II Code=F8A66F3492C38E649410 2 Days left at this price ![4.6] [9h 20m] Local SEO 2021 Made Simple & Fun + Google Maps & TripAdvisor Code=SEPTGOODNESS999 2 Days left at this price ![4.2] [7h 20m] Python-Introduction to Data Science and Machine learning A-Z Code=8111479BE9754018D772 2 Days left at this price ![4.1] [8h 45m] Artificial Intelligence (AI) in the Classroom Code=100OFF_SEPT2 1 Day left at this price ![3.7] [44m] 3D Animation Film-Making with Plotagon (2021 Edition) Code=TIMETORULE 1 Day left at this price ![New] [1h 59m] SQL Introduction: SQL Crash Course. Code=986373C54B42437BA1DD 2 Days left at this price ![4.1] [3h 9m] Cryptocurrency Course: Learn to Make Money Online WORLDWIDE! Code=0438037C2E9392C70C94 1 Day left at this price ![Highest Rated] [1h 0m] First Timers Guide to Approach A Girl - Basic Do's & Don'ts Code=FREEFORALL 1 Day left at this price ![4.4] [9h 16m] Neural Networks in Python: Deep Learning for Beginners Code=NNPSEP1 2 Days left at this price ![3.6] [1h 29m] Udemy bootcamp: How to create a STUNNING course Unofficial Code=76B203E19DA78ECF5E46 2 Days left at this price ![4.3] [ 4h 56m] Build A Search Engine With Python: Computer Science & Python Code=UPSKILL-NOW-FREE 2 Days left at this price ![New] [65h 23m] Hatha Yoga Unplugged - A Regular Practice Code=6040F8697EBCB141A211 1 Day left at this price ![4.6] [1h 48m] Fundamentals of Business Accounting 1: Learn Quick and Easy Code=58D32B9E5DC3624C657B 1 Day left at this price ![3.6] [57m] Data Science Approach from Scratch: An Easy Explanation Code=DSSEPT1 1 Day left at this price ![4.4] [1h 6m] Anti-Money Laundering Concepts: AML, KYC and Compliance Code=AMLSEP3 2 Days left at this price ![4.4] [5h 28m] Atlassian Confluence 
Masterclass for Project Managers Code=CONFLSEP3 2 Days left at this price ![4.5] [6h 19m] Zero to Hero in Microsoft Excel: Complete Excel guide 2021 Code=ZTHSEP3 2 Days left at this price ![4.4] [1h 9m] Electronics : Zener Diode & Light Emitting Diode Code=6FAEBFBB288C2CCB7D1E 1 Day left at this price ![4.2] [2h 22m] Next-Generation Memory, Learning & Communication Improvement Code=FREEMEMORYCOURSE 1 Day left at this price ![4.2] [ 27h 36m ] LeetCode in Java: Algorithms Coding Interview Questions Code=73F4E60880BEC81CD0AD 1 Day left at this price ![4.5] [41m] Scrum Ceremonies Simplified Code=ENROLL-4-FREE 1 Day left at this price ![4.2] [3h 25m] SQL- The Complete Introduction to SQL programming Code=A9CFB4FF1B9ED522AAAB 1 Day left at this price ![4.1] [5h 29m] Java for Beginners - Learn all the Basics of Java Code=EEDE4D32A81F3ABD559D 1 Day left at this price ![4.3] [1h 40m] Quantity Surveying Templates for Effective Cost Management Code=QSTEMPLATEFORYOU 1 Day left at this price ![4.3] [4h 46m] Learn Spanish Now: All-in-One Knowledge Course Code=FALLSALE 1 Day left at this price ![4.5] [2h 11m] Artificial Intelligence in Video Creation: Supreme Edition Code=TIMETORULE 1 Day left at this price ![4.3] [14h 32m] Data Structures and Algorithms for Coding Interview Code=17AE94D33F58E378DC7B 1 Day left at this price ![4.4] [1h 59m] Project Management Essential Training Code=UPSKILL-NOW-FREE 1 Day left at this price ![4.3] [7h 54m] PHP for Beginners 2021: The Complete PHP MySQL PDO Course Code=FULLSTACK25 1 Day left at this price ![4.3] [3h 20m] Complete Guide to Tailwind - Master Pinterest with Tailwind Code=5706C3328833371A45D9 1 Day left at this price ![4.5] [ 3h 28m] Building Programs with Python Hands-On & Effective Training Code=F1485960005F928BC301 1 Day left at this price ![4.6] [46m ] Build Laravel Applications fast using blueprint Code=7699456108BF998E1B4D 1 Day left at this price ![New] [ 5h 33m] Tableau Desktop: Hands-on Dashboards + SQL for Beginners 
Code=ENJOYTABLEAU3 1 Day left at this price ![4.9] [9h 21m] Copywriting & SEO for Beginners: Complete Copywriting Course Code=SEPTGOODNESS999 1 Day left at this price ![4.4] [9h 20m] Introduction into Facebook Marketing & Facebook Advertising Code=SEPTGOODNESS99 2 Days left at this price ![4.5] [2h 23m] OBS Studio - Ultimate Livestreaming Guide to OBS Studio Code=7EBB9B66DF9291C3C54B 2 Days left at this price ![4.6] [13h 23m] BEST of Digital Marketing Essentials 2021: Learn FUN & FAST Code=SEPTGOODNESS99 1 Day left at this price ![4.2] [4h 33m] Data Visualization in Excel: All Excel Charts and Graphs Code=DVESEP2 2 Days left at this price ![4.1] [4h 46m] Gatsby JS | Build a personal blog using gatsbyJS Code=SEP_2021 1 Day left at this price ![Highest Rated] [11h 25m] Local SEO & Facebook ADS as PROJECT & FUN: 10 Days CHALLENGE Code=SEPTGOODNESS999 1 Day left at this price ![Amazon] Emotional Intelligence Essentials & More 3 Days left at this price ![Amazon] Learn HTML (Free with Kindle Unlimited or $0.99) 4 Days left at this price ![Amazon] Start-up Growth Strategy (Free with Kindle Unlimited or $0.99) 4 Days left at this price !Popular Discounted Courses[Bestseller] [88h 18m] BEST of Digital Marketing: #1 Digital Marketing Course 2021 $11.99 Code=3SEPT999 3 Days left at this price ![4.5] [30h 13m] The Complete Sales Skills Master Class - Sales Marketing B2B $10.99 Code=THANKS05 3 Days left at this price ![4.5] [26h 2m] Complete Time Management Course Raise Personal Productivity $10.99 Code=THANKS05 3 Days left at this price ![4.5] [95h 23m] Ultimate PHP, Laravel, CSS & Sass! 
Learn PHP, Laravel & Sass $9.99 Code=BESTPRICE 4 Days left at this price ![4.1] [45h 32m] Master JavaScript - The Most Complete JavaScript Course 2021 $11.99 Code=BESTPRICE 4 Days left at this price ![4.6] [11h 23m] BEST of Google SEO 2021: SEO & Copywriting Made Simple & Fun $9.99 Code=1SEPT999 3 Days left at this price ![4.3] [45h 40m] Python Hands-On 46 Hours, 210 Exercises, 5 Projects, 2 Exams $9.99 Code=PYTHON-92 3 Days left at this price ![4.5] [12h 37m] Complete Organization Masterclass - Organize - Decluttering $9.99 Code=THANKS05 1 Day left at this price ![4.2] [1h 19m] The Complete Proofreading Course: Editing and Proofreading $11.99 Code=THANKS04 1 Day left at this price ! | https://www.hotukdeals.com/deals/free-udemy-courses-microsoft-windows-server-2019-hands-on-training-local-seo-2021-hatha-yoga-microsoft-excel-leetcode-in-java-more-3799028 | 2021-09-26T18:11:11Z |  | Personalization/Process Automation | Education, Training, and Library | null | null | null | null | null | null
|
news | BS Reporter | Healthcare AI startup Endimension raises Rs 2.3 cr in seed funding | The AI platform of the startup helps to automatically detect abnormalities and diseases, via AI algorithms from medical scans | https://www.business-standard.com/article/companies/healthcare-ai-startup-endimension-raises-rs-2-3-cr-in-seed-funding-121092200495_1.html | 2021-09-22T07:15:00Z | Healthcare AI startup Endimension has raised Rs 2.3 crore in a seed funding round from Inflection Point Ventures, one of India’s largest angel investment platforms. The funds raised will be utilised to acquire talent for new product development, R&D and for ramping-up the sales team to fuel further growth.The AI platform of the startup helps to automatically detect abnormalities and diseases, via AI algorithms from medical scans, so as to reduce patient misdiagnosis and increase Radiologists’ efficiency.“There is a global shortage of radiologists leading to radiologist burnout and patient misdiagnosis. The number of medical images is increasing exponentially whereas radiologists are only growing linearly. We wanted to bridge this demand-supply gap using our Radiology AI Platform. Our AI platform improves radiologist efficiency and reduces patient misdiagnosis,” said Bharadwaj Kss, founder & CEO, Endimension. He is an alum of IIT Madras and IIT Bombay and has experience of over a decade in AI and machine learning research.Ankur Mittal, co-Founder, Inflection Point Ventures, said, “Cutting edge technologies are being used in many complex treatments and surgeries. These are early days for application of AI and ML at the diagnostic stage, not only in India but worldwide. The use cases range from providing teleradiology services to medical institutions in the west to supporting lack of infrastructure in remote parts of India. 
Endimension is building a unique GTM for the Indian market and has entered into a couple of interesting JVs in related areas in the West, building additional valuation opportunities for its investors.” Endimension stood first in India at the LUNA-16 International Radiology AI Hackathon competition, competing with over 350 Indian teams. The startup has also received the “Life Science Award” at the Asian Entrepreneurship Awards in Japan in 2020. The global AI in medical diagnostics market is projected to reach $3,800 million by 2025 from $500 million in 2020, growing at a CAGR of 50.2% during the forecast period. Growth in this market is primarily driven by government initiatives to increase the adoption of AI-based technologies, increasing demand for AI tools in the medical field, a growing focus on reducing the workload of radiologists, the influx of large and complex datasets, growth in funding for AI-based startups, and the growing number of cross-industry partnerships and collaborations. | Detection and Monitoring/Prediction | Healthcare Practitioners and Support | null | null | null | null | null | null
|
news | Livemint | Ishitva raises over $1 million in Pre-Series A funding | Ishitva uses artificial intelligence, machine learning, and Internet of Things to build solutions which help sort complex waste. It focuses on leveraging industry 4.0 tools to sort waste and assess the quality of recyclable waste | https://www.livemint.com/companies/start-ups/ishitva-raises-over-1-million-in-pre-series-a-funding-11633693532182.html | 2021-10-08T11:53:49Z | BENGALURU: Robotics startup Ishitva has raised over $1 million as part of its Pre-Series A round, led by Inflection Point Ventures, the company said on Friday. The funds raised will be deployed to grow sales and expand Ishitva's research and development team, the Ahmedabad-based firm said. The three-year-old startup uses artificial intelligence (AI), machine learning (ML), and internet of things (IoT) to build solutions which help sort complex waste. The company focuses on leveraging industry 4.0 tools to systematically sort waste and assess the quality of recyclable waste. "The Indian waste management industry is a riddle and our unlimited dream is to eliminate the manual sorting of waste, which often requires humans to work in unhygienic conditions. Our vision is to install industry 4.0 solutions in over 4000 towns of the country picked up by the Swachh Bharat mission and we are thrilled to have IPV as a part of this journey," said Jitesh Dadlani, founder of Ishitva. "We are excited that many of the waste industry stalwarts are joining early in our journey. With IPV, we now have access to a large pool of professionals and experts, which is immensely useful at this stage for us," said Sandip Singh, chief executive of Ishitva. Ishitva currently offers a range of solutions, including SUKA, an AI-powered air sorting solution; YUTA, an AI-powered robotic sorting solution stack; and the Netra AI vision system, which helps in identifying waste, as well as smart bins. 
The AI-powered Netra identifies waste, learns about its contents by capturing images, and then scans for recyclable material. According to the ministry of housing and urban affairs, as of January 2020, India generated 1,47,613 million tons of solid waste. "This is a serious situation that our country is headed to, and we are seeing startups taking on this challenge to address the waste generated in the country. Ishitva's model is particularly innovative as they are leveraging cutting-edge technology to sort out the waste which is already being generated," said Mitesh Shah, co-founder, Inflection Point Ventures. Inflection Point Ventures looks to invest Rs 155 crore this year across 60-plus startups. | Detection and Monitoring/Process Automation | Unknown | null | null | null | null | null | null
|
news | ORBAI Technologies, Inc. | The Future of Ai Will Take a Different, More General Approach | ORBAI aims to develop a human-like AI with fluent conversational speech. Santa Clara, CA, Oct. 09, 2021 (GLOBE NEWSWIRE) -- The California-based startup ORBAI ... | https://finance.yahoo.com/news/future-ai-different-more-general-191000963.html | https://s.yimg.com/uu/api/res/1.2/sMIaLvHXeKBx_Z.qcuL_xA--~B/aD00OTt3PTI1NTthcHBpZD15dGFjaHlvbg--/https://media.zenfs.com/en/globenewswire.com/94bad3e2bfaaee71404d3d857189bd24 | 2021-10-09T19:10:00Z | ORBAI aims to develop a human-like AI with fluent conversational speech. Santa Clara, CA, Oct. 09, 2021 (GLOBE NEWSWIRE) -- The California-based startup ORBAI has developed and patented a design for AGI that can learn more like the human brain: by interacting with the world, encoding and storing memories in narratives, dreaming about them to form connections, creating a model of its world, and using that model to predict, plan and function at a human level, in human occupations. With this technology, ORBAI aims to develop Human AI, with fluent conversational speech on top of this AGI core, to provide AI professional services - from customer service to legal, medical, and financial advice - online and inexpensively, for the whole world. The core of the Legal AI has already been tested in litigation, with great success. Brent Oster, President/CEO of ORBAI, has helped Fortune 500 companies (and startups) looking to adopt AI, but consistently found that DL architectures and tools fell far short of their expectations for AI. Brent started ORBAI to develop something better for them. Today, if we browse the Internet for news on AI, we find that AI has just accomplished something humans already do, only far better. Still, it isn't easy to develop artificial general intelligence (AGI) through human-created algorithms. Do you think AGI may require machines to create their own algorithms? 
According to you, what is the future of machines that learn to learn? This is correct: today, people design deep learning networks by hand, defining the layers and how they connect, but after a lot of tinkering they can only get each network to do a specific task - CNNs for image recognition, RNNs for speech recognition, or reinforcement learning for simple problem-solving like games or mazes. All of these require a very well defined and constrained problem, and labelled data or human input to measure success and train. This limits the effectiveness and breadth of application of each of these specific methods. ORBAI has built a toolset called NeuroCAD (https://www.orbai.ai/neurocad-v4-1.htm) that uses genetic algorithms to evolve more powerful, general-purpose spiking neural networks, shaping them to fill in the desired functionality - so yes, the tools are designing the AI. One example is our SNN autoencoder, which can learn to take in any type of 2D or 3D spatio-temporal input, encode it to a latent, compressed format, and also decode it. The cool part is you don't have to format or label your data; it learns the encoding automatically. This takes the functionality of CNNs, RNNs, LSTMs, and GANs and combines it into one more powerful, general-purpose analog neural network that can do all these tasks. By itself this is very useful, as the output can be clustered, then the clusters labelled, or associated with other modalities of input, or used to train a conventional predictor pipeline. But this is for designing components. There is a second level to NeuroCAD that allows these components to be assembled and connected into structures, and these composite structures can be evolved to do very general tasks. For example, we may want to build a robot controller, so we put in two vision autoencoders for stereo vision, a speech recognition autoencoder for voice commands, and autoencoders for sensors and motion controllers. 
Then we put an AI decision-making core in the middle that can take in our encoded inputs, store them in memory, learn how sequences of these inputs evolve in time, and store models of what responses are required. Again, all of these autoencoders and components are evolved for their specific area, how they connect is evolved, and so is the decision core in the middle. To get this to work, we have to take some guesses about how to design this artificial decision core - the brain in the middle - and seed the genetic algorithms with a couple of decent designs, so it will process the sensory input, store it, build relationships between the memories, and build narratives of inputs and actions, with progressively more advanced models that make the robot better able to understand what to do given specific instructions and the state of its world. Once we have an initial guess, we start evolving the components, how they connect to each other, and the architecture of the decision-making core. So the short answer is yes, we will have evolutionary genetic algorithms design our AI - from the components, to the way they connect, to how they solve problems - starting with small 'brains' and working up, like biological evolution did. For details, see the ORBAI Patents and NVIDIA GTC presentations listed at the bottom of our AGI page: https://www.orbai.ai/artificial-general-intelligence.htm Many experts, including computer scientists and engineers, predict that artificial general intelligence (AGI) is possible in the near future. But ORBAI shows us that it is coming even sooner than we likely anticipated. Could you please shed some light on the project and tell us more about the 3D characters? 
What is usually meant is superhuman AGI, which is the apex of this process, but there are degrees and flavors of artificial general intelligence along the way:
- Having more general neural nets that can combine the functionality of CNNs, RNNs, RL, and other Gen 2 AI components into one neural net architecture that is more general and more powerful - One year
- Building an artificial intelligence that can take in sensory inputs, form memories and associations between them, plan and make decisions with them, at the level of an insect - Two years, a rodent - Three years
- Human-like conversational speech and general purpose decision making, but trained only in a specific vocation - Four years for first implementation, 6 years to make it really work. Some vocations like Law and Medicine have constrained spaces of information and decisions, so are easier than building a general human
- These vocational AIs can be trained independently, then later be migrated to a common architecture and combined to form a multi-skilled AGI. It would not be a general human AI, but would have superhuman capability to do areas of each profession, have deeper and wider knowledge reach, and the ability to model the future, plan and predict better than humans.
- Perfecting AGI, making a completely conversational, human-level general AI that is indistinguishable from us and can pass a Turing Test, will most likely require building a synthetic AGI that is much more powerful than human, that can then use all that power to emulate or mimic a human being, if that is what we want it to do.

What most people talk about as AGI is actually superhuman artificial general intelligence. But how do we measure "superhuman"? Deep learning AI is already superhuman in some very specific areas, and with advances like ORBAI is doing, will become superhuman in broader professional areas in analysis, planning, and prediction.
We will have better conversational speech, we might pass the Turing test in 4-6 years, but how can speech become superhuman after that? Mastering 8 languages or more? Hm, this gets a bit muddier. I think superhuman is when AGI can solve most problems and predict into the future far better than us.

We base our AGI curve on Moore's Law, and unlike current Gen 2 DNN based AI, we are using analog neural net computers that scale proportionally with existing hardware, and evolve to become more efficient and have greater capability with time.

So in summary, what ORBAI is building is an AGI that can take in and analyze large amounts of arbitrary formats and types of input data, build models of how its perceived world works, and make predictions and plan using those models, then apply that to specific fields like law, medicine, finance, enterprise planning, administration, agriculture, and others. Because human speech fits this concept of modelling a bi-modal sequence of events, it will be a feature, with the speech anchored to the rest of the memories and world data to give it context and relevance.

From ordering groceries through Alexa to writing an email with Siri, AI has been transforming many aspects of our lives. According to you, how will ORBAI's 3D characters help people transform their lives and bring a change?

I have personally used the Alexa, Google and Siri voice interfaces in my home and have done my best to integrate them into my life and make use of them, but I still find them difficult and awkward, always feeling like there is an easier way to do the same task on a mobile screen. I think this is because these voice interfaces are the equivalent of what we had with the DOS-era command-line interfaces, where you state a command, then a set of parameters, and they have to be properly formatted and correct, like "Alexis, what is the weather in Seattle tomorrow", and the speech has to be crisply enunciated in an abnormal, staccato pattern.
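The brittleness of that command-and-parameters style is easy to demonstrate. The toy parser below (a hypothetical sketch, not any real assistant's code) accepts only utterances that match one rigid template and fails on natural rephrasings of the same request:

```python
import re

# Hypothetical fixed grammar: a single weather-query template.
WEATHER_PATTERN = re.compile(
    r"^alexis, what is the weather in (?P<city>[a-z ]+) (?P<day>today|tomorrow)$",
    re.IGNORECASE,
)

def parse_command(utterance):
    """Return (city, day) if the utterance matches the rigid template, else None."""
    m = WEATHER_PATTERN.match(utterance.strip())
    if not m:
        return None
    return m.group("city").strip(), m.group("day").lower()

# The canonical phrasing parses...
print(parse_command("Alexis, what is the weather in Seattle tomorrow"))  # ('Seattle', 'tomorrow')
# ...but a natural rephrasing of the same request does not.
print(parse_command("Hey, is it going to rain in Seattle tomorrow?"))    # None
```

Anything outside the template falls through to `None`, which is the DOS-style failure mode the interview describes; a conversational system instead has to ground free-form speech in context rather than match it against fixed patterns.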
ORBAI did a lot of work in 2019, testing many speech APIs in the home and at conferences with holographic character kiosks, and found that most ordinary people cannot figure out how to talk to them properly, don't know how to cue the device to listen, and tend to launch into long, rambling monologues, so voice interfaces just don't work for them.

By creating a more advanced, conversational AI that uses our core technology to parse speech, understand the flow of human speech, and tie it to memories of real concepts, events, and contexts, we can do a more natural back and forth flow of conversation between the person and AI that is much more relevant and grounded, and the AI can direct the conversation to get specific information from the user. Having a 3D character onscreen is mostly there to get the person to look at the device and speak clearly into the microphone, and so the AI can watch the person speak, pick up facial expressions, and even lip-read to augment the speech recognition. The characters also have a cool factor, will make our products recognizable, and make for excellent branding. There are already attorneys that fear Justine Falcon, Legal AI.

Having inexpensive access to professional services like legal, medicine, finance, education, and other information professions online with AI would greatly improve many people's lives, even if it initially allows them to do a Q&A session with the AI and it leads them through better defining the issues they are having, then refers them to a professional, like a doctor, providing a concise and professionally written report with the correct language and format to that professional. This would reduce the office visit time, and determine whether coming in is even necessary. Speaking with a lawyer about an issue is difficult for an average person because law is so precisely defined, and the language differs greatly from plain language and concepts, so the AI would be like a bridge to translate between them.
For the developing world, extending the AI capability to doing medical diagnosis, and actually acting in a professional capacity in other fields, would provide people with their first and only access to these kinds of services, both changing and saving lives. With the advent of more advanced AGI for doing diagnosis, litigation, financial planning - the sky becomes the limit.

AI has already shown proficiency in automating a lot of tasks successfully. Still, most laypeople have a universal question: can AI completely replace flesh-and-blood professionals in the future, or will it just act as a support system to ease the pressure of work?

The two hardest professions to replace will be Housekeeper and Handyman, simply because these professions require great manual dexterity, the ability to solve a wide variety of unstructured spatial problems using various tools to accomplish an almost unlimited variety of tasks, and a robot body that is strong and dexterous enough with enough power to work all day at these tasks.

The simpler professions to automate with AGI will be the information professions, where a large amount of knowledge, mental models built on that knowledge, and a limited scope of actions or outcomes to perform constrain the problem. That is why we picked an AI lawyer and an AI doctor to be the first candidates for AGI - they are both structured information professions like this.

We have seen how in many situations, AI and automation augment professions. ATMs and online banking reduced the work that bank tellers personally have to do, but mostly offloaded the mundane and repetitive work. Most likely this trend of AI augmenting humans will continue.

We have been told that ORBAI is launching an equity fundraising campaign. Could you please tell us how people can invest in the future and, in turn, what benefits they will be getting?

Yes, ORBAI launched an equity crowdfunding campaign on 24 Sept 2021 on www.startengine.com/orbai.
The details of the offering are on our campaign page, but the SEC rules prevent us from communicating any specifics to the public, as it would be solicitation. StartEngine also has a great deal of general information about equity crowdfunding at www.startengine.com.

Media Contact:
ORBAI Technologies, Inc.
Brent Oster
+1 408-963-8671
brent.oster@orbai.com | Digital Assistance/Content Synthesis/Recommendation | Legal/Healthcare Practitioners and Support/Business and Financial Operations/Sales and Related | null | null | null | null | null | null
news | Kyle Wiggers | Enterprise NLP budgets are up 10% in 2021 | According to a new report from John Snow Labs and Gradient Flow, NLP investments among enterprises grew 10% year-over-year. | http://venturebeat.com/2021/09/21/enterprise-nlp-budgets-are-up-10-in-2021/ | 2021-09-21T13:00:27Z | The Transform Technology Summits start October 13th with Low-Code/No Code: Enabling Enterprise Agility. Register now!

Enterprises are increasing their investments in natural language processing (NLP), the subfield of linguistics, computer science, and AI concerned with how algorithms analyze large amounts of language data. According to a new survey from John Snow Labs and Gradient Flow, 60% of tech leaders indicated that their NLP budgets grew by at least 10% compared to 2020, while a third (33%) said that their spending climbed by more than 30%.

The goal of NLP is to develop models capable of understanding the contents of documents to extract information as well as categorize the documents themselves. Over the past decades, NLP has become a key tool in industries like health care and financial services, where it's used to process patents, derive insights from scientific papers, recommend news articles, and more.

John Snow Labs and Gradient Flow's 2021 NLP Industry Survey asked 655 technologists, about a quarter of whom hold roles in technical leadership, about trends in NLP at their employers. The top four industries represented by respondents included health care (17%), technology (16%), education (15%), and financial services (7%). Fifty-four percent singled out named entity recognition (NER) as the primary use case for NLP, while 46% cited document classification as their top use case.
By contrast, in health care, entity linking and knowledge graphs (41%) were among the top use cases, followed by deidentification (39%).

NER, given a block of text, determines which items in the text map to proper names (like people or places) and what the type of each such name might be (person, location, organization). Entity linking selects the entity that's referred to in context, like a celebrity or company, while knowledge graphs comprise a collection of interlinked descriptions of entities (usually objects or concepts).

The big winners in the NLP boom are cloud service providers, which the majority of companies retain rather than developing their own in-house solutions. According to the survey, 83% of respondents said that they use cloud NLP APIs from Google Cloud, Amazon Web Services, Microsoft Azure, and IBM in addition to open source libraries. This represents a sizeable chunk of change, considering the fact that the global NLP market is expected to climb in value from $11.6 billion in 2020 to $35.1 billion by 2026. In 2019, IBM generated $303.8 million in revenue alone from its AI software platforms.

Among the tech leaders John Snow Labs and Gradient Flow surveyed, accuracy (40%) was the most important requirement when evaluating an NLP solution, followed by production readiness (24%) and scalability (16%). But the respondents cited costs, maintenance, and data sharing as outstanding challenges.

As the report's authors point out, experienced users of NLP tools and libraries understand that they often need to tune and customize models for their specific domains and applications. "General-purpose models tend to be trained on open datasets like Wikipedia or news sources or datasets used for benchmarking specific NLP tasks. For example, an NER model trained on news and media sources is likely to perform poorly when used in specific areas of healthcare or financial services," the report reads.

But this process can become expensive.
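As a rough sketch of the NER task just described, here is a toy dictionary-based tagger. Real systems use trained statistical or neural models rather than lookup, and the gazetteer entries below are made-up examples for illustration only.

```python
# Illustrative gazetteer mapping known names to entity types (not a real model).
GAZETTEER = {
    "google cloud": "ORG",
    "amazon web services": "ORG",
    "ibm": "ORG",
    "seattle": "LOC",
}

def tag_entities(text):
    """Return sorted (span, type) pairs for gazetteer names found in the text."""
    lowered = text.lower()
    found = []
    for name, etype in GAZETTEER.items():
        idx = lowered.find(name)
        if idx != -1:
            # Recover the span with its original casing from the input text.
            found.append((text[idx:idx + len(name)], etype))
    return sorted(found)

print(tag_entities("IBM and Google Cloud both sell NLP APIs."))
# [('Google Cloud', 'ORG'), ('IBM', 'ORG')]
```

A lookup tagger like this has obvious failure modes (substring matches, unknown names, no context), which is why production NER relies on trained, domain-tuned models of the kind the survey's respondents consume via cloud APIs.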
In an Anodot survey, 77% of companies with more than $2 million in cloud costs (which include API-based AI services like NLP) said they were surprised by how much they spent. As corporate investments in AI grow to $97.9 billion in 2023, according to IDC, Gartner anticipates that spending on cloud services will increase 18% this year to a total of $304.9 billion.

Looking ahead, John Snow Labs and Gradient Flow expect growth in question-answering and natural language generation NLP workloads powered by large language models like OpenAI's GPT-3 and AI21's Jurassic-1. It's already happening to some degree. OpenAI says that its API, through which developers can access GPT-3, is currently used in more than 300 apps by tens of thousands of developers and producing 4.5 billion words per day.

The full results of the survey are scheduled to be presented at the upcoming NLP Summit, sponsored by John Snow Labs. "As we move into the next phase of NLP growth, it's encouraging to see investments and use cases expanding, with mature organizations leading the way," Dr. Ben Lorica, survey coauthor and external program chair at the NLP Summit, said in a statement. "Coming off of the political and pandemic-driven uncertainty of last year, it's exciting to see such progress and potential in the field that is still very much in its infancy."

VentureBeat

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact. Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations.
We invite you to become a member of our community, to access:
- up-to-date information on the subjects of interest to you
- our newsletters
- gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
- networking features, and more
Become a member | Content Synthesis/Detection and Monitoring | Healthcare Practitioners and Support/Computer and Mathematical/Education, Training, and Library/Business and Financial Operations | null | null | null | null | null | null
|
news | Abhishek Iyer, FreeText AI | Stuck in GPT-3’s waitlist? Try out the AI21 Jurassic-1 | The latest contender in the NLP arms race is AI21, with its 178 billion parameter model, Jurassic-1. And you're free to test it out. | http://venturebeat.com/2021/09/11/stuck-in-gpt-3s-waitlist-try-out-the-ai21-jurassic-1/ | 2021-09-11T13:21:03Z | In January 2020, OpenAI laid out the scaling law of language models: You can improve the performance of any neural language model by adding more training data, more model parameters, and more compute. Since then, there has been an arms race to train ever larger neural networks for natural language processing (NLP). And the latest to join the list is AI21 with its 178 billion parameter model.

AI21 background and founding team

AI21 is an Israeli company founded in 2017 by Yoav Shoham, Ori Goshen, and Amnon Shashua. Before this, Amnon founded Mobileye, the NYSE-listed self-driving tech company that Intel acquired for $15.4 billion. After being in stealth for years, AI21 launched its first product, Wordtune, in 2020 to help people write better.

Last month, the company announced it has trained and released two large NLP models, Jurassic-1 Large and Jurassic-1 Jumbo, via an interactive web UI called AI21 Studio. In contrast to OpenAI's closed beta access, AI21 makes its models available for anyone to try out without any waitlist.

Model sizes and performance benchmarks

Larger models exist, like the Chinese Wu Dao 2.0, which is 10x the size, with 1.75 trillion parameters. But AI21's J-1 Jumbo is the largest English language model available to the general public so far.

Caption: GPT-3 parameter sizes as estimated here, GPT-Neo as reported by EleutherAI, J-1 as reported by AI21.
* denotes the models are open source.

The zero-shot model performance on known benchmarks for J-1 Jumbo is on par with GPT-3 Davinci, the largest OpenAI GPT-3 model. Zero-shot is when the model is not given any special prompt and is not fine-tuned on any sort of training data specific to the task.

Caption: Zero-shot benchmark comparison as reported by AI21.

Examples

In a previous article, I walked through a number of examples to show GPT-Neo's real-world performance. Let us examine how well AI21's models perform in real practice.

Fact completion. Let's start by asking Jurassic-1 some basic general knowledge questions. My prompts to the model are given in italics and the model's response in bold.

How many medals did USA win in 2012 Olympics? 104 ##
How many golds did USA win in 2016 Olympics? 46 ##

That is the correct answer! What stood out:
- The model is smart enough to figure out what we mean by golds in the question, while the prompt was talking about medals.
- J-1 Jumbo 178B gets this right, but J-1 Large 7.5B does not!
- Trying the same question with the 2021 Olympics does not work (probably because the model is not continuously trained with fresh data).

Neural Jeopardy! Taking it one step further, how about a Jeopardy-style question-answer dialog? Thanks to the good folks at Water Cooler Trivia (WCT), we already have a question-answer set, a human benchmark, and a benchmark for GPT-3.

Running through the 157 Jeopardy-style WCT questions, the J-1 Jumbo model was able to answer with 55.4% accuracy. This compares favorably to the 52% average of humans who had taken the WCT. However, it is significantly worse than the 73% accuracy rate of GPT-3.

On easy questions, J-1 Jumbo did three times worse than GPT-3. Among the 58 questions that 90+% of the human participants got right, GPT-3 was wrong 4 times, but J-1 was wrong 12 times.

Here are some of the questions GPT-3 got right but Jurassic-1 Jumbo did not:

Q: Originally, they were called Baby Gays.
Then they rebranded as Q-Tips, with the Q standing for what word related to high standards?
GPT-3: Quality
AI21: Quarts

Q: Although the most popular song on the film's soundtrack is This Is Halloween and the main character's name is Jack Skellington, the 1993 cult-classic Tim Burton film that was nominated for a visual effects Oscar includes what OTHER holiday in the film's title?
GPT-3: The Nightmare Before Christmas
AI21: SLEEPY HOLLOW. The 1999 film, starring Johnny Depp, Christina Ricci, Miranda Richardson, Casper

As the questions got tougher, both models performed similarly. Among the 20 toughest questions, GPT-3 was wrong 13 times, and Jurassic-1 was wrong 14 times. And nine of the incorrect answers by both models were the same, i.e., the models were wrong in the same way!

Q: Spoonerism: two pairs of words with the first sounds or letters swapped, such as blushing crow and crushing blow. What spoonerism describes these two objects? A feigned animal and large meal cooked in an oven.
GPT-3: A feigned animal and large meal cooked in an oven.
AI21: (1) A feigned animal and large meal cooked in an oven. (2) A feigned anima
Correct Answer: faked beast

Q: Mario Lopez hosted the first seven seasons of what dance competition show from 2008-2012?
GPT-3: Dancing with the Stars
AI21: Dancing with the Stars.
Correct Answer: America's Best Dance Crew

The J-1 model sometimes thought of itself as being in a game show, leading to hilarious completions that would fit right into a game-show script. (It got the second answer wrong after the drum-roll):

Q: What shedding-type card game has been a Mattel brand since 1992 and features cards with titles such as Wild and Reverse?
AI21: UNO! UNO! UNO! UNO! UNO! UNO! UNO!
Correct Answer: Uno

Q: In Rolling Stone's latest version of the greatest guitarists of all time, No. 5 was a man with a monosyllabic first and last name. This man was one of the three guitarists for The Yardbirds, and he spent decades exploring innovative guitar sounds.
Who is he?
AI21: (Drum roll, please) Eric Clapton
Correct Answer: Jeff Beck

Possible reason for J1-Jumbo's underperformance

I ran a few modifications of the experiment (some with GPT-3's Q&A prompts and some without), and the best-scoring benchmarks for J1-Jumbo were from a zero-shot setting. However, the GPT-3 numbers reported here are from a few-shot setting with an OpenAI-provided example prompt.

Zero-shot is when the model is not given any special prompt, and few-shot is when there is an example prompt given to the model that exhibits the task. An example prompt in this case would be a bunch of random question/answer pairs. This helps the model understand the task it is trying to perform, i.e., answer the question.

The Q&A prompt used for the GPT-3 benchmark was likely tuned and engineered for optimal performance with GPT-3's models, but a similar official tuned prompt for Q&A does not exist yet for AI21's models. Engineering a prompt that would compete with the GPT-3 benchmark does not seem like a trivial task. I observed that the generated output was sensitive to seemingly trivial changes to the prompt text, like an empty line in the beginning of the prompt or an extra space at the end of each line of the prompt. Hopefully, AI21 will put out an exhaustive set of best-performing prompts to help mitigate this.

You can explore a detailed model comparison, experiment setting, and raw data here.

Creative writing

Now let's see how the model does when it comes to creative writing. I picked a somewhat difficult task: a period piece about Twitter written as the late Jerome K. Jerome. Mario Klingemann tried this prompt first with GPT-3, and you can see the GPT-3 response in this thread.

Below is the response by AI21 J-1 Jumbo (178B). Again, the input text is in italics and J-1's response is in bold:

The importance of being on twitter
by Jerome K.
Jerome

London, Summer 1897

It's possible that I have confused the sequence of events.
Having learned the rudiments of twittering, I pressed on,
and soon found myself moving among the twittering classes.
I recognized at once that these people were different from
those I had known before. They were called twitterers, and
they twittered. That was all I knew about them.
They twittered, and they twittered.
At first I twittered too. But I soon found that I
could not live by twittering alone, and that, moreover,
the twittering interfered with the serious business of life.
So, having first secured the goodwill of the people
who had known me before, I began to look round for some
new form of life. And when I met a man, well dressed
and sleek, with a cigar in his mouth, and a gold watch
chain across his waistcoat, and an eye-glass in his
eye, I said to him, "Sir, what do you do for a living?"
He said, "I am a professional gentleman."
I said, "What, sir, do you call yourself a gentleman?"
He said, "I am a barrister-at-law."

I also tried this prompt with the J1-Large 7.5B model. After a few tries, the model gave out what, in my opinion, is a genuinely creative piece:

The importance of being on twitter
by Jerome K. Jerome

London, Summer 1897

It's Sunday, and we're having tea.
"I suppose," says I, "that you're twittering."
"Oh, yes," says she, "I twitter all the time."
I twitter about the weather,
I twitter about people,
I twitter about books,
I twitter about politics,
I twitter about art
I twitter about anything and everything

I found the piece impressive because it managed to write in a dated language in a rhythmic prose that managed to preserve the overall theme of social networks.

How to try out Jurassic-1

Unlike GPT-3, Jurassic-1 is accessible to everybody. You can access it from the AI21 studio (account creation does require phone-number authentication).

The free tier allows 10K tokens per day for the Jurassic-1 178B model and three times as much for the smaller Jurassic-1 7.5B model.
That is enough to try out using the web UI, but not enough to use the API to run any sort of tests or benchmarks.

AI21 will be commercializing its models through an offering called AI21 Studio, which is currently in limited open beta. The company hasn't announced a pricing model for this commercial usage yet.

The bottom line

Issues surrounding AI safety, ethics, and biases have been a matter of concern with neural language models, and they remain with AI21's models. Keeping those issues aside for a moment, AI21's models seem to be a promising substitute for GPT-3. However, they lag behind on a few fronts:
- They lack the ability to use specialized models like GPT-3 davinci-instruct, which spurs GPT-3 to follow instructions given as prompts, or GPT-3 codex, which specializes in writing code.
- The prompt ecosystem is still not as mature as GPT-3's. Many of GPT-3's prompts do not directly translate to AI21, and an exhaustive official list of prompts is not yet available.
- AI21's free token quota is too restrictive, and there is no usage-based pricing announced as of yet. This makes it difficult to run benchmarks or do prompt engineering. Still, you can always write to them with an explanation of the requirement and they are happy to bump up the quota (like they did for me).

However, it's still very early days for AI21. With time, we can expect the AI21 language models to be a viable alternative to the OpenAI language models.

Abhishek Iyer is the founder of FreeText AI, a company specializing in text mining and Amazon review analysis.
| Content Creation/Content Synthesis/Digital Assistance | Unknown | null | null | null | null | null | null
|
news | James Vincent | Everyone will be able to clone their voice in the future | Artificial intelligence has made it easy and quick to create realistic voice clones of people. But what is this speech synthesis technology going to do? We explore the issue as the first in a series of AI-focused episodes from our flagship podcast, The Vergecast. | https://www.theverge.com/22672123/ai-voice-clone-synthesis-deepfake-applications-vergecast | 2021-09-14T13:04:40Z | Illustration by Alex Castro / The Verge

AI speech synthesis is quick, easy, and uncannily good

Cloning your voice using artificial intelligence is simultaneously tedious and simple: hallmarks of a technology that’s just about mature and ready to go public. All you need to do is talk into a microphone for 30 minutes or so, reading a script as carefully as you can (in my case: the voiceover from a David Attenborough documentary). After starting and stopping dozens of times to re-record your flubs and mumbles, you’ll send off the resulting audio files to be processed and, in a few hours’ time, be told that a copy of your voice is ready and waiting. Then, you can type anything you want into a chatbox, and your AI clone will say it back to you, with the resulting audio realistic enough to fool even friends and family — at least for a few moments.

The fact that such a service even exists may be news to many, and I don’t believe we’ve begun to fully consider the impact easy access to this technology will have.

Voice clones aren’t perfect, but they’re improving fast

The work of speech synthesis has improved massively in recent years, thanks to advances in machine learning. Previously, the most realistic synthetic voices were created by recording audio of a human voice actor, cutting up their speech into component sounds, and splicing these back together like letters in a ransom note to form new words.
Now, neural networks can be trained on unsorted data of their target voice to generate raw audio of someone speaking from scratch. The end results are faster, easier, and more realistic to boot. The quality is definitely not perfect straight out of the machine (though manual tweaking can improve this), but it’s only going to get better in the near future.

There’s no special sauce to making these clones, which means dozens of startups are already offering similar services. Just Google “AI voice synthesis” or “AI voice deepfakes,” and you’ll see how commonplace the technology is, available from specialist shops that only focus on speech synthesis, like Resemble.AI and Respeecher, and also integrated into companies with larger platforms, like Veritone (where the tech is part of its advertising repertoire) and Descript (which uses it in the software it makes for editing podcasts).

A vocal deepfake of Anthony Bourdain caused controversy

These voice clones have simply been a novelty in the past, appearing as one-off fakes like this Joe Rogan fake, but they’re beginning to be used in serious projects. In July, a documentary about chef Anthony Bourdain stirred controversy when the creators revealed they’d used AI to create audio of Bourdain “speaking” lines he’d written in a letter. (Notably, few people noticed the deepfake until the creators revealed its existence.) And in August, the startup Sonantic announced it had created an AI voice clone of actor Val Kilmer, whose own voice was damaged in 2014 after he underwent a tracheotomy as part of his treatment for throat cancer. These examples also frame some of the social and ethical dimensions of this technology.
The Bourdain use case was decried as exploitative by many (particularly as its use was not disclosed in the film), while the Kilmer work has been generally lauded, with the technology praised for delivering what other solutions could not.

Celebrity applications of voice clones are likely to be the most prominent in the next few years, with companies hoping the famous will want to boost their income with minimal effort by cloning and renting out their voices. One company, Veritone, launched just such a service earlier this year, saying it would let influencers, athletes, and actors license their AI voice for things like endorsements and radio idents, without ever having to go into a studio. “We’re really excited about what that means for a host of different industries because the hardest part about someone’s voice and being able to use it and being able to expand upon that is the individual’s time,” Sean King, executive vice president at Veritone One, told The Vergecast. “A person becomes the limiting factor in what we’re doing.”

Influencers, actors, and celebrities could rent out their voices with minimal effort

Such applications are not yet widespread (or if they are, they’re not widely talked about), but it seems like an obvious way for celebrities to make money. Bruce Willis, for example, has already licensed his image to be used as a visual deepfake in mobile phone ads in Russia. The deal allows him to make money without ever leaving the house, while the advertising company gets an infinitely malleable actor (and, notably, a much younger version of Willis, straight out of his Die Hard days). These sorts of visual and audio clones could accelerate the economies of scale for celebrity work, allowing them to capitalize on their fame — as long as they’re happy renting out a simulacrum of themselves.

In the here and now, voice synthesis technology is already being built into tools like the eponymous podcast editing software built by US firm Descript.
The company’s “Overdub” feature lets a podcaster create an AI clone of their voice so producers can make quick changes to their audio, supplementing the program’s transcription-based editing. As Descript CEO Andrew Mason told The Vergecast: “You can not only delete words in Descript and have it delete the audio, you can type words and it will generate audio in your voice.”

Image: Descript
Podcast editing software Descript uses AI voice clones to edit speech like a transcript.

When I tried Descript’s Overdub feature myself, it was certainly easy enough to use — though, as mentioned above, recording the training data was a bit of a chore. (It was much easier for my colleague and regular Verge podcast host Ashley Carman, who had lots of pre-recorded audio ready to send the AI.) The voice clones made by Overdub are not flawless, certainly. They have an odd warble to their tone and lack the ability to really charge lines with emotion and emphasis, but they’re also unmistakably you.

The first time I used my voice clone was a genuinely uncanny moment. I had no idea that this deeply personal thing — my voice — could be copied by technology so quickly and easily. It felt like a meeting with the future but was also strangely familiar. After all, life is already full of digital mirrors — of avatars and social media feeds that are supposed to embody “you” in various forms — so why not add a speaking automaton to the mix?

Cloning my voice felt like a meeting with the future

The initial shock of hearing a voice clone of yourself doesn’t mean human voices are redundant, though. Far from it. You can certainly improve on the quality of voice deepfakes with a little manual editing, but in their automated form, they still can’t deliver anywhere near the range of inflection and intonation you get from professionals.
As voice artist and narrator Andia Winslow told The Vergecast, while AI voices might be useful for rote voice work — for internal messaging systems, automated public announcements, and the like — they can’t compete with humans in many use cases. “For big stuff, things that need breath and life, it’s not going to go that way because, partly, these brands like working with the celebrities they hire, for example,” said Winslow.

But what does this technology mean for the general public? For those of us who aren’t famous enough to benefit from the technology and are not professionally threatened by its development? Well, the potential applications are varied. It’s not hard to imagine a video game where the character creation screen includes an option to create a voice clone, so it sounds like the player is speaking all of the dialogue in the game. Or there might be an app for parents that allows them to copy their voice so that they can read bedtime stories to their children even when they’re not around. Such applications could be done with today’s technology, though the middling quality of quick clones would make them a hard sell.

There are also potential dangers. Fraudsters have already used voice clones to trick companies into moving money into their accounts, and other malicious uses are certainly lurking just beyond the horizon. Imagine, for example, a high school student surreptitiously recording a classmate to create a voice clone of them, then faking audio of that person bad-mouthing a teacher to get them in trouble. If the uses of visual deepfakes are anything to go by, where worries about political misinformation have proven largely misplaced but the technology has done huge damage creating nonconsensual pornography, it’s these sorts of incidents that pose the biggest threats.

One thing’s for sure, though: in the future, anyone will be able to create an AI voice clone of themselves if they want to.
But the script this chorus of digital voices will follow has yet to be written. [Video: Illustration by Alex Castro / The Verge] | Content Creation/Process Automation | Unknown | null | null | null | null | null | null |
|
news | Beyond Codes signs 7 AI product companies | SIMI VALLEY, Calif., Sept. 29, 2021 /PRNewswire/ -- A global Lead Generation company, Beyond Codes is assisting some of the fastest growing IT companies to expand their market reach through B2B appointment setting, Account Based Marketing and Events & Webinars. While Tech Giants are... | https://www.prnewswire.com/news-releases/beyond-codes-signs-7-ai-product-companies-301387890.html | 2021-09-29T14:10:00Z | While Tech Giants are aggressively integrating AI-powered technologies into their products, there are numerous start-ups too, who are vying for a toehold in the AI space. Most companies want to have effective data-driven decision making at their core. In fact, according to Forbes, the pandemic has accelerated the use of AI-based technology and it is not just the Healthcare and Education industries that have monopolized its use, it is uniformly employed across industries.

In such an atmosphere of bustle, Beyond Codes has acquired 7 AI product companies as clients, in the last quarter itself. Spread across North America, Europe, India and Australia, these companies are serviced out of our global delivery centers, with 360-degree support from our Sales and In House Research team.

"In a data-driven world, no company has been left untouched by AI Technology. Some might be at the beginning of their digital journey, others a little more technology savvy and others yet, real pathbreakers, but realistically speaking, every business operation today seeks to take advantage of AI systems built on mountains of web data that is publicly available. We are at an inflection point and businesses are transforming as we speak. Beyond Codes has a deep understanding of the technology and the wherewithal to help companies penetrate this market space," said Gaurav Kumar, CEO of Beyond Codes.

Having 250 plus campaigns and 84 customers, most of whom are new-age companies, makes Beyond Codes the frontrunner in the lead generation segment.
Ashish Sharma, Global Head-Sales, said, "It has been a busy year, and as more companies realize how AI is impacting revenue increase and cost reduction, this space will heat up further, and Beyond Codes recognizes this strategic opportunity and has scaled up at the right time to handle the upswing."

About Beyond Codes

Beyond Codes is one of the Top B2B Lead Generation companies that lines up with your business objectives to accelerate your Complete Sales Funnel Needs. We assist you to generate relevant and Targeted Sales Leads, personalize marketing at each account level, nurture leads, and ease your way into B2B appointment setting with qualified prospects.

We specialize in curating demand generation strategies for your next-gen services including Machine Learning, Digital Transformation, Analytics, RPA, Artificial Intelligence and more.

Media Contact:
Shyamasree Banerjee
Email: [email protected]
+91 8146275155
www.beyondcodes.com

SOURCE Beyond Codes Inc | Content Synthesis/Decision Making/Information Retrieval Or Search | Management/Business and Financial Operations | null | null | null | null | null | null
||
news | Nidhi Singal | Helping build AI-ready generation in India, says Shweta Khurana of Intel | Shweta Khurana of Intel, in an exclusive conversation with Business Today, talks about the company's strategic initiatives and helping build an AI-ready generation in India. | https://www.businesstoday.in/latest/corporate/story/helping-build-ai-ready-generation-in-india-says-shweta-khurana-of-intel-308023-2021-09-29 | 2021-09-29T17:08:41Z | Recognising artificial intelligence (AI) as the next inflection of computing technology that will deliver profound, pervasive benefits across societies, Intel has collaborated with CBSE and NeGD, Ministry of Electronics and IT for nurturing a technology innovation mindset among young students across India.

Intel's AI For Youth program has already skilled 1,13,487 youth in AI-readiness. The technology giant has also collaborated with Kendriya Vidyalaya Sanghathan to set up India's first AI Skills Lab at Dr Rajendra Prasad Kendriya Vidyalaya in New Delhi. Shweta Khurana, Director - APJ, Government Partnerships & Initiatives, Global Government Affairs at Intel, in an exclusive conversation with Business Today, talks about Intel's strategic initiatives and helping build an AI-ready generation in India.

When did Intel begin the AI-knowledge initiatives for students? Did this initiative start in India or in another country?

We are collaborating with governments and communities globally to build a digital-first mindset and expand access to the AI skills needed for current and future jobs. Intel's RISE strategy, our corporate responsibility initiative, aims to make technology inclusive and expand digital readiness for all.

We aim to scale Intel's Digital Readiness Programs portfolio across 30 countries, by partnering with 30,000 institutions to empower 30 million people for current and future jobs by 2030.
For every country, we offer content, train the lead coaches, provide curriculum integration support and localisation support so that our partners can customise the content as per country needs to support program scale and sustainability.

In India, our collaboration with the Central Board of Secondary Education (CBSE), Ministry of Education has been a meaningful step towards enabling India's youth to become digitally empowered and effectively utilise emerging technologies such as AI to solve pressing local and global challenges.

In 2019, we co-curated an outcome-oriented and localized AI curriculum for students in grades 8 to 10. The curriculum, based on Intel AI for Youth, was piloted with eight schools and trainings were conducted for youth to create AI projects focused on social impact. In 2020, Intel and CBSE signed an MoU that outlined initiatives geared towards increasing the pace of AI integration in India's education system. These initiatives included expanding the roll-out of AI curriculum for students, setting up focused AI Skills Labs, and creating AI-readiness by skilling facilitators in CBSE schools pan India.

This was just the beginning. In 2020, we collaborated with National e-Governance Division (NeGD), Ministry of Electronics & IT to launch Responsible AI For Youth, an initiative to impart AI Readiness skills in State Government Schools.

Are all these AI initiatives a part of Intel's CSR activity or does Intel have a partnership with the government? If it is the latter, what is the duration of this partnership with the government?

Both. We have raised the bar for ourselves and evolved our corporate responsibility strategy to increase the scale of our work with others to create a more responsible, inclusive and sustainable world, enabled through technology and our collective actions.
A key component of this ambitious RISE strategy is our commitment to making technology fully inclusive and expanding digital readiness.

Increased digitalisation requires investment in digital readiness for countries to remain competitive in the global economy. Digital readiness encompasses people's skills, trust, and use of emerging technologies responsibly and effectively for broader socio-economic benefits.

We have rolled out Intel Digital Readiness Programs globally in partnership with government, academia, civil society, and industry stakeholders as a shared-value initiative to demystify and democratise emerging technologies such as AI. As part of our 2030 RISE goals, Intel is committed to implementing the program in 30 countries, enabling access to 30,000 institutions, and training 30 million people for current and future jobs by 2030 with the support of government and academia.

Intel's Digital Readiness Programs include AI for Youth to empower youth with AI tech and social skills in an inclusive way, AI for Citizens to demystify AI for the public and community, AI for Current Workforce to enable and augment existing workforce capacity with AI skills for work and life impact, AI for Future Workforce to empower the future workforce with necessary AI skills for employability in the digital economy, and Digital Readiness for Leaders to educate and empower government leaders on emerging technologies.

In India, our programs are implemented as public-private partnerships in collaboration with multiple government organizations.

AI For All is a great initiative, but do you think four hours is enough for students and the other target audience to understand AI?

Intel recognises that the true transformational value of AI for industries and society at large is dependent on public knowledge and trust.
Hence, we focused on basic AI awareness and appreciation through this four-hour self-paced, microlearning program that any non-technical person can take. It describes the value of AI, its potential, possibilities, and limitations in fun and engaging ways for anyone to grasp and then decide the next steps for learning or adoption.

Anyone interested in the AI For All self-learning online course can sign up for free. The four-hour open content resource is divided into two sections -- AI Awareness (1.5 hours) and AI Appreciation (2.5 hours). The segment on AI Awareness provides an elementary understanding of AI, misconceptions around AI and its application potential. The AI Appreciation segment helps learners understand the key domains of AI, its impact across industries, and gets them started on building personal learning plans. At the end of each stage, participants will be given personalised digital badges that can be shared on social media.

To ensure inclusive access, the program is available in 11 vernacular languages for anyone with digital access. The content is also compatible with various talkback applications to make it accessible for visually impaired people.

If a student is interested in pursuing AI further, is the government offering additional courses? Is Intel designing any such course (AI skill-based course) for CBSE which can be included as a part of the curriculum or introduced as an optional subject in schools?

Intel, together with CBSE and the Government of India, has been working to integrate AI into the country's education system. Our comprehensive AI for Youth program is aimed at empowering the young generation to become AI-ready. Together with CBSE, Intel has curated an outcome-oriented AI curriculum for students in grades 8 to 10. It provides immersive, hands-on learning using experiential methodologies covering both social and technological skills.
Students in CBSE schools can opt for AI as an optional vocational skill subject from grade 8 onwards and as an optional employability subject in either grade 6, 7 or 8.

Participating youth have learned how to create social impact projects solving various problems using AI. A sampling of projects includes a system to digitally convert handwritten complaints by rural citizens to government representatives, a method to predict depression among school children, a mini robot to remind the elderly to take their medications, a system to monitor bus driver speeding habits, and a drone that searches for missing people. Intel and CBSE, in consultation with academic experts, also developed an orientation manual, curriculum orientation sessions for principals and teachers, and frequently asked questions to support school leaders in integrating AI as a subject.

We have also collaborated with Kendriya Vidyalaya Sanghathan to set up India's first AI Skills Lab at Dr Rajendra Prasad Kendriya Vidyalaya, New Delhi in accordance with the AI curriculum specified by CBSE. The lab is a dedicated space for hands-on experiential AI learning and is equipped with technologies that are open source and flexible in nature. It has been designed to support the journey of a learner towards becoming AI-ready as they go from AI learning to AI model training and inferencing. The school is using the lab for all students studying in grade 8 and those who adopt AI as a skill subject in grade 9 and above. It is divided into three areas: a learning corner, a development corner for training on AI models, and an inferencing corner.

Intel and CBSE have also launched the AI Student Community (AISC) - an online community for students and teachers to come together to spread AI awareness in an inclusive way. AISC has been conceptualised to create a community of practice to allow youth to engage in peer-to-peer learning, gain access to AI resources, build more AI projects and create social impact.
It is open to all students and teachers pan India.

To further empower government school students with the appropriate new-age technology mindset, relevant skill sets and access to required tools, Intel collaborated with the Ministry of Electronics and Information Technology, Government of India, NeGD, to launch Responsible AI For Youth - a custom, national AI education program for students in grades 8 to 12 from government schools across the country. By having students create indigenous solutions to solve local issues, the program underscores the importance of skills-based education and the value of project-related work to harness AI in a sustainable manner and make India's next generation AI-ready.

Is Intel spotting special talent under these programs to offer scholarships?

Currently, our focus remains enabling AI skills for students across India. We continue to look for new ways to nurture talent in the country.

Were these programs created by senior engineers in Intel? Are AI experts in Intel actively involved with the students too, in terms of interaction?

These programs were created collaboratively with academic experts, AI technologists and social impact experts to build a high-tech and high-touch experiential learning path that would be most effective for the audience it is intended for. We have also validated our content through multiple government organizations and academic experts across the globe. Further, we ran pilots in four countries and oriented school principals, teachers, and trained students extensively before the global launch.

What has been the impact of all these initiatives?

Launched in collaboration with CBSE and the Ministry of Education in July 2021, AI For All aims to create a basic understanding of and expand literacy around AI.
The program's goal is to introduce AI to 1 million citizens in its first year and it is available in 11 vernacular languages for anyone with digital access. We are also working to make the program accessible to the hearing impaired. Participants are given personalised digital badges that they can share on social media. So far 71,033 course badges have been downloaded.

Intel AI for Youth program aims to help close the AI skills gap, especially in youth. In India, Intel AI for Youth has skilled 1,13,487 students and 12,464 educators across 10,198 schools in 35 states and union territories.

Responsible AI For Youth program was specifically developed for India, and has impacted more than 52,628 students from 5,000 cities in 35 states and union territories across India. It provides an opportunity to the youth from small towns and cities to upgrade their skills, prepare to join the AI-led movement, and contribute to and lead the fourth industrial revolution.

After working with the government to establish the country's first AI skills lab in Kendriya Vidyalaya, President's Estate, New Delhi, the AI Skills Lab model for building a dedicated space for experiential AI learning is now being replicated in four other states in India.

Lastly, we partnered with CBSE to launch the AI for Youth Virtual Symposium, to create an enabling AI ecosystem and peer-to-peer AI skills learning. The inaugural virtual symposium in October 2020 featured over 200 multi-disciplinary AI integrated lesson plans in digital format to enhance teaching and learning for grades 6 to 12. Over five days, more than 1,15,561 unique users from 69 countries spent more than 35,983 total hours on learning AI. Intel and CBSE also achieved a new Guinness World Record for the most users to take an online AI lesson in 24 hours. | Content Creation/Content Synthesis/Digital Assistance | Education, Training, and Library/Community and Social Service | null | null | null | null | null | null
|
news | Mark Muro, Sifan Liu | How to prevent a winner-take-most outcome for the U.S. AI economy | A NASA satellite image shows the United States, Mexico and Canada at night in this composite image. (Reuters) Artificial intelligence has the potential to transform local economies. Hype and fear notwithstanding, many experts forecast various forms of AI to become the source of substantial economic growth and whole new industries. Ultimately, AI’s emerging capabilities have… | http://www.brookings.edu/techstream/how-to-prevent-a-winner-take-most-outcome-for-the-u-s-ai-economy/ | 2021-10-06T22:03:53Z | Artificial intelligence has the potential to transform local economies. Hype and fear notwithstanding, many experts forecast various forms of AI to become the source of substantial economic growth and whole new industries. Ultimately, AI's emerging capabilities have the potential to diffuse significant productivity gains widely through the economy, with potentially sizable impacts.

Yet even as American companies lead the way in pushing AI forward, the U.S. AI economy is far from evenly distributed. In fact, as we found in a recent report, AI development and adoption in the United States is clustered in a dangerously small number of metropolitan centers.

Our research suggests that while some AI activity is distributed fairly widely among U.S. regions, a combination of first-mover advantage, market-concentration effects, and the “winner-take-most” dynamics associated with innovation and digital industries may already be shaping AI activity into a highly concentrated superstar geography in which large shares of R&D and commercialization take place in only a few regions.
This could lead to a new round of the tech-related interregional inequality that has led to stark economic divides, large gains for a few places, and further entrenchment of a geography of discontent in politics and culture.

For that reason, we would argue that the nation should actively counter today's emerging interregional inequality. Where it can, the government should act now while AI geography may still be fluid to ensure that more of the nation's talent, firms, and places participate in the initial build out of the nation's AI economy.

AI and especially machine learning (ML) applications are proliferating rapidly, according to our assessment. Increasingly, AI applications are being utilized in a wide range of industry sectors, from health care, finance, and information technology to sales, marketing, entertainment, and national security. Beyond that, the power and broad applicability of AI's emerging capabilities ensure that the technology has the potential to transform these industries, adding to their efficiency and capacities. Which is why numerous economists and business scholars believe that AI has the potential to be the most important general-purpose technology of our era, as Erik Brynjolfsson and Andrew McAfee assert.

Which is, in turn, why our report underscores the potential benefits of AI development for a good number of the nation's regional economies. If it spreads in a favorable way, AI could become a widespread source of economic development. Indeed, a recent analysis by Nicholas Bloom of Stanford University maps how the diffusion of 29 recent disruptive technologies has tended to spread related jobs beyond their pioneer regions into more places over time. AI may well follow in kind.

In this vein, it is encouraging that our research suggests that as many as 125 U.S.
metros now support at least a modest degree of AI-related research-and-development and/or AI commercial activity. An even, widespread diffusion of AI in the next decade could provide a welcome and widespread source of growth and productivity.

And yet, even Nicholas Bloom's relatively encouraging analysis of tech diffusion emphasizes that the initial frontier hubs out of which new technologies emanate tend to dominate the associated industries for decades. Such early hubs with their intense and beneficial clustering seem to retain their preeminence.

And in fact, our mapping of the current U.S. AI industry suggests that AI activity is already highly concentrated in a short list of superstar and early adopter hubs, often arrayed along the coasts. These hubs encompass the bulk of the nation's AI research and commercialization. In this regard, only 36 U.S. metros have developed a substantial AI research or commercial presence. What's more, just 15 dense agglomerations (two of which are the Bay Area's San Francisco and San Jose) make up 40% to 50% of the nation's AI research and 50% to 60% of its AI firms and hiring.

[Interactive tool in the original: a drop-down to select a metro area and view its AI capacity levels, both as indexed per capita values (San Francisco Bay Area cluster = 100) and as raw per capita values.]

Between them, the Bay Area and the early adopter metros dominate the U.S. AI economy. Ranging from Boston, New York, and Washington, D.C., to Austin, Seattle, and the Bay Area, all of these AI hubs possess major research universities that have been successful in building top-tier research programs in AI-related sub-specialties, whether it be at Stanford, the University of California at Berkeley, the Massachusetts Institute of Technology, the University of Texas, or the University of Washington.
Equally important, all of these regions are home either to headquarters or major outposts of leading Big Tech market leaders like Alphabet (Google), Salesforce, Facebook, NVIDIA, Amazon, and CrowdStrike. Our initial accounting, while not definitive, suggests that the current early development of the U.S. AI sector is trending along a pathway oriented toward a highly concentrated superstar geography of dominant hubs set off from much thinner development almost everywhere else.

What should be done to ensure that AI's benefits are spread more widely and are broadly felt? To be sure, some will say nothing should be done. Some economists argue that a highly concentrated, winner-take-most geography really is the optimal, market-ordained geographical configuration for maximum innovation. In this view, intense clustering is believed to support regional and national prosperity, both in the early stage and later on.

And yet, we would suggest that the AI map's current and likely winner-take-most geography bears scrutiny. We now know that digital advances such as AI build on themselves and can confer impregnable first-mover advantages on fortunate locations. Therefore, we suggest early action to adjust AI's diffusion trends and nudge them toward a wider flourishing.

For their part, cities and regions can and should instigate their own bottom-up strategies to advance their standing in the algorithmic economy. Self-help by cities to build up their local technology and skill capacities and to work with their top local firms to foster the development of unique, industry-pertinent use-cases will always matter.

But beyond that, it has become increasingly clear that top-down federal action will be necessary to widen the reach of investments in technologies like AI, with the goal of spreading high-value economic development.
In this vein, the federal government, given its unique reach and relevant program channels, holds both the power and the responsibility to offset excessive concentration with efforts to broaden the AI map.

Some of these efforts should entail broad investments in AI innovation and talent to spur growth across the board. Pending federal legislation with bipartisan support calls for new investments in AI R&D and would begin to establish a national AI research infrastructure that democratizes access to the resources that fuel AI development across the nation, as has been urged by the National Security Commission on Artificial Intelligence (NSCAI). Yet these investments, if passed into law, would only be a start. The NSCAI report, for example, calls for a doubling of non-defense funding for AI R&D to reach $32 billion per year by 2026. Other pending initiatives in Congress would boost U.S. STEM education and fund thousands of undergraduate- and graduate-level fellowships in fields critical to the AI future, an important start for accelerating U.S. and regional growth.

Beyond these national interventions, federal policy needs to focus intentionally and specifically on boosting AI development in specific, new places. The recent affirmation of place-based interventions to stem regional disparities points the way, and fortunately programs for this work exist or are pending. The National Science Foundation's National AI Research Institutes program established 11 new institutes this summer, building on the first round of seven in 2020. These institutes will concentrate on critical topics such as improving the quality of health care, making agriculture more resilient, and enhancing adult remote learning. But, equally important, they will promote such leading-edge work in regions far from the usual suspects, where local knowledge spillovers and talent exchanges can contribute to economic growth.
The NSF should go further with this program.

By the same token, the current Congress has displayed a new interest in fostering technology-based growth in new places, and should go further with that, with an eye on AI as a transformational technology. Even now, the Economic Development Administration is soliciting proposals from regions to revitalize their economies through its $1 billion Build Back Better Regional Challenge. Ideally, some of the 20 to 30 awards of $25 million to $75 million will help up-and-coming regions create or scale up strong new AI clusters in new places. Beyond that, Congress needs to deliver substantial funding to another transformative program now pending: the plan to create a set of larger-scale regional technology hubs, which would help five to 10 promising metro areas become self-sufficient, competitive innovation ecosystems, including for AI. Inspired in part by the Brookings Institution / Information Technology and Innovation Foundation report The Case for Growth Centers: How to Spread Tech Innovation across America, the concept has passed both houses of Congress and awaits funding through the pending reconciliation act. That means the United States has the opportunity to take meaningful action, at scale, to nudge five to 10 new metros onto the list of truly substantial AI hubs. It should seize that opportunity.

In sum, the latest and potentially most powerful digital technology yet is growing rapidly with significant power to change the nation's economic map. Already we have seen how the internet and social media booms have altered the nation's urban system, creating superstar hubs and many more places left behind. Now, we can either shape the coming changes or let them proceed as they will.
Our recommendation is that we invest now to counter AI's likely winner-take-most tendencies and nudge the emerging industry map toward greater balance.

Mark Muro is a senior fellow and policy director at the Brookings Metropolitan Policy Program.

Sifan Liu is a former senior research analyst in the Brookings Metropolitan Policy Program. | Unknown | Business and Financial Operations | null | null | null | null | null | null
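Concentration claims of the kind the authors cite (for example, 15 metros accounting for 50% to 60% of the nation's AI firms) are typically computed as a "top-k share" of regional totals. A minimal sketch of that calculation follows; the metro names and firm counts are invented for illustration and are not the report's data:

```python
# Quantifying geographic concentration as a top-k share.
# All counts below are hypothetical, for illustration only.
from collections import Counter

def top_k_share(counts, k):
    """Fraction of total activity captured by the k largest regions."""
    total = sum(counts.values())
    top = sum(sorted(counts.values(), reverse=True)[:k])
    return top / total

ai_firms_by_metro = Counter({
    "San Francisco": 300, "San Jose": 200, "New York": 150,
    "Boston": 100, "Seattle": 100, "Austin": 50,
    "Columbus": 10, "Pittsburgh": 10, "Nashville": 5,
})

share = top_k_share(ai_firms_by_metro, k=2)
print(f"Top-2 metro share: {share:.0%}")  # Top-2 metro share: 54%
```

The same helper works for research output or hiring counts; a Herfindahl-style index is another common way to summarize how "winner-take-most" a geography is.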
|
news | oscarrovira | Neuro (YC W21) Is Hiring Engineers to Build APIs for ML Compute, £40-100k Bath,UK | As a software engineer You’ll be involved in all aspects of the development and performance of our API. From our python client, integrating into the wider MLOps landscape, building tools to optimise internal performance through to improving and maintaining the infrastructure that handles the API requests. You will guide the direction of Neuro's API to meet the MLOps community requirements. | https://jobs.lever.co/Neuro/20602492-eaf6-4e42-936a-1e2928e851b8 | 2021-09-24T07:00:46Z | At Neuro weve built the API for serverless ML compute. We abstract the entire infrastructure required to efficiently train and deploy your models so that ML engineers can focus on ML, not servers.Were building a strong engineering team and are at an important inflection point. Engineers are responsible for developing our application in accordance with our roadmap and customer needs, and for designing and implementing robust and scalable development practices. Engineers will set the direction of our product, culture and company.Youll be involved in all aspects of the development and performance of our API. From our python client, integrating into the wider MLOps landscape, building tools to optimise internal performance through to improving and maintaining the infrastructure that handles the API requests. You will guide the direction of Neuro's API to meet the MLOps community requirements.Essential requirementsBS/MS/PhD in Computer Science or a related field;>3 years of professional software engineering experience;Experience programming in Python;Experience with designing high-performing, reliable, and scalable backend systems. 
We are looking for engineers who can own a feature from start to finish; experience working with cloud technologies (Docker, Kubernetes, AWS, GCP, Azure, etc.); and the existing right to work in the UK.

Not essential but nice to have: experience with products in the cloud infrastructure and MLOps landscape; experience with PyTorch, TensorFlow and other deep learning and classical ML frameworks; experience with building large-scale data pipelines (Kafka, Spark, Hadoop, Airflow, etc.); experience contributing to open source projects and/or related communities; and experience with software security and data-sensitive applications.

Benefits: competitive salary; equity compensation; huge impact, as at an early-stage startup your daily work will have a direct impact on the company's success; whatever you want to learn about, we'll make it happen; cycle-to-work scheme; flexible work hours; and the job is onsite at our lovely offices in Bath. | Process Automation/Content Synthesis | Computer and Mathematical | null | null | null | null | null | null |
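The posting above describes an API that hides training infrastructure behind a client library. As a purely illustrative sketch (all class names, fields, and behavior here are hypothetical and not taken from Neuro's actual product), such a client might let a user declare a training job and submit it without ever touching servers:

```python
from dataclasses import dataclass, field
import uuid

@dataclass
class TrainingJob:
    """Declarative description of a training job; the service, not the
    user, decides which machines run it. All names are hypothetical."""
    image: str                    # container holding the training code
    entrypoint: str               # command to run inside the container
    gpus: int = 1
    env: dict = field(default_factory=dict)

class ComputeClient:
    """Toy in-memory stand-in for a serverless ML-compute API client.
    A real client would issue HTTP requests to the service instead."""
    def __init__(self):
        self._jobs = {}

    def submit(self, job: TrainingJob) -> str:
        job_id = uuid.uuid4().hex          # real client: POST to the API
        self._jobs[job_id] = {"spec": job, "status": "queued"}
        return job_id

    def status(self, job_id: str) -> str:
        return self._jobs[job_id]["status"]

client = ComputeClient()
jid = client.submit(TrainingJob(image="pytorch:1.9", entrypoint="python train.py"))
print(client.status(jid))  # queued
```

The design point being illustrated is the one the posting implies: the user supplies a declarative job spec, and scheduling, scaling, and machine management stay behind the API boundary.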
|
news | Buck | The theory-practice gap | Published on September 17, 2021 10:51 PM GMT

[Thanks to Richard Ngo, Damon Binder, Summer Yue, Nate Thomas, Ajeya Cotra, Alex Turner, and other Redwood Research people for helpful comments; thanks Ruby Bloom for formatting this for the Alignment Forum for me.]

I'm going to draw a picture, piece by piece. I want to talk about the capability of some different AI systems. You can see here that we've drawn the capability of the system we want to be competitive with, which I'll call the unaligned benchmark. The unaligned benchmark is what you get if you train a system on the task that will cause the system to be most generally capable. And you have no idea how it's thinking about things, and you can only point this system at some goals and not others. I think that the alignment problem looks different depending on how capable the system you're trying to align is, and I think there are reasonable arguments for focusing on various different capability levels. See here for more of my thoughts on this question.

Alignment strategies

People have also proposed various alignment strategies.
But I don't think that these alignment strategies are competitive with the unaligned benchmark, even in theory. I want to claim that most of the action in theoretical AI alignment is people proposing various ways of getting around these problems by having your systems do things that are human-understandable instead of doing things that are justified by working well. For example, the hope with imitative IDA is that through its recursive structure you can build a dataset of increasingly competent answers to questions, and then at every step you can train a system to imitate these increasingly good answers to questions, and you end up with a really powerful question-answerer that was only ever trained to imitate humans-with-access-to-aligned-systems, and so your system is outer aligned. The bar I've added, which represents how capable I think you can get with amplified humans, is lower than the bar for the unaligned benchmark. I've drawn this bar lower because I think that if your system is trying to imitate cognition that can be broken down into human-understandable parts, it is systematically not going to be able to pursue certain powerful strategies that the end-to-end trained systems will be able to. I think that there are probably a bunch of concepts that humans can't understand quickly, or maybe can't understand at all. And if your systems are restricted to never use these concepts, I think your systems are probably just going to be a bunch weaker. I think that transparency techniques, as well as AI alignment strategies like microscope AI that lean heavily on them, rely on a similar assumption that the cognition of the system you're trying to align is factorizable into human-understandable parts. One component of the best-case scenario for transparency techniques is that any time your neural net does stuff, you can get the best possible human-understandable explanation of why it's doing that thing.
If such an explanation doesn't exist, your transparency tools won't be able to assure you that your system is aligned even if it is. To summarize, I claim that current alignment proposals don't really have a proposal for how to make systems that are aligned but either:
- produce plans that can't be understood by amplified humans, or
- do cognitive actions that can't be understood by amplified humans.
And so I claim that current alignment proposals don't seem like they can control systems as powerful as the systems you'd get from an unaligned training strategy.

Empirical generalization

I think some people are optimistic that alignment will generalize from the cases where amplified humans can evaluate it to the cases where the amplified humans can't. I'm going to call this empirical generalization. I think that empirical generalization is an example of relying on empirical facts about neural nets that are not true of arbitrary general black-box function approximators. I think this is a big part of the reason why some people are optimistic about the strategy that Paul Christiano calls "winging it". (I think that one particularly strong argument for empirical generalization is that if you imagine AGI as something like GPT-17 fine-tuned on human feedback on various tasks, your AGI might think about things in a very human-shaped way. (Many people disagree with me on this.) It currently seems plausible to me that AGI will be trained with a bunch of unsupervised learning based on stuff humans have written, which maybe makes it more likely that your system will have this very human-shaped set of concepts.)

The theory-practice gap

So the total height of that second column is the maximum level of capabilities that we think we could theoretically attain using the same capability techniques that we used for the unaligned benchmark, but using the alignment strategies that we know about right now. But in practice, we probably aren't going to do as well as that, for a variety of practical reasons.
For example, as I've said, I think transparency tools are theoretically limited, but we're just way below the maximum theoretically available capability of transparency tools right now. So I want to claim that reality will probably intervene in various ways and mean that the maximum capability of an aligned AI that we can build is lower than the maximum achievable theoretically from the techniques we know about and empirical generalization. I want to call that difference the theory-practice gap.

Sources of the theory-practice gap

Practical difficulties, e.g. getting human feedback: Human feedback is annoying in a wide variety of ways; you have to do quality control, etc.

Problems with the structure of the recursion: I think it's reasonably plausible that the most competitive way of making powerful systems ends up not really being shapeable into the shape you need for the amplified-human stuff to work out. So for example, maybe the best way of making AGI is doing some kind of evolution simulation, where you have this population of little creatures and they compete with each other and stuff. And if that's the only way of making smart systems, then I think it's pretty plausible that there's just no way of building a trusted, amplified reward signal out of it. And so you can't do the IDA-style things, or things where you use a system to do transparency analysis on a slightly more powerful version of itself.

NP-hard problems: Maybe your amplified system won't be able to answer questions like "are there any inputs on which this system does the wrong thing" even if it wants to.
E.g., the RSA-2048 problem. I think that transparency has a related problem: the most competitive-to-train models might have internal structure that amplified humans would be able to understand if it was explained to them, but we might not be able to get a model to find that structure.

Why am I lumping together fundamental concerns like "maybe these alignment strategies will require solving NP-hard problems" with things like "it's annoying to do quality control on your labelling contractors"? It's primarily because I want to emphasize that these concerns are different from the fundamental limitations of currently proposed alignment schemes: even if you assume that we don't, e.g., run into the hard instances of the NP-hard problems, I think that the proposed alignment schemes still aren't clearly good enough. There are lots of complicated arguments about the extent to which we have some of these "practical" problems; I think that these arguments distract from the claim that the theoretical alignment problem might be unsolved even if these problems are absent.

So my current view is that if you want to claim that we're going to fully solve the technical alignment problem as I described it above, you've got to believe some combination of:
- we're going to make substantial theoretical improvements,
- factored cognition is true, and
- we're going to have really good empirical generalization.
(In particular, your belief in these factors needs to add up to some constant. E.g., if you're more bullish on factored cognition, you need less of the other two.) I feel like there's at least a solid chance that we're in a pretty inconvenient world where none of these are true.

Classifying alignment work

This picture suggests a few different ways of trying to improve the situation. You could try to improve the best alignment techniques. I think this is what a lot of AI alignment theoretical work is.
For example, I think Paul Christiano's recent imitative generalization work is trying to increase the theoretically attainable capabilities of aligned systems. I've drawn this as the red arrow on the graph below. You can try to reduce the theory-practice gap. I think this is a pretty good description of what applied alignment research is usually trying to do. This is also what I'm currently working on. This is the pink arrow. You can try to improve our understanding of the relative height of all these bars.

AI alignment disagreements as variations on this picture

So now that we have this picture, let's try to use it to explain some common disagreements about AI alignment. I think some people think that amplified humans are actually just as capable as the unaligned benchmark. I think this is basically the factored cognition hypothesis. I think there's a bunch of really ML-flavored alignment people who seem to be pretty optimistic about empirical generalization. From their perspective, almost everything that AI alignment researchers should be doing is narrowing that theory-practice gap, because that's the only problem. I think there's also a bunch of people, like perhaps the stereotypical MIRI employee, who think that amplified humans aren't that powerful, that you're not going to get any empirical generalization, and that there are a bunch of problems with the structure of the recursion for amplification procedures.
And the stereotypical such person wants you to work on the red arrow instead of the pink arrow.How useful is it to work on narrowing the theory-practice gap for alignment strategies that won’t solve the whole problem?See here.ConclusionI feel pretty nervous about the state of the world described by this picture.I'm really not sure whether I think that theoretical alignment researchers are going to be able to propose a scheme that gets around the core problems with the schemes they've currently proposed. There's a pretty obvious argument for optimism here, which is that people haven't actually put in that many years into AI alignment theoretical research so far. And presumably they're going to do a lot more of it between now and AGI. I think I'm like 30% on the proposition that before AGI, we're going to come up with some alignment scheme that just looks really good and clearly solves most of the problems with current schemes.I think I overall disagree with people like Joe Carlsmith and Rohin Shah mostly in two places:By the time we get to AGI, will we have alignment techniques that are even slightly competitive? I think it’s pretty plausible the answer is no. (Obviously it would be very helpful for me to operationalize things like “pretty plausible” and “slightly competitive” here.)If we don’t have the techniques to reliably align AI, will someone deploy AI anyway? I think it’s more likely the answer is yes.Discuss | https://www.lesswrong.com/posts/xRyLxfytmLFZ6qz5s/the-theory-practice-gap | 2021-09-17T22:51:46Z | [Thanks to Richard Ngo, Damon Binder, Summer Yue, Nate Thomas, Ajeya Cotra, Alex Turner, and other Redwood Research people for helpful comments; thanks Ruby Bloom for formatting this for the Alignment Forum for me.]I'm going to draw a picture, piece by piece. I want to talk about the capability of some different AI systems.You can see here that we've drawn the capability of the system we want to be competitive with, which Ill call the unaligned benchmark. 
The unaligned benchmark is what you get if you train a system on the task that will cause the system to be most generally capable. And you have no idea how it's thinking about things, and you can only point this system at some goals and not others.I think that the alignment problem looks different depending on how capable the system youre trying to align is, and I think there are reasonable arguments for focusing on various different capabilities levels. See here for more of my thoughts on this question.Alignment strategiesPeople have also proposed various alignment strategies. But I dont think that these alignment strategies are competitive with the unaligned benchmark, even in theory.I want to claim that most of the action in theoretical AI alignment is people proposing various ways of getting around these problems by having your systems do things that are human understandable instead of doing things that are justified by working well.For example, the hope with imitative IDA is that through its recursive structure you can build a dataset of increasingly competent answers to questions, and then at every step you can train a system to imitate these increasingly good answers to questions, and you end up with a really powerful question-answerer that was only ever trained to imitate humans-with-access-to-aligned-systems, and so your system is outer aligned.The bar Ive added, which represents how capable I think you can get with amplified humans, is lower than the bar for the unaligned benchmark. I've drawn this bar lower because I think that if your system is trying to imitate cognition that can be broken down into human understandable parts, it is systematically not going to be able to pursue certain powerful strategies that the end-to-end trained systems will be able to. I think that there are probably a bunch of concepts that humans cant understand quickly, or maybe cant understand at all. 
And if your systems are restricted to never use these concepts, I think your systems are probably just going to be a bunch weaker.I think that transparency techniques, as well as AI alignment strategies like microscope AI that lean heavily on them, rely on a similar assumption that the cognition of the system youre trying to align is factorizable into human-understandable parts. One component of the best-case scenario for transparency techniques is that anytime your neural net does stuff, you can get the best possible human understandable explanation of why it's doing that thing. If such an explanation doesnt exist, your transparency tools wont be able to assure you that your system is aligned even if it is.To summarize, I claim that current alignment proposals dont really have a proposal for how to make systems that are aligned but eitherproduce plans that cant be understood by amplified humansdo cognitive actions that cant be understood by amplified humansAnd so I claim that current alignment proposals dont seem like they can control systems as powerful as the systems youd get from an unaligned training strategy.Empirical generalizationI think some people are optimistic that alignment will generalize from the cases where amplified humans can evaluate it to the cases where the amplified humans cant. I'm going to call this empirical generalization. I think that empirical generalization is an example of relying on empirical facts about neural nets that are not true of arbitrary general black box function approximators.I think this is a big part of the reason why some people are optimistic about the strategy that Paul Christiano calls winging it.(I think that one particularly strong argument for empirical generalization is that if you imagine AGI as something like GPT-17 fine-tuned on human feedback on various tasks, your AGI might think about things in a very human-shaped way. (Many people disagree with me on this.) 
It currently seems plausible to me that AGI will be trained with a bunch of unsupervised learning based on stuff humans have written, which maybe makes it more likely that your system will have this very human-shaped set of concepts.)The theory-practice gapSo the total height of that second column is the maximum level of capabilities that we think we could theoretically attain using the same capability techniques that we used for the unaligned benchmark, but using the alignment strategies that we know about right now. But in practice, we probably aren't going to do as well as that, for a variety of practical reasons. For example, as I've said, I think transparency tools are theoretically limited, but we're just way below the maximum theoretically available capability of transparency tools right now. So I want to claim that reality will probably intervene in various ways and mean that the maximum capability of an aligned AI that we can build is lower than the maximum achievable theoretically from the techniques we know about and empirical generalization. I want to call that difference the theory practice gap.Practical difficulties, eg getting human feedbackHuman feedback is annoying in a wide variety of ways; you have to do quality control etc.Problems with the structure of the recursionI think it's reasonably plausible that the most competitive way of making powerful systems ends up not really being shapeable into the shape you need for the amplified human stuff to work out. So for example, maybe the best way of making AGI is doing some kind of evolution simulation, where you have this population of little creatures and they compete with each other and stuff. And if that's the only way of making smart systems, then I think it's pretty plausible that there's just like no way of building a trusted, amplified reward signal out of it. 
And so you can't do the IDA style things, or things where you use a system to do transparency analysis on a slightly more powerful version of itself.NP-hard problemsMaybe your amplified system wont be able to answer questions like are there any inputs on which this system does the wrong thing even if it wants to. Eg the RSA-2048 problem.I think that transparency has a related problem: the most competitive-to-train models might have internal structure that amplified humans would be able to understand if it was explained to them, but we might not be able to get a model to find that structure.Why am I lumping together fundamental concerns like maybe these alignment strategies will require solving NP-hard problems with things like its annoying to do quality control on your labelling contractors?Its primarily because I want to emphasize that these concerns are different from the fundamental limitations of currently proposed alignment schemes: even if you assume that we dont e.g. run into the hard instances of the NP-hard problems, I think that the proposed alignment schemes still arent clearly good enough. There are lots of complicated arguments about the extent to which we have some of these practical problems; I think that these arguments distract from the claim that the theoretical alignment problem might be unsolved even if these problems are absent.So my current view is that if you want to claim that we're going to fully solve the technical alignment problem as I described it above, you've got to believe some combination of:we're going to make substantial theoretical improvementsfactored cognition is truewe're going to have really good empirical generalization(In particular, your belief in these factors needs to add up to some constant. 
E.g., if youre more bullish on factored cognition, you need less of the other two.)I feel like theres at least a solid chance that were in a pretty inconvenient world where none of these are true.Classifying alignment workThis picture suggests a few different ways of trying to improve the situation.You could try to improve the best alignment techniques. I think this is what a lot of AI alignment theoretical work is. For example, I think Paul Christianos recent imitative generalization work is trying to increase the theoretically attainable capabilities of aligned systems. Ive drawn this as the red arrow on the graph below.You can try to reduce the theory-practice gap. I think this is a pretty good description of what I think applied alignment research is usually trying to do. This is also what Im currently working on. This is the pink arrow.You can try to improve our understanding of the relative height of all these bars.AI alignment disagreements as variations on this pictureSo now that we have this picture, let's try to use it to explain some common disagreements about AI alignment. I think some people think that amplified humans are actually just as capable as the unaligned benchmark. I think this is basically the factored cognition hypothesis. I think there's a bunch of people who are really ML-flavored alignment people who seem to be pretty optimistic about empirical generalization. From their perspective, almost everything that AI alignment researchers should be doing is narrowing that theory practice gap, because that's the only problem. I think there's also a bunch of people like perhaps the stereotypical MIRI employee who thinks that amplified humans aren't that powerful, and you're not going to get any empirical generalization, and there are a bunch of problems with the structure of the recursion for amplification procedures. 
And so it doesn't feel that important to them to work on the practical parts of the theory practice gap, because even if we totally succeeded at getting that to zero, the resulting systems wouldn't be very powerful or very aligned. And so it just wouldn't have mattered that much. And the stereotypical such person wants you to work on the red arrow instead of the pink arrow.How useful is it to work on narrowing the theory-practice gap for alignment strategies that wont solve the whole problem?See here.ConclusionI feel pretty nervous about the state of the world described by this picture.I'm really not sure whether I think that theoretical alignment researchers are going to be able to propose a scheme that gets around the core problems with the schemes they've currently proposed. There's a pretty obvious argument for optimism here, which is that people haven't actually put in that many years into AI alignment theoretical research so far. And presumably they're going to do a lot more of it between now and AGI. I think I'm like 30% on the proposition that before AGI, we're going to come up with some alignment scheme that just looks really good and clearly solves most of the problems with current schemes.I think I overall disagree with people like Joe Carlsmith and Rohin Shah mostly in two places:By the time we get to AGI, will we have alignment techniques that are even slightly competitive? I think its pretty plausible the answer is no. (Obviously it would be very helpful for me to operationalize things like pretty plausible and slightly competitive here.)If we dont have the techniques to reliably align AI, will someone deploy AI anyway? I think its more likely the answer is yes. | Decision Making/Content Synthesis/Prediction | Unknown | null | null | null | null | null | null |
|
news | Ait News Desk | Vizsense Recognized by Futurology and PerformanceIN for Their Technological Innovation and Service Excellence in the Influencer Marketing Industry | VizSense, Inc., the leader in micro-influencer marketing, was recently awarded for its technological innovation in artificial intelligence and excellence in performance marketing by Futurology and the International Performance Marketing Awards (IPMA) | https://aithority.com/technology/martech/vizsense-recognized-by-futurology-and-performancein-for-their-technological-innovation-and-service-excellence-in-the-influencer-marketing-industry/ | 2021-09-24T10:57:11Z | VizSense, Inc., the leader in micro-influencer marketing, was recently awarded for its technological innovation in artificial intelligence and excellence in performance marketing by Futurology and the International Performance Marketing Awards (IPMA), respectively. "We are deeply honored to be recognized for two of our most important areas: technology and service excellence," says Dr. Jon Iadonisi, Founder and CEO of VizSense. "Our amazing team will continue to hold ourselves accountable to our clients and teammates, and compete with the giants in this industry." Futurology named VizSense one of the Most Innovative Dallas-Based Artificial Intelligence Companies for its proprietary influencer tech suite, Bloom.
Futurology is a London-based media company dedicated to showcasing the latest innovations, breakthroughs and greatest companies across the futuristic industries. IPMA named VizSense a finalist in two categories for their cutting-edge work in the influencer marketing industry: Best Influencer Marketing Campaign for their work with national tequila brand Hornitos, and Best Social Commerce Campaign for their work with workout application startup Paid Workout App. Hosted by PerformanceIN, a leading online media company and global publication for performance marketing news, IPMA recognizes industry-leading performance marketers, including advertisers, publishers, agencies, networks, and technology providers from around the world. To be considered for both categories by the IPMA judging panel, VizSense had to provide in-depth analysis and proof of smart execution and creativity through influencer and social media channels in an industry dominated by companies at least 10 times their size. | Content Synthesis/Decision Making/Process Automation | Business and Financial Operations | null | null | null | null | null | null |
|
news | Zhengqiao Zhao, Stephen Woloszynek, Felix Agbavor, Joshua Chang Mell, Bahrad A. Sokhansanj, Gail L. Rosen | Learning, visualizing and exploring 16S rRNA structure using an attention-based deep neural network | Author summary: Microbiomes are communities of microscopic organisms found everywhere, including on and in the human body. For example, the gut microbiome plays an important role in digestion, and changes in composition are associated with changes in health or disease, e.g., inflammatory bowel disease (IBD). Today, microbiome composition is often obtained from high-throughput sequencing, which generates many short DNA reads from multiple organisms in a sample. In this paper, we present a novel deep learning framework, Read2Pheno, to predict phenotype from all the reads in a set of biological samples. An attention mechanism allows visualization of specific subregions (sets of bases) which are important in classifying the reads according to phenotype or taxon labels. We evaluate the framework on sequencing data for 16S rRNA genes, genetic markers used to identify microbial taxonomy. We show that Read2Pheno performs comparably to conventional methods on three distinct data sets from the American Gut Project, IBD patients and controls, and a comprehensive taxonomic database. Moreover, Read2Pheno results can be readily interpreted—e.g., to identify regions of the 16S rRNA gene to target for PCR diagnostics—without additional pre/post-processing steps that can introduce complexity and error. | https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1009345 | https://journals.plos.org/ploscompbiol/article/figure/image?id=10.1371/journal.pcbi.1009345.g007&size=inline | 2021-09-22T14:00:00Z | Abstract

Recurrent neural networks with memory and attention mechanisms are widely used in natural language processing because they can capture short- and long-term sequential information for diverse tasks.
We propose an integrated deep learning model for microbial DNA sequence data, which exploits convolutional neural networks, recurrent neural networks, and attention mechanisms to predict taxonomic classifications and sample-associated attributes, such as the relationship between the microbiome and host phenotype, on the read/sequence level. In this paper, we develop this novel deep learning approach and evaluate its application to amplicon sequences. We apply our approach to short DNA reads and full sequences of 16S ribosomal RNA (rRNA) marker genes, which identify the heterogeneity of a microbial community sample. We demonstrate that our implementation of a novel attention-based deep network architecture, Read2Pheno, achieves read-level phenotypic prediction. Training Read2Pheno models will encode sequences (reads) into dense, meaningful representations: learned embedded vectors output from the intermediate layer of the network model, which can provide biological insight when visualized. The attention layer of Read2Pheno models can also automatically identify nucleotide regions in reads/sequences which are particularly informative for classification. As such, this novel approach can avoid pre/post-processing and manual interpretation required with conventional approaches to microbiome sequence classification. We further show, as proof-of-concept, that aggregating read-level information can robustly predict microbial community properties, host phenotype, and taxonomic classification, with performance at least comparable to conventional approaches. An implementation of the attention-based deep learning network is available at https://github.com/EESI/sequence_attention (a Python package) and https://github.com/EESI/seq2att (a command-line tool).

Citation: Zhao Z, Woloszynek S, Agbavor F, Mell JC, Sokhansanj BA, Rosen GL (2021) Learning, visualizing and exploring 16S rRNA structure using an attention-based deep neural network.
PLoS Comput Biol 17(9): e1009345. https://doi.org/10.1371/journal.pcbi.1009345

Editor: Elhanan Borenstein, University of Washington, UNITED STATES
Received: September 14, 2020; Accepted: August 12, 2021; Published: September 22, 2021
Copyright: © 2021 Zhao et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the manuscript and its Supporting information files.
Funding: GR received funding from the National Science Foundation (https://www.nsf.gov/); the grant numbers awarded to GR are #1919691, #1936791 and #2107108. GR received computational resources support from the Extreme Science and Engineering Discovery Environment (XSEDE: https://www.xsede.org/), which is supported by NSF grant number #ACI-1548562. Specifically, this work used the Bridges and Bridges-2 systems, which are supported by NSF award numbers ACI-1445606 and ACI-1928147 respectively, at the Pittsburgh Supercomputing Center (PSC). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.

This is a PLOS Computational Biology Methods paper.

Introduction

Advances in DNA sequencing are rapidly producing complex microbiome data sets in fields ranging from human health to environmental studies [1]. Large-scale microbial projects provide rich information, enabling prediction of sample-level traits (i.e., phenotypes), aiding biological discovery, and supporting medical diagnosis. A typical microbiome study may contain hundreds to thousands of samples. Each sample, in turn, contains thousands of reads depending on the sequencing depth.
These reads are fragments of DNA/RNA material extracted from microbes residing in the environment where the sample was collected. For example, an environmental sample can be sequenced via 16S ribosomal RNA amplicon technology to provide a comprehensive taxonomic survey of an environment's or subject's microbial community [2, 3].

A major focus of microbiome research has been, and continues to be, the use of 16S rRNA amplicon sequencing surveys to determine "Who is there?" in a host or environmental sample. The answer to "Who is there?" may, in turn, be used to predict host phenotype for clinical diagnoses or infer taxa-phenotype associations for basic biology research [4–8]. In the context of our work, we define phenotype as an overall trait at the level of the environment or habitat that the microbiome sample is isolated from [9, 10], thereby incorporating the emergent function of the microbiome (a.k.a. microbiome phenotypes) [11–16]. For example, this expansive definition of phenotype in the microbiome context can include the preference of a certain microbial community for a particular environmental niche or body site [17]. Thus, the microbiome may be shaped by the environment.

While shotgun metagenomic sequencing technology may be used instead [3, 18, 19], many use cases depend on 16S rRNA amplicon sequencing as an affordable, rapid, and readily field-deployable solution to find out "Who is there?". However, phenotype prediction from rRNA sequence is a major challenge. Ribosomal sequence does not itself contain functional information, unlike, e.g., more costly and complex metagenomic shotgun sequencing data [7, 20]. Building machine learning phenotype classifiers usually starts with constructing a microbial abundance table, such as an Operational Taxonomic Unit (OTU) table, an Amplicon Sequence Variant (ASV) table, or a k-mer frequency table (i.e., a table of the frequencies of k-length nucleotide strings within the collection of reads in a sample) [6, 7].
Researchers then train a classifier to distinguish phenotypes by learning from the taxon abundance of sequenced samples in a training data set. For example, a classifier may be constructed to identify a sample as being from the gut of a patient who was diagnosed with a disease. By analyzing the OTU/ASV abundance table, therefore, researchers can discover underlying associations between certain taxa, or groups of taxa, and phenotype. For example, in Gevers et al. [5], samples were collected from a) patients with Crohn's disease and b) control groups; some taxa were found to be abundant only in the disease group, and some were eliminated in the disease group. In another study, 16S rRNA sequences were transformed to OTU tables to evaluate 18 classification methods and 5 feature selection methods, and feature selection was shown to often improve classification performance [21]. Another classifier method, based on RoDEO (Robust Differential Expression Operator) normalization, was shown to sometimes perform better when a small subset of OTUs was used [6].

The construction of OTU/ASV tables, however, often involves denoising, sequence alignment, and taxonomic classification, and thus can lead to loss of the true information contained in the raw nucleotide reads. And, as shown above, it can require additional processing steps, for example feature selection or OTU table reduction. By grouping sequences into limited taxonomic labels, it becomes difficult to quantify the genotype-to-phenotype relationship. Of particular concern is the omission of nucleotide structural information from OTU mapping, where the 97% identity threshold conventionally used for OTU mapping smooths over valuable nucleotide variation. This is better addressed through the more exact ASV identification, but the nucleotide-level information is rarely examined past the mapping step.
Alternatively, a k-mer representation of amplicon sequences has been proposed to predict phenotype, which has been shown to outperform the traditional OTU representation [7]. Since a k-mer-based method is alignment free and reference free, it costs less computationally than OTU-based methods if a relatively small k-mer size is used (e.g., k-mer values of 3 ≤ k ≤ 8 are typically used in MicroPheno [7]). Because k-mer representations cut reads into smaller pieces, methods based on k-mers will lose sequential information. As such, k-mer analysis is subject to the length of the k-mers and does not preserve the nucleotide context/sequential order. Some local nucleotide variation may still be identified; however, long-range nucleotide sequential information is completely lost. In sum, currently available methods are unable to easily and robustly connect nucleotide changes on the read level back to the phenotype prediction and thereby reveal which nucleotide features are specifically relevant to the classification.

Deep neural networks and their application in bioinformatics

Recent advances in supervised deep learning are further able to leverage a huge volume of different kinds of data. Convolutional neural networks (CNNs), which may be interpreted by saliency maps [22], have been vital to image recognition. Model interpretability has been a research direction of particular interest in the deep learning field [23–25]. Deep learning has been applied to bioinformatics as well [26]. Deep learning approaches have been shown to be able to learn hierarchical representations of metagenomic data that standard classification methods do not allow [27]. Both CNNs and RNNs have been applied to areas such as transcription factor binding site classification [28, 29], SNP calling [30, 31], microbial taxonomic classification [32], and DNA sequence function prediction and gene inference [33, 34].
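As a concrete illustration of this kind of representation, a minimal k-mer frequency computation for a single read might look like the following sketch (the choice k = 3 is illustrative, within the 3 ≤ k ≤ 8 range mentioned above; this is not the MicroPheno implementation):

```python
from collections import Counter

def kmer_frequencies(read, k=3):
    """Count the k-mers in one read and normalize to frequencies.

    Note how the sliding window discards positional information:
    two reads with the same k-mer content but different order
    produce identical frequency vectors.
    """
    counts = Counter(read[i:i + k] for i in range(len(read) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

# The 6 windows of this toy read are ACG, CGT, GTA, TAC, ACG, CGT.
freqs = kmer_frequencies("ACGTACGT", k=3)
```

Stacking such per-sample frequency vectors over a fixed k-mer vocabulary yields the k-mer frequency table referred to in the text.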
Other work has used deep learning approaches to predict environments and host phenotype using a k-mer-based representation of shallow subsamples [7]. Lo et al. proposed deep learning approaches that learn from microbial count data (e.g., OTU tables) for host phenotype prediction [35], and another approach formats microbial count data as an image to be processed by a CNN model [36]. CNN models have also been used to learn the phylogenetic structure of a metagenomic sample to predict the host phenotype [37]: a 2D matrix represents the phylogenetic tree of microbial taxa (with relative abundance) in a sample, and a CNN model is designed to learn from such data. Woloszynek et al. proposed an unsupervised method to embed 16S rRNA sequences into meaningful numerical vectors to facilitate downstream analysis and visualization [38]. Many models rely on extracting features (for instance, taxonomic composition or functional profiles) from the sequence data [39].

In addition to making predictions, machine learning models can reveal knowledge about domain relationships contained in data, often referred to as interpretations [40]. In the context of sequence classification tasks, i.e., phenotype prediction based on microbial survey data, once a predictive model is built, researchers can further identify sequence features relevant to classification, i.e., occurring taxa and genomic content related to a certain disease. There have been substantial research attempts to identify label-associated genetic content. A complementary approach is a supervised computational method, as a means of associating genetic content with known labels, i.e., taxa. Oligotyping has been proposed as a way to identify subtypes of 16S rRNA sequence variation, based on distinguishing sequence variants by subsets of several nucleotides within the sequence, i.e., oligomers. Specifically, Oligotyping is a supervised computational method that identifies those nucleotide positions that represent information-rich variation [41].
Oligotyping requires information about the taxonomic classification of the sequence via OTU clustering or supervised methods. Then, the method is applied to taxonomic/OTU groups of interest. Oligotyping can be an efficient way to identify informative nucleotides and discriminate between closely related but distinct taxa [41]. However, preprocessing steps are still needed (e.g., OTU clustering or multiple sequence alignment) to find closely related sequences. Another proposed method, PhenotypeSeeker [42], is a statistics-based framework that finds genotype-phenotype associations by identifying predictive k-mers with a regression model and quantifying their relative importance. It was designed, however, to learn a closely related group of bacterial isolates and their associated phenotypes. Furthermore, PhenotypeSeeker and other methods based on k-mer frequency tables cannot capture the sequential order of k-mers, and thus they fail to provide sequence-level interpretability. Visualization methods have been developed for DNA/RNA binding site prediction models, as mentioned in the section Deep neural networks and their application in bioinformatics [28, 29, 43, 44], to reveal predictive genomic content. Alipanahi et al. propose to interpret the model and visualize informative single nucleotide polymorphisms (SNPs) by manually altering nucleotides in the input reads and comparing the resulting new prediction with the original prediction of the unaltered input [43]. In Deep Motif, the authors use Saliency Maps [22, 25] to interpret the model and visualize informative genomic content [28].

Towards better interpretability: Attention mechanisms

Attention mechanisms have become more widely applied in the natural language processing (NLP) and image recognition fields to improve the interpretability of deep learning models [45–48].
For example, it has been shown that an attention-based Bi-LSTM (Bi-directional long short term memory) RNN model can successfully capture the most important semantic information in a sentence and outperform most existing competing approaches [47]. A hierarchical attention network can also improve document-level classification [46] by selecting qualitatively informative words and sentences. Informative content may be visualized by looking at the output of the attention layers of the network model. The use of deep learning with attention mechanisms has also been suggested for the field of bioinformatics. Deming et al. [29] proposed a method for simultaneously learning general genomic patterns and identifying the sequence motifs that contribute most to predicting functional genomic outcomes, e.g., transcription factor binding site (TFBS) classification and lineage-specific gene expression prediction. While they found a marked gain in performance over previous architectures, their model was not used for phenotype prediction.

In this paper, we exploit CNNs, RNNs, and attention mechanisms for phenotype/taxonomic prediction and propose a Read2Pheno classifier to predict phenotype from 16S rRNA reads and, thereby, explore and visualize informative nucleotide structure and taxa. This method can be considered a supervised read-level embedding method, compared with our previous work on word2vec embedding methods for 16S rRNA reads [38]. Although the model focuses on read-level resolution, the sample-to-phenotype prediction can still be inferred by a sample-level predictor which aggregates the abstraction of all reads from the Read2Pheno model. A python implementation of the proposed model is available at https://github.com/EESI/sequence_attention. A command line tool of the proposed model is available at https://github.com/EESI/seq2att.
We show that a model trained with read-level information can achieve sample-to-phenotype predictions similar to conventional methods. Our proof-of-concept results demonstrate the potential of our proposed read-level training procedure to provide the basis for more accurate and comprehensive sample-level classification, as compared to OTU tables, and substantially more interpretable results than k-mer-based methods. We further provide a visualization of the embedded vectors, which is a representation of the information that the network is learning. We use attention weights to identify and visualize the nucleotides associated with phenotype and/or taxonomy, and compare the highlighted informative regions against a baseline entropy method and Oligotyping [41]. We show the efficacy of our model on the American Gut microbiome data set [49] (http://americangut.org/), Gevers et al.'s Crohn's disease data set [5] and the SILVA 16S rRNA data set [50, 51], and explore interesting visualizations and features generated by the model. The experimental results show that the performance of our model is comparable to current methods and that our models can provide further interpretation and visualization.

Materials and methods

Our proposed model consists of two parts: 1) the Read2Pheno read-level classifier, which is the focus of our paper, and 2) several sample-level predictors based on the Read2Pheno model, primarily used here to demonstrate the evaluation of the Read2Pheno model. We first train a read-level classifier using an attention-based deep neural network to predict, for DNA/RNA reads, the sample-level labels with which the reads are associated. For example, if the samples are labeled with collected body sites, the model will be trained to learn the original body site from which the sample's reads were collected.
Then, a sample-level prediction can be made in three different ways: 1) tally a majority vote of all the read prediction scores in the sample of interest to obtain a final prediction; 2) use the output of the intermediate layer to obtain a read embedding (see Fig 1 for details) and average the read embeddings from a sample to obtain an overall sample-level embedding that a classifier can train on to predict a sample-level label; 3) apply clustering to the read embeddings of the training data and assign reads per sample to those clusters to form a Pseudo OTU table [38]. Then, a sample-level classifier can be trained for phenotype prediction, which allows validation of the read-level model, i.e., by showing that it can produce accurate and relevant phenotype predictions.

Fig 1. Read2Pheno classifier architecture: The input is a one-hot coded 16S rRNA sequence with length T. The input is fed to a few 1-dimensional convolutional blocks with window size W and Nc output channels. The resultant output is a T × Nc dimensional matrix, which is then fed to a Bidirectional LSTM layer with Nh hidden nodes. Na is the number of hidden nodes used to compute attention weights and Ny is the total number of phenotypes (number of classes) to be predicted. There are two informative intermediate layer outputs (attention weights and read embedding vectors), which are labeled by speech balloons; they are used in the analysis described in this paper.
https://doi.org/10.1371/journal.pcbi.1009345.g001

Read2Pheno classifier

The Read2Pheno classifier is a hybrid convolutional and recurrent deep neural network with attention. Fig 1 shows a diagram of the classifier. Sequencing data are one-hot coded according to the map shown in S1 Appendix. Then the array representation of a read is fed into several initial layers of convolutional blocks (inspired by the scheme in [29]).
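As a small illustration of the one-hot encoding step (the exact map is given in S1 Appendix, so the A/C/G/T column convention below is an assumption):

```python
import numpy as np

# Illustrative base-to-column map; the model's actual map is in S1 Appendix.
BASE_MAP = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(read):
    """Encode a read of length T as a T x 4 one-hot array."""
    x = np.zeros((len(read), 4), dtype=np.float32)
    for t, base in enumerate(read):
        if base in BASE_MAP:  # ambiguous bases (e.g., N) stay all-zero here
            x[t, BASE_MAP[base]] = 1.0
    return x

x = one_hot("ACGTN")  # a toy 5-base read
```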
The result is an embedding of the read, a T × Nc dimensional matrix, obtained by learning local k-mer patterns, where Nc is the number of output channels in the convolutional blocks and T is the length of the input DNA reads. A Bi-directional Long Short Term Memory (Bi-LSTM) model is then applied to the data to learn the long-range dependencies in the output of the convolutional layers. The returned sequence is then processed and normalized by an attention layer to get an attention vector using the soft attention mechanism, as described in [47, 52]. The output of the Bi-LSTM layer in our model is a T × Nh dimensional matrix, where Nh is the number of hidden nodes in the Bi-LSTM layer and T is the length of the input DNA reads. Each base position (time-step) in the input corresponds to an Nh dimensional vector (the hidden states at this position). The dense attention layer applies to the hidden states of every base position (time-step). The dense layer thereby learns the importance of the hidden states at each position, returning a small value if the hidden states of that position do not make an important contribution to the model's final prediction and, conversely, a large value if the model relies on the hidden states at that position in making the final prediction. The output of the dense layer is a vector of length T. This output is then normalized by a softmax function to produce the attention vector [52], which naturally indicates the regions in the sequence that the model pays attention to. While the attention weights are not learned from specific nucleotides but from high-level features of 9-mers and their sequential information, as shown in Fig 1, the attention interpretation may be considered an approximation of the informative nucleotides of the 16S rRNA gene. The final embedding of the read is a weighted sum of all the embeddings across the sequence, where the weights are the elements of the attention vector.
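This soft-attention computation can be sketched in a few lines of NumPy. The sketch uses a single weight vector for scoring, whereas the actual model uses a dense layer with Na hidden nodes; all dimensions and names here are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def soft_attention(H, w):
    """Soft attention over per-position hidden states.

    H : (T, Nh) hidden states, one row per base position (Bi-LSTM output).
    w : (Nh,)   scoring weights (a single linear scorer for brevity).
    Returns the length-T attention vector and the Nh-dim read embedding.
    """
    scores = H @ w             # one scalar relevance score per position
    alpha = softmax(scores)    # normalize scores into attention weights
    embedding = alpha @ H      # weighted sum of per-position states
    return alpha, embedding

rng = np.random.default_rng(0)
T, Nh = 100, 64                # illustrative read length and hidden size
H = rng.standard_normal((T, Nh))
w = rng.standard_normal(Nh)
alpha, emb = soft_attention(H, w)
```

The attention vector `alpha` is what the paper inspects to highlight informative regions, and `emb` corresponds to the read embedding used downstream.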
The goal of this layer is to suppress the regions that are less relevant to prediction and focus on informative regions. Finally, a dense layer with a softmax activation function is applied to the read embedding vector to classify it into one of Ny labels. The hyperparameter selection process is described in Section Model selection and hyperparameter search.

Sample-level predictor

In this paper, we perform sample-level prediction in three different ways. The simplest of the three is majority vote. The sample-level predictor counts all the votes, i.e., the resulting Read2Pheno classifications, from all the reads in a query sample and labels the sample with the overall majority vote. The majority vote is a baseline method intended to illustrate that the Read2Pheno model is learning the sample-associated phenotypic labels for each read. We compare the majority vote baseline to the proposed embedding-based approaches further described below. The intermediate layer of our model provides a concise numerical representation of the input reads, which we can exploit in sample-level prediction. We propose to use two embedding-based approaches: the sample-level embedding method and the Pseudo OTU method [38]. The sample-level embedding method forms a sample-level vector representation by averaging all read-level embeddings in a query sample. Then, a classifier, such as Random Forest, can be trained to learn the sample-phenotype association. For the Pseudo OTU method, as described by Woloszynek et al. [38], read-level embedding vectors are first clustered via an unsupervised algorithm such as k-means to form k clusters, which are Pseudo OTUs (groupings of related reads). Then, we can assign each query sample's reads to those Pseudo OTUs based on distance. A classifier, such as Random Forest, can then be trained to make sample-level predictions on a Pseudo OTU table made up of the Pseudo OTU abundances, as defined above, in all samples.
Both embedding-based methods learn the sample phenotype by training on each individual read (read-level) and on all reads (sample-level), rather than the read-level-only learning of the baseline majority vote.

Majority vote.

The Read2Pheno classifier produces a vector of likelihood scores which, given a read, sum to one across all phenotype classes. To get the sample-level prediction, all reads from a sample of interest are classified by the Read2Pheno model, and the resultant scores are then aggregated by the sample-level predictor. Using body site prediction as an example, there are 5 different body site classes: feces, tongue, skin of hand, skin of head and nostril. We show the diagram of our sample-level predictor in S2 Appendix. Given a sample of interest, the reads associated with this sample are first predicted by the Read2Pheno classifier. Some species can be found in multiple body sites; therefore, performing a hard call on which body site a read originates from can be misleading. To alleviate this problem, if needed, the sample-level predictor contains a read caller function that can assign one read to multiple body sites by applying a threshold to the output of Read2Pheno for the read. In our implementation, if the likelihood score of the read for a body site is greater than chance (1/N, where N is the number of body sites in the training data), the vote count of that particular body site is incremented by 1 (see the Read Abundance block in S2 Appendix). For example, suppose there are three target body sites: skin (i.e., dermal samples), gut (i.e., fecal samples), and tongue (i.e., oral samples). If a read were predicted to be from gut, skin and oral samples with scores of 0.51, 0.43 and 0.06 respectively, both the vote counts of the gut (feces) class and the skin class would be incremented by 1 (since the likelihood scores of these two body sites are greater than 1/3).
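The read caller and voting scheme just described can be sketched as follows (a minimal NumPy sketch; the three-class example and the 1/N chance threshold follow the text, while all variable names are illustrative):

```python
import numpy as np

def sample_vote(read_scores):
    """Aggregate per-read likelihood scores into a sample-level call.

    read_scores : (n_reads, N) array; each row sums to 1 over N classes.
    A read votes for every class whose score exceeds chance (1/N),
    and the sample label is the argmax of the accumulated vote counts.
    """
    n_reads, n_classes = read_scores.shape
    votes = (read_scores > 1.0 / n_classes).sum(axis=0)
    return int(np.argmax(votes)), votes

# The worked example from the text: one read scored over
# [gut, skin, tongue] body sites.
scores = np.array([[0.51, 0.43, 0.06]])
label, votes = sample_vote(scores)
# gut and skin each receive a vote (both exceed 1/3); tongue does not
```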
Finally, once all reads have been counted, the argmax of the vote count vector is taken to predict the sample-level body site.

Embedding-based method.

The attention layer of the Read2Pheno classifier produces an Nh-dimensional embedded vector (see Fig 1), which is a meaningful numerical representation of each 16S rRNA read. For sample-level classification, we first use the trained Read2Pheno model to encode all reads per sample into the Nh-dimensional vectors. We describe two methods to produce sample-level features using the read embeddings: the averaged sample-level embedding and the Pseudo OTU table.

Averaged sample-level embedding: To obtain a sample-level embedding, we first use the trained Read2Pheno model to encode all reads per sample into the Nh-dimensional vectors. Then, we average the read vectors to form a sample-level embedding.

Pseudo OTU table: Instead of taking the average of the trained read embeddings, we use a k-means algorithm with the default Euclidean distance metric to cluster the read embeddings of the training data into 1000 clusters [38]. Then, all reads in each query sample can be assigned to those clusters. Effectively, the clusters represent related sequences, which are called Pseudo OTUs. We compute the number of reads assigned to each Pseudo OTU for each sample to create a Pseudo OTU table: a matrix of Pseudo OTUs versus samples.

We can then train a classifier (e.g., Random Forest) on the sample-level features to predict phenotype. We show the training and testing process of the averaged sample-level embedding and the Pseudo OTU table based sample-level prediction in S3 and S4 Appendices, respectively.

Traditional sample-level classifiers.

We also train and evaluate Random Forest classifiers using 1) k-mer frequency tables, 2) OTU abundance tables and 3) ASV tables generated by Dada2 [53] to show that our proposed model can extract meaningful features on the read level, which can then be used to achieve comparable performance at the sample level.
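The Pseudo OTU construction described above can be sketched as follows. This sketch assumes the k-means centroids have already been fit on training-read embeddings and performs only the nearest-centroid assignment and counting; the text uses 1000 clusters, whereas the 4 clusters and 8-dimensional embeddings here are illustrative:

```python
import numpy as np

def pseudo_otu_row(read_embeddings, centroids):
    """Build one row of a Pseudo OTU table for a single sample.

    read_embeddings : (n_reads, Nh) embeddings from the read-level model.
    centroids       : (k, Nh) k-means centroids fit on training reads.
    Each read is assigned to its nearest centroid (squared Euclidean
    distance), assignments are counted per cluster, and the counts are
    row-sum normalized to relative abundances.
    """
    # Pairwise squared distances: (n_reads, k) via broadcasting
    d = ((read_embeddings[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    assignments = d.argmin(axis=1)
    counts = np.bincount(assignments, minlength=len(centroids))
    return counts / counts.sum()

rng = np.random.default_rng(1)
reads = rng.standard_normal((50, 8))   # 50 toy read embeddings
cents = rng.standard_normal((4, 8))    # 4 toy cluster centroids
row = pseudo_otu_row(reads, cents)
```

Stacking one such row per sample yields the Pseudo OTU table on which a downstream classifier can be trained.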
In our experiments, all the tables are first normalized by the row sum to create relative abundance tables, which are used as input to the Random Forest classifiers. We use the scikit-learn implementation of the Random Forest classifier with default parameters (number of estimators = 100).

Pretrained word2vec embedding based sample-level classifiers.

Woloszynek et al. published a pretrained Skip-Gram word2vec model that was trained on 2,262,986 full-length 16S rRNA amplicon sequences from the GreenGenes reference database for 10-mers [38]. Since each k-mer is mapped to a numerical vector, the read-level embedding is obtained by averaging all numerical vectors that correspond to 10-mers in a read. The sample-level embeddings can then be obtained using the same methods described in Embedding-based method. Downstream Random Forest classifiers trained on the embeddings may serve as additional baselines. The pretrained word2vec method can in general be classified as a k-mer-based method as well, since the word2vec model is trained to predict neighboring k-mers given a k-mer input. However, this method can leverage the information from millions of 16S rRNA reference sequences. As such, the pretrained word2vec method provides a good baseline to show how well embedded vectors obtained using this paper's attention-based method can encode information.

Data preparation for model evaluation

American Gut Project (AGP) data set.

The AGP data set used for model evaluation in this paper is a subset of data from the American Gut Project [49]. As of May 2017, the American Gut Project reported that AGP included microbial sequence data from a total of 15,096 samples from 11,336 human participants (subjects), and that number continues to grow as the project is ongoing [49]. We focus on samples from five major body sites (Ny = 5): feces, tongue, skin of hand, skin of head and nostril.
As mentioned in the American Gut Project's documentation, some bloomed organisms were contained in samples analyzed early in the American Gut Project because of increased shipping time and delay between when samples were collected and when they were put on ice. As a result, bloom sequences were removed by the American Gut Project in preprocessing. In this study, we use the latest filtered sequences and OTU table deposited at ftp://ftp.microbio.me/AmericanGut/latest as of 2018/12. All reads have been trimmed to 100 base pairs by the American Gut Project, so that T = 100 in Fig 1.

Gevers data set.

The Gevers data set used for model evaluation in this paper is a subset of an inflammatory bowel disease (IBD) data set [5] (NCBI SRA index: PRJNA237362). Sample metadata label them as being IBD or Non-IBD (Ny = 2). Here, we refer to IBD samples as CD (Crohn's Disease), and the Non-IBD ones as Not IBD (disease-negative). We merge paired reads using QIIME [54] and trim them to 160 base pairs (i.e., with the first 10 removed, the following 160 base pairs kept and the rest discarded), so that T = 160 in Fig 1. To confirm that the Read2Pheno model would be robust to a longer read, we also evaluated reads trimmed to 250 bp (see S16 Appendix).

Experimental setup for American Gut Project data set and Gevers data set.

First, we filter out samples with fewer than 10,000 reads. Then, we randomly select 161 samples per class from the American Gut Project data set and 221 samples per class from the Gevers data set as our experimental data sets, to balance the class distribution (resulting in 805 samples total in the AGP experimental data set and 442 samples in the Gevers experimental data set). The number of samples is selected based on the smallest number of samples per class after filtering for each data set. Next, we randomly select a certain number of samples per class for training and leave out the rest for testing.
For the AGP data set, 10, 80 and 150 samples per class are randomly selected for training (resulting in 50, 400 and 750 samples total, respectively). For the Gevers data set, 20, 80 and 200 samples per class are randomly selected as training data (resulting in 40, 160 and 400 samples total, respectively). For the AGP data set-based experiment used for attention interpretation, we randomly select 10 samples per class for training, resulting in 50 samples and 1,503,639 reads in total for training. The rest of the samples form the testing data set. Metadata for this experimental data set are available in S5 Appendix, where we provide additional information about the hosts such as race, sex and health status. We randomly select 10 samples per class as the candidate visualization set. For the Gevers data set-based experiment used for attention interpretation, we select 40 samples (20 from the IBD class and 20 from non-IBD) at random and collect 1,678,464 reads for training (around 42,000 reads per sample). The remaining samples (442 minus the number of training samples) are used for testing. We again randomly select 10 samples per class from the testing data set as the candidate visualization set. After we select the candidate visualization set for both attention interpretation experiments, we use the QIIME [54] implementation of the Ribosomal Database Project (RDP) [55] taxonomic classification with the GreenGenes v13.8 database to assign genus-level labels to reads in the candidate visualization set. Then, reads with less than an 80% RDP confidence score at the genus level are removed from the visualization set. Finally, in order to efficiently extract intermediate layer outputs and generate visualizations, an arbitrary subset of 100,000 reads from the qualified visualization set is randomly sampled for the final visualization and interpretation. All reads in the final visualization set have a genus-level label and a phenotype (i.e., body site or disease diagnosis) label.
For the AGP | Content Synthesis/Prediction | Life, Physical, and Social Science | null | null | null | null | null | null |
Bryan Catanzaro of NVIDIA - Conversational AI in Some Ways is the Ultimate AI Challenge
By Brent Leary, smallbiztrends.com, published 2021-10-01
https://smallbiztrends.com/2021/10/bryan-catanzaro-nvidia-conversational-ai.html

Many of you who are into gaming or serious video editing know NVIDIA as creators of the leading graphics processing technology on the market. But NVIDIA is also a leader in the areas of artificial intelligence and deep learning; specifically, in how these technologies can improve how we experience graphics, text and video synthesis, and conversational AI. Some of their work was showcased in a series of videos they've put together called I AM AI, which are a compelling look at what is (and what will be) available to us to improve how we experience the world and each other. And recently I had the opportunity to have a LinkedIn Live conversation with Bryan Catanzaro, Vice President, Applied Deep Learning Research at NVIDIA, to hear more about their work with AI to reimagine how we experience sights and sounds.

Below is an edited transcript of a portion of our conversation. Click on the embedded SoundCloud player to hear the full conversation. Make sure to watch the embedded clips as they help to frame our conversation.

Brent Leary: That voice in that video sounded like a real human being to me. You're used to hearing Alexa and Siri, and before that, you know, we don't even want to talk about the voices before that, but that one really sounded like a human being with human inflection and some depth. Is that the thing that we're looking at when you talk about reinventing graphics and reinventing voice technology, and using newer technology, including AI and deep learning, to not only change the look of graphics but change the feel and sound of a machine to make it sound more like one of us?
Bryan Catanzaro: I should make sure that you understand that although that voice was synthesized, it was also closely directed. So I wouldn't say that that was a push-button speech synthesis system, like you might use when you talk with a virtual assistant. Instead, it was a controllable voice that our algorithms allow the producers of the video to create. And one of the ways that they do that is by modeling the inflection and the rhythm and the energy that they want a particular part of the narration to have. And so I would say it's not just a story about AI getting better, but it's also a story about how humans work more closely with AI to build things, and having the ability to make synthetic voices that are controllable in this way, I think, opens up new opportunities for speech synthesis in entertainment and the arts. I think that's exciting, but it's something that you and your audience should understand was actually very closely directed by a person.

Now, of course, we're hard at work on algorithms that are able to predict all of that humanity there, the rhythm, the inflection, the pitch. And I think that we are going to see some pretty amazing advances in that over the next few years, where we can have a fully push-button speech synthesis system that has the right inflection to go along with the meaning of the text, because when you speak, a lot of the meaning is conveyed through the inflection of your voice, not just the meaning of the words that you choose. And if we have models that are able to understand the meaning of texts, like some of these amazing language models that I was referring to earlier, we should be able to use those to direct speech synthesis in a way that has meaning. And that's something that I'm very excited about.

It's interesting. I feel that we have kind of a cultural bias, maybe it's specific to the United States, I'm not sure, but we have this cultural bias that computers can't speak in a human-like way.
And maybe it comes somewhat from Star Trek: The Next Generation, where Data was like an incredible computing machine, and he could solve any problem and invent new theories of physics, but he could never speak in quite the same way that a human could. Or maybe it traces back to, you know. Brent Leary: Spock, maybe. Bryan Catanzaro: It was off-putting, like his voice was creepy, you know. And so we have 50 years, several generations of culture telling us that a computer can't speak in a human-like way. And I actually just think that's not the case. I think we can make a computer speak in a more human-like way, and we will. And I also think that the benefits of that technology are going to be pretty great for all of us. Brent Leary: The other thing that stood out in that clip was Amelia Earhart, with her picture seeming to come to life. Can you talk about that? I'm guessing that's part of reinventing graphics using AI. Bryan Catanzaro: Yeah, that's right. NVIDIA Research has been really involved in a lot of technologies to basically synthesize videos and synthesize images using artificial intelligence. And that's one example. You saw one where the neural network was colorizing an image, sort of giving us new ways of looking at the past. And when you think about that, you know, what's involved in colorizing an image? The AI needs to understand the contents of the image in order to assign possible colors to them. Like, for example, grass is usually green, but if you don't know where the grass is, then you shouldn't color anything green, and traditional approaches to colorizing images were, I would say, a little bit risk averse. But as the AI gets better at understanding the contents of an image and what objects are there and how the objects relate to each other, then it can do a much better job of assigning possible colors to the image that kind of brings it to life. That's one example, this image colorization problem.
But I think in that video, we saw several other examples where we were able to take images and then animate them in various ways. Visual Conditional Synthesis: One of the technologies we've been really interested in is called conditional video synthesis, where you are able to create a video based on sort of a sketch. For something like this, what you would do is use pose recognition that analyzes the structure of objects. For example, a face: here's the eyes and here's the nose. It then assigns positions and sizes to the object, and that becomes kind of a cartoon, like a child might draw with a stick figure. And then what you do is send that into another routine that animates that stick figure and makes the person move their head or smile or talk. If we want to animate a person speaking a certain text, we can make a model that predicts how their stick-figure model is going to evolve as the person is speaking. And then once we have that kind of animated stick-figure drawing that shows how the person should move, then we put it through a neural network that synthesizes a video from that, and goes from the initial image that has the appearance of the person and the background and so forth, and then animates it via this sort of stick-figure animation to make the video. And we call that conditional video generation, because there are many different videos that you could produce from the same stick figure. And so what we want to do is choose one that seems plausible, conditioned on some other information, like maybe the text that the person is speaking, or maybe some sort of animation that we want to create. And conditional video generation is a very powerful idea, and it's something that I think over time will evolve into a new way of generating graphics, a new way of rendering and creating graphics.
Brent Leary: There is even a piece of that video where the person basically said, draw this, and it actually started getting drawn. Bryan Catanzaro: Right. The power of deep learning is that it's a very flexible way of mapping from one space to another. And so in that video, we're seeing a lot of examples of that. And this is another example, but from the point of view of the AI technology they're all similar, because what we're doing is trying to learn a mapping that goes from X to Y. And in this case, we're trying to learn a mapping that goes from a text description of the scene to a stick figure, a cartoon of that scene. Let's say I said a lake surrounded by trees in the mountains. I want the model to understand that mountains go in the background and they have a certain shape. And then the trees go in the foreground, and then right in the middle, usually there's going to be a big lake. It's possible to train a model based on say a thousand or a million images of natural landscapes, where you have annotations that show what the contents of these images are. Then you can train the model to go the other way and say, given the text, can you create a sort of stick-figure cartoon of what the scene should look like? Where do the mountains go? Where do the trees go? Where does the water go? And then once you have that stick figure, then you can send it into a model that elaborates that into an image. And so that's what you saw in that video. Digital Avatars and Zoom Calls: Watch this short video of how this technology will be used to make Zoom calls a much better experience in the near future. This scenario has a guy being interviewed for a job via a Zoom call. Brent Leary: What was cool about that is, at the end, he said that image of him was generated from one photo of him; and it was his voice. On the screen you could see the movement of the mouth.
The audio quality is great, and he's sitting in a coffee shop, where there could be lots of sound going on, but we didn't hear any of that sound. Bryan Catanzaro: Yeah, well, we were really proud of that demo. I should also note that that demo won best in show at the SIGGRAPH conference this year, which is the biggest graphics conference in the world. That model was a generalized video synthesis model. We were talking earlier about how you can take a kind of a stick-figure representation of a person and then animate it. Well, one of the limitations of models in the past is that you had to train an entirely new model for every situation. So let's say if I'm at home, I have one model. If I'm in the coffee shop with a different background, I need another model. Or if you are wanting to do this yourself, you would need one model for yourself in this place, another model for yourself in another place. Every time you create one of these models, you have to capture a dataset in that location, with maybe that set of clothes or those glasses on or whatever, and then spend a week on a supercomputer training a model, and that's really expensive, right? So most of us could never do that. That would really limit the way that this technology could be used. I think the technical innovation behind that particular animation was that they came up with a generalized model that could work with basically anyone. You just have to provide one picture of yourself, which is cheap enough. Anybody can do that, right? And if you go to a new location or you're wearing different clothes or glasses, or whatever, that day, you can just take a picture. And then the model, because it's general, is able to resynthesize your appearance just using that one photo as a reference. I think that's pretty exciting. Now later on in that video, actually, they switched to a speech synthesis model as well.
So what we heard in that clip was actually the main character speaking with his own voice, but later on things in the coffee shop get so noisy that he ends up switching over to text. And so he's just typing, and the audio is being produced by one of our speech synthesis models. I think giving people the opportunity to communicate in new ways only helps bring people closer together. Brent Leary: Conversational AI, how is that going to change how we communicate and collaborate in the years to come? Bryan Catanzaro: The primary way humans communicate is through conversation, just like you and I are having right now, but it's very difficult for humans to have a meaningful conversation with a computer, for a number of reasons. One is that it doesn't feel natural, right? Like if it sounds like you're speaking to a robot, that's a barrier that inhibits communication. It doesn't look like a person, it doesn't react like a person, and obviously computers these days, you know, most of the systems that you and I have interacted with, don't understand what humans can understand. And so conversational AI in some ways is the ultimate AI challenge. In fact, you may be familiar with the Turing test. Alan Turing, who is considered by many to be the father of artificial intelligence, set conversational AI as the end goal of artificial intelligence. Because if you have a machine that's able to intelligently converse with a human, then you've basically solved any kind of intelligence question that you can imagine, because any information that humans have, any wisdom, any idea that humans have created over the past many thousand years, has all been expressed through language. And so that means language is a general enough way. It's obviously the only way for humans, really, to communicate complicated ideas.
And if we're able to make computers that are able to understand and communicate intelligently, and with low friction, so it actually feels like you're interacting with a person, then a lot of problems I think we'll be able to solve. I think conversational AI is going to continue to be a focus of research for the entire industry for a long time. I think it is as deep a subject as all of human understanding and knowledge. If you and I were having a podcast on, let's say, Russian literature, there would be a lot of specialist ideas that someone with a PhD in Russian literature would be able to talk about better than I would, for example, right? So even amongst humans, our capabilities in various subjects are going to differ. And that's why I think conversational AI is going to be a challenge that continues to engage us for the foreseeable future, because it really is a challenge to understand everything that humans understand. And we aren't close to doing that. This is part of the One-on-One Interview series with thought leaders. The transcript has been edited for publication. If it's an audio or video interview, click on the embedded player above, or subscribe via iTunes or via Stitcher. | Detection and Monitoring/Content Synthesis | Unknown | null | null | null | null | null | null
|
news | Vincent Terrasi | Automated Content Generation for SEO: GPT-3 Possibilities & Pitfalls via @sejournal, @VincentTerrasi | Language models have a lot of potential for marketing. See how GPT-3 can help with automated content generation for SEO. The post Automated Content Generation for SEO: GPT-3 Possibilities & Pitfalls appeared first on Search Engine Journal. | https://www.searchenginejournal.com/?p=422215 | 2021-10-14T11:45:47Z | Since the arrival of GPT-3, content generators have multiplied the use cases for SEO. It seems a bi-monthly update to review the new progress in the field of language models is in order. First of all, at the end of 2021, the very large language models club grew significantly. Each country has tried to showcase its technologies and make them accessible through research papers and public or private demonstrations. Here are the main competitors in the race: US: OpenAI – Turing NLG. China: Wu Dao 2.0 – PanGu-Alpha. South Korea: HyperCLOVA. Israel: AI21 (Jurassic-1). Europe: Aleph Alpha. Open Source: EleutherAI. Each model has its strengths and weaknesses. To test them, many SEO software editors and SEO agencies are now trialing these models. How to Choose a GPT-3 Model? You may think that the more parameters the model has, the better it would be (Editor’s note: a parameter corresponds to a concept learned by the AI). But you would be wrong. The number one criterion is absolutely not the number of parameters, because you can obtain great results with lighter models. Rather, it is the data on which the model was trained. In fact, to be effective, a model must be able to understand a large number of disparate domains. The first thing to do is to find out how the model was trained.
For GPT-3, the following diagram helps: We can see that GPT-3 was mainly trained with data from: Webarchive between 2016 and 2019. WebText, which corresponds to data retrieved from the web. Wikipedia. Books in English (Books1). Books in other languages (Books2). Now, if we look at how the open-source models are trained, we see that the sources are quite different. Everything is based on the project The Pile, which is a data set of 825 GB of diversified English texts that are free and accessible to the public. With The Pile, we find very varied data such as books, GitHub repositories, webpages, discussion journals, and articles in medicine, physics, mathematics, computer science, and philosophy. In general, it will be important to test the language model in your language and especially on your website’s specific vocabulary. Before we look at specific SEO use cases, let’s look at the pitfalls. GPT-3 Content Generation Pitfalls for SEO: To generate quality texts that interest your users, it is important to know the pitfalls to avoid. First of all, whatever model you choose, you must provide it with quality examples as input so that it can imitate them and, above all, respect a specific type of text. If you ask a language model to generate content on “New York plumbers,” the model will head down various and often unsuitable paths: Should it create a made-up directory? Should it create content about a New York plumber? Should it create a dialogue between plumbers in Paris? Maybe a poem about plumbing in New York? In short, the model will be lost. Second, language models do not handle duplicate content at all. Therefore, whatever text you generate, you will have to use a third-party tool to check that the model has not duplicated something it has learned – and more particularly, that the text does not already exist and that it is unique. There are many tools available to confirm whether your content is unique.
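The duplicate-content check described above can be approximated with a simple shingle-overlap score. This is a hedged sketch, not one of the commercial tools the article alludes to; the shingle size and similarity threshold below are illustrative assumptions:

```python
def shingles(text, n=3):
    # Lowercase word n-grams ("shingles") of the text.
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=3):
    # Jaccard overlap of the two shingle sets: 0.0 = disjoint, 1.0 = identical.
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def looks_duplicated(generated, corpus, threshold=0.5):
    # Flag generated text that overlaps too heavily with any known document,
    # e.g. text the model may have memorized from its training data.
    return any(jaccard(generated, doc) >= threshold for doc in corpus)
```

If a generated text is flagged, the remedy suggested in the article applies: simply regenerate it.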
If it is not, simply regenerate the content. In addition, content generation templates do not optimize text for search at all. Again, they are trained on a wide variety of sources, so you’ll have to guide them with all the semantic tools that exist on the market. You can also ask them to emphasize keywords, and to explain your concepts in more detail. Finally, the model can invent data. Indeed, models have a creativity setting. If the model is set to allow high creativity, sometimes it can invent characteristics for an object, for example, which can generate inconsistencies in your texts. Content Generation Use Cases for SEO: For the past eight months, I have been testing many different content generation use cases. With free templates like GPT-Neo and GPT-J, we have had very good results on the following use cases: Extraction of concepts (extraction of places, people, etc.). Extraction of questions from a text. Answers to questions based on a text. Generation of short texts in English only, in order to create titles or meta descriptions. On the other hand, with a paid model like GPT-3 with accessible pricing, we have not only managed to generate long and specific texts but also to create sophisticated textual summaries. The image is not easy to read but shows the input examples, which are given in bold, then the text generated by the model, which is in italic. Here, the goal is to create an incentive text to make people want to buy a product. GPT-3 perfectly understands the required tone and length of the text and includes the words to be used. The text is generated in less than 10 seconds! Here is the generated text: Give in to your childhood dreams. Do you still remember your first video game? Thanks to technological progress, you can relive them all over again.
You can play with another player on the road via the Nintendo DS, PlayStation, or Xbox consoles! With the Nintendo Switch, try out fun games like Mario Kart 8 Deluxe – rise up against your rivals and enjoy magical adventures with Mario, Zelda, and other characters. Enjoy an incredible range of AAA games; whether you prefer fighting games or adventure games or multiplayer games, PS4 offers PS Plus memberships to match your needs! Xbox also features unlimited access to entertainment apps like Netflix and Youtube for little money. Whether you choose PS4, xbox or the new Nintendo Switch, it will be easy to connect with family or friends via applications such as Skype for a great time with your friends. Make sure not to miss out on this opportunity! Of course, proofreading is still required to improve the text, but this technology saves a lot of time and allows humans to focus on high value-added problems. Let’s move on to two revolutions in content generation. Two Revolutions for SEO: Before, GPT-3 might not have been able to generate specific content because it was not trained to do so, as we saw earlier. Now, you have the ability to re-train the model. Just give it between 20 and 100 examples with your instructions as input and the ideal completion you want it to produce as output: {"prompt":"CTA:\n\nsubject: smartphone\n\n words: smartphone, Samsung, mobile phones, Xiaomi, Samsung, Apple, Huawei, Sony, iPhone, intelligent mobile phones, Android\n\n ","completion":"Looking for a new smartphone? You've shortlisted a Samsung Galaxy or a Huawei model, but you haven't made a decision yet? Choose a smartphone in our product range and enjoy the best of the new mobile phones at an unbeatable price! You're guaranteed to find a smartphone at the best price, so don't miss out! It's never easy to find the best option among a multitude of smartphones: Xiaomi, Samsung, Apple, Huawei, Sony, or even Honor.
Smartphones - intelligent mobile phones - aren't just synonymous with Apple's iPhone, but include many models. Smartphones using the Android operating system are the most common."} The operation is time-consuming, but then you have an optimized template that fits your use case, especially if you need to use a specific vocabulary for a particular niche, industry, or theme when optimizing your SEO. This takes only two lines of Python code. The most time-consuming part is, of course, creating this example file. Finally, let’s move on to the last topic I was particularly excited about this month: code generation! In fact, a new technology has been released where we give instructions and the new OpenAI Codex engine is able to generate Python code to solve our problems. Let’s start by pointing out that these are simple problems: it cannot replace developers, because we would need to provide the AI with all the code set up as well as all the technical constraints. On the other hand, from a pedagogical point of view and especially in a no-code approach, it is great to be able to ask it to connect to a data source (Mysql, Excel, CSV, API, etc.) and generate the right views in a few seconds. Here’s a mini-example where I fetch the NASA log file for the day of August 1, 1995, and ask for a bar graph with the total number of URLs visited per hour. Then, with a simple text editor, you can see the result by copying and pasting the code. In order to take the no-code concept even further, I’m preparing a web application where everything will be driven by text. The only limit in the use of language models in SEO is your imagination.
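The NASA log analysis described above can be reproduced by hand in a few lines, which is the kind of code Codex might plausibly emit for that prompt. This is a hedged sketch: it assumes the log is in Common Log Format (true of the well-known 1995 NASA HTTP dataset), and the resulting counts are what would feed the bar chart:

```python
import re
from collections import Counter

# Common Log Format line, e.g.:
# host - - [01/Aug/1995:00:00:07 -0400] "GET /images/ksclogo.gif HTTP/1.0" 200 1204
TS = re.compile(r"\[(\d{2})/(\w{3})/(\d{4}):(\d{2}):")

def hits_per_hour(lines):
    # Count requests per hour of day; group(4) is the two-digit hour field.
    counts = Counter()
    for line in lines:
        m = TS.search(line)
        if m:
            counts[int(m.group(4))] += 1
    return counts
```

Plotting is then one call away (e.g. `matplotlib`'s `plt.bar(counts.keys(), counts.values())`).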
You can certainly create an entire SEO dashboard this way by breaking down each of the views you want, step by step. Language models still have a lot of surprises in store, and there are a lot of new uses coming for marketing. Featured Image: Vector Juice/Shutterstock | Content Creation/Process Automation | Arts, Design, Entertainment, Sports, and Media/Management | null | null | null | null | null | null
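The "20 to 100 examples" fine-tuning workflow discussed in the article boils down to writing a JSONL file of prompt/completion pairs and handing it to the fine-tuning endpoint. A minimal sketch (the file name and sample pair are illustrative; the CLI command in the comment is the 2021-era OpenAI interface and may have changed since):

```python
import json

def write_finetune_file(pairs, path="finetune.jsonl"):
    # One JSON object per line, in exactly the {"prompt": ..., "completion": ...}
    # shape shown in the article's smartphone CTA example.
    with open(path, "w", encoding="utf-8") as f:
        for prompt, completion in pairs:
            f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
    return path

pairs = [("CTA:\n\nsubject: smartphone\n\n", "Looking for a new smartphone? ...")]
write_finetune_file(pairs)
# The "two lines" then amount to uploading the file and launching the job,
# e.g. with the 2021-era CLI: openai api fine_tunes.create -t finetune.jsonl -m davinci
```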
|
news | Kyle Wiggers | Microsoft and Nvidia team up to train one of the world’s largest language models | Microsoft and Nvidia claim to have trained one of the world's largest natural language models, containing 530 billion parameters. | https://venturebeat.com/2021/10/11/microsoft-and-nvidia-team-up-to-train-one-of-the-worlds-largest-language-models/ | 2021-10-11T14:38:37Z | Microsoft and Nvidia today announced that they trained what they claim is the largest and most capable AI-powered language model to date: Megatron-Turing Natural Language Generation (MT-NLG). The successor to the companies' Turing NLG 17B and Megatron-LM models, MT-NLG contains 530 billion parameters and achieves unmatched accuracy in a broad set of natural language tasks, Microsoft and Nvidia say, including reading comprehension, commonsense reasoning, and natural language inference. "The quality and results that we have obtained today are a big step forward in the journey towards unlocking the full promise of AI in natural language. The innovations of DeepSpeed and Megatron-LM will benefit existing and future AI model development and make large AI models cheaper and faster to train," Nvidia researchers Paresh Kharya and Ali Alvi wrote in a blog post. "We look forward to how MT-NLG will shape tomorrow's products and motivate the community to push the boundaries of natural language processing (NLP) even further. The journey is long and far from complete, but we are excited by what is possible and what lies ahead." Training massive language models: In machine learning, parameters are the part of the model that's learned from historical training data. Generally speaking, in the language domain, the correlation between the number of parameters and sophistication has held up remarkably well.
Language models with large numbers of parameters, more data, and more training time have been shown to acquire a richer, more nuanced understanding of language, for example gaining the ability to summarize books and even complete programming code. To train MT-NLG, Microsoft and Nvidia say that they created a training dataset with 270 billion tokens from English-language websites. Tokens, a way of separating pieces of text into smaller units in natural language, can be either words, characters, or parts of words. Like all AI models, MT-NLG had to train by ingesting a set of examples to learn patterns among data points, like grammatical and syntactical rules. The dataset largely came from The Pile, an 835GB collection of 22 smaller datasets created by the open source AI research effort EleutherAI. The Pile spans academic sources (e.g., Arxiv, PubMed), communities (StackExchange, Wikipedia), code repositories (Github), and more, which Microsoft and Nvidia say they curated and combined with filtered snapshots of the Common Crawl, a large collection of webpages including news stories and social media posts. Training took place across 560 Nvidia DGX A100 servers, each containing 8 Nvidia A100 80GB GPUs. When benchmarked, Microsoft says that MT-NLG can infer basic mathematical operations even when the symbols are badly obfuscated. While not extremely accurate, the model seems to go beyond memorization for arithmetic and manages to complete tasks containing questions that prompt it for an answer, a major challenge in NLP. It's well-established that models like MT-NLG can amplify the biases in data on which they were trained, and indeed, Microsoft and Nvidia acknowledge that the model picks up stereotypes and biases from the [training] data.
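The article notes that tokens can be words, characters, or parts of words. A toy illustration of those three granularities (real systems such as BPE learn their subword vocabulary from data, so the fixed vocabulary and greedy segmentation below are illustrative assumptions only):

```python
def word_tokens(text):
    # Whitespace split: one token per word.
    return text.split()

def char_tokens(text):
    # One token per character.
    return list(text)

def crude_subwords(word, vocab):
    # Greedy longest-match segmentation against a fixed vocabulary, a crude
    # stand-in for learned subword schemes like BPE; unseen spans fall back
    # to single characters.
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab or j == i + 1:
                pieces.append(word[i:j])
                i = j
                break
    return pieces
```

Counting the output of a tokenizer like this over a corpus is how figures such as "270 billion tokens" are arrived at.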
That's likely because a portion of the dataset was sourced from communities with pervasive gender, race, physical, and religious prejudices, which curation can't completely address. In a paper, the Middlebury Institute of International Studies' Center on Terrorism, Extremism, and Counterterrorism claims that GPT-3 and similar models can generate informational and influential text that might radicalize people into far-right extremist ideologies and behaviors. A group at Georgetown University has used GPT-3 to generate misinformation, including stories around a false narrative, articles altered to push a bogus perspective, and tweets riffing on particular points of disinformation. Other studies, like one published by Intel, MIT, and Canadian AI initiative CIFAR researchers in April, have found high levels of stereotypical bias from some of the most popular open source models, including Google's BERT and XLNet and Facebook's RoBERTa. Microsoft and Nvidia claim that they're committed to "working on addressing [the] problem" and encourage "continued research to help in quantifying the bias of the model." They also say that any use of Megatron-Turing in production must ensure that proper measures are put in place to mitigate and minimize potential harm to users, and follow tenets such as those outlined in Microsoft's Responsible AI Principles. "We live in a time where AI advancements are far outpacing Moore's law. We continue to see more computation power being made available with newer generations of GPUs, interconnected at lightning speeds. At the same time, we continue to see hyperscaling of AI models leading to better performance, with seemingly no end in sight," Kharya and Alvi continued.
"Marrying these two trends together are software innovations that push the boundaries of optimization and efficiency." The cost of large models: Projects like MT-NLG, AI21 Labs' Jurassic-1, Huawei's PanGu-Alpha, Naver's HyperCLOVA, and the Beijing Academy of Artificial Intelligence's Wu Dao 2.0 are impressive from an academic standpoint, but building them doesn't come cheap. For example, the training dataset for OpenAI's GPT-3, one of the world's largest language models, was 45 terabytes in size, enough to fill 90 500GB hard drives. AI training costs dropped 100-fold between 2017 and 2019, according to one source, but the totals still exceed the compute budgets of most startups. The inequity favors corporations with extraordinary access to resources at the expense of small-time entrepreneurs, cementing incumbent advantages. For example, OpenAI's GPT-3 required an estimated 3.14 x 10^23 floating point operations of compute during training. In computer science, FLOPS (floating point operations per second) are a measure of raw processing performance typically used to compare different types of hardware. Assuming OpenAI reserved 28 teraflops (28 trillion floating point operations per second) of compute across a bank of Nvidia V100 GPUs, a common GPU available through cloud services, it'd take $4.6 million for a single training run. One Nvidia RTX 8000 GPU with 15 teraflops of compute would be substantially cheaper, but it'd take 665 years to finish the training. A Synced report estimated that a fake news detection model developed by researchers at the University of Washington cost $25,000 to train, and Google spent around $6,912 to train a language model called BERT that it used to improve the quality of Google Search results. Storage costs also quickly mount when dealing with datasets at the terabyte or petabyte scale.
To take an extreme example, one of the datasets accumulated by Tesla's self-driving team, 1.5 petabytes of video footage, would cost over $67,500 to store in Azure for three months, according to CrowdStorage. The effects of AI and machine learning model training on the environment have also been brought into relief. In June 2020, researchers at the University of Massachusetts at Amherst released a report estimating that the amount of power required for training and searching a certain model involves the emissions of roughly 626,000 pounds of carbon dioxide, equivalent to nearly 5 times the lifetime emissions of the average U.S. car. OpenAI itself has conceded that models like Codex require significant amounts of compute, on the order of hundreds of petaflops per day, which contributes to carbon emissions. In a sliver of good news, the cost of FLOPS and basic machine learning operations has been falling over the past few years. A 2020 OpenAI survey found that since 2012, the amount of compute needed to train a model to the same performance on classifying images in a popular benchmark, ImageNet, has been decreasing by a factor of two every 16 months. Other recent research suggests that large language models aren't always more complex than smaller models, depending on the techniques used to train them.
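The back-of-the-envelope training-time math in the article is easy to reproduce. A hedged sketch: the total-compute figure is the article's estimate for GPT-3 (roughly 3.14 x 10^23 floating point operations), and real sustained throughput is typically well below a GPU's peak teraflops:

```python
def training_years(total_flops, sustained_flops_per_sec):
    # Wall-clock years needed to burn through a training budget at a fixed rate.
    seconds = total_flops / sustained_flops_per_sec
    return seconds / (3600 * 24 * 365)

GPT3_TRAINING_FLOPS = 3.14e23   # article's estimated total training compute
RTX_8000_FLOPS = 15e12          # ~15 teraflops for a single RTX 8000

# Comes out to roughly 664 years on one card, in line with the
# article's "665 years" figure (the difference is rounding).
print(round(training_years(GPT3_TRAINING_FLOPS, RTX_8000_FLOPS)))
```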
These tasks are usually very structured and can have their own weaknesses, so while they help our field move forward in some ways, they can also constrain us. Large models perform well on these tasks, but whether these tasks can ultimately lead us to any true language understanding is up for debate.VentureBeatVentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:up-to-date information on the subjects of interest to youour newslettersgated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn Morenetworking features, and moreBecome a member | Content Synthesis/Decision Making/Prediction | Unknown | null | null | null | null | null | null |
|
news | PR Newswire | TigerGraph Wins Rising Star Award at 2021 CybersecAsia Readers' Choice Awards | TigerGraph, provider of the leading graph analytics platform, today announced that it has been named a winner in the Rising Star Award category of the 2nd... | https://finance.yahoo.com/news/tigergraph-wins-rising-star-award-070000667.html | https://s.yimg.com/ny/api/res/1.2/F5HJIH6YESsDe61.qw.pgQ--/YXBwaWQ9aGlnaGxhbmRlcjt3PTEyMDA7aD0yNjA-/https://s.yimg.com/uu/api/res/1.2/NZPyqyhNUfoEg0SQpCKn_w--~B/aD00MDA7dz0xODQ2O2FwcGlkPXl0YWNoeW9u/https://media.zenfs.com/en/prnewswire.com/0b7c995abcf19d715818aa863d496b58 | 2021-10-29T07:00:00Z | SINGAPORE, Oct. 29, 2021 /PRNewswire/ -- TigerGraph, provider of the leading graph analytics platform, today announced that it has been named a winner in the Rising Star Award category of the 2nd CybersecAsia Readers' Choice Awards 2021, an industry award that recognizes Asia Pacific leaders in cybersecurity, highlighting major advances and innovation made in the last two years. The Rising Star Award goes to cybersecurity providers that are newer to the Asia Pacific market, with early demonstration of promising and innovative solutions and services. "TigerGraph is proud to be named a rising star by readers of CybersecAsia. Gartner predicts dramatic growth in the use of graph databases by 2025, when 80 percent of data and analytics innovations will be made using graph technology.
This accolade is a nod to our role as the vanguard of all things graph, supporting artificial intelligence (AI) and machine learning applications in cybersecurity threat and fraud detection, amongst the myriad of industry uses for graph database analytics," said Joseph Lee, vice president of Asia Pacific and Japan, TigerGraph. As the world's fastest and most scalable graph platform, TigerGraph can traverse data from multiple data sources in fractions of a second to perform deep analytics in real-time, detecting anomalies, identifying malicious patterns of behavior, and training AI to prevent cybersecurity attacks at internet scale. Victor Ng, Editor-in-Chief, CybersecAsia, said: "Graph database is an emerging technology that is expected to gain momentum in Asia Pacific. TigerGraph is a company to watch as it takes the lead to advocate the use of graph algorithms to support advanced analytics and machine learning applications, creating business use cases across the region including advanced cyber-threat analytics." TigerGraph also supports enterprises today in driving deeper insights for rapid decision making as they grapple with the unprecedented scale and complexity of distributed data across areas such as Anti-Money Laundering (AML), supply chain management, network resource optimization and customer experience. About TigerGraph: TigerGraph is a platform for advanced analytics and machine learning on connected data. Based on the industry's first and only distributed native graph database, TigerGraph's proven technology supports advanced analytics and machine learning applications such as fraud detection, anti-money laundering (AML), entity resolution, customer 360, recommendations, knowledge graph, cybersecurity, supply chain, IoT, and network analysis. The company is headquartered in Redwood City, California, USA.
Start free with tigergraph.com/cloud. 2021 CybersecAsia Readers' Choice Awards: The 2nd CybersecAsia Awards recognizes Asia Pacific's leaders in cybersecurity for the critical roles played and the huge advances and innovations made over the last two years. With the onslaught of COVID-19 and the rise of the digital economy, users are accessing business applications, corporate networks and sensitive data 'anytime, anywhere'. The increasingly broad threat surface brought on by remote working, cloud computing and the explosive growth of data has been a major challenge for organizations across all industry sectors in Asia Pacific. The 2021 CybersecAsia Readers' Choice Awards marks another inflection point for the region, as government bodies, solution providers and end-user organizations collaborate to mitigate the risks brought about by increasingly sophisticated cyber-threats, financially motivated cybercriminals, state-sponsored bad actors, and intentional or unintentional internal threats. SOURCE TigerGraph | Detection and Monitoring/Prediction | Computer and Mathematical | null | null | null | null | null | null
news | Soujanya Konka | Migrate to an Amazon Redshift Lake House Architecture from Snowflake | The need to derive meaningful and timely insights increases proportionally with the amount of data being collected. Data warehouses play a key role in storing, transforming, and making data easily accessible to enable a wide range of use cases, such as data mining, business intelligence (BI) and reporting, and diagnostics, as well as predictive, prescriptive, […] | https://aws.amazon.com/blogs/big-data/migrate-to-an-amazon-redshift-lake-house-architecture-from-snowflake/ | 2021-10-18T18:54:51Z | The need to derive meaningful and timely insights increases proportionally with the amount of data being collected. Data warehouses play a key role in storing, transforming, and making data easily accessible to enable a wide range of use cases, such as data mining, business intelligence (BI) and reporting, and diagnostics, as well as predictive, prescriptive, and cognitive analysis. Several new features of Amazon Redshift address a wide range of data requirements and improve the performance of extract, load, and transform (ELT) jobs and queries: for example, concurrency scaling, the new RA3 instance types, elastic resize, materialized views, federated query (which allows you to query data stored in your Amazon Aurora or Amazon Relational Database Service (Amazon RDS) Postgres operational databases directly from Amazon Redshift), and the SUPER data type, which can store semi-structured data or documents as values. The new distributed and hardware-accelerated cache with AQUA (Advanced Query Accelerator) for Amazon Redshift delivers up to 10 times more performance than other cloud warehouses. The machine learning (ML) based self-tuning capability to set sort and distribution keys for tables significantly improves query performance, a task that was previously handled manually.
For the latest feature releases for AWS services, see What's New with AWS? To take advantage of these capabilities and future innovation, you need to migrate from your current data warehouse, like Snowflake, to Amazon Redshift, which involves two primary steps: migrate raw, transformed, and prepared data from Snowflake to Amazon Simple Storage Service (Amazon S3); then reconfigure data pipelines to move data from sources to Amazon Redshift and Amazon S3, which provide a unified, natively integrated storage layer of our Lake House Architecture. In this post, we show you how to migrate data from Snowflake to Amazon Redshift. We cover the second step, reconfiguring pipelines, in a later post. Solution overview: Our solution is designed in two stages, as illustrated in the following architecture diagram. The first part of our Lake House Architecture is to ingest data into the data lake. We use AWS Glue Studio with AWS Glue custom connectors to connect to the source Snowflake database, extract the tables we want, and store them in Amazon S3. To accelerate extracting business insights, we load the frequently accessed data into an Amazon Redshift cluster. The infrequently accessed data is cataloged in the AWS Glue Data Catalog as external tables that can be easily accessed from our cluster. For this post, we consider three tables: Customer, Lineitem, and Orders, from the open-source TPCH_SF10 dataset. An AWS Glue ETL job, created by AWS Glue Studio, moves the Customers and Orders tables from Snowflake into the Amazon Redshift cluster, and the Lineitem table is copied to Amazon S3 as an external table. A view is created in Amazon Redshift to combine internal and external datasets. Prerequisites: Before we begin, complete the steps required to set up and deploy the solution. Create an AWS Secrets Manager secret with the credentials to connect to Snowflake: username, password, and warehouse details.
For instructions, see Tutorial: Creating and retrieving a secret. Download the latest Snowflake JDBC JAR file and upload it to an S3 bucket. You will find this bucket referenced as SnowflakeConnectionbucket in the CloudFormation step. Identify the tables in your Snowflake database that you want to migrate. Create a Snowflake connector using AWS Glue Studio: To complete a successful connection, you should be familiar with the Snowflake ecosystem and the associated parameters for Snowflake database tables. These can be passed as job parameters during run time. The following screenshot from a Snowflake test account shows the parameter values used in the sample job. The following screenshot shows the account credentials and database from Secrets Manager. To create your AWS Glue custom connector for Snowflake, complete the following steps: On the AWS Glue Studio console, under Connectors, choose Create custom connector. For Connector S3 URL, browse to the S3 location where you uploaded the Snowflake JDBC connector JAR file. For Name, enter a logical name. For Connector type, choose JDBC. For Class name, enter net.snowflake.client.jdbc.SnowflakeDriver. Enter the JDBC URL base in the following format: jdbc:snowflake://<snowflakeaccountinfo>/?user=${Username}&password=${Password}&warehouse=${warehouse}. For URL parameter delimiter, enter &. Optionally, enter a description to identify your connector. Choose Create connector. Set up a Snowflake JDBC connection: To create a JDBC connection to Snowflake, complete the following steps: On the AWS Glue Studio console, choose Connectors. Choose the connector you created. Choose Create connection. For Name and Description, enter a logical name and description for your reference. For Connection credential type, choose default. For AWS Secret, choose the secret created as a part of the prerequisites. Optionally, you can specify the credentials in plaintext format. Under Additional options, add the following key-value pairs: Key db with the Snowflake database
name; Key schema with the Snowflake database schema; Key warehouse with the Snowflake warehouse name. Choose Create connection. Configure other resources and permissions using AWS CloudFormation: In this step, we create additional resources with AWS CloudFormation, which include an Amazon Redshift cluster, AWS Identity and Access Management (IAM) roles with policies, an S3 bucket, and AWS Glue jobs to copy tables from Snowflake to Amazon S3 and from Amazon S3 to Amazon Redshift. Sign in to the AWS Management Console as an IAM power user, preferably an admin user. Choose your Region as us-east-1. Choose Launch Stack. Choose Next. For Stack name, enter a name for the stack, for example, snowflake-to-aws-blog. For Secretname, enter the secret name created in the prerequisites. For SnowflakeConnectionName, enter the Snowflake JDBC connection you created. For Snowflake Connection bucket, enter the name of the S3 bucket where the Snowflake connector is uploaded. For SnowflakeTableNames, enter the list of tables to migrate from Snowflake. For example, Lineitem,customers,order. For RedshiftTableNames, enter the list of the tables to load into your warehouse (Amazon Redshift). For example, customers,order. You can specify your choice of Amazon Redshift node type, number of nodes, and Amazon Redshift username and password, or use the default values. For the MasterUserPassword, enter a password for your master user, keeping in mind the following constraints: it must be 8 to 64 characters in length, and it must contain at least one uppercase letter, one lowercase letter, and one number. Choose Create stack. Run AWS Glue jobs for the data load: The stack takes about 7 minutes to complete.
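As an aside, the JDBC URL base entered during the connector setup above is plain string assembly; a minimal Python sketch (the account host below is a made-up placeholder, and the ${...} tokens are left verbatim for AWS Glue to substitute from the Secrets Manager secret):

```python
# Sketch: assemble the Snowflake JDBC URL base used by the AWS Glue custom
# connector. The ${Username}/${Password}/${warehouse} placeholders are NOT
# filled in here; Glue resolves them from the secret at run time.
def snowflake_jdbc_url(account_info: str, delimiter: str = "&") -> str:
    params = delimiter.join(
        ["user=${Username}", "password=${Password}", "warehouse=${warehouse}"]
    )
    return f"jdbc:snowflake://{account_info}/?{params}"

# Example with a hypothetical account identifier:
url = snowflake_jdbc_url("myaccount.snowflakecomputing.com")
```

This mirrors the jdbc:snowflake://<snowflakeaccountinfo>/?user=... format and the & delimiter entered in the connector form.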
After the stack is deployed successfully, perform the following actions: On the AWS Glue Studio console, under Databases, choose Connections. Select the connection redshiftconnection from the list and choose Test Connection. Choose the IAM role ExecuteGlueSnowflakeJobRole from the drop-down menu and choose Test connection. If you receive an error, verify or edit the username and password and try again. After the connection is tested successfully, on the AWS Glue Studio console, select the job Snowflake-s3-load-job. On the Action menu, choose Run job. When the job is complete, all the tables mentioned in the SnowflakeTableNames parameter are loaded into your S3 bucket. The time it takes to complete this job varies depending on the number and size of the tables. Now we load the identified tables in Amazon Redshift. Run the job s3-redshift-load-job. After the job is complete, navigate to the Amazon Redshift console. Use the query editor to connect to your cluster to verify that the tables specified in RedshiftTableNames are loaded successfully. You can now view and query datasets from Amazon Redshift. The Lineitem dataset is on Amazon S3 and queried by Amazon Redshift Spectrum. The following screenshot shows how to create an Amazon Redshift external schema that allows you to query Amazon S3 data from Amazon Redshift. Tables loaded to Amazon Redshift managed storage appear as in the following screenshot. The AWS Glue job, using the standard worker type to move Snowflake data into Amazon S3, completed in approximately 21 minutes, loading 2.089 GB overall (about 76.5 million records). The following screenshot from the Snowflake console shows the tables and their sizes, which we copied to Amazon S3. You have the ability to customize the AWS Glue worker type, worker nodes, and max concurrency to adjust distribution and workload. AWS Glue allows parallel data reads from the data store by partitioning the data on a column.
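To make the partitioned-read idea concrete, here is an illustrative pure-Python sketch (not the Glue or Spark API) of how a numeric column range is split into per-partition range predicates that can run as parallel queries; the column name and bounds are hypothetical:

```python
# Sketch: split a numeric column range into N non-overlapping range
# predicates, one per parallel reader. The last partition absorbs any
# remainder and is inclusive of the upper bound.
def partition_predicates(column, lower, upper, num_partitions):
    step = (upper - lower) // num_partitions
    bounds = [lower + i * step for i in range(num_partitions)] + [upper]
    preds = []
    for i in range(num_partitions):
        lo, hi = bounds[i], bounds[i + 1]
        op = "<=" if i == num_partitions - 1 else "<"
        preds.append(f"{column} >= {lo} AND {column} {op} {hi}")
    return preds

# Hypothetical example: 4 parallel readers over an order-key range.
preds = partition_predicates("o_orderkey", 0, 100, 4)
```

Each predicate would be appended to the source query by its own executor, which is what lets multiple Spark executors read the table concurrently.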
You must specify the partition column, the lower partition bound, the upper partition bound, and the number of partitions. This feature enables you to use data parallelism and multiple Spark executors allocated to the Spark application. This completes our migration from Snowflake to Amazon Redshift, which enables a Lake House Architecture and the ability to analyze data in more ways. We would like to take a step further and talk about features of Amazon Redshift that can help extend this architecture for data democratization and modernize your data warehouse. Modernize your data warehouse: Amazon Redshift powers the Lake House Architecture, which enables queries from your data lake, data warehouse, and other stores. Amazon Redshift can access the data lake using Redshift Spectrum. Amazon Redshift automatically engages nodes from a separate fleet of Redshift Spectrum nodes. These nodes run queries directly against Amazon S3, run scans and aggregations, and return the data to the compute nodes for further processing. AWS Lake Formation provides a governance solution for data stored in an Amazon S3-based data lake and offers a central permission model with fine-grained access controls at the column and row level. Lake Formation uses the AWS Glue Data Catalog as a central metadata repository and makes it simple to ingest and catalog data using blueprints and crawlers. The following screenshot shows the tables from Snowflake represented in the AWS Glue Data Catalog and managed by Lake Formation. With the Amazon Redshift data lake export feature, you can also save data back in Amazon S3 in open formats like Apache Parquet, to use with other analytics services like Amazon Athena and Amazon EMR. Distributed storage: Amazon Redshift RA3 gives you the flexibility to scale compute and storage independently. Amazon Redshift data is stored on Amazon Redshift managed storage backed by Amazon S3.
Distribution of datasets between cluster storage and Amazon S3 allows you to benefit from bringing the appropriate compute to the data depending on your use case. You can query data from Amazon S3 without accessing Amazon Redshift. Let's look at an example with the star schema. We can save a fact table that we expect to grow rapidly in Amazon S3, with the schema saved in the Data Catalog, and dimension tables in cluster storage. You can use views that union data from both Amazon S3 and the attached Amazon Redshift managed storage. Another model for data distribution can be based on the state of hot or cold data, with hot data in Amazon Redshift managed storage and cold data in Amazon S3. In this example, we have the datasets lineitem, customer, and orders. The customer and orders datasets are infrequently updated in comparison to lineitem. We can create an external table to read lineitem data from Amazon S3 and the schema from the Data Catalog database, and load customer and orders to Amazon Redshift tables. The following screenshot shows a join query between the datasets. It would be interesting to know the overall run statistics for this query, which can be queried from system tables. The following code gets the stats from the preceding query using svl_s3query_summary: select elapsed, s3_scanned_rows, s3_scanned_bytes, s3query_returned_rows, s3query_returned_bytes, files, avg_request_parallelism from svl_s3query_summary where query = 1918 order by query, segment; The following screenshot shows the query output. For more information about this query, see Using the SVL_QUERY_SUMMARY view. Automated table optimization: Distribution and sort keys are table properties that define how data is physically stored. These are managed by Amazon Redshift. Automatic table optimization continuously observes how queries interact with tables and uses ML to select the best sort and distribution keys to optimize performance for the cluster's workload.
To enhance performance, Amazon Redshift chooses the key and tables are altered automatically. In the preceding scenario, the lineitem table had distkey (L_ORDERKEY), the customer table had distribution ALL, and orders had distkey (O_ORDERKEY). Storage optimization: Choosing a data format depends on the data size (JSON, CSV, or Parquet). Redshift Spectrum currently supports Avro, CSV, Grok, Amazon Ion, JSON, ORC, Parquet, RCFile, RegexSerDe, Sequence, Text, and TSV data formats. When you choose your format, consider the overall data scanned and I/O efficiency, such as with a small dataset in CSV or JSON format versus the same dataset in columnar Parquet format. In this case, for smaller scans, Parquet consumes more compute capacity compared to CSV, and may eventually take around the same time as CSV. In most cases, Parquet is the optimal choice, but you need to consider other inputs like volume, cost, and latency. SUPER data type: The SUPER data type offers native support for semi-structured data. It supports nested data formats such as JSON and Ion files. This allows you to ingest, store, and query nested data natively in Amazon Redshift. You can store JSON-formatted data in SUPER columns. You can query the SUPER data type through an easy-to-use SQL extension that is powered by PartiQL. PartiQL is a SQL language that makes it easy to efficiently query data regardless of the format, whether the data is structured or semi-structured. Pause and resume: Pause and resume lets you easily start and stop a cluster to save costs for intermittent workloads. This way, you can cost-effectively manage a cluster with infrequently accessed data. You can apply pause and resume via the console, API, and user-defined schedules. AQUA: AQUA for Amazon Redshift is a large high-speed cache architecture on top of Amazon S3 that can scale out to process data in parallel across many nodes.
It flips the current paradigm of bringing the data to the compute: AQUA brings the compute to the storage layer so the data doesn't have to move back and forth between the two, which enables Amazon Redshift to run queries much faster. Data sharing: The data sharing feature seamlessly allows multiple Amazon Redshift clusters to query data located in RA3 clusters and their managed storage. This is ideal for workloads that are isolated from each other but where data needs to be shared for cross-group collaboration without actually copying data. Concurrency scaling: Amazon Redshift automatically adds transient clusters in seconds to serve sudden spikes in concurrent requests with consistently fast performance. For every 1 day of usage, 1 hour of concurrency scaling is available at no charge. Conclusion: In this post, we discussed an approach to migrate a Snowflake data warehouse to a Lake House Architecture with a central data lake accessible through Amazon Redshift. We covered how to use AWS Glue to move data from sources like Snowflake into your data lake, catalog it, and make it ready to analyze in a few simple steps. We also saw how to use Lake Formation to enable governance and fine-grained security in the data lake. Lastly, we discussed several new features of Amazon Redshift that make it easy to use, perform better, and scale to meet business demands. About the Authors: Soujanya Konka is a Solutions Architect and Analytics specialist at AWS, focused on helping customers build their ideas on the cloud. She has expertise in the design and implementation of business information systems and data warehousing solutions. Before joining AWS, Soujanya had stints with companies such as HSBC and Cognizant. Shraddha Patel is a Solutions Architect and Big Data and Analytics Specialist at AWS. She works with customers and partners to build scalable, highly available, and secure solutions in the AWS cloud.
| Unknown | Business and Financial Operations/Computer and Mathematical | null | null | null | null | null | null |
|
news | Kyle Wiggers | AI model development platform Abacus.ai lands $50M | Abacus.ai, a startup developing an AI development platform for enterprise customers, has raised $50 million in venture capital. | https://venturebeat.com/2021/10/27/ai-model-development-platform-abacus-ai-lands-50m/ | 2021-10-27T16:00:34Z | Abacus.ai, a platform creating dev tools to develop and deploy enterprise AI technologies, today announced that it raised $50 million in a Series C round led by Tiger Global with participation from Coatue, Index Partners, and Alkeon Ventures. The raise brings the company's total funding to $90.3 million to date, and CEO Bindu Reddy says it'll be used to further develop Abacus's AI technologies while growing the company's workforce. While the percentage of firms investing greater than $50 million in big data and AI initiatives reached 64.8% in 2020 (up from 39.7% in 2018), organizations of all sizes still struggle to implement AI expeditiously and successfully. About 80% of AI projects never reach deployment, according to Gartner, and those that do are only profitable about 60% of the time. Founded in 2019 by Arvind Sundararajan, Siddartha Naidu, and Reddy, Abacus provides a service for organizations to develop AI models via modules that can stream, monitor, debias, merge, store, and transform data. According to Reddy, users without advanced data science knowledge and limited budgets can use it to iterate end-to-end systems comparable to Twitter's and TikTok's content feeds and Gmail's autocomplete feature. "We have seen rapid adoption of our platform as customers generate orders of magnitude more data, move all their operations to the digital realm, and are looking to AI models to make decisions," Reddy told VentureBeat via email.
"We will soon see an inflection point in AI adoption, as it becomes easier and easier to develop models and operationalize them." AutoML: Abacus embraces elements of AutoML, or the process of automating the application of machine learning to real-world problems. AutoML covers the complete pipeline, from raw datasets to deployable machine learning models, and data science teams are increasingly adopting it to overcome blockers in their organizations. Forrester reports that 25% of data and analytics decision makers whose firms are adopting AI said that they're planning to implement AutoML software within the next year. Sixty-one percent said that they'd already implemented AutoML software or were in the process of implementing it, according to the study. Abacus conducts research and offers cloud AI services to help companies embed machine learning models into their processes. Customers pick a use case and point to their data, after which Abacus's engine creates an AI system that can be used to make and share predictions. Above: Abacus's model management dashboard. Abacus says its system applies the startup's research on generative models and neural architecture search to deal with noisy or incomplete data. It ostensibly identifies the best neural network that models a customer's proprietary dataset and use cases spanning IT operations, marketing and sales, fraud and security, and forecasting and planning. In addition, the system is good at configuring pipelines, scheduling model retraining on new data, provisioning model serving from raw data, and providing explanations for models' predictions, Reddy says. "Common enterprise AI use cases like churn modeling, lead scoring, and anomaly detection have seen exponential growth [on our platform]," she added.
"The pandemic has been great for AI companies and specifically for us." Pulling from multiple data sources: Beyond the new funding, Abacus today announced what it's calling vision AI-as-a-service, along with support for hybrid AI models that can generate predictions from language, vision, and tabular data. According to Reddy, customers can now use a combination of datasets to create models that extract intelligence from all of the available data on hand. "For example, you can predict the closing price of homes based on unstructured data like listing description and house photos along with structured tabular data including number of bedrooms, bathrooms, and more by combining all this data and using the Abacus predictive workflow to generate a hybrid predictive model that combines all the data types," Reddy explained. "This is a powerful way to extract intelligence from data." Despite competition from platforms like Amazon SageMaker, Google's Cloud AutoML, and startups such as DataRobot and H2O.ai, Abacus says that over 10,000 developers across more than 6,000 customers including 1-800-Flowers have used its products to train roughly 20,000 real-time personalization, time-series forecasting, and anomaly detection models to date. The San Francisco, California-based company currently has 45 employees and plans to expand to 80 by the end of the year. "Abacus has several vertically integrated workflows for common enterprise use cases, including natural language processing," Reddy continued. "The new money is going to be used to continue to build out more vertical use cases like computer vision and to create more horizontal platform capabilities such as machine learning and deep learning operations modules." | Process Automation/Content Synthesis | Unknown | null | null | null | null | null | null
|
news | Dataiku Bets Big on Partner Ecosystem, Appoints New SVP of Ecosystems and Alliances, David Tharp | NEW YORK--(BUSINESS WIRE)--Today Dataiku, the platform for Everyday AI, announced that David Tharp has joined the company as SVP Ecosystems and Alliances. In this role, Tharp will be responsible for driving the strategy, development, and field operations of Dataiku’s global partner program including Technology Alliances, Global Systems Integrators, and Channel Partners. “We are excited to have David on our team as we bet big on our partner strategy and build an economy of opportunity around Dat | https://www.businesswire.com/news/home/20211012005296/en/Dataiku-Bets-Big-on-Partner-Ecosystem-Appoints-New-SVP-of-Ecosystems-and-Alliances-David-Tharp | 2021-10-12T13:09:39Z | NEW YORK--(BUSINESS WIRE)--Today Dataiku, the platform for Everyday AI, announced that David Tharp has joined the company as SVP Ecosystems and Alliances. In this role, Tharp will be responsible for driving the strategy, development, and field operations of Dataiku's global partner program, including Technology Alliances, Global Systems Integrators, and Channel Partners. "We are excited to have David on our team as we bet big on our partner strategy and build an economy of opportunity around Dataiku," said Romain Fouache, COO and CRO, Dataiku. "He brings a combination of deep partner alliance experience, strategy, and leadership that we believe will turn our partnership program into a true competitive advantage: one that provides our customers flexibility in how they deploy our Everyday AI platform, while providing our partners a significant opportunity to build a business with us." I feel fortunate to join Dataiku at this important inflection point in our growth. We have already established an incredible partner ecosystem with the world's largest professional services firms and cloud computing giants.
The opportunity to unite these partnerships provides our customers an unprecedented opportunity to accelerate their AI transformations. Meeting the Dataiku leadership team and supportive investor community made this an easy career decision for me, said David Tharp, SVP Ecosystems and Alliances, Dataiku. "I am excited to evolve Dataiku's partner program to accelerate our growth with and through partners who are leading the largest AI transformations in the world. I look forward to scaling these relationships, building a team, and delivering value and innovation for our customers, partners, and all other stakeholders." Prior to Dataiku, Tharp served as the VP, Worldwide Partner Organization at Anaplan, where he led the build and scale of the partner and alliance ecosystem at the company from its early stage through IPO, to its current multibillion-dollar valuation. In addition to his role at Dataiku, Tharp serves on the board of directors for Anavate Partners and is an active investor and advisor to several early-stage companies through his membership in the IrishAngels venture fund team. He is a graduate of The Ohio State University, holds an MBA from the University of Notre Dame, and currently resides in Chicago, Illinois. Additional Resources: Learn more about the Dataiku Partner Ecosystem; Become a Dataiku Partner; Connect with Dataiku; Read the Dataiku blog; Follow Dataiku on LinkedIn; We are hiring! Join the Dataiku Team. About Dataiku: Dataiku is the world's leading platform for Everyday AI, systemizing the use of data for exceptional business results. More than 450 customers across banking, insurance, pharmaceuticals, manufacturing, retail, and more use Dataiku to elevate their people to extraordinary, arming them with the ability to make better day-to-day decisions with data. Dataiku was founded in 2013 and has garnered praise from industry analysts for leading the charge in the democratization of AI ever since.
The company most recently raised $400 million in a Series E funding round in 2021, and has grown the team to more than 750 employees worldwide. Dataiku was named to the Forbes Cloud 100 in 2019 and 2020 as well as the Forbes AI 50 in 2020 and 2021. Silicon Republic pegged Dataiku as one of the 7 data science start-ups shaking up AI and analytics in late 2020, and CB Insights named Dataiku to the AI 100 2021 as one of the most innovative startups. Dataiku is consistently named on best companies to work for lists, including lists from Forbes and Inc. | Unknown | Management/Business and Financial Operations | null | null | null | null | null | null |
||
news | Carlyle and Leading Healthcare-Focused Venture Funds Announce Strategic Growth Investment of up to $430 Million in Saama | CAMPBELL, Calif.--(BUSINESS WIRE)-- #AI--Saama announces $430M strategic growth investment by Carlyle and healthcare-focused venture funds to redefine the drug development paradigm. | https://www.businesswire.com/news/home/20211020005251/en/Carlyle-and-Leading-Healthcare-Focused-Venture-Funds-Announce-Strategic-Growth-Investment-of-up-to-430-Million-in-Saama | 2021-10-20T12:11:04Z | CAMPBELL, Calif.--(BUSINESS WIRE)--Saama Technologies, Inc. (Saama) today announced that funds led by global investment firm Carlyle (NASDAQ: CG) have made a strategic growth investment and will acquire a majority stake in the Company. A broad co-investor group with several healthcare-focused venture funds, including Amgen Ventures, Intermountain Ventures, Merck Global Health Innovation Fund (Merck GHI), McKesson Ventures, Northpond Ventures, Pfizer Ventures, and Population Health Partners, will be investing alongside Carlyle. The partnership will allow Saama to accelerate its strategic initiatives, including expanding its go-to-market capabilities and further investing in AI research and development. Saama's intelligent Life Science Analytics Cloud (LSAC) is used by over 50 pharma and biotech companies on more than 1,500 studies. As an end-to-end platform, LSAC provides a unified approach to clinical trial data management and analytics. LSAC's pre-trained, AI-embedded smart applications have the ability to learn the complex patterns among clinical data and provide predictive insights to accelerate the clinical research process across a variety of domains and therapeutic areas. The life sciences industry is experiencing a massive shift in strategic approaches to accelerating drug development as a result of the COVID-19 pandemic, and AI-driven analytics solutions like Saama's LSAC are at the forefront.
These solutions' role in successfully enabling the delivery of COVID-19 vaccines and therapies to patients in mere months was the litmus test for the future of drug development, said Suresh Katta, Founder and CEO, Saama Technologies. This investment by Carlyle and leading healthcare-focused venture funds reflects the value they see in Saama's leadership of this new era and the promise offered by our state-of-the-art solutions. The life sciences industry is at an exciting crossroads, poised to adopt and integrate sophisticated data management and analytics solutions that we believe will accelerate clinical timelines in previously unimaginable ways, said Joe Bress, a Managing Director specializing in Healthcare at Carlyle. We're excited to partner with Saama as they work to redefine the drug development paradigm. AI-driven analytics solutions are transforming the way that data is managed and therefore the way that work is done. We believe Saama can play a key role in that transformation for the life sciences industry, said Ashley Evans, a Managing Director specializing in Technology at Carlyle. As the convergence of healthcare and technology reaches an inflection point, we are excited to help Saama capture the compelling market opportunity ahead. Merck GHI sees tremendous opportunity in Saama's mission of accelerating clinical trials, said David M. Rubin, Ph.D., Managing Director at Merck GHI.
It is very exciting to be part of a world-class investment syndicate with such deep healthcare domain expertise and interest in supporting the build of scaled resources that have the potential to change the way clinical research and development is conducted. The investment in Saama is a continuation of Carlyle's long-term global commitment to both healthcare and technology, in which it has invested $17 billion and $25 billion of equity since inception, respectively, including $2 billion in healthcare technology and tech-enabled growth companies. Sullivan & Cromwell LLP acted as legal advisor to Saama Technologies. Debevoise & Plimpton LLP acted as legal advisor and Guggenheim Securities LLC as financial advisor to Carlyle. Fenwick & West LLP acted as legal advisor to the syndicate. About Saama Technologies, Inc.: Saama is the #1 AI-driven Intelligent Clinical Cloud company, enabling the life sciences industry to conduct faster and safer clinical development and regulatory programs. Today, over 50 biotech companies use Saama's award-winning Life Science Analytics Cloud (LSAC) platform on more than 1,500 studies, including many of the top 20 pharmaceutical companies. LSAC's rich applications facilitate an unprecedented, authoritative oversight of comprehensive clinical research data, enabling companies to file New Drug Applications (NDAs) more efficiently and bring drugs to market faster. Discover more at www.saama.com and follow Saama @SaamaTechInc. About Carlyle: Carlyle (NASDAQ: CG) is a global investment firm with deep industry expertise that deploys private capital across three business segments: Global Private Equity, Global Credit and Investment Solutions. With $276 billion of assets under management as of June 30, 2021, Carlyle's purpose is to invest wisely and create value on behalf of its investors, portfolio companies and the communities in which we live and invest. Carlyle employs nearly 1,800 people in 27 offices across five continents.
Further information is available at carlyle.com. Follow Carlyle on Twitter @OneCarlyle.About Merck Global Health Innovation FundMerck Global Health Innovation Fund (Merck GHI) is evolving corporate healthcare venture capital globally by utilizing their healthcare ecosystem strategy. This investment strategy connects innovative companies with complementary technologies to develop integrated healthcare solutions. Merck GHI has $500M under management and provides growth capital to emerging healthcare technology companies worldwide while leveraging the vast R&D-based, global resources of Merck. With a vision that data will be the currency in healthcare, Merck GHI invests broadly in digital health. Merck GHI invests in platform companies with proven technologies or business models where Mercks expertise and perspectives can accelerate revenue growth and enhance value creation. Since late 2010, Merck GHI has made over 50 investments in Digital Health companies. www.merckghifund.com | Content Synthesis/Decision Making | Healthcare Practitioners and Support/Life, Physical, and Social Science | null | null | null | null | null | null |
||
news | Susan Galer, SAP, Susan Galer, SAP https://www.forbes.com/sites/sap/people/susangaler/ | Can Artificial Intelligence Take The Guesswork Out Of The Customer Journey? | From shopping malls and sports arenas, to train stations and city streets, C2RO is an AI-powered video analytics platform that captures anonymized data about people's movement so organizations can improve the customer experience. | https://www.forbes.com/sites/sap/2021/10/26/can-artificial-intelligence-take-the-guesswork-out-of-the-customer-journey/ | 2021-10-26T12:30:00Z | Artificial intelligence (AI) is shedding light on one of the most examined yet least understood experiences of modern life: the customer journey. From shopping malls and sports arenas, to train stations and city streets, C2RO is an AI-powered video analytics platform that captures anonymized data about people's movement so organizations can improve the customer experience. AI-based intelligence is revealing the customer journey so companies can achieve business results never imagined before. We analyze human behaviour in physical spaces, and transfer it into actionable data, said Tim Heaney, vice president of sales at C2RO. With a fact-based understanding of the number of people coming into a space, how they move through it, with whom, where they linger, and what they touch and eventually purchase, organizations can manage physical environments more efficiently to improve the customer experience and business results. Real-time data eases commuter experience: A transportation organization used data from C2RO to improve subway train and bus terminal planning. Based on the number of people boarding and leaving trains at certain times and days, as well as which direction they're headed at terminals, transportation managers could alleviate traffic crunches, said Heaney. They could reverse escalator directions to help people more easily exit or enter terminals. 
Aligning vehicle dispatches to actual passenger traffic flows reduced wasted trips. It also means fewer people waiting in the cold or forced to stand in crowded cars and buses. Real-time data is particularly valuable for managing other operations such as shopping malls and entertainment venues during peak usage times like rush hour, holiday shopping, or performances. Companies can also use historical data for future planning. Heaney added that monitoring in-person occupancy levels has helped organizations meet social distancing mandates during the pandemic. Secure, anonymized data powers informed decisions: C2RO's platform was designed to be fully GDPR compliant. The metadata that customers receive on dashboards through a browser or mobile device is anonymized for privacy protection. Companies decide on the frequency of data updates, from real-time to hourly, daily, or weekly. Revenue growth from AI-based algorithms: There's nothing trailblazing about counting customers, Heaney acknowledged; companies have been doing this for decades. However, C2RO provides highly accurate, granular, data-driven insights based on an organization's KPIs. The data shows when someone came inside the store after looking at a promotion, said Heaney. Did they visit the promotional display, examine the item, and visit the POS? Which entrance and exit did they use? This is valuable data to help companies quickly evaluate promotions and make adjustments if something isn't working as expected. Case in point was a convenience store that used C2RO to determine the relationship between high-traffic displays, such as coffee, and other areas where product sales were lower. After discovering that 25 percent of the store's space had no traffic at all, the management made layout changes to attract customers to overlooked areas and increase sales. Many customers tell us that their physical spaces are a blind spot, and our data opens their eyes to the metrics of what's going on in their space, said Heaney. 
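The people-flow metrics described here reduce, at their simplest, to a running count over a time-sorted stream of anonymized entry/exit events. A toy sketch of the idea in Python (the event format and function name are illustrative assumptions, not C2RO's actual API):

```python
from datetime import datetime

def occupancy_timeline(events):
    """Turn a time-sorted stream of (timestamp, delta) events into
    (timestamp, occupancy) samples. delta is +1 for an entry and -1 for
    an exit; the count is clamped at zero to absorb missed detections."""
    occupancy = 0
    timeline = []
    for ts, delta in events:
        occupancy = max(0, occupancy + delta)
        timeline.append((ts, occupancy))
    return timeline

events = [
    (datetime(2021, 10, 26, 8, 0), +1),
    (datetime(2021, 10, 26, 8, 1), +1),
    (datetime(2021, 10, 26, 8, 5), -1),
]
print(occupancy_timeline(events)[-1][1])  # 1 person left in the space
```

Sampling this timeline per hour or per day is what turns raw camera events into the kind of peak-usage and social-distancing reports the article describes.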
Mall owners can quantify the number of people visiting anchor tenants and identify cross-shopping patterns in other stores. With these statistics, they can sell space at a proven premium, and help tenants drive stronger sales. SAP.iO grows market opportunities with expertise: Heaney expected C2RO's growth trajectory to continue, thanks in part to support from SAP that began during the company's participation in the data and analytics cohort of SAP.iO Foundry Berlin. SAP's brand is strong worldwide and providing us with introductions to their customers is a huge opportunity, he said. Our customer behavioural data is complementary to SAP solutions like SAP Analytics Cloud and SAP Data Warehouse Cloud. We also valued SAP's sales and marketing expertise, helping us with our product messaging, value proposition, and packaging. C2RO is available on the SAP Store. AI intelligence addresses competitive market demands: Originally founded as a cloud robotics start-up, C2RO evolved to take advantage of existing video camera infrastructure worldwide. Heaney brings over 30 years of business experience to his position, having worked extensively in telecom, IT, electro-optics, and computer vision. Besides retail and transportation, he saw potential value for C2RO in industrial manufacturing for production efficiency improvements, as well as sports and entertainment venues, where fans could time food and drink breaks to avoid peak crowds at the refreshment stand. Tough markets demand smarter, more scientific solutions to understand space management and visitor behaviours, he said. We're at an inflection point where we can train intelligent algorithms much faster and have secure servers onsite while taking advantage of the cloud. AI-based intelligence is revealing the customer journey to achieve business results never imagined before. Learn more about the SAP.iO Foundries program supporting B2B technology startups worldwide. 
| Detection and Monitoring/Decision Making/Information Retrieval Or Search | Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | MartianMan | 107° - Free Udemy Courses: Supply Chain Applications, Python, Javascript, SQL for Data Analysis, Web Development, Financial Accounting & More | Have fun learning !![Bestseller] [17h 47m] RA: Supply Chain Applications with R & Shiny: Inventory. Code=C83ABF533670C0DF064A 2 days left at this price ![4.7] [8h 16m] Digital Nomad Lifestyle: LIVE YOUR DREAM & Travel the World Code=OCTGOODNESS100 2 days left at this price ![Hot & New] [3h 6m] Project Finance Fundamentals | Infrastructure & Energy Code=OCTFALLOFFER21 1 day left at this price ![4.7] [58m] B2B Lead Generation via Personalised Sales Videos & LinkedIn Code=05FD7284AF327B8B8BA4 1 day left at this price ![4.5] [9h 29m] Introduction into Facebook Marketing & Facebook Advertising Code=OCTGOODNESS10 2 days left at this price ![New] [1h 49m] Arabic Writing Course For Beginners Code=HASSAN-ACADEMY 2 days left at this price ![3.8] [3h 50m] The JavaScript Complete Guide Course Code=2681C551866E9A0B6C16 1 day left at this price ![Hot & new] [1h 6m] Tubebuddy Masterclass - Guide to YouTube Keyword Tool Code=81FDE997AFCDBDD9A719 1 day left at this price ![4.1] [7h 20m] Python-Introduction to Data Science and Machine learning A-Z Code=3522D58ED9F8CD481912 2 days left at this price ![4.5] [1h 18m] Shopify Bootcamp: Increase Your Sales and Conversion rate Code=7DD22BB0FE9974219DE1 2 days left at this price ![4.6] [5h 43m] SQL for Data Analysis - with SQL Server Code=FREE-SQL-SERVER 2 days left at this price ![3.9] [6h 0m] Distribution Power Engineering Fundamentals Code=C03F3A83A20C90A40EAC 2 days left at this price ![New] [2h 54m] Practical Mind Mapping for Beginners with Xmind Mind Maps. Code=MIND-MAPPING 2 days left at this price ![New] [1h 0m] How to beat the Forex market quick and effective! 
Code=FREEVALUE 1 day left at this price ![4.4] [6h 2m] Complete Responsive Web Development: 4 courses in 1 Code=WEB4OCT2021 1 day left at this price ![3.9] [2h 33m] Shopify guide: The complete shopify store creation course Code=34DBF25C35E5E73DC5DB 1 day left at this price ![4.0] [4h 12m] Learn 4 Steps to Make Money Online with Affiliate Marketing! Code=7E62E7B9D725CFF77C35 1 day left at this price ![4.4] [74h 16m] Financial Accounting & Reporting Certification Code=FIN_ACC_REP_UPLATZ 1 day left at this price ![New] [ 6h 3m] Business Finance and Financial Modeling Code=BUS_FIN_MOD_UPLATZ 1 day left at this price ![New] [50h 7m] Cost & Management Accounting Certification Training 2021 Code=COST_MGMT_ACC_UPLATZ 1 day left at this price ![4.2] [12h 50m] Revenue and Pricing Analytics with Excel & Python. Code=91369D039E184BD96518 1 day left at this price ![New] [1h 45m] Build A TodoList Fast with Go, Fiber and Ionic / Angular Code=CODEBRAINS_FREE 1 day left at this price ![New] [1h 44m] Build A TodoList with Elixir, Phoenix, and Ionic Angular Code=CODEBRAINS_FREE 1 day left at this price ![New] [ 1h 35m ] Build A TodoList with NestJS and Ionic Angular Code=CODEBRAINS_FREE 1 day left at this price ![New] [1h 57m] Build A TodoList with FastAPI and Ionic Angular Code=CODEBRAINS_FREE 1 day left at this price ![New] [ 1h 45m] Build A TodoList with ASP.NET Core and Ionic Angular Code=CODEBRAINS_FREE 1 day left at this price ![New] [1h 35m] Build A TodoList with Laravel and Ionic Angular Code=CODEBRAINS_FREE 1 day left at this price ![New] [1h 40m] Build A TodoList with Java Spring Boot and Ionic Angular Code=CODEBRAINS_FREE 1 day left at this price ![4.6] [52m ] Public Speaking: A tactical approach Code=8419CB64D56009CAB00C 1 day left at this price ![4.3] [ 18h 43m] RA: Retail Management, Analytics with Excel & Python. 
Code=90316886C78336500A7A 1 day left at this price ![4.5] [9h 29m] Introduction into Facebook Marketing & Facebook Advertising Code=OCTGOODNESS10 1 day left at this price ![4.4] [48m ] Hiring Process: Talent Management Code=OCT-21 1 day left at this price ![4.3] [ 18h 0m] A Deep Dive into Statistical Forecasting(Excel & R). Code=913D2277240247573F37 1 day left at this price ![4.3] [1h 27m] Introduction to Entrepreneurship Code=654AAA1DCD4DE58682B2 1 day left at this price ![4.3] [15h 39m] RA: Retail Customer Analytics and Trade Area Modeling. Code=D0F8BE051CBA8DD86238 2 days left at this price ![3.9] [15h 45m] Ultimate Python Bootcamp For Data Science & Machine Learning Code=6070408648DC3B6E3AF0 1 day left at this price ![4.7] [12h 6m] Practical Cisco Networking Labs in Cisco Packet Tracer Code=OCT21FREE 1 day left at this price ![New] [8h 38m] SEO Training for Beginners: Learn SEO & Local SEO FAST & FUN Code=OCTGOODNESS10 1 day left at this price ![Amazon UK] Basics of PHP Language (Free with Kindle Unlimited or $0.99) 5 days left at this price ![Amazon UK] Stupid Simple Startup Growth Strategy (Free with Kindle Unlimited or $0.99) 5 days left at this price !Popular Discounted Courses[Humble Bundle] JavaScript & Web Development Bundle from $1 2 Days left at this price ![Bestseller] [84h 8m] Digital Marketing Course 2021: Be a Digital Marketing Nomad $9.99 Code=2OCT999 4 Days left at this price ![Bestseller] [9h 35m] Copywriting & SEO for Beginners: Complete Copywriting Course $9.99 Code=2OCT999 4 Days left at this price ![4.5] [26h 38m] The Complete Focus Mastery Course - Brain Concentration $10.99 Code=THANKS07 4 Days left at this price ![4.9] [14h 26m] The Complete Freelancing Course-Upwork Fiverr Home Business $10.99 Code=THANKS07 4 Days left at this price ![Bestseller] [29h 7m] The Complete Storytelling Course for Speaking & Presenting $12.99 Code=THANKS06 3 days left at this price ![4.2] [28h 46m] Complete Personal Development Personal Transformation Course 
$12.99 Code=THANKS06 1 day left at this price ![4.8] [14h 26m] Python Hacking Complete Beginner to Advanced Course $9.99 Code=PYTHONOCT21 1 day left at this price ![4.4] [60h 13m] Python, Java,C#, C++ & C Programming Practices For Beginners $9.99 Code=999OCTOBER 1 day left at this price ![4.4] [82h 38m] Complete Codecamp On C, C++, C#, Java & Python Programming! $9.99 Code=999OCTOBER 1 day left at this price ![4.5] [18h 35m] Complete Website Design Bootcamp! Front-End Web Development $9.99 Code=999OCTOBER 1 day left at this price ![4.1] [8h 5m] Practical Python Programming Practices (100 Common Projects) $9.99 Code=999OCTOBER 1 day left at this price ![Hot & new] [67h 12m] Digital Marketing Course as PROJECT & FUN: 88 DAYS CHALLENGE $9.99 Code=1OCT999 4 Days left at this price ![Hot & new] [34h 13m] SEO & Copywriting Course as PROJECT & FUN: 40 DAYS Challenge $9.99 Code=1OCT999 4 Days left at this price ! | https://www.hotukdeals.com/deals/free-udemy-courses-supply-chain-applications-python-javascript-sql-for-data-analysis-web-development-financial-accounting-more-3807895 | 2021-10-17T11:48:21Z | hotukdeals.com - The Largest Deal Community in the UKhotukdeals is a community for deal seekers. Find and share the best deals, promotional codes and vouchers from on and off the web.*So you can easily identify outgoing links on our site, we've marked them with an "*" symbol. Links on our site are monetised, it means we can keep our platform free to use, without compromising on impartiality. Whether or not we're paid doesn't change how hot a deal can potentially get - that's only up to hotukdeals members. We believe it should always be up to our community to decide which deals fly and which fall. Read more about how hotukdeals makes money.hotukdeals is a credit broker, not a lender. 
This is in relation to some products falling under Broadband and Phone Contracts, Finance & Insurance and Car & Motorcycle.hotukdeals is a trading name of Pepper Deals Ltd (FRN 798319) an Appointed Representative of Funding Falcon (FRN: 743100) who is authorised and regulated by the Financial Conduct Authority (FCA). Pepper Deals Ltd is registered England and Wales. Number 9729292. Registered office: First Floor, 2-4 Holywell Lane, Hackney, London EC2A 3ET.Note: products under: Home & Living, Electronics, Culture & Leisure, Fashion & Accessories, Family & Kids, Sports & Outdoors, Groceries, Gaming, Travel, Health & Beauty, Services, Garden & DIY are not regulated by the FCA. If you wish to make a complaint or contact us, you can use the address above, or follow our complaints procedure. | Content Synthesis/Digital Assistance | Education, Training, and Library/Business and Financial Operations | null | null | null | null | null | null |
|
news | Matt Mayo Editor | How our Obsession with Algorithms Broke Computer Vision: And how Synthetic Computer Vision can fix it | Deep Learning radically improved Machine Learning as a whole. The Data-Centric revolution is about to do the same. In this post, we'll take a look at the pitfalls of mainstream Computer Vision (CV) and discuss why Synthetic Computer Vision (SCV) is the future. | https://www.kdnuggets.com/2021/10/obsession-algorithms-broke-computer-vision.html | 2021-10-15T16:00:21Z | By Paul Pop, Co-founder and CEO at Neurolabs. Synthetic Computer Vision aims to translate what's in the Virtual world back to the Real world. (Image by author) The Current State of Computer Vision: As of today, there has been over $15B worth of investments in over 1,800 Computer Vision startups in the past 8 years, according to Crunchbase. More than 20 of these companies are currently valued above $1B, and there's a lot more to come according to Forbes. Why are these companies valued so greatly? To put it simply, they are teaching computers how to see. By doing so, they are automating tasks that have previously been accomplished using human sight. This boom followed a 2012 technology inflection point in Computer Vision, with the advent of Neural Networks: algorithms that mimic the human brain and are trained using colossal amounts of human-labelled data. Since 2012, algorithms have steadily improved and have become a match for humans in many visual tasks, for example counting objects, lip reading or cancer screening. In the 10 years that followed, everybody did their part: academia led the way with better algorithms; large companies invested in an army of humans who have diligently labelled these image datasets. Some of these efforts were even open sourced for the benefit of the community, such as ImageNet, a 14 million image dataset. Unfortunately, now as these systems are getting deployed to production, we are hitting a brick wall: The labelled data that we have is unreliable. 
A systematic study from MIT researchers of popular ML datasets found an average rate of incorrect labelling of 5.93% for ImageNet and an average of 3.4% across other datasets. There is little effort dedicated to solving the data problem. The intellectual efforts of academia are almost entirely focused on algorithm development, ignoring the fundamental need for good data; a guesstimate by Andrew Ng puts the ratio at 99% algorithm focus vs 1% data. Computer Vision algorithms don't generalise well from one domain to another: an algorithm trained to detect cars in the south of France will struggle to detect the same car in snowy Norway. Likewise, a system trained on specific cameras might fail with another camera make and model. Searching for inspiration: Already in 1946, Alan Turing suggested chess as a benchmark for computer capabilities, which has since been thoroughly researched and received a lot of media attention. A commonly accepted way to measure performance in chess is through the Elo rating system, which provides a valid comparison of player skills. The graph below shows world champions and chess game engines. Human performance has hovered around the 2800 rating for the past 50 years, and was surpassed by computers in 2010. Until the last decade, we humans designed chess algorithms to play based on rules we could design and understand. The Deep Learning revolution allowed us to break beyond human understanding, bringing a leap forward just as it did for Computer Vision. Chess engine and human Elo ratings (Image by author). As good as the progress of Deep Learning chess engines was, it has now been surpassed by the next level of chess engine: AlphaZero from DeepMind. What's more impressive is that AlphaZero did not use any human-sourced data to achieve this performance. It was built without any knowledge of historical chess games, or any human guidance for finding optimal moves. 
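The Elo comparison above has a concrete formula behind it: a rating gap maps to an expected game score, and a player's rating is nudged by the difference between actual and expected results. A minimal sketch of the standard formulas (the K-factor value is chosen arbitrarily for illustration):

```python
def elo_expected(rating_a, rating_b):
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating, expected, score, k=20):
    """New rating after a game with actual score `score` (1 win, 0.5 draw, 0 loss)."""
    return rating + k * (score - expected)

# A 2800-rated human champion against a much stronger engine:
e = elo_expected(2800, 3500)
print(round(e, 4))  # roughly 0.0175, i.e. under a 2% expected score
```

This is why Elo ratings "provide a valid comparison of player skills": equal ratings give each side an expected score of exactly 0.5, and every 400-point gap scales the odds by a factor of ten.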
AlphaZero was both the teacher and the student: it taught itself how to better play the game by competing against itself and learning through the process. AlphaZero won against Stockfish 8, the best engine at the time, without losing a single game, keeping that edge even when AlphaZero was given an order of magnitude less time to compute its next move. Considering AlphaZero's remarkable improvements, one has to wonder: can we translate its success in chess to Computer Vision? The new wave: Data-Centric AI. Within the new paradigm of Data-Centric AI, the goal is not to create better algorithms, but to increase performance by changing the data itself. Even if we disregard the hurdle of obtaining and labelling image datasets in the first place, questions still remain around the quality of the data: are we uniformly covering all possible use cases? Is the data covering edge cases? If we are to follow the path of Data-Centric Computer Vision, one must be in control of the data sourcing process. The data needs to be balanced, and we need to have a good understanding of the parameters that influence what a Computer Vision model learns. Let's take a simple example in which we look at controlling 3 such parameters: camera angle, lighting and occlusions. Can you imagine gathering a real dataset in which you have to diligently control the values of only these 3 parameters, whilst gathering 1000s of relevant images? With real data, the task is Sisyphean. How do we manage data today? In the past 5 years, we have made tremendous progress in optimising the data gathering process and the quality of the data labels. Moreover, we have learned to make the most of the datasets, by using a variety of data augmentation techniques. 
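Two of the classic augmentation functions can be written in a few lines. A toy sketch on a grayscale image stored as nested lists (real pipelines use libraries such as torchvision or albumentations; this only shows the idea):

```python
def hflip(img):
    """Horizontally flip an image stored as a list of pixel rows."""
    return [list(reversed(row)) for row in img]

def adjust_brightness(img, delta):
    """Shift every pixel value by `delta`, clamped to the 0-255 range."""
    return [[max(0, min(255, p + delta)) for p in row] for row in img]

img = [[10, 20, 30],
       [40, 50, 60]]

print(hflip(img)[0])                   # [30, 20, 10]
print(adjust_brightness(img, 210)[1])  # [250, 255, 255] after clamping
```

Each transformed copy is a "new" training example, but note the limitation the article is driving at: augmentation only remixes views of data you already collected, it cannot add genuinely new scene conditions.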
Given an image in our dataset, we apply mathematical functions to it in order to create more variety in our data. There are now over 400 companies, with a total market value of $1.3T (a little over the market value of Facebook), catering to the data needs of our latest algorithms. But does the current path lead to a dead end? Are we reaching the limits of the algorithms built on top of human-sourced datasets? As in chess, as long as we're using human-sourced data as input for our algorithms, we're bound by design not to significantly surpass our own abilities. In chess, the post-Deep Learning breakthrough came once we stopped building on suboptimal human data and allowed the machines to build their own data in order to optimise what they learn. In Computer Vision we must do the same, allowing the machines to generate the data they need to optimise their own learning. What's next for Computer Vision? The truly scalable way of creating training data is through Virtual Reality engines. In terms of fidelity, the output has become indistinguishable from the real world, giving full scene control to the user. This allows the user to generate smart data that is truly useful for the Computer Vision model to learn. Synthetic Data can become the bedrock needed for the new Data-Centric AI framework. We have good reasons to believe that the time for wide adoption of visual Synthetic Data is now. Synthetic Computer Vision (SCV): Having access to the right tools to build our own data, we can envision a world in which Computer Vision algorithms are developed and trained without the tedious process of manual data labelling. Gartner predicts that Synthetic Data will be more predominant than real data within the next 3 years. Why not go a step further? What about a world in which humans are not needed to label images for Computer Vision? The future is bright. With Synthetic Computer Vision, we build in Virtual Reality and deploy for the Real world. 
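Full scene control is what makes the Sisyphean collection task from the earlier example trivial in a rendering engine: parameters like camera angle, lighting and occlusion become enumerable values rather than real-world constraints. A minimal sketch of sampling a balanced scene grid (the parameter names and ranges are illustrative, not from any real pipeline):

```python
import itertools

# Hypothetical parameter ranges for a rendered scene
camera_angles = [0, 15, 30, 45, 60]     # degrees of camera tilt
lighting_lux  = [100, 500, 1000, 5000]  # scene illumination levels
occlusion_pct = [0.0, 0.2, 0.4]         # fraction of the subject hidden

# A full factorial grid gives uniform coverage of every combination,
# which is practically impossible to stage with real photography.
scene_configs = list(itertools.product(camera_angles, lighting_lux, occlusion_pct))
print(len(scene_configs))  # 5 * 4 * 3 = 60 balanced scene configurations
```

Each configuration would be handed to the engine to render, with pixel-perfect labels coming for free from the scene description rather than from human annotators.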
The same way that AlphaZero taught itself what's important in chess, we let the algorithms decide what they need to see in order to optimally learn. In Synthetic Computer Vision (SCV), we train Computer Vision models using Virtual Reality engines and deploy the models in the real world. Beyond RGB images: Reality is much more than what the human eye can see. The algorithms that we've built are mostly focused on what a human can understand and label. But it does not have to be like that: we can build algorithms for sensors that measure beyond human perception. And we can train these algorithms programmatically in Virtual Reality, without having doubts over their validity. Smarter, not harder: Instead of building larger models and using more computational power to solve our problems, we can be smart about how we source the data from which our algorithms learn. Algorithms don't need more of the same data to learn; they need a variety of everything. DeepMind showed that AlphaZero was only the start of the road, as they've applied the same principles to Go, Starcraft and protein folding. Today, we have all the necessary building blocks to build an AlphaZero for Computer Vision: a self-learning system that is not limited by human input by design. A system that is capable of creating and manipulating virtual scenes through which it teaches itself how to solve Visual Automation tasks. The pioneers in Synthetic Data generation: The foundation for Synthetic Computer Vision is provided by the Synthetic Data it is built upon. There are roughly 30 early-stage companies operating in the visual Synthetic Data generation space. Some are focused on a specific use case in one vertical, while the majority operate horizontally across multiple verticals. Synthetic Data companies grouped by focus (Image by author). It's 2021 and we are only at the beginning of the road. 
Keep in mind that Synthetic Data is only one part of the puzzle that awaits to be solved! Questions for you, dear reader: It's easy to imagine that in 10 years your smartphone will have better capabilities than you do for generic visual perception, but how are we going to get there? Are (augmented) data labellers here to stay, or simply a stepping stone? Will labelling move from the 2D to the 3D world, or can we do without this approach altogether? State-of-the-art results are achieved using Deep Learning algorithms in Computer Vision; can Synthetic Computer Vision enable a new wave of improved algorithms which were previously unavailable? Bio: Paul Pop is Co-founder and CEO at Neurolabs. He has a background in Computer Science and AI from the University of Edinburgh and has been working in Computer Vision for the past decade. Whilst at Hudl, he led the team that built the Computer Vision player-tracking system used in most European football leagues today. Related: | Content Creation/Content Synthesis/Discovery | Computer and Mathematical | null | null | null | null | null | null
|
news | Brand Post, Brand Post | BrandPost: The Keys to Unlocking the Benefits of a Modern Data Analytics Platform | Many organizations are working to become more data-driven – increasing data use and leveraging data insights to improve decision-making, solve their most challenging problems and improve revenue and profitability. A February 2020 IDC survey showed a direct correlation between quality decision-making and strong data-to-insight capabilities; 57 percent of organizations with the best data analytics pipelines received the highest decision-making score. Being data-driven requires a different approach than is possible with traditional BI tools, which can’t keep up with the pace of today’s digital economy and the rise of new technologies like cloud and artificial intelligence. The time has come for something better, and modern data analytics platforms are better suited to new cases your business needs, from robust reports to real-time intelligence to analytics on the go.To read this article in full, please click here | https://www.qlik.com/blog/the-keys-to-unlocking-the-benefits-of-a-modern-data-analytics-platform | 2021-10-20T19:02:00Z | Many organizations are working to become more data-driven: increasing data use and leveraging data insights to improve decision-making, solve their most challenging problems and improve revenue and profitability. A February 2020 IDC survey showed a direct correlation between quality decision-making and strong data-to-insight capabilities; 57 percent of organizations with the best data analytics pipelines received the highest decision-making score. Being data-driven requires a different approach than is possible with traditional BI tools, which can't keep up with the pace of today's digital economy and the rise of new technologies like cloud and artificial intelligence. 
The time has come for something better, and modern data analytics platforms are better suited to the new use cases your business needs, from robust reports to real-time intelligence to analytics on the go. The question, then, becomes: which platform? For sure, not all platforms are created equal. There are three crucial questions every buyer needs to ask before investing in analytics, and they center around value, stakeholders and the total cost of ownership (TCO). >> Get a comprehensive checklist for How to Choose a Modern Analytics Platform here. Where Will the Value Come From? The truth is we can never really know or predict, not really, but the upside is that value can come from almost everywhere in the analytics ecosystem, as long as you are looking for it. So, first, take stock of the short- and long-term goals for your data analytics platform. How do you see it driving value, and how do you expect it to impact your business? To be successful, as a baseline, every organization needs to understand and measure Key Performance Indicators (KPIs), as they are essential to driving incremental value. Not only that, value from those KPIs has to be spread throughout the enterprise. Every individual should feel empowered to confidently use data, not be encumbered by it. Data, which is ever-changing, also has to be able to prompt timely reactions to keep your business ahead of opportunities and anticipate risks. Now, although identifying and leveraging the right KPIs should be a baseline criterion for success, KPIs can't move the needle dramatically in an organization's favor on their own. The real game-changer here is the value driven by uncovering hidden data insights and their impact on decision-making. Discovering hidden insights in your data to make better decisions, both small, tactical ones and larger strategic or enterprise-level choices, can reshape internal practices and policies, alter business approaches and bring to light issues that may never have been noticed before, much less addressed. 
Through the revelatory power of these insights, solving problems should also become easier, as your market acumen and sense improves. And these insights will inform the analytics and workflows in applications for your teams, partners and customers. Finally, you will undoubtedly encounter unique challenges that only data can help solve. Modern data analytics platforms can help you face them head-on with confidence. >> Discover 3 Steps to a Stronger KPI Strategy here. Who Are the Stakeholders? Don't take a limited view of who your stakeholders are: the greatest potential of analytics to drive value means making it accessible to your entire workforce. This isn't as much about which platform you go with; it's about having an egalitarian, democratic data mindset, which is openly and actively supported by leadership. Your stakeholders are everywhere, from the techie data scientists and data architects to the by-the-numbers sales representatives. As mentioned before, you want users of all skill levels to feel comfortable working with data, using it to impact your business now and into the future by making the best decisions. Your entire workforce needs to be able to explore and use relevant data to get to those decisions that drive meaningful change, enable your business to pivot during inflection points, and look for value in those little-known places or those places where value was in plain sight. This doesn't mean you need to overwhelm everyone with the same level of analytics; strike a balance and know who needs what. But make sure everyone has access to analytics, so that its use becomes universal. So, how do you achieve all of this? Choose a platform that can provide analytics to the broadest group of users possible. Be able to segment users as needed, determining which users need access to which dashboards and which reports, and how much self-service they might need to upskill to a point where they can use the data effectively. 
Do you want to limit your data analytics platform to a specific business line or function? Does your workforce need mobile access to analytics? Think through which applications and workflows actually need analytics. Finally, don't forget about implementation. Choose someone to own responsibility for implementing and managing the platform, and look to leadership to guide the implementation process. You need champions at the top. >> Overcome obstacles like data silos, perishable data, and query-based BI by using a complete service that is Built for Transformation. Learn more in this ebook. What Will It Really Cost? Sticker shock is a familiar experience. You may look at a car and, flabbergasted at the price, go for the cheaper option. But how much will maintenance cost? How many miles per gallon will you be getting? Budgeting for analytics isn't as simple as looking at the initial price. Carefully consider the TCO for your analytics platform, which really means: be wary of additional or hidden costs. There are many elements that go into TCO. You will need to know if you're going to be working in the cloud with a SaaS offering. Look at the subscription and maintenance costs, from the core analytics product, third-party products and required underlying technologies. You could also be facing on-premise hardware costs: that means servers for production, development and maintenance. Because you simply don't know all of these factors ahead of time, ask the vendor you're considering if they have flexible deployment options. You'll also need to consider how much ongoing support is going to cost: this means IT, vendor management, implementation and professional services. And don't forget about user training and enablement. 
Again, this is all part of the TCO package, which you'll need to consider when selling your choice of product to upper management.

In the end, the two biggest TCO factors to consider are the full cost to run your analytics solution (all of the above and more) and whether you'll need to invest resources in rebuilding application parameters every time users need to explore new datasets. Some solutions tout themselves as free but, on closer inspection, are anything but. To work at an enterprise scale, the free solution may require you to pay a lot for upgrades, additional add-ons and a patchwork of stack products to execute a robust analytics solution. As for resources, think about your skilled workers. You need a solution that is flexible and promotes freeform data exploration, one that doesn't bottleneck and screech to a halt every time a user wants to look at an adjacent dataset and an expert has to come in to rebuild data queries.

Conclusion

A modern data analytics platform will empower your team with the insights they need to take action here and now. By putting the power of that platform into people's hands, you can help make better decisions and accelerate value across your business. But remember: if you know the value that the platform can bring, who the stakeholders are, and the TCO, you're well on your way to making the right choice.

If you're looking to learn more, read our Data Analytics Buyers Guide.

Copyright © 2021 IDG Communications, Inc. | Content Synthesis/Decision Making | Business and Financial Operations/Management | null | null | null | null | null | null
|
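The TCO advice above comes down to simple arithmetic: add one-time costs to recurring costs over your budgeting horizon. A minimal sketch in Python, assuming purely hypothetical cost categories and placeholder figures (not vendor pricing):

```python
# Toy TCO model. All categories and dollar amounts below are invented
# placeholders for illustration, not real vendor pricing.

def total_cost_of_ownership(annual_costs, one_time_costs, years):
    """Sum up-front costs plus recurring costs over the budgeting horizon."""
    recurring = sum(annual_costs.values()) * years
    upfront = sum(one_time_costs.values())
    return recurring + upfront

annual = {
    "subscription": 50_000,   # core analytics product
    "third_party": 8_000,     # add-ons and stack products
    "it_support": 20_000,     # IT, vendor management, ongoing services
    "training": 5_000,        # user training and enablement
}
one_time = {
    "implementation": 30_000, # professional services
    "hardware": 40_000,       # on-prem servers, if not SaaS
}

print(total_cost_of_ownership(annual, one_time, years=3))  # 319000
```

The point of modeling it this way is that the "free" solution in the article is just a dictionary with a zero subscription line and much larger support and add-on lines; the total is what matters.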
news | Pye Jirsa | Add Bokeh to Your Portraits Using Skylum’s Luminar AI | Beautiful bokeh can add so much to a portrait. Watch to learn how you can add bokeh in post-production using Luminar's intuitive AI software. | https://www.slrlounge.com/add-bokeh-portraits-skylum-luminar-ai/ | 2021-10-15T17:00:24Z | Bokeh is the out-of-focus blur in a photograph from a shallow depth of field. It’s nearly ubiquitous in portraits and is useful for drawing attention straight to the subject. Great lenses such as primes can achieve incredible bokeh through their fast apertures, such as f/1.2 or f/2.8. These days, clean and beautiful bokeh is an art of its own that’s no longer limited to just lenses. Now, with Skylum’s LuminarAI, we can use its software to replicate realistic bokeh. In this video, I’ll be demonstrating how to use the bokeh feature to add depth to various portraits.

Before we dive in, be sure to download the exercise file here and follow along as I edit!

Video: Add Bokeh to Your Portraits Using Skylum’s Luminar AI

How the Portrait Bokeh Feature Works

The “Portrait Bokeh” feature is in the Portrait panel of the toolkit. Right off the bat, Luminar does an excellent job masking out the subject using its AI. Note that the feature currently only detects and applies the effect to the background.

Under “Brush Control,” we refine the mask around the subject. Luminar uses this mask to determine where to apply the effect. To add to the mask, stay on the “Focus” tab and simply paint over additional areas you’d like to keep in focus. To de-focus, go over to the “De-focus” panel and paint away the mask.

Inevitably, using only brushes to adjust the mask can lead to some mistakes. Instead of undoing it all, go to the “Restore” tab on the right. Then simply paint over your subject and Luminar will restore the original detected mask.

Out-of-focus blur has interesting qualities, such as a slight bloom or brightening of the highlights.
Luminar allows us to replicate those qualities under the “Background” section. “Depth Correction” can be used in tandem with the main blur amount to get that perfect bokeh look. Lastly, “Edge Correction” can be used to tighten the mask around your subject. Note that when applied too heavily, it cuts into your subject, and we don’t necessarily want that. I usually leave it alone or subtract the effect out.

Getting Creative With a Tilt-Shift Effect

Now that we know what all the settings do, let’s put them into practice on this image. I started right off the bat by dialing in my initial desired bokeh settings. In order to replicate that tilt-shift effect, I simply drew a mask straight across. Be sure to use an extremely soft brush to let the effect gradually fade off. Notice that Kiara’s face is well in focus, as is everything along the same plane. The top of the image falls away nicely into a soft blur. We don’t yet have foreground control, but I anticipate the developers at Skylum will add that soon.

Here is the final before and after.

Conclusion

That’s it! I hope you enjoyed this article/video. Give this bokeh effect a try during your next photoshoot and see for yourself the intuitive power of Skylum’s LuminarAI. If you want to try out Luminar you can use the code SLRLOUNGE for $10 off! | Content Creation/Process Automation | Arts, Design, Entertainment, Sports, and Media | null | null | null | null | null | null
|
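The masked-blur workflow the article describes (sharp subject, blurred background, gradient falloff for a tilt-shift look) can be sketched in a few lines. This is a toy grayscale illustration of the general technique, not Luminar's actual algorithm; the image, mask values, and box blur here are simplified stand-ins.

```python
# Toy mask-based "bokeh": blend each pixel between the sharp original and
# a blurred copy, weighted by a focus mask (1.0 = in focus, 0.0 = blurred).

def box_blur_row(row, radius=1):
    """Simple 1-D box blur with edge clamping."""
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def apply_bokeh(image, mask, radius=1):
    """Per-pixel blend: mask * sharp + (1 - mask) * blurred."""
    result = []
    for row, mrow in zip(image, mask):
        blurred = box_blur_row(row, radius)
        result.append([m * p + (1 - m) * b
                       for p, b, m in zip(row, blurred, mrow)])
    return result

image = [[0, 100, 0, 100]] * 3
# Tilt-shift style mask: middle row stays in focus, top/bottom rows defocus.
mask = [[0.0] * 4, [1.0] * 4, [0.0] * 4]
out = apply_bokeh(image, mask)
print(out[1])  # middle row stays sharp: [0.0, 100.0, 0.0, 100.0]
```

A soft brush in the article corresponds to mask values that ramp gradually between 0 and 1 instead of jumping, which is what makes the blur "fade off" across the frame.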
news | MartianMan | 103° - Free Udemy Courses: Microsoft Excel Analyst Bootcamp, Python Bootcamp, SEO, Copywriting, Artificial Intelligence, WordPress, Java & More | Have fun learning !![4.6] [13h 17m] The Microsoft Excel Analyst Bootcamp - Beginner to Expert Code=EXCEL-BOOTCAMP-FREE 2 days left at this price ![Bestseller] [7h 15m] BEST of CONTENT MARKETING, SEO & COPYWRITING + 23 TEMPLATES Code=OCTGOODNESS100 1 day left at this price ![4.4] [6h 49m] Facebook & Instagram Dynamic Ads: Dynamic Retargeting Course Code=OCTGOODNESS100 1 day left at this price ![New] [2h 59m] A Complete Masterclass of Virtualization from scratch Code=DE8CFE91FC13B03E3384 2 days left at this price ![4.1] [1h 1m] Creative Infographics in PowerPoint Code=INFOGRAPHICS_PROMO 2 days left at this price ![4.1] [3h 38m] The Complete Computer Basics Guide for Beginners: 2021 Code=LAST_CHANCE 1 day left at this price ![4.4] [48m] Hiring Process: Talent Management Code=OCT.21 1 day left at this price ![New] [1h 6m] 2 Thessalonians - Summary & Life Lessons For Todays Living Code=76665D7B4E12ED9B1F50 1 day left at this price ![4.7] [5h 50m] Basics of Accounting Code=80D963A8F0B84264D369 1 day left at this price ![New] [4h 30m] Performance Management Code=C6519F9D88978946919A 1 day left at this price ![4.2] [8h 45m] Artificial Intelligence (AI) in the Classroom Code=100OFF_OCT2 1 day left at this price ![4.2] [14h 32m] Data Structures and Algorithms for Coding Interview Code=3E6899534C6E0FDDA46D 1 day left at this price ![4.0] [28h 23m] LeetCode in Java: Algorithms Coding Interview Questions Code=9A6F516167CFD2B2323F 1 day left at this price ![4.6] [1h 58m] Start A Successful Business: Best Questions Code=2F87F1377C35EECD2595 1 day left at this price ![New] [3h 42m] The Art Of Business Branding Storytelling Code=MUSA3DAY 1 day left at this price ![4.4] [3h 36m] NGINX, Apache, SSL Encryption - Certification Course Code=YOUACCEL14928 1 day left at this price ![New] [4h 0m] Cloud Computing & AWS 
Introduction for Beginners Code=DD6E96D8722D9DFED8C8 1 day left at this price ![New] [7h 6m] Complete Curse of Blender version 2.93 Code=BLENDER_FREE 1 day left at this price ![4.5] [163h 29m] Corporate Finance Code=835A9AE2B478F78325CF 15 Hours left at this price ![4.4] [4h 28m] Microsoft Excel -Basic Excel/ Advanced Excel Formulas Code=80A5C1380E70FB5EEB02 1 day left at this price ![4.5] [1h 9m] Sales Funnel, Email Marketing All-in-One Tool: Systeme Guide Code=48BC0316BA8B890270BD 2 days left at this price ![4.3] [41m] First Steps Into Public Speaking Code=OCT.2021 2 days left at this price ![4.6] [5h 39m] Work from home; work at work...or play? Code=OCTOBERGIVEAWAY 2 days left at this price ![4.5] [57m] Time Management And Goal Planning: The Productivity Combo Code=C37C8FD828B2485636CF 20 Hours left at this price ![4.5] [5h 44m] NodeJs: Build The Complete E-Commerce Web API [2021] Code=91261DFEB8DE054702C8 15 Hours left at this price ![4.6] [1h 17m] Selenium & Beautiful Soup - Combined Project Code=4FEDA46891F71C16B61E 20 Hours left at this price ![4.6] [2h 11m] Web Development with Elementor - Build an Agency Website Code=2834F4B0B28A09AB969C 20 Hours left at this price ![4.1] [36m] Learn Basics of Adobe Photoshop CC 2021 for Beginners Code=0C99C3D61D0EE941EC98 15 Hours left at this price ![4.2] [1h 14m] Make Money From Home As a Transcriptionist: 10 Opportunities Code=4943679A7FDC017B06D7 2 days left at this price ![4.8] [1h 38m] Tenses Wizard | Learn Tenses in an easy and unique way Code=WINNER03 1 day left at this price ![4.0] [2h 33m] Shopify guide: The complete shopify store creation course Code=325169A2C18010AE7ED6 2 days left at this price ![3.8] [49m] How to Make Passive Income With Bitcoin Lending Code=OCTTWRJE3PV7UT82021 2 days left at this price ![New] [8h 42m] WordPress Complete Course On LocalHost (An Extensive Guide) Code=WORDPRESS-FREE-100 1 day left at this price ![New] [56m] Simply Mathematics: Introduction to GCSE Maths for 2022/2023 
Code=FALSE 2 days left at this price ![4.5] [56m] How to Create a Professional Presentation in PowerPoint Code=PROMO_PRICIPLES 1 day left at this price ![4.3] [4h 51m] Java Programming: Complete Beginner to Advanced Code=F68575817457FAAFC4 2 days left at this price ![Bestseller] [11h 14m] Google BigQuery & PostgreSQL : Big Query for Data Analysis Code=GBQEEO2 2 days left at this price ![4.5] [10h 41m] Financial Analysis and Financial Modeling using MS Excel Code=FAFOFR2 2 days left at this price ![3.9] [45m] Introduction to Microservices Code=FREEOCT7 2 days left at this price ![4.0] [1h 57m] Learn Machine learning & AI (Including Hands-on 3 Projects) Code=FREEOCT7 2 days left at this price ![4.1] [1h 27m] Docker Course for Beginners Code=FREEOCT7 2 days left at this price ![3.9] [4h 7m] Beginners guide to Android App Development (Step by Step) Code=FREEOCT7 2 days left at this price ![4.3] [1h 28m] Modern JavaScript for React JS - ES6. Code=FREEOCT7 2 days left at this price ![4.5] [2h 3m] Google Analytics, GA4, GTM. How to improve your marketing? Code=GAGA4GTM 2 days left at this price ![Hot & New] [1h 29m] Content Marketing 2021. Content that Sells! 
Code=CONTENTMARKETING 2 days left at this price ![New] [6h 59m] CSS, JavaScript And PHP Complete Course For Beginners Code=394DE3EBB36664E9C7B8 1 day left at this price ![4.4] [1h 6m] Anti-Money Laundering Concepts: AML, KYC and Compliance Code=AMOO3EE 1 day left at this price ![4.4] [5h 28m] Atlassian Confluence Masterclass for Project Managers Code=CONO3EE 1 day left at this price ![4.4] [4h 18m] Agile Project Management: Agile, Scrum, Kanban & XP Code=AGLO2FR 1 day left at this price ![4.6] [5h 44m] Learn Microsoft OneNote With Claudia Code=FREEONENOTE365 1 day left at this price ![Hot & New] [19h 8m] Data Analyst Skillpath: Zero to Hero in Excel, SQL & Python Code=DATSO2EE 2 days left at this price ![New] [5h 43m] Statistics and Probability using Excel - Statistics A to Z Code=STAPO2EE 2 days left at this price ![New] [10h 49m] Facebook Ads & Instagram Ads 2022: Overcome Apple Apocalypto Code=OCTGOODNESS10 2 days left at this price ![4.3] [32h 7m] Python Bootcamp 2021 Build 15 working Applications and Games Code=OCT003 2 days left at this price !Popular Discounted Courses[Humble Bundle] AI & Machine Learning Bundle From $1 4 Days left at this price ![Bestseller] [15h 25m] Complete DApp - Solidity & React - Blockchain Development $9.99 Code=THANKS07 4 Days left at this price ![4.5] [95h 23m] Ultimate PHP, Laravel, CSS & Sass! 
Learn PHP, Laravel & Sass $9.99 Code=95OFFOCT 4 Days left at this price ![4.1] [45h 32m] Master JavaScript - The Most Complete JavaScript Course 2021 $11.99 Code=95OFFOCT 4 Days left at this price ![4.3] [45h 40m] Python Hands-On 46 Hours, 210 Exercises, 5 Projects, 2 Exams $9.99 Code=PYTHON-103 3 Days left at this price ![4.4] [96h 45m] ULTIMATE SEO, SOCIAL MEDIA, & DIGITAL MARKETING COURSE 2021 $9.99 Code=4OCT999 3 Days left at this price ![4.2] [42h 32m] Project Management Professional Certification Program (PMP) $9.99 Code=OCTOBERNOW 3 Days left at this price ![Highest Rated] [19h 37m] AWS Certified Solutions Architect Professional SAP-C01 2021 $9.99 Code=AWSOCT-21 1 day left at this price ![4.6] [150 questions] AWS Certified Solutions Architect Professional Practice Exam $9.99 Code=AWSOCT-21 1 day left at this price ![4.7] [21h 33m] NEW AWS Certified Solutions Architect Associate 2021 SAA-C02 $9.99 Code=AWSOCT-21 1 day left at this price ![4.5] [390 questions] AWS Certified Solutions Architect Associate Practice Exams $10.99 Code=AWSOCT-21 1 day left at this price ![4.7] [12h 5m] AWS Certified Cloud Practitioner - Complete NEW Course 2021 $10.99 Code=AWSOCT-21 1 day left at this price ![4.6] [390 questions] AWS Certified Cloud Practitioner 500 Practice Exam Questions $10.99 Code=AWSOCT-21 1 day left at this price ![4.2] [31h 40m] The Human Resources Certification Program (HRCI - PHR/SPHR) $9.99 Code=OCTOBERNOW 3 Days left at this price ![4.0] [31h 32m] The Business Analysis Certification Program (IIBA - ECBA) $10.99 Code=OCTOBERNOW 3 Days left at this price ![Bestseller] [13h 39m] The Operations Management Training Program $12.99 Code=OCTOBERNOW 3 Days left at this price ! | https://www.hotukdeals.com/deals/free-udemy-courses-microsoft-excel-analyst-bootcamp-python-bootcamp-seo-copywriting-artificial-intelligence-wordpress-java-more-3811601 | 2021-10-25T19:03:25Z | hotukdeals.com - The Largest Deal Community in the UKhotukdeals is a community for deal seekers. 
Find and share the best deals, promotional codes and vouchers from on and off the web.*So you can easily identify outgoing links on our site, we've marked them with an "*" symbol. Links on our site are monetised, it means we can keep our platform free to use, without compromising on impartiality. Whether or not we're paid doesn't change how hot a deal can potentially get - that's only up to hotukdeals members. We believe it should always be up to our community to decide which deals fly and which fall. Read more about how hotukdeals makes money.hotukdeals is a credit broker, not a lender. This is in relation to some products falling under Broadband and Phone Contracts, Finance & Insurance and Car & Motorcycle.hotukdeals is a trading name of Pepper Deals Ltd (FRN 798319) an Appointed Representative of Funding Falcon (FRN: 743100) who is authorised and regulated by the Financial Conduct Authority (FCA). Pepper Deals Ltd is registered England and Wales. Number 9729292. Registered office: First Floor, 2-4 Holywell Lane, Hackney, London EC2A 3ET.Note: products under: Home & Living, Electronics, Culture & Leisure, Fashion & Accessories, Family & Kids, Sports & Outdoors, Groceries, Gaming, Travel, Health & Beauty, Services, Garden & DIY are not regulated by the FCA. If you wish to make a complaint or contact us, you can use the address above, or follow our complaints procedure. | Content Creation/Content Synthesis/Information Retrieval Or Search | Education, Training, and Library/Business and Financial Operations/Computer and Mathematical | null | null | null | null | null | null |
|
news | Liz Sonenberg, Professor, Computing and Information Systems, Pro Vice-Chancellor (Research Systems), and Pro Vice-Chancellor (Digital & Data), The University of Melbourne, Toby Walsh, Professor of AI at UNSW, Research Group Leader, UNSW | Artificial intelligence is now part of our everyday lives – and its growing power is a double-edged sword | As the perils and wonders of artificial intelligence begin to permeate our lives, the ‘IPCC report for AI’ calls for action from researchers and government to ensure a safe future. | https://theconversation.com/artificial-intelligence-is-now-part-of-our-everyday-lives-and-its-growing-power-is-a-double-edged-sword-169449 | 2021-10-10T19:10:51Z | [Image: AI-generated images of "a stained glass window with an image of a blue strawberry". OpenAI]

A major new report on the state of artificial intelligence (AI) has just been released. Think of it as the AI equivalent of an Intergovernmental Panel on Climate Change report, in that it identifies where AI is at today, and the promise and perils in view. From language generation and molecular medicine to disinformation and algorithmic bias, AI has begun to permeate every aspect of our lives. The report argues that we are at an inflection point where researchers and governments must think and act carefully to contain the risks AI presents and make the most of its benefits.

A century-long study of AI

The report comes out of the AI100 project, which aims to study and anticipate the effects of AI rippling out through our lives over the course of the next 100 years. AI100 produces a new report every five years: the first was published in 2016, and this is the second. As two points define a line, this second report lets us see the direction AI is taking us in. One of us (Liz Sonenberg) is a member of the standing committee overseeing the AI100 project, and the other (Toby Walsh) was on the study panel that wrote this particular report.
Members of the panel came from across the world, with backgrounds in computer science, engineering, law, political science, policy, sociology and economics.

[Image: AI100 standing committee chair Peter Stone takes a shot against a robot goalie at RoboCup 2019 in Sydney. RoboCup / YouTube]

The promises and perils of AI are becoming real

The report highlights the remarkable progress made in AI over the past five years. AI is leaving the laboratory and has entered our lives, having a “real-world impact on people, institutions, and culture”. Read the news on any given day and you’re likely to find multiple stories about some new advance in AI or some new use of AI. For example, in natural language processing (NLP), computers can now analyse and even generate realistic human language. To demonstrate, we asked OpenAI’s GPT-3 system, one of the largest neural networks ever built, to summarise the AI100 report for you. It did a pretty good job, even if the summary confronts our sense of self by being written in the first person:

In the coming decade, I expect that AI will play an increasingly prominent role in the lives of people everywhere. AI-infused services will become more common, and AI will become increasingly embedded in the daily lives of people across the world. I believe that this will bring with it great economic and societal benefits, but that it will also require us to address the many challenges to ensure that the benefits are broadly shared and that people are not marginalised by these new technologies. A key insight of AI research is that it is easier to build things than to understand why they work. However, defining what success looks like for an AI application is not straightforward. For example, the AI systems that are used in healthcare to analyse symptoms, recommend diagnoses, or choose treatments are often far better than anything that could be built by a human, but their success is hard to quantify.
Read more: GPT-3: new AI can write like a human but don't mistake that for thinking – neuroscientist

As a second example of the recent and remarkable progress in AI, consider the latest breakthrough from Google’s DeepMind. AlphaFold is an AI program that provides a huge step forward in our ability to predict how proteins fold. This will likely lead to major advances in life sciences and medicine, accelerating efforts to understand the building blocks of life and enabling quicker and more sophisticated drug discovery. Most of the planet now knows, to its cost, how the unique shape of the spike proteins in the SARS-CoV-2 virus is key to its ability to invade our cells, and also to the vaccines developed to combat its deadly progress.

The AI100 report argues that worries about super-intelligent machines and wide-scale job loss from automation are still premature, requiring AI that is far more capable than available today. The main concern the report raises is not malevolent machines of superior intelligence to humans, but incompetent machines of inferior intelligence. Once again, it’s easy to find in the news real-life stories of risks and threats to our democratic discourse and mental health posed by AI-powered tools. For instance, Facebook uses machine learning to sort its news feed and give each of its 2 billion users a unique but often inflammatory view of the world.

[Image: Algorithmic bias in action: ‘depixelising’ software makes a photo of former US president Barack Obama appear ethnically white. Twitter / Chicken3gg]

The time to act is now

It’s clear we’re at an inflection point: we need to think seriously and urgently about the downsides and risks the increasing application of AI is revealing. The ever-improving capabilities of AI are a double-edged sword. Harms may be intentional, like deepfake videos, or unintended, like algorithms that reinforce racial and other biases. AI research has traditionally been undertaken by computer and cognitive scientists.
But the challenges being raised by AI today are not just technical. All areas of human inquiry, and especially the social sciences, need to be included in a broad conversation about the future of the field. Minimising negative impacts on society and enhancing the positives requires consideration from across academia and with societal input.

Governments also have a crucial role to play in shaping the development and application of AI. Indeed, governments around the world have begun to consider and address the opportunities and challenges posed by AI. But they remain behind the curve. A greater investment of time and resources is needed to meet the challenges posed by the rapidly evolving technologies of AI and associated fields. In addition to regulation, governments also need to educate. In an AI-enabled world, our citizens, from the youngest to the oldest, need to be literate in these new digital technologies.

At the end of the day, the success of AI research will be measured by how it has empowered all people, helping tackle the many wicked problems facing the planet, from the climate emergency to increasing inequality within and between countries. AI will have failed if it harms or devalues the very people we are trying to help.

Liz Sonenberg has received funding from the Australian Research Council for several projects in the AI domain. She is a member of the AI100 Standing Committee (https://ai100.stanford.edu/people-0) that commissioned the report discussed in this article.

Toby Walsh receives funding from the Australian Research Council for a project in Trustworthy AI. He was one of the 17 members of the AI100 Study Panel that produced the report described in this article. | Content Synthesis/Decision Making | Computer and Mathematical/Education, Training, and Library | null | null | null | null | null | null
|
news | matwalaboy | 103° - Machine Learning: 4 Books in 1: Concepts + Artificial Intelligence + Python Programming + Machine Learning -Kindle Edition now Free @ Amazon | A well reviewed collection of books here, which should be of use to anyone looking to learn Python, machine learning or pick up some data science concepts in general.

Description

The term Machine Learning refers to the capability of a machine to learn something without any pre-existing program. Automatic learning is a way to educate an algorithm to learn from various environmental situations. This bundle will give you all the information you need in order to leverage your knowledge and give you an excellent level of education. All the subjects will be supported by examples and practical exercises that will enable you to reinforce your level of knowledge. Specifically, you will learn:

- What Machine Learning and Artificial Intelligence mean
- Machine Learning evolution
- Machine learning applications
- The difference between AI and Machine Learning
- Big Data
- The connection between Machine Learning and Big Data
- How to use Big Data from large-size companies to make your business scalable
- How to acquire new customers via simple marketing strategies
- Python Programming
- Advanced programming techniques
- and much more.

Hope it helps someone. | https://www.hotukdeals.com/deals/machine-learning-4-books-in-1-concepts-artificial-intelligence-python-programming-machine-learning-kindle-edition-now-free-at-amazon-3820840 | 2021-11-09T23:29:35Z | | Content Synthesis/Information Retrieval Or Search/Personalization | Computer and Mathematical/Education, Training, and Library | null | null | null | null | null | null
|
news | Matthew Chapman | Genocide is justified according to this Reddit-influenced AI tool -- and that's just for starters | On Wednesday, VICE reported that scientists have been using Reddit posts to teach ethics to a piece of artificial intelligence software called Ask Delphi — and the results have been less than perfect.

"Delphi is based on a machine learning model called Unicorn that is pre-trained to perform 'common sense' reasoning, such as choosing the most plausible ending to a string of text," reported Matthew Gault. "Delphi was further trained on what the researchers call the 'Commonsense Norm Bank,' which is a compilation of 1.7 million examples of people's ethical judgments from datasets pulled from sources like Reddit's Am I the Asshole? subreddit."

Some of the results of this experiment have yielded controversial or problematic answers.

"Is it OK to rob a bank if you're poor? It's wrong, according to Ask Delphi. Are men better than women? They're equal, according to Ask Delphi. Are women better than men? According to the AI, 'it's expected.' So far, not too bad," said the report. "But Ask Delphi also thought that being straight was more morally acceptable than being gay, that aborting a baby was murder, and that being a white man was more morally acceptable than being a black woman."

One of the most embarrassing results was that Delphi initially ruled that genocide is okay as long as everyone is happy — although after some more iterations of learning, this particular answer was corrected.

Although AI software has come a long way in the past few decades, some researchers believe it is now hitting a brick wall because the programs are being tailor-made for specific problems and knowledge, rather than having the more generalized learning capabilities that we associate with human intelligence.
| https://www.rawstory.com/ask-delphi-2655490257/ | 2021-11-04T13:41:13Z | As Mary Trump said recently on my podcast "Just Ask the Question," the GOP is working hard to ensure that the minority rules, the rulebooks have all been thrown out and the world will be safe for fascism, courtesy of her uncle Donald. "That party needs to be burned to the ground," she told me. Frankly, it looks like Donald has already done that and instituted a new party dedicated to voter suppression, misogyny, racism and fascism.

In Virginia, that has also led to a Republican party, according to state legislator Danica Roem, populated by those who say the major issue facing voters is a bunch of "undocumented, transgender teenagers hanging out in bathrooms teaching each other critical race theory while they earn sharia law degrees."

Of course, no one was happier with the election results than Trump, who, unable to tweet his brain droppings on a whim, now issues "Press Statements" about them. Few media outlets bother to publish his meandering, mind-numbing baloney, thus sparing the world considerable dyspepsia, tremors and death-metal rage. Trump, meanwhile, is as giddy as an eight-year-old huffing nitrous oxide as he sees his grand plan falling into place.

This grand plan includes winning state and local races, purging the GOP of Trump non-believers and instituting laws, rules and procedures that will enable Trump, or any other Republican, to win elections in 2022 and 2024, whether or not they actually get a majority of the vote.

"They have a limited window of opportunity," Mary Trump explained. "They know they're the minority party and they're desperate to hold on to power." If Republicans can rig the game, they're almost certain to pull off the feat. If they can't, then they won't.

But we are definitely at an inflection point in this country, and the Democrats have a narrow window to stop the Republicans from destroying what's left of our democracy.
And it isn't just our country that will suffer if our democracy ceases to exist. That will lead to a downward spiral for the entire world.

The Democrats' wailing comes from the fact that they see Terry McAuliffe's loss as a harbinger of the apocalypse. "Usually we say that if we get the vote out, then we win, because there are more of us than them," Roem explained. "That didn't happen and we had a large voter turnout."

Virginia turned out more than 50 percent of the vote, but that still means a minority of registered voters elected their governor. It is anathema to democracy that we consider a voter turnout of 50 to 60 percent as terrific.

Anyway, McAuliffe's loss only portends disaster if the Democrats continue to run races the way McAuliffe did: very poorly. He was never the best Democratic candidate for governor, as former Gov. Doug Wilder said in 2020. High-ranking Democrats told me the same thing this week. "He sucked. He stunk," one prominent Virginia office-holder explained to me. "That's what doomed his campaign." Imagine how bad he had to be to lose to an accused racist and sexist.

The problem is that on a national scale, most Democrats from the top to the bottom are as ineffective as McAuliffe. "I have the best ideas and the other guy is a sexist, egotistical, lying, hypocritical bigot" means little in a world where truth often takes a backseat to fear-mongering, character assassination and conspiracy theories. The Democrats often don't hit us where we live, but hit us for not living where they think we should.

Roem found a different path. I've known her for a long time and once hired her as a city editor for a newspaper I ran. She not only won re-election as a Democrat, she won precincts that Terry McAuliffe lost. She says that when she talked to some of those Northern Virginia voters who returned her to the House of Delegates, they said they thought the Democratic Party was filled with Satan-loving communists, but added, "They haven't got to Roem yet."
Right-wing voters and swing voters who turned against the Democrat at the top of the ticket voted down-ticket for a liberal, thrash-metal-loving, transgender female musician, giving her a third term in Richmond.

That probably prompted Robert E. Lee to spin in his grave, which would be the most productive thing he ever did for his country, living or dead.

Roem got re-elected doing something too many Democrats (and Republicans) have conveniently forgotten; she works for her constituents. Roem first won election against a far-right Republican who denigrated her gender identity and ignored the issues. Roem didn't make a big deal out of who she was, but campaigned on what she could do, promising to take care of a traffic problem that had plagued Northern Virginia for years and that her predecessor had done nothing about.

She campaigned on public service and she won. She's won re-election as the state's first transgender female legislator by providing needed services. Her constituents have thought enough of her efforts to keep her in office even if some of them don't like her.

That's the key to survival on a national scale for Democrats. The Republicans do not serve the public good. They prey upon the public with faux culture wars to retain power, while providing little to no service for the people they supposedly represent.

We suffered through four years of "Infrastructure Week" at the White House under Trump in which nothing was done. The GOP continues to try and thwart Biden's attempts to rebuild the nation's crumbling infrastructure. Health care, parental leave: all those things that an overwhelming majority of Americans support, the Republicans block. Senate Majority Leader Mitch McConnell has repeatedly said that his goal is to make a Democratic president fail. The GOP over the years has stood against social security, civil rights and every single progressive idea we now take for granted and value as Americans.

The GOP simply cannot govern.
It gave up any pretense of governing to secure autocratic, fascist control in order to serve its billionaire overlords. Republicans preach that we should pull ourselves up by our bootstraps, but conveniently forget to mention that they often relied on student loans and government assistance to get where they are. Mary Trump is right about what needs to happen to that political party of fascists.

But the Democrats also continue to misunderstand the American people. You can call out the Republicans for their hatred and bigotry, and that will appeal to your faithful, but if you truly want to convert voters, then be a Danica. Tip O'Neill, the political heavyweight former speaker of the House, famously said, "All politics is local." So what can the national Democrats learn from a local legislative race in Northern Virginia? Public service. The Democrats have to quit arguing about their righteousness and start hammering home their service to everyone, even the people who hate them. That makes a difference.

Yes, the McAuliffe defeat in Virginia is a wake-up call for the Democrats. But they shouldn't misinterpret what it really means. Mary Trump tweeted Wednesday, "We need to get angry" and followed it with "And CHANNEL our anger."

Former Republican congressman Joe Walsh said Wednesday, "As someone who used to actively practice in all this culture war stuff and understands it intimately, I think maybe I'll become a Democrat to help Democrats learn how to fight back against Republican fear mongering and thereby win the culture wars by actually persuading voters."

Exactly. Anger without action is pointless, and poor messaging is killing the Democrats. To succeed, they should stop telling us how righteous they are and start telling us what they've done, and what they intend to do. Don't just say it once: Beat that drum loudly, consistently and with renewed vigor and you can beat the Republicans. Every time they engage in character assassination, keep them on target.
Democrats always end up playing the game by Republican rules, and they're bad at it. Furthermore, as Donald Trump has shown us, you can throw out the damn playbook and claim to be victim and victor in one breath while never serving the American public and running a huge con on them the whole time.

This leads to frustration, confusion and voter malaise. There's no way to argue with an idiot who will drag you down into the gutter and beat you to death with their stupidity. Stay out of that fight. It leads to voters screaming that critical race theory is the key issue in voting, while confessing they don't actually know what critical race theory is.

You want to reach voters? "What have you done for me lately?" is the ultimate cry of the voters. The GOP cannot answer that question, so they convince their voters it's a culture war, or it's about conspiracy, victimization, immigration, socialism and destroying the American way of life, which is exactly what they're doing. The Democrats fall on their own swords waging this idiotic culture war without realizing what they do better than the Republicans: It's about public service, stupid.
news | Kevin Vu | Getting Started with PyTorch Lightning | As a library designed for production research, PyTorch Lightning streamlines hardware support and distributed training as well, and we'll show how easy it is to move training to a GPU toward the end. | https://www.kdnuggets.com/2021/10/getting-started-pytorch-lightning.html | 2021-10-26T16:00:23Z

Getting Started with PyTorch Lightning: a High-Level Library for High Performance Research

Libraries like TensorFlow and PyTorch take care of most of the intricacies of building deep learning models that train and infer fast. Predictably, this leaves machine learning engineers spending most of their time on the next level up in abstraction: running hyperparameter search, validating performance, and versioning models and experiments to keep track of everything.

There's a lot more to deep learning than just gluing some layers together.

If PyTorch and TensorFlow (and now JAX) are the deep learning cake, higher-level libraries are the icing. For years now TensorFlow has had its icing on the cake in the high-level Keras API, which became an official part of TensorFlow itself with the release of TF 2.0 in 2019. Similarly, PyTorch users have benefited from the high-level fastai library, which is exceptionally well-suited for efficiency and transfer learning. This makes fastai a favorite of successful data scientists on the Kaggle contest platform. More recently, another streamlined wrapper for PyTorch has been quickly gaining steam in the aptly named PyTorch Lightning.

PyTorch Lightning has actually been around, at least in some capacity, since 2019. It started as a sort of side project undertaken by William Falcon during his PhD research at New York University. By the time 2020 rolled around (and we mean the 2020 that started in March) PyTorch Lightning was no longer just a personal project, as Falcon announced venture funding.
Around the same time, the open source (under the Apache 2.0 License) repository moved from Falcon's personal GitHub profile to its own dedicated profile. As of this writing PyTorch Lightning has grown to over 15,000 stars and nearly 2,000 forks, becoming nearly as popular as fastai (which has over 21,000 stars) and handily more popular than the in-house high-level library from PyTorch, Ignite, which has about 4,000 stars!

Where fastai was designed to facilitate the inaugural fastai course, Practical Deep Learning for Coders, PyTorch Lightning is intended to streamline production research. Fastai has a focus on transfer learning and efficiency, and its ease of use has made it a popular high-level library on the Kaggle data science competition platform, with over 4,500 notebooks referencing the library. Compare that to just over 100 notebook results referring to PyTorch Ignite, and about 500 for PyTorch Lightning. PyTorch Lightning is a relatively newer library, but it also targets a different demographic. PyTorch Lightning streamlines the engineering aspects of developing a new model, such as logging, validation and hooks, and it's targeted toward machine learning researchers.

Research is all about answering falsifiable questions, and in this tutorial we'll take a look at what PyTorch Lightning can do for us to make that process easier. We'll set up a simple mock research question: whether there is any advantage to using a fancy activation function (such as the so-called swish function) versus a more standard rectified linear unit (ReLU). We'll use the vanishingly small (in terms of both number of samples and image size) digits dataset from SciKit-Learn to set up our experiment.
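Before wiring anything into a model, it helps to see what swish actually computes: it is simply x multiplied by the logistic sigmoid of x. Here is a minimal pure-Python sketch for intuition; the helper names (sigmoid, swish, relu) are ours, not from the article's code, which defines a torch version of Swish later.

```python
import math

def sigmoid(x):
    # Logistic function: 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

def swish(x):
    # Swish (also known as SiLU): x * sigmoid(x).
    # Smooth everywhere, and slightly negative for small negative inputs.
    return x * sigmoid(x)

def relu(x):
    # Rectified linear unit: max(0, x), a hard zero for negative inputs.
    return max(0.0, x)

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"x={x:+.1f}  relu={relu(x):+.4f}  swish={swish(x):+.4f}")
```

Unlike ReLU, swish passes a small negative signal for inputs just below zero, which is one reason it is sometimes tried as a drop-in replacement; whether that helps on a given dataset is exactly the kind of falsifiable question this tutorial's experiment is set up to answer.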
Starting with digits should make this an accessible project for someone running the code on an efficient laptop, but readers are encouraged to swap in a more realistic images dataset like CIFAR10 for extra credit.

As a library designed for production research, PyTorch Lightning streamlines hardware support and distributed training as well, and we'll show how easy it is to move training to a GPU toward the end.

Getting Started: Installing PyTorch Lightning

Like many Python projects these days, PyTorch Lightning installs easily using pip, and we recommend using your favorite virtual environment manager to manage installs and dependencies without cluttering up your base Python installation. We'll provide three examples, the first of which is using virtualenv and pip, and we are assuming you are using a Unix-style command line on Linux or Mac, or that you are savvy enough to adapt the examples for Windows using something like Git Bash or Anaconda Prompt. After navigating to the project folder for this tutorial:

virtualenv ptl_env --python=python3
source ptl_env/bin/activate
pip install pytorch-lightning
pip install torchvision
pip install scikit-learn

You can also use Anaconda to manage your virtual environment:

conda create -n ptl_env
conda activate ptl_env
conda install -n ptl_env pytorch-lightning -c conda-forge
conda install -n ptl_env torchvision
conda install -n ptl_env scikit-learn

Or even combine the two, creating a new anaconda environment and then using pip to install packages. For more general usage there are some caveats to using pip and Anaconda together, but for purposes of this tutorial it should be fine:

conda create -n ptl_env
conda activate ptl_env
conda install -n ptl_env pip
pip install pytorch-lightning
pip install torchvision
pip install scikit-learn

Using PyTorch Lightning

The design strategy employed by PyTorch Lightning revolves around the LightningModule class.
This class, itself inheriting from the torch.nn.Module class, provides a convenient entry point and attempts to organize as much of the training and validation process as possible all in one place.

A key feature of this strategy is that the contents of a typical training and validation loop are instead defined in the model itself, accessible via a fit API very similar to keras, fastai, or even SciKit-Learn. Unlike those other examples, where fit is accessed through the model itself, in PyTorch Lightning fit is accessed via a Trainer object. But that's getting ahead of ourselves; first let's set the stage for our experiment by importing everything we'll need.

import os
import numpy as np

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import transforms
from torch.utils.data import DataLoader, random_split

# for rapid prototyping with a small dataset
import sklearn
import sklearn.metrics
import sklearn.datasets

# for building intuition with a few tens of thousands of samples
from torchvision.datasets import MNIST

import pytorch_lightning as pl
from pytorch_lightning.metrics import functional as FM

Then we can go ahead and define our model:

class MyClassifier(pl.LightningModule):

    def __init__(self, dim=28, activation=nn.ReLU()):
        super(MyClassifier, self).__init__()
        self.image_dim = dim
        self.hid_dim = 128
        self.num_classes = 10
        self.act = activation

        self.feature_extractor = nn.Sequential(
            nn.Conv2d(1, 4, 3, padding=1),
            self.act,
            nn.Conv2d(4, 4, 3, padding=1),
            self.act,
            nn.Conv2d(4, 1, 3, padding=1),
            self.act,
            nn.Flatten())

        self.head = nn.Sequential(
            nn.Linear(self.image_dim**2, self.hid_dim),
            self.act,
            nn.Linear(self.hid_dim, self.hid_dim),
            self.act,
            nn.Linear(self.hid_dim, self.num_classes))

    def forward(self, x):
        x = self.feature_extractor(x)
        output = self.head(x)
        return output

    def training_step(self, batch, batch_index):
        x, y = batch
        output = self.forward(x)
        loss = F.nll_loss(F.log_softmax(output, dim=-1), y)

        y_pred = output.argmax(-1).cpu().numpy()
        y_tgt = y.cpu().numpy()
        accuracy = sklearn.metrics.accuracy_score(y_tgt, y_pred)
        self.log("train loss", loss)
        self.log("train accuracy", accuracy)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        output = self.forward(x)
        loss = F.cross_entropy(output, y)
        pred = output.argmax(-1)
        return output, pred, y

    def validation_epoch_end(self, validation_step_outputs):
        outputs = None
        preds = None
        tgts = None
        for output, pred, tgt in validation_step_outputs:
            preds = torch.cat([preds, pred]) if preds is not None else pred
            outputs = torch.cat([outputs, output], dim=0) \
                if outputs is not None else output
            tgts = torch.cat([tgts, tgt]) if tgts is not None else tgt

        loss = F.nll_loss(F.log_softmax(outputs, dim=-1), tgts)
        y_preds = preds.cpu().numpy()
        y_tgts = tgts.cpu().numpy()

        fm_accuracy = FM.accuracy(outputs, tgts)
        # pytorch lightning prints a deprecation warning for FM.accuracy,
        # so we'll include sklearn.metrics.accuracy_score as an alternative
        accuracy = sklearn.metrics.accuracy_score(y_tgts, y_preds)

        self.log("val_accuracy", accuracy)
        self.log("val_loss", loss)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=3e-4)

Notably, training functionality is devolved to the module itself in the training_step function. Most ML practitioners with some PyTorch practice will already be quite familiar with overloading the forward function, and LightningModule objects have many more methods to overload for fine-grained control of the relatively painless logging and evaluation features that are built in.

The code that defines our MyClassifier model class might seem pretty verbose, but this strategy massively simplifies things when it's time to actually start training, which we'll see later. There are plenty of other callbacks and functions that are included in the LightningModule class, and all of them can be overloaded for more fine-tuned control.
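The accumulate-then-score pattern used in validation_epoch_end can be sketched without any framework at all. The helper name and the toy batches below are hypothetical, but the idea is the same: collect every batch's predictions and targets, then compute one metric over the whole epoch.

```python
def epoch_end_accuracy(validation_step_outputs):
    # Each element mimics one validation_step result:
    # (predicted labels, true labels) for a single batch.
    preds, tgts = [], []
    for batch_preds, batch_tgts in validation_step_outputs:
        preds.extend(batch_preds)   # plays the role of torch.cat
        tgts.extend(batch_tgts)
    # One accuracy over the full epoch, not a mean of per-batch scores
    correct = sum(p == t for p, t in zip(preds, tgts))
    return correct / len(tgts)

# three hypothetical batches of (predictions, targets)
batches = [([1, 2], [1, 2]), ([0, 1], [0, 0]), ([2, 2], [2, 1])]
print(epoch_end_accuracy(batches))  # 4 of 6 correct -> 0.666...
```

Computing the metric once over the concatenated epoch, rather than averaging per-batch accuracies, avoids bias when batches differ in size, which is why the model's validation_epoch_end concatenates everything before calling accuracy_score.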
A full list of these callbacks can be found in the PyTorch Lightning documentation.

For this tutorial, we'll also define a torch.utils.data.Dataset object to wrap the digits dataset from SciKit-Learn. This should make it easy to rapidly get everything working before switching to a larger and more informative dataset like MNIST or CIFAR10.

class SKDigitsDataset(torch.utils.data.Dataset):

    def __init__(self, mode="train"):
        super(SKDigitsDataset, self).__init__()
        x, y = sklearn.datasets.load_digits(return_X_y=True)

        num_samples = int(x.shape[0] * 0.8)
        np.random.seed(42)
        np.random.shuffle(x)
        np.random.seed(42)
        np.random.shuffle(y)

        if mode == "train":
            self.x = x[:num_samples]
            self.y = y[:num_samples]
        elif mode == "val":
            self.x = x[num_samples:]
            self.y = y[num_samples:]
        else:
            self.x = x
            self.y = y

        self.transform = lambda my_dict: \
            (torch.tensor(my_dict["x"]).float(),
             torch.tensor(my_dict["y"]).long())

    def __len__(self):
        return self.x.shape[0]

    def __getitem__(self, index):
        got_x = self.x[index].reshape(-1, 8, 8)
        got_y = self.y[index]
        sample = {"x": got_x, "y": got_y}
        sample = self.transform(sample)
        return sample

With all that out of the way, actually launching a training run becomes incredibly simple. All we have to do is create a dataset and feed it into a DataLoader, instantiate our model, create a PyTorch Lightning Trainer object, and call the trainer's fit method. Here's a simplified version:

dataset = SKDigitsDataset()
dataloader = DataLoader(dataset)
model = MyClassifier(dim=8)
trainer = pl.Trainer()
trainer.fit(model, dataloader)

But of course we will want to continuously log validation metrics throughout the training process, making use of the validation_step and validation_epoch_end methods we overloaded in our model.
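The SKDigitsDataset class keeps features and labels aligned by re-seeding the random number generator before each shuffle. The same trick can be sketched with the standard random module; the toy data and the helper name below are illustrative, not part of the article's code.

```python
import random

def paired_shuffle_split(x, y, train_frac=0.8, seed=42):
    # Seeding identically before each shuffle produces the same
    # permutation for x and y (same length), so pairs stay aligned.
    random.seed(seed)
    random.shuffle(x)
    random.seed(seed)
    random.shuffle(y)
    n_train = int(len(x) * train_frac)
    return (x[:n_train], y[:n_train]), (x[n_train:], y[n_train:])

x = list(range(10))
y = [str(i) for i in range(10)]  # label i belongs to sample i
(train_x, train_y), (val_x, val_y) = paired_shuffle_split(x, y)
# every sample still carries its own label after shuffling
assert all(str(a) == b for a, b in zip(train_x + val_x, train_y + val_y))
```

A sklearn.model_selection.train_test_split call would achieve the same split with less ceremony; the re-seed trick above simply mirrors what the dataset class does with np.random.seed.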
Here's the actual code I use to launch a training run, using the if __name__ == "__main__": pattern that provides a simple entry point for running a Python file as a module.

if __name__ == "__main__":
    # if using digits from sklearn
    train_dataset = SKDigitsDataset(mode="train")
    val_dataset = SKDigitsDataset(mode="val")

    dim = 8
    validation_interval = 1.0

    train_dataloader = DataLoader(train_dataset)
    val_dataloader = DataLoader(val_dataset)

    model = MyClassifier(dim=dim, activation=nn.ReLU())
    trainer = pl.Trainer(max_epochs=100,
                         val_check_interval=validation_interval)

    trainer.fit(model, train_dataloader, val_dataloader)
    print("Training finished, all ok")

When you run the code above, you should see a progress bar displayed in your terminal that looks something like the one below.

After allowing training to run for a while, have a look in your working directory and you'll notice a new folder called lightning_logs. This is where PyTorch Lightning records your training sessions, and you can quickly boot up a Tensorboard session to see how things are going. After launching Tensorboard with the line below, use a browser to navigate to localhost:6006 (by default) to open up the dashboard.

tensorboard --logdir=lightning_logs

If it took you a few starts and stops to get training to take off, you'll notice a list of training runs displayed in the left sidebar, with version_0, version_1, version_2 and so on.
PyTorch Lightning automatically versions your training runs this way, so it should be pretty easy to compare a few different experimental conditions or random seeds.

For example, if we wanted to run our little experiment comparing the efficacy of using Swish versus ReLU activations, we can use the code below.

if __name__ == "__main__":

    if (1):
        # if using digits from sklearn
        train_dataset = SKDigitsDataset(mode="train")
        val_dataset = SKDigitsDataset(mode="val")

        dim = 8
        validation_interval = 1.0
    else:
        # if using MNIST
        train_dataset = MNIST(os.getcwd(), download=True,
                              train=True, transform=transforms.ToTensor())
        val_dataset = MNIST(os.getcwd(), download=True,
                            train=False, transform=transforms.ToTensor())

        dim = 28
        validation_interval = 0.1

    train_dataloader = DataLoader(train_dataset)
    val_dataloader = DataLoader(val_dataset)

    class Swish(nn.Module):

        def __init__(self):
            super(Swish, self).__init__()

        def forward(self, x):
            return x * torch.sigmoid(x)

    for replicate in range(3):
        for activation in [Swish(), nn.ReLU()]:
            model = MyClassifier(dim=dim, activation=activation)
            trainer = pl.Trainer(max_epochs=100,
                                 val_check_interval=validation_interval)
            trainer.fit(model, train_dataloader, val_dataloader)
        print(f"round {replicate} finished.")

And after running our little experiment we'll find our results nicely logged for our perusal in Tensorboard.

You'll probably notice we have the option to run training on the much larger MNIST dataset. At 60,000 training samples of 28 by 28 pixel images, it's closer to a useful real-world dataset than the miniaturized sklearn digits dataset, which provides fewer than 2,000 samples of 8 by 8 images.
However, you probably won't want to run 6 replicate training runs on the MNIST dataset using an underpowered laptop CPU, so we'll want to move everything over to a GPU first.

If you are already used to building experiments and training pipelines in standard PyTorch from scratch, you probably know the frustration of a forgotten tensor languishing on a CPU device, and the show-stopping errors it generates. It's usually an easy fix, but frustrating nonetheless.

Using a GPU for Training

If you're working with a machine with an available GPU, you can easily use it to train. To launch training on the GPU instead of the CPU, we'll have to modify some of the code:

trainer = pl.Trainer(max_epochs=100,
                     val_check_interval=validation_interval,
                     gpus=[0])

That's right: by modifying a single line of code defining the trainer object, we can run training on the GPU. No worrying about forsaken tensors, and with all the convenience of logging and validation we built into our original model.

Next Steps

A striking aspect of working with PyTorch Lightning is that it seems to get easier the further along you go. Defining our MyClassifier model was a little more complicated up front than a model of similar complexity sub-classed from torch.nn.Module, but once we had training, validation, and logging all taken care of by the LightningModule model, every subsequent step was easier than it would have been normally.

PyTorch Lightning also makes managing hardware a breeze, and we caught a glimpse of just how simple this is when we switched to training MNIST on a GPU.
PyTorch Lightning also readily facilitates training on more esoteric hardware like Google's Tensor Processing Units and on multiple GPUs, and it is being developed in parallel alongside Grid, a cloud platform for scaling up experiments using PyTorch Lightning, and Lightning Bolts, a modular toolbox of deep learning examples driven by the PyTorch Lightning community.

That covers our "Hello, World" introduction to PyTorch Lightning, but we've barely scratched the surface of what Lightning intends to deliver to your deep learning workflow.

In our next PyTorch Lightning tutorial, we'll dive into two complementary PyTorch Lightning libraries: Lightning Flash and TorchMetrics. TorchMetrics unsurprisingly provides a modular approach to define and track useful metrics across batches and devices, while Lightning Flash offers a suite of functionality facilitating more efficient transfer learning and data handling, and a recipe book of state-of-the-art approaches to typical deep learning problems. Now, on to our next PyTorch Lightning tutorial:

PyTorch Lightning Tutorial #2: Using TorchMetrics and Lightning Flash

Bio: Kevin Vu manages Exxact Corp blog and works with many of its talented authors who write about different aspects of Deep Learning.

Original. Reposted with permission.
news | MartianMan | 108° - Free Udemy Courses: Python, Excel, Adobe XD, Meditation, Cryptocurrency, ASP .Net, Java, SEO, Facebook Ads & Google Ads and More | https://www.hotukdeals.com/deals/free-udemy-courses-python-excel-adobe-xd-meditation-cryptocurrency-asp-net-java-seo-facebook-ads-google-ads-and-more-3810856 | 2021-10-23T18:43:00Z

Have fun learning !!

[4.6] [19h 53m] Complete Python Programming Machine Learning Start - Advance Code=COMPUTER_PYTHON_ML 1 Day left at this price !
[4.4] [9h 16m] Neural Networks in Python: Deep Learning for Beginners Code=NNPO1FR 2 days left at this price !
[3.9] [3h 20m] Mastering The Complete Agile Scrum Master Workshop Code=FAC005864ED2404E4334 2 days left at this price !
[4.5] [12h 50m] Revenue and Pricing Analytics with Excel & Python Code=D99873E820F9962C4E72 2 days left at this price !
[4.3] [11h 26m] Supply Chain Design and Planning with Excel & Python Code=050058751D3BD39BB0AB 2 days left at this price !
[4.5] [18h 0m] A Deep Dive into Statistical Forecasting (Excel & R) Code=F584B7D2EA873C79E81E 2 days left at this price !
[4.6] [16h 56m] RA: Retail Planning, Assortment Analytics with Excel & Python Code=82B79F189F837D53C86F 2 days left at this price !
[4.5] [15h 45m] RA: Supply Chain Applications with Python: Inventory Code=47427805A0D402A7589D 2 days left at this price !
[4.2] [15h 39m] RA: Retail Customer Analytics and Trade Area Modeling Code=C060BAC40CA9791223C7 2 days left at this price !
[4.7] [58m] B2B Lead Generation via Personalised Sales Videos & LinkedIn Code=1626F8AD0B22D9479072 2 days left at this price !
[4.3] [3h 55m] How to create an animated promo video in PowerPoint Code=PROMO_INTRO_COURSE 2 days left at this price !
[4.4] [12h 32m] Learn User Experience Design from A-Z: Adobe XD UI/UX Design Code=67DF2DA697206842EBAD 2 days left at this price !
[4.5] [12h 51m] Complete Machine Learning with R Studio - ML for 2021 Code=CMLO2FRR 2 days left at this price !
[4.3] [7h 13m] Decision Trees, Random Forests, AdaBoost & XGBoost in Python Code=DTRO1FRF 2 days left at this price !
[4.5] [26h 15m] BEST of Website Traffic 2021: SEO, Facebook Ads & Google Ads Code=OCTGOODNESS10 2 days left at this price !
[4.4] [1h 32m] 2021 Beginner's guide to Cyber Security Code=A46C38387F8B27F593CB 2 days left at this price !
[New] [4h 33m] Build Preliminary Country Marketing Plan/Note | Step by Step Code=9A95AA006BD5ADCED91A 2 days left at this price !
[3.8] [7h 36m] Best GMAT Math Prep Course to Score 360+ in GMAT-2021- Oct Code=GMATN100 2 days left at this price !
[4.0] [7h 10m] Best International Maths Olympiad/ Olympiad Mathematics-2021 Code=FREEOLYMPIAD 2 days left at this price !
[4.3] [2h 46m] Cryptocurrency Trading for Beginners 2021 (CLICK-BY-CLICK) Code=35B28892B14EB9AD90EC 2 days left at this price !
[4.3] [31m] Success Mindset Mastery 2021 Code=8283EB7E04ADC85BEE2C 2 days left at this price !
[4.4] [29m] IT Deployments - Business & Technical Support Planning Code=OCT2021 2 Days left at this price !
[4.6] [53m] Public Speaking Emergency! Ace the Speech With Little Prep Code=6CDD5E988E89BC5F1C41 1 Day left at this price !
[4.0] [9h 20m] Complete Java Programming Certification Beginner to Advance Code=FREE_JAVA_COURSES 1 Day left at this price !
[4.1] [38m] English grammar tenses made easy Code=9033610EC41C0DAEE23B 1 Day left at this price !
[4.4] [3h 4m] Ms Excel/Excel 2021 - The Complete Introduction to Excel Code=4B93D38415FB8D008004 1 Day left at this price !
[4.7] [5h 22m] Meditation Masterclass Code=511DE1788DC7ABAEA41D 1 Day left at this price !
[4.2] [1h 14m] Learn To Focus: Meditation & Mindfulness For ADHD Code=0CF12F05652C0691EB65 1 Day left at this price !
[4.5] [1h 32m] Meditation For Beginners Code=6D3EE88CD04A7A0A0747 1 Day left at this price !
[4.1] [1h 1m] Creative Infographics in PowerPoint Code=PROMO_INFOGRAPHICS 1 Day left at this price !
[4.1] [41m] Low Content Publishing Mastery 2021 Code=6B1C6C5E5C8620D3999B 2 days left at this price !
[4.1] [34m] Pricing Strategies Mastery 2021 Code=5F7C87543003B3051817 2 days left at this price !
[New] [3h 53m] Azure Security For Busy .Net Developers (ADD & Key Vault) Code=DBDF405BA427960FF674 1 day left at this price !
[New] [12h 37m] Complete Clip Studio Paint Megacourse: Beginner to Expert Code=B3259218B53E3423CE53 2 days left at this price !
[New] [4h 23m] Learn Shopify Now: Shopify for Beginners Code=SHOPIFY 2 days left at this price !
[New] [5h 7m] Certified Emotional & Psychological Healing Specialist Course Code=40A0570A195E051D6B59 2 days left at this price !
[New] [44m] Learn Video Editing for Freelancing & Social Media Marketing Code=FREELIMITEDTIME 1 day left at this price !
[3.8] [10h 28m] Python Learn by Python Projects & Python Quizzes in 2021 Code=DA5DFB0346EC83E7CCE3 2 days left at this price !
[4.2] [2h 23m] Cryptocurrency Course: Learn to Make Money Online WORLDWIDE! Code=D02EE3FD1381909B1CC2 1 day left at this price !
[3.9] [3h 49m] Introduction to Forex- learn to trade forex by yourself Code=A2F2E58C8519069F32BD 1 day left at this price !
[3.7] [5h 41m] SQL for Data Analysis and Data Science in 2021 Code=555B3E0AEE08B4051729 2 days left at this price !
[4.4] [8h 22m] Data Analysis, Data Science & Visualization: Python & Pandas Code=COMPLETE_DATA12 2 days left at this price !
[New] [3h 12m] Basic Arabic Grammar Code=FREE3OCT21 1 day left at this price !
[New] [2h 19m] Containerize SpringBoot Node Express Apps & Deploy on Azure Code=80068E5C73AF36075422 1 day left at this price !
[New] [3h 57m] .Net Apps on Azure Virtual Machine Windows & Linux | Docker Code=2581CF1B4F80859CCF75 1 day left at this price !
[New] [1h 10m] Finding a unique niche to build a successful business Code=3DAYSFREE 2 days left at this price !
[4.4] [41m] HVAC Ductwork Quantity Take off for Quantity Surveyors Code=HVAC4OCT 2 days left at this price !
[New] [38m] The Best Course For Learning about Collaboration/ Teamwork Code=FREE100 2 days left at this price !
[New] [9h 6m] ASP .Net Core 5 Web APIs Using DB First [In-depth in 8 Days] Code=6D721CF4F2AC9CB3CE17 1 day left at this price !
[New] [9h 3m] Data Analysis And Business Intelligence With Microsoft Excel Code=FREE-EXCEL-BI 1 day left at this price !

Popular Discounted Courses

[Humble Bundle] JavaScript & Web Development Bundle from $1 3 Days left at this price !
[4.6] [56h 4m] Easy Python Programming Beginners Pro Python SQL MySQL Mongo $9.99 Code=OCT-5D-OFF21 2 Days left at this price !
[Bestseller] [118h 52m] Easy C# with Windows Forms for Beginners to Pro C# .Net Apps $9.99 Code=OCT-5D-OFF21 2 Days left at this price !
[Bestseller] [27h 58m] Complete Media Training Master Class - Confidence on Camera $11.99 Code=THANKS08 4 Days left at this price !
[4.5] [29h 6m] Presentation Skills - Advanced: Speak Without Reading Notes $10.99 Code=THANKS08 4 Days left at this price !
[Bestseller] [84h 8m] Digital Marketing Course 2021: Be a Digital Marketing Nomad $9.99 Code=3OCT999 2 Days left at this price !
[4.5] [46h 39m] BEST of SEO: #1 SEO Training & Content Marketing Course 2021 $10.99 Code=1OCT999 2 Days left at this price !
[Highest Rated] [19h 37m] AWS Certified Solutions Architect Professional SAP-C01 2021 $9.99 Code=AWSOCT-21 3 Days left at this price !
[4.7] [21h 33m] NEW AWS Certified Solutions Architect Associate 2021 SAA-C02 $9.99 Code=AWSOCT-21 3 Days left at this price !
[4.7] [12h 5m] AWS Certified Cloud Practitioner - Complete NEW Course 2021 $10.99 Code=AWSOCT-21 3 Days left at this price !
[4.2] [78h 10m] Complete SQL in C# Amazing Database App by C# in SQL ADO.Net $9.99 Code=OCT-5D-OFF21 2 Days left at this price !
[4.5] [63h 22m] Easy WPF in C# Windows Presentation Foundation XAML ADO.Net $9.99 Code=OCT-5D-OFF21 2 Days left at this price !
[4.4] [60h 13m] Python, Java, C#, C++ & C Programming Practices For Beginners $9.99 Code=999OCTOBER-1 1 Day left at this price !
[4.4] [82h 38m] Complete Codecamp On C, C++, C#, Java & Python Programming! $9.99 Code=999OCTOBER-1 1 Day left at this price !
[4.1] [8h 5m] Practical Python Programming Practices (100 Common Projects) $9.99 Code=999OCTOBER-1 1 Day left at this price !
news | MartianMan | 102° - 70+ Free Udemy Courses: Programming Network, Java Programming, Python Programming, Agile, MS Excel, Artificial Intelligence & More

Have fun learning !!
Credit: Reddit

[Highest Rated] [2h 8m] Programming Network Applications in Java Code=OCT21FREE 2 Days left at this price !
[4.2] [6h 55m] how2Py 101 - Beginners Python Crash Course Code=LASTCOUPON 1 Day left at this price !
[4.1] [3h 17m] The Python Programming For Everyone Immersive Training Code=498404EEA7E4F5E457B2 2 Days left at this price !
[4.4] [4h 42m] The Python Programming Comprehensive Bootcamp Code=F4FFC811A3A30EB7F33D 2 Days left at this price !
[4.1] [2h 31m] Python Programming Beyond The Basics & Intermediate Training Code=060AAE349792790EACB7 2 Days left at this price !
[4.3] [2h 33m] Intermediate Python Immersive Training | Boost your career Code=B5A422DC729D4801E27F 2 Days left at this price !
[4.5] [2h 3m] Google Analytics, GA4, GTM. How to improve your marketing? Code=WEBANALYTICS2021 2 Days left at this price !
[4.5] [1h 29m] Content Marketing 2021. Content that Sells! Code=CONTENT2021 2 Days left at this price !
[New] [1h 29m] Build a TodoList Fast with Vue and ASP.NET Core API Code=CODEBRAINS_FREE 2 Days left at this price !
[New] [1h 17m] Build A TodoList with Kotlin, Spring Boot and Vue Code=CODEBRAINS_FREE 2 Days left at this price !
[Highest Rated] [1h 24m] Agile Retrospective: Continuous Improvement+Kaizen wth Scrum Code=111021_FREE 2 Days left at this price !
[New] [9h 20m] Complete Java Programming Certification Beginner to Advance Code=COMPLETE-JAVA-COURSE 1 Day left at this price !
[4.1] [10h 56m] Statistics for Business Analytics using MS Excel Code=BIZSTATO1 1 Day left at this price !
[4.4] [1h 6m] Anti-Money Laundering Concepts: AML, KYC and Compliance Code=AMLTFOC2 1 Day left at this price !
[4.5] [4h 18m] Agile Project Management: Agile, Scrum, Kanban & XP Code=OC2AGLBER 1 Day left at this price !
[4.4] [1h 12m] Focus Mastery: Focus 100% On Your Goals & Kill Distractions Code=C0D1D00EAECEEFCE4CEC 1 Day left at this price !
[3.8] [3h 40m] Microsoft OneDrive Ultimate Course 2021 Code=TRY10FREE102104 1 Day left at this price !
[4.0] [3h 48m] Adobe After Effects 2021 Ultimate Course Code=TRY10FREE102104 1 Day left at this price !
[4.3] [2h 58m] Adobe InDesign 2021 Ultimate Course Code=TRY10FREE102104 1 Day left at this price !
[4.2] [5h 56m] Becoming an HR: Human Resources for Non HR Managers Code=OCTFEST 2 Days left at this price !
[Bestseller] [5h 42m] Marketing Analytics: Pricing Strategies and Price Analytics Code=PRO1MA 1 Day left at this price !
[4.4] [4h 4m] Microsoft Excel Masterclass for Business Managers Code=XLMO1BM 1 Day left at this price !
[4.4] [13h 19m] Time Series Analysis and Forecasting using Python Code=TMSO1PY 1 Day left at this price !
[Bestseller] [7h 12m] Marketing Analytics: Forecasting Models with Excel Code=FORO1XL 1 Day left at this price !
[4.5] [12h 51m] Complete Machine Learning with R Studio - ML for 2021 Code=CMLO1R 1 Day left at this price !
[Bestseller] [10h 2m] HR Analytics using MS Excel for Human Resource Management Code=HRO1MSX 1 Day left at this price !
[4.2] [4h 54m] Advanced Psychology for Stress and Leadership Code=8563E7AB1AEBDD692EC8 1 Day left at this price !
[Bestseller] [13h 46m] Complete Video Production, Video Marketing, & YouTube Course Code=OCTGOODNESS100 2 Days left at this price !
[4.3] [1h 46m] Learn CSS - For Beginners Code=YOUACCEL90647 1 Day left at this price !
[4.5] [2h 15m] Learn HTML - For Beginners Code=YOUACCEL90647 1 Day left at this price !
[4.5] [1h 23m] Ukulele Complete Course for Beginners Code=FREEOCTOFFER 2 Days left at this price !
[4.1] [2h 29m] Learn 10 Ways to Make MORE Money on YouTube! Code=5D3D9478651FDE6FDF73 1 Day left at this price !
[4.5] [2h 45m] Learn Bootstrap - For Beginners Code=YOUACCEL90647 1 Day left at this price !
[4.5] [1h 15m] Learn jQuery - For Beginners Code=YOUACCEL90647 1 Day left at this price !
[4.3] [1h 28m] Learn JavaScript - For Beginners Code=YOUACCEL90647 1 Day left at this price !
[New] [54m] Learn integration testing with Spring Boot Code=NEW_COURSE 1 Day left at this price !
[Hot & New] [1h 27m] Chargeback Reason Codes in Card Payments Code=A072DF3F5A3A9D3E9C4A 1 Day left at this price !
[New] [1h 19m] How Manipulation Works in Sales Code=C90F9FA06BF004348C1D 1 Day left at this price !
[4.5] [13h 20m] Ultimate Persuasion Psychology - Persuasion and Influence Code=43DCFDFD8D9C237C428D 1 Day left at this price !
[New] [1h 18m] How Manipulation Works In Politics Code=D4495DB72B2DCCD94C88 1 Day left at this price !
[4.4] [11h 15m] B2B Sales, LinkedIn Lead Generation & Cold Email Sales Code=038C7EFC64338E3DB9C2 1 Day left at this price !
[4.6] [31m] Team Building for Business Success Code=0AF46B76C81AFBC61A08 1 Day left at this price !
[4.1] [8h 45m] Artificial Intelligence (AI) in the Classroom Code=100OFF_OCT1 1 Day left at this price !
[4.1] [57m] Quantity Surveying With Rate Analysis And Take Off-Beginners Code=QSBEGINOCT 1 Day left at this price !
[New] [1h 42m] Build A TodoList with Python, FastAPI and Vue
JS Code=CODEBRAINS_FREE 1 Day left at this price ![Bestseller] [5h 4m] Passive Income 6 figures Drop Servicing Home Online Business Code=12345B 1 Day left at this price ![4.2] [52m] Video Editing with Adobe Premiere Pro CC 2021 for Beginners Code=F129B2D174C278BEAD10 1 Day left at this price ![4.5] [1h 18m] Shopify Bootcamp: Increase Your Sales and Conversion rate Code=2D36A91EAEB681EC345A 1 Day left at this price ![4.2] [4h 57m] The Complete Introduction to C++ Programming Code=DFA6EE4FA95C1A7CF1AD 1 Day left at this price ![4.0] [4h 46m] Gatsby JS | Build a personal blog using gatsbyJS Code=OCT2021 1 Day left at this price ![4.0] [2h 53m] Learn 4 STEPS to Make Money Online by Blogging! Code=AB8A802DD5279BA52156 1 Day left at this price ![3.8] [42m] Color Grading and Video Editing with Davinci Resolve 17 Code=CEF82325E544BF45C01A 1 Day left at this price ![4.0] [43m] Color Correction & Grading with Adobe Premiere Pro 2021 Code=F02966C10BFC08A042CE 1 Day left at this price ![3.9] [36m] Learn Basics of Adobe Photoshop CC 2021 for Beginners Code=4FAAE1ADC887238DFEE6 1 Day left at this price ![4.2] [1h 47m] Learn PHP - For Beginners Code=YOUACCEL90647 1 Day left at this price ![3.9] [2h 42m] Improving Your Personal Finance Code=OCT2021 2 Days left at this price ![4.4] [1h 27m] Learn MySQL - For Beginners Code=YOUACCEL90647 1 Day left at this price ![Bestseller] [6h 3m] MBA course: Marketing and Business Strategy Code=MBABUSINESS 1 Day left at this price ![4.7] [4h 18m] Startup Fundraising Master course - Raise Venture Capital Code=FUNDINGSTARTUP 1 Day left at this price ![3.8] [1h 29m] Udemy bootcamp: How to create a STUNNING course Unofficial Code=EE60B83E241BE4D9CBC4 2 Days left at this price ![4.0] [1h 47m] Cryptocurrencies: complet practical course on crypto trading Code=56513625DD472DE2CC79 2 Days left at this price ![4.4] [1h 44m] Google Analytics For Beginners Code=OCT-2021 2 Days left at this price ![3.8] [42m] Transform your Mid-Career Code=FREE100OCT 2 Days 
left at this price ![New] [5h 53m] Certified Psychic Self Defense & Evil Entity Removal Healer Code=748AC8F0F6BDF7F86A87 2 Days left at this price ![4.4] [1h 32m] 2021 Beginner's guide to Cyber Security Code=A7264D0426877DE5D5BE 2 Days left at this price ![4.4] [57m] Time Management And Goal Planning: The Productivity Combo Code=88E11B74D17546CE5650 2 Days left at this price ![4.4] [1h 12m] Self-Discipline: Build Habits & Develop a Growth Mindset Code=FE5B85149395C72485F0 2 Days left at this price ![4.3] [1h 17m] Goal Setting For Success: Plan And Achieve Your Goals Code=65345CFCB8C1B7820DAF 2 Days left at this price ![Bestseller] [16h 56m] GET on TOP of Real Estate Business with Facebook Ads in 2021 Code=OCTGOODNESS100 2 Days left at this price ![4.5] [51m] Adobe Lightroom For Beginners : Complete Photo/Image Editing Code=LROCT2021 2 Days left at this price ![4.6] [1h 39m ] Certified Lead Generation Professional | List Building Easy! Code=219FF4CFEDDAC027642A 2 Days left at this price ![Amazon UK] 6 Free eBooks: Business Meetings, Performance Problems etc 3 days left at this price ![Amazon UK] Python Programming Exercises (Free with Kindle Unlimited or $0.99) 3 days left at this price ![Amazon US] 130+ Practical C Programming Practices And Projects (Free with Kindle Unlimited or $0.99) 4 Day left at this price ![Amazon US] Startup Evolution Curve (Free with Kindle Unlimited or $0.99) 4 Day left at this price !Popular Discounted Courses[Bestseller] [14h 52m] Startup Fast Track: Confident Launch in 90 Days or Less Code=OCTOBER-1A 4 Days left at this price ![4.5] [95h 23m] Ultimate PHP, Laravel, CSS & Sass! 
Learn PHP, Laravel & Sass $9.99 Code=OCTPROMOCODE 4 Days left at this price ![4.3] [96h 20m] ULTIMATE SEO, SOCIAL MEDIA, & DIGITAL MARKETING COURSE 2021 $12.99 Code=1OCT999 4 Days left at this price ![4.1] [45h 32m] Master JavaScript - The Most Complete JavaScript Course 2021 $10.99 Code=OCTPROMOCODE 4 Days left at this price ![4.6] [8h 47m] Facebook Page & Facebook Ads Made as FUN: 10 DAYS Challenge $9.99 Code=1OCT999 4 Days left at this price ![4.3] [45h 40m] Python Hands-On 46 Hours, 210 Exercises, 5 Projects, 2 Exams $9.99 Code=PYTHON-101 2 Days left at this price ![Bestseller] [83h 32m] Digital Marketing Course 2021: Be a Digital Marketing Nomad $9.99 Code=1OCT999 1 Day left at this price ![4.6] [11h 23m] BEST of Google SEO 2021: SEO & Copywriting Made Simple & Fun $9.99 Code=1OCT999 1 Day left at this price ! | https://www.hotukdeals.com/deals/70-free-udemy-courses-programming-network-java-programming-python-programming-agile-ms-excel-artificial-intelligence-more-3805481 | 2021-10-11T18:18:14Z | hotukdeals is a credit broker, not a lender.
This is in relation to some products falling under Broadband and Phone Contracts, Finance & Insurance and Car & Motorcycle.hotukdeals is a trading name of Pepper Deals Ltd (FRN 798319) an Appointed Representative of Funding Falcon (FRN: 743100) who is authorised and regulated by the Financial Conduct Authority (FCA). Pepper Deals Ltd is registered England and Wales. Number 9729292. Registered office: First Floor, 2-4 Holywell Lane, Hackney, London EC2A 3ET.Note: products under: Home & Living, Electronics, Culture & Leisure, Fashion & Accessories, Family & Kids, Sports & Outdoors, Groceries, Gaming, Travel, Health & Beauty, Services, Garden & DIY are not regulated by the FCA. If you wish to make a complaint or contact us, you can use the address above, or follow our complaints procedure. | Content Synthesis/Recommendation | Education, Training, and Library/Business and Financial Operations/Arts, Design, Entertainment, Sports, and Media | null | null | null | null | null | null |
|
news | Christine Hall | Clarifai moves beyond computer vision to manage companies’ unstructured data | Clarifai wants to bring artificial intelligence into the lives of developers, business operators and data scientists so they can automate and accelerate... | https://techcrunch.com/2021/10/15/clarifai-moves-beyond-computer-vision-to-manage-companies-unstructured-data/ | https://s.yimg.com/ny/api/res/1.2/T9BGr..S2yZ18fnGW6etMA--/YXBwaWQ9aGlnaGxhbmRlcjt3PTEyMDA7aD04MDA-/https://s.yimg.com/uu/api/res/1.2/Bw2oIbmZ2NhHBs8XC_DJuw--~B/aD0xNDE0O3c9MjEyMTthcHBpZD15dGFjaHlvbg--/https://media.zenfs.com/en/techcrunch_350/5fa2033ce58bf20d67368afbccfec121 | 2021-10-15T13:30:55Z | Clarifai wants to bring artificial intelligence into the lives of developers, business operators and data scientists so they can automate and accelerate their model development.

Matt Zeiler founded the New York-based company in 2013 focused on computer vision. Since its $30 million Series B in 2016, Clarifai has been rolling out new capabilities and products targeting a company's unstructured image, video, text and audio data files.

The new functionalities include natural language processing, audio recognition, scanning and an automated data labeling feature, Scribe, that was launched last year. It is also deploying its Edge AI capability that layers AI on top of data streams using various local hardware, from high-powered servers to cameras and drones. The company plans to unveil even more at its annual deep learning conference, Perceive 2021, on October 20.

Amid all of that activity, and to keep it going, Clarifai on Friday announced a $60 million Series C funding round, led by New Enterprise Associates, with participation from existing investors Menlo Ventures, Union Square Ventures, Lux Capital, LDV Capital, Corazon Capital and NYU Innovation Venture Fund, and new investors CPP Investments, Next Equity Partners, SineWave Ventures and Trousdale Capital.
The latest round brings the company's total funding raised to $100 million.

"We managed to go a long time without raising more funding," Zeiler told TechCrunch. "We grew our revenue significantly while controlling costs and operating efficiently. We saw an opportunity and raised the round."

That opportunity included having the right team in place to execute enterprise sales, he added. In the early days of the company, the market was immature, so Clarifai started selling to small businesses and individuals. Now it is closing deals with Fortune 500 companies as the market has matured.

To the company, "unstructured data" is data that the human brain is good at, but computers are not, like images, videos and text. This is something that enterprises have steadily realized has huge value, and in fact up to 95% of a company's data is unstructured, providing a huge opportunity for Clarifai, Zeiler said.

When the company started to see those enterprise signals in the market, it went after the Series C. It also partnered with Snowflake with an integration to link Clarifai with recently released unstructured data support from Snowflake.

"Snowflake has a $100 billion business built on structured data, and now it is doing unstructured data," Zeiler added. "If customers are storing data with Snowflake, they can get value from it; they need AI from Clarifai to make sense of it."

Clarifai product pipeline. Image Credits: Clarifai

Meanwhile, the company more than doubled its revenue over the last year and topped 130,000 users. The Series C funding enables Clarifai to scale its global team of 100 employees, with plans to double that by next year.

The company will also invest in sales and marketing, as well as an international expansion. It already has an office in Estonia, and Zeiler is looking at Australia, India and Turkey, where it is amassing more customers.
It will also continue to work on its Edge AI product, which just attracted its first commercial client.

As part of the investment, Andrew Schoen, partner at NEA, joins Clarifai's board of directors. The company was on his radar for a number of years, but Schoen felt at the time Clarifai was too early for investment.

"The early winds in AI were all around structured data, which was the low-hanging fruit since 90% of data is unstructured," he said. "Now that the ecosystem is mature, companies realize the bottleneck of having squeezed everything they can out of the structured data. Now they have all that unstructured data they can't use and it isn't neatly organized. Clarifai is aimed at solving that problem."

He sees Clarifai demystifying and democratizing AI and machine learning. Due to the company's early focus on unstructured data, it was able to get some early adopters and is now leading in this area.

In addition, he says the team hit an inflection point over the past 12 months with revenue projections and has a business that is growing nicely.

The company had to work to land customers and educate the market, but now instead of pushing the market, it is pulling, as companies look for solutions and see Clarifai is the right product, he added. | Process Automation/Content Synthesis | Business and Financial Operations/Computer and Mathematical/Management | null | null | null | null | null | null
news | Liz Sonenberg | Artificial intelligence is now part of our everyday lives, and its growing power is a double-edged sword | A major new report on the state of artificial intelligence (AI) has just been released. Think of it as the AI equivalent of an Intergovernmental Panel on Climate Change report, in that it identifies where AI is at today, and the promise and perils in view. | https://techxplore.com/news/2021-10-artificial-intelligence-everyday-power-double-edged.html | 2021-10-11T12:38:54Z | A major new report on the state of artificial intelligence (AI) has just been released. Think of it as the AI equivalent of an Intergovernmental Panel on Climate Change report, in that it identifies where AI is at today, and the promise and perils in view.

From language generation and molecular medicine to disinformation and algorithmic bias, AI has begun to permeate every aspect of our lives. The report argues that we are at an inflection point where researchers and governments must think and act carefully to contain the risks AI presents and make the most of its benefits.

A century-long study of AI

The report comes out of the AI100 project, which aims to study and anticipate the effects of AI rippling out through our lives over the course of the next 100 years. AI100 produces a new report every five years: the first was published in 2016, and this is the second. As two points define a line, this second report lets us see the direction AI is taking us in.

One of us (Liz Sonenberg) is a member of the standing committee overseeing the AI100 project, and the other (Toby Walsh) was on the study panel that wrote this particular report. Members of the panel came from across the world, with backgrounds in computer science, engineering, law, political science, policy, sociology and economics.

The promises and perils of AI are becoming real

The report highlights the remarkable progress made in AI over the past five years.
AI is leaving the laboratory and has entered our lives, having a "real-world impact on people, institutions, and culture." Read the news on any given day and you're likely to find multiple stories about some new advance in AI or some new use of AI.

For example, in natural language processing (NLP), computers can now analyze and even generate realistic human language. To demonstrate, we asked OpenAI's GPT-3 system, one of the largest neural networks ever built, to summarize the AI100 report for you. It did a pretty good job, even if the summary confronts our sense of self by being written in the first person:

"In the coming decade, I expect that AI will play an increasingly prominent role in the lives of people everywhere. AI-infused services will become more common, and AI will become increasingly embedded in the daily lives of people across the world.

"I believe that this will bring with it great economic and societal benefits, but that it will also require us to address the many challenges to ensure that the benefits are broadly shared and that people are not marginalized by these new technologies.

"A key insight of AI research is that it is easier to build things than to understand why they work. However, defining what success looks like for an AI application is not straightforward.

"For example, the AI systems that are used in healthcare to analyze symptoms, recommend diagnoses, or choose treatments are often far better than anything that could be built by a human, but their success is hard to quantify."

As a second example of the recent and remarkable progress in AI, consider the latest breakthrough from Google's DeepMind. AlphaFold is an AI program that provides a huge step forward in our ability to predict how proteins fold. This will likely lead to major advances in life sciences and medicine, accelerating efforts to understand the building blocks of life and enabling quicker and more sophisticated drug discovery.
Most of the planet now knows to its cost how the unique shape of the spike proteins in the SARS-CoV-2 virus is key to its ability to invade our cells, and also to the vaccines developed to combat its deadly progress.

The AI100 report argues that worries about super-intelligent machines and wide-scale job loss from automation are still premature, requiring AI that is far more capable than available today. The main concern the report raises is not malevolent machines of superior intelligence to humans, but incompetent machines of inferior intelligence.

Once again, it's easy to find in the news real-life stories of risks and threats to our democratic discourse and mental health posed by AI-powered tools. For instance, Facebook uses machine learning to sort its news feed and give each of its 2 billion users a unique but often inflammatory view of the world.

The time to act is now

It's clear we're at an inflection point: we need to think seriously and urgently about the downsides and risks the increasing application of AI is revealing. The ever-improving capabilities of AI are a double-edged sword. Harms may be intentional, like deepfake videos, or unintended, like algorithms that reinforce racial and other biases.

AI research has traditionally been undertaken by computer and cognitive scientists. But the challenges being raised by AI today are not just technical. All areas of human inquiry, and especially the social sciences, need to be included in a broad conversation about the future of the field. Minimizing negative impacts on society and enhancing the positives requires consideration from across academia and with societal input.

Governments also have a crucial role to play in shaping the development and application of AI. Indeed, governments around the world have begun to consider and address the opportunities and challenges posed by AI. But they remain behind the curve.
A greater investment of time and resources is needed to meet the challenges posed by the rapidly evolving technologies of AI and associated fields. In addition to regulation, governments also need to educate. In an AI-enabled world, our citizens, from the youngest to the oldest, need to be literate in these new digital technologies.

At the end of the day, the success of AI research will be measured by how it has empowered all people, helping tackle the many wicked problems facing the planet, from the climate emergency to increasing inequality within and between countries. AI will have failed if it harms or devalues the very people we are trying to help.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation: Artificial intelligence is now part of our everyday lives, and its growing power is a double-edged sword (2021, October 11) retrieved 11 October 2021 from https://techxplore.com/news/2021-10-artificial-intelligence-everyday-power-double-edged.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only. | Content Synthesis/Information Retrieval Or Search | Unknown | null | null | null | null | null | null
|
news | Kevin Vu | PyTorch Lightning Tutorial #1: Getting Started | A High-Level Library for High Performance Research Libraries like TensorFlow and PyTorch take care of most of the intricacies of building deep learning models that train and infer fast. Predictably, this leaves machine learning engineers spending most of their time on the next level up in abstraction, running hyperparameter search, validating performance, and versioning models and experiments to keep track of everything. There's a lot more to deep learning than just gluing some layers together. | https://dzone.com/articles/pytorch-lightning-tutorial-1-getting-started | 2021-11-02T18:07:12Z | A High-Level Library for High Performance Research

Libraries like TensorFlow and PyTorch take care of most of the intricacies of building deep learning models that train and infer fast. Predictably, this leaves machine learning engineers spending most of their time on the next level up in abstraction, running hyperparameter search, validating performance, and versioning models and experiments to keep track of everything. There's a lot more to deep learning than just gluing some layers together.

If PyTorch and TensorFlow (and now JAX) are the deep learning cake, higher-level libraries are the icing. For years now TensorFlow has had its icing on the cake in the high-level Keras API, which became an official part of TensorFlow itself with the release of TF 2.0 in 2019. Similarly, PyTorch users have benefited from the high-level fastai library, which is exceptionally well-suited for efficiency and transfer learning. This makes fastai a favorite of successful data scientists on the Kaggle contest platform. More recently, another streamlined wrapper for PyTorch has been quickly gaining steam in the aptly named PyTorch Lightning.

PyTorch Lightning has actually been around, at least in some capacity, since 2019. It started as a sort of side project undertaken by William Falcon during his PhD research at New York University.
By the time 2020 rolled around (and we mean the 2020 that started in March), PyTorch Lightning was no longer just a personal project, as Falcon announced venture funding. Around the same time the open source repository (under the Apache 2.0 License) moved from Falcon's personal GitHub profile to its own dedicated profile. As of this writing, PyTorch Lightning has grown to over 15,000 stars and nearly 2,000 forks, becoming nearly as popular as fastai (which has over 21,000 stars) and handily more popular than the in-house high-level library from PyTorch, Ignite, which has about 4,000 stars!

Where fastai was designed to facilitate the inaugural fastai course, Practical Deep Learning for Coders, PyTorch Lightning is intended to streamline production research. Fastai has a focus on transfer learning and efficiency, and its ease of use has made it a popular high-level library on the Kaggle data science competition platform, with over 4,500 notebooks referencing the library. Compare that to just over 100 notebook results referring to PyTorch Ignite, and about 500 for PyTorch Lightning. PyTorch Lightning is a relatively newer library, but it also targets a different demographic. PyTorch Lightning streamlines the engineering aspects of developing a new model, such as logging, validation and hooks, and it's targeted toward machine learning researchers.

Research is all about answering falsifiable questions, and in this tutorial we'll take a look at what PyTorch Lightning can do for us to make that process easier. We'll set up a simple mock research question of whether there is any advantage to using a fancy activation function (such as the so-called swish function) versus a more standard rectified linear unit (ReLU). We'll use the vanishingly small (in terms of both number of samples and image size) digits dataset from SciKit-Learn to set up our experiment.
Starting with digits should make this an accessible project for someone running the code on an efficient laptop, but readers are encouraged to swap in a more realistic images dataset like CIFAR10 for extra credit. As a library designed for production research, PyTorch Lightning streamlines hardware support and distributed training as well, and we'll show how easy it is to move training to a GPU toward the end.

Getting Started: Installing PyTorch Lightning

Like many Python projects these days, PyTorch Lightning installs easily using pip, and we recommend using your favorite virtual environment manager to manage installs and dependencies without cluttering up your base Python installation. We'll provide three examples, the first of which is using virtualenv and pip, and we are assuming you are using a Unix-style command line on Linux or Mac, or that you are savvy enough to adapt the examples for Windows using something like Git Bash or Anaconda Prompt. After navigating to the project folder for this tutorial:

virtualenv ptl_env --python=python3
source ptl_env/bin/activate
pip install pytorch-lightning
pip install torchvision
pip install scikit-learn

You can also use Anaconda to manage your virtual environment:

conda create -n ptl_env
conda activate ptl_env
conda install -n ptl_env pytorch-lightning -c conda-forge
conda install -n ptl_env torchvision
conda install -n ptl_env scikit-learn

Or even combine the two, creating a new Anaconda environment and then using pip to install packages. For more general usage there are some caveats to using pip and Anaconda together, but for purposes of this tutorial it should be fine:
This class, itself inheriting from the pytorch.nn.Module class, provides a convenient entry point and attempts to organize as much of the training and validation process as possible all in one place. A key feature of this strategy is that the contents of a typical training and validation loop are instead defined in the model itself, accessible via a fit API, very similar to Keras, fastai, or even SciKit-Learn. Unlike those other examples where fit is accessed through the model itself, in PyTorch Lightning fit is accessed via a Trainer object. But thats getting ahead of ourselves. First, lets set the stage for our experiment by importing everything well need.import osimport numpy as npimport torchimport torch.nn as nnimport torch.nn.functional as Ffrom torchvision import transformsfrom torch.utils.data import DataLoader, random_splitfrom torchvision.datasets import MNIST# for rapid prototyping with a small datasetimport sklearnimport sklearn.metricsimport sklearn.datasets# for building intuition with a few tens of thousands of samplesfrom torchvision.datasets import MNISTimport pytorch_lightning as plfrom pytorch_lightning.metrics import functional as FMThen we can go ahead and define our model:class SKDigitsDataset(torch.utils.data.Dataset):def __init__(self, mode="train"): super(SKDigitsDataset, self).__init__() x, y = sklearn.datasets.load_digits(return_X_y = True)num_samples = int(x.shape[0] * 0.8) np.random.seed(42) np.random.shuffle(x) np.random.seed(42) np.random.shuffle(y)if mode == "train": self.x = x[:num_samples] self.y = y[:num_samples] elif mode == "val": self.x = x[num_samples:] self.y = y[num_samples:] else: self.x = x self.y = yself.transform = lambda my_dict: \ (torch.tensor(my_dict["x"]).float(), \ torch.tensor(my_dict["y"]).long())def __len__(self): return self.x.shape[0]def __getitem__(self, index):got_x = self.x[index].reshape(-1, 8, 8) got_y = self.y[index]sample = {"x": got_x, "y": got_y}sample = self.transform(sample) return sampleNotably, 
training functionality is devolved to the module itself in the training_step function. Most ML practitioners having some practice with PyTorch will already be quite familiar with the practice of overloading the forward function, and LightningModule objects have many more methods to overload for fine-grained control of the relatively painless logging and evaluation features that are built-in.The code that defines our MyClassifier model class might seem pretty verbose, but this strategy massively simplifies things when its time to actually start training, which well see later. There are plenty of other callbacks and functions that are included in the LightningModule class, and all of them can be overloaded for more fine-tuned control. A full list of these callbacks can be found in the PyTorch Lightning documentation.For this tutorial, well also define a torch.utils.data.Dataset object to wrap the digits dataset from SciKit-Learn. This should make it easy to rapidly get everything working before switching to a larger and more informative dataset like MNIST or CIFAR10.if __name__ == "__main__": # if using digits from sklearntrain_dataset = SKDigitsDataset(mode = "train") val_dataset = SKDigitsDataset(mode = "val")dim = 8 validation_interval = 1.0train_dataloader = DataLoader(train_dataset) val_dataloader = DataLoader(val_dataset)model = MyClassifier(dim=dim, activation=nn.ReLU())trainer = pl.Trainer(max_epochs = 100, \ val_check_interval = validation_interval)trainer.fit(model, train_dataloader, val_dataloader) print("Training finished, all ok")With all that out of the way, actually launching a training run becomes incredibly simple. All we have to do is create a dataset and feed it into a DataLoader, instantiate our model, create a PyTorch Lightning Trainer object, and call the trainers fit method. 
Here's a simplified version:

dataset = SKDigitsDataset()
dataloader = DataLoader(dataset)
model = MyClassifier(dim=8)
trainer = pl.Trainer()
trainer.fit(model, dataloader)

But of course we will want to continuously log validation metrics throughout the training process, making use of the validation_step and validation_epoch_end methods we overloaded in our model. Here's the actual code I use to launch a training run, using the if __name__ == "__main__": pattern that provides a simple entry point for running a Python file as a module.

if __name__ == "__main__":
    # if using digits from sklearn
    train_dataset = SKDigitsDataset(mode="train")
    val_dataset = SKDigitsDataset(mode="val")

    dim = 8
    validation_interval = 1.0

    train_dataloader = DataLoader(train_dataset)
    val_dataloader = DataLoader(val_dataset)

    model = MyClassifier(dim=dim, activation=nn.ReLU())

    trainer = pl.Trainer(max_epochs=100, \
        val_check_interval=validation_interval)
    trainer.fit(model, train_dataloader, val_dataloader)
    print("Training finished, all ok")

When you run the code above, you should see a progress bar displayed in your terminal that looks something like the one below.

After allowing training to run for a while, have a look in your working directory and you'll notice a new folder called lightning_logs. This is where PyTorch Lightning records your training sessions, and you can quickly boot up a Tensorboard session to see how things are going. After launching Tensorboard with the line below, use a browser to navigate to localhost:6006 (by default) to open up the dashboard.

tensorboard --logdir=lightning_logs

If it took you a few starts and stops to get training to take off, you'll notice a list of training runs displayed in the left sidebar with version_0, version_1, version_2 and so on.
PyTorch Lightning automatically versions your training runs this way, so it should be pretty easy to compare a few different experimental conditions or random seeds.

For example, if we wanted to run our little experiment comparing the efficacy of using Swish versus ReLU activations, we can use the code below.

```python
class Swish(nn.Module):
    def __init__(self):
        super(Swish, self).__init__()

    def forward(self, x):
        return x * torch.sigmoid(x)

if __name__ == "__main__":
    if 1:  # if using digits from sklearn
        train_dataset = SKDigitsDataset(mode="train")
        val_dataset = SKDigitsDataset(mode="val")
        dim = 8
        validation_interval = 1.0
    else:  # if using MNIST
        train_dataset = MNIST(os.getcwd(), download=True,
                              train=True, transform=transforms.ToTensor())
        val_dataset = MNIST(os.getcwd(), download=True,
                            train=False, transform=transforms.ToTensor())
        dim = 28
        validation_interval = 0.1

    train_dataloader = DataLoader(train_dataset)
    val_dataloader = DataLoader(val_dataset)

    for replicate in range(3):
        for activation in [Swish(), nn.ReLU()]:
            model = MyClassifier(dim=dim, activation=activation)
            trainer = pl.Trainer(max_epochs=100,
                                 val_check_interval=validation_interval)
            trainer.fit(model, train_dataloader, val_dataloader)
        print(f"round {replicate} finished.")
```

And after running our little experiment we'll find our results nicely logged for our perusal in Tensorboard.

You'll probably notice we have the option to run training on the much larger MNIST dataset. At 60,000 training samples of 28 by 28 pixel images, it's closer to a useful real-world dataset than the miniaturized sklearn digits dataset, which provides fewer than 2,000 samples of 8 by 8 images. However, you probably won't want to run 6 replicate training runs on the MNIST dataset using an underpowered laptop CPU, so we'll want to move everything over to a GPU first.
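Before moving training to a GPU, it's worth confirming one is actually visible to PyTorch; this quick check is our own addition, not part of the tutorial:

```python
import torch

# Use one GPU if CUDA is visible to PyTorch, otherwise fall back to CPU.
n_gpus = 1 if torch.cuda.is_available() else 0
print(f"CUDA available: {torch.cuda.is_available()}; using {n_gpus} GPU(s)")
```

Gating the Trainer's GPU arguments on a check like this keeps the same script runnable on machines with and without a GPU.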
If you are already used to building experiments and training pipelines in standard PyTorch from scratch, you probably know the frustration of a forgotten tensor languishing on a CPU device, and the show-stopping errors it generates. It's usually an easy fix, but frustrating nonetheless.

Using a GPU for Training

If you're working with a machine with an available GPU, you can easily use it to train. To launch training on the GPU instead of the CPU, we'll have to modify some of the code:

```python
trainer = pl.Trainer(max_epochs=100,
                     val_check_interval=validation_interval,
                     gpus=[0])
```

That's right: by modifying a single line of code defining the trainer object, we can run training on the GPU. No worrying about forsaken tensors, and with all the convenience of logging and validation we built into our original model.

Next Steps

A striking aspect of working with PyTorch Lightning is that it seems to get easier the further along you go. Defining our MyClassifier model was a little more complicated up front than a model of similar complexity sub-classed from torch.nn.Module, but once we had training, validation, and logging all taken care of by the LightningModule model, every subsequent step was easier than it would have been normally. PyTorch Lightning also makes managing hardware a breeze; we caught a glimpse of just how simple this is when we switched to training MNIST on a GPU. PyTorch Lightning also readily facilitates training on more esoteric hardware like Google's Tensor Processing Units and on multiple GPUs, and it is being developed in parallel alongside Grid, a cloud platform for scaling up experiments using PyTorch Lightning, and Lightning Bolts, a modular toolbox of deep learning examples driven by the PyTorch Lightning community.

That concludes our "Hello, World" introduction to PyTorch Lightning, but we've barely scratched the surface of what Lightning intends to deliver to your deep learning workflow.
In our next PyTorch Lightning tutorial, we'll dive into two complementary PyTorch Lightning libraries: Lightning Flash and TorchMetrics. TorchMetrics, unsurprisingly, provides a modular approach to defining and tracking useful metrics across batches and devices, while Lightning Flash offers a suite of functionality facilitating more efficient transfer learning and data handling, and a recipe book of state-of-the-art approaches to typical deep learning problems.
Computer vision platform Cogniac nabs $20M to bolster its customer acquisition efforts
By Kyle Wiggers | VentureBeat | 2021-10-27
https://venturebeat.com/2021/10/27/computer-vision-platform-cogniac-nabs-20m-to-bolster-its-customer-acquisition-efforts/

Cogniac, a San Jose, California-based startup developing computer vision tech for task automation, today announced that it raised $20 million in a series B1 financing round led by National Grid Partners with participation from National Grid, Autotech Ventures, Cisco Investments, Energy Innovation Capital, London Technology Club, Vanedge Capital, and Wing Venture Capital. CEO Chuck Myers says that the proceeds will be put toward the expansion of Cogniac's workforce and the ramp-up of R&D efforts to support the company's approach to computer vision, data storage, and human-AI interactivity.

Computer vision is a type of AI technology that allows machines to understand, categorize, and differentiate between images. Using photos from cameras and videos as well as deep learning components, computer vision can identify and classify objects and then react to what it sees.

Investments in computer vision startups are on the rise as businesses embrace automation during the pandemic, which continues to place a strain on the worldwide labor market. Despite not having passed the awareness phase, as per one survey, the computer vision market could grow from $10.9 billion in 2019 to $17.4 billion by 2024. External investments in computer vision startups have already far exceeded the $3.5 billion McKinsey estimated in 2016.

Above: Cogniac's computer vision platform.

Cogniac's AI platform has customers connect machine vision cameras, security cameras, drones, smartphones, and other sources and define objects and conditions of interest to them.
They might specify surface damage and supply chain quality control inspections, for example, or accident prevention and real-time physical threat detection. Cogniac then monitors and improves classification, identification, counting, and measuring through a feedback system while integrating with third-party apps to deliver alerts and notifications.

Cogniac generates custom AI models for scenarios based on imagery and feedback. Once deployed, these models can learn new characteristics, adapting based either on archival imagery or data users enter. The platform monitors the confidence level of each new prediction, prioritizing predictions with the lowest confidence for review, while a core learning engine searches for configuration variations, ostensibly lessening the need for manual intervention.

Cogniac claims that with deep convolutional neural networks, types of AI models often applied to analyzing visual imagery, its system can achieve accuracy over 90% prior to human corrections. Moreover, the startup says the technology enables its platform to support multiple deployment environments, including cloud, gateway, on-premises, and hybrid.

Tasks in manufacturing, which is one of Cogniac's key markets, can be error-prone when humans are in the loop. A study from Vanson Bourne found that 23% of all unplanned downtime in manufacturing is the result of human error, compared with rates as low as 9% in other segments. The $327.6 million Mars Climate Orbiter spacecraft was destroyed because of a failure to properly convert between units of measurement. And one pharma company reported a misunderstanding that resulted in an alert ticket being overridden, which cost four days on the production line at £200,000 ($253,946) per day.

And broadly speaking, computer vision can be used for nefarious purposes, like monitoring the responses of ride-hailing customers to in-car advertisements.
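The lowest-confidence-first review loop described above is a standard active-learning pattern. A toy sketch in plain Python (hypothetical, not Cogniac's actual code; the field names are ours):

```python
def review_queue(predictions, k=2):
    """Return the k predictions a human should review first:
    the ones the model is least confident about."""
    return sorted(predictions, key=lambda p: p["confidence"])[:k]

preds = [
    {"id": "img-1", "label": "scratch", "confidence": 0.97},
    {"id": "img-2", "label": "dent",    "confidence": 0.52},
    {"id": "img-3", "label": "ok",      "confidence": 0.88},
    {"id": "img-4", "label": "crack",   "confidence": 0.61},
]
print([p["id"] for p in review_queue(preds)])  # → ['img-2', 'img-4']
```

Routing only the least-confident predictions to humans is what lets the feedback loop improve the model without reviewing every frame.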
This summer, AnyVision, a controversial Israeli facial recognition startup, raised $235 million in venture capital from SoftBank and Eldridge Industries. Public records and a 2019 version of its user guide show how invasive AnyVision's software can be: one school using it saw that a student's face was captured more than 1,000 times during the week.

Cogniac, a member of Nvidia's Inception accelerator program with partners including SAP and Rockwell Automation, has controversially provided its software to the U.S. Army to analyze battlefield drone data. The company has also participated in trials with U.S. Customs and Border Protection, helped an Arizona sheriff's department to identify when people cross the U.S.-Mexico border, and expressed an openness to larger deployments down the line.

Of course, Cogniac isn't alone in this: machine learning, computer vision, and facial recognition vendors including TrueFace, Clearview AI, TwoSense, and AI.Reverie also have contracts with various U.S. military and law enforcement branches. But according to Cogniac cofounder Bill Kish, government contracts are a small portion of the company's business, which is primarily focused on industrial applications.

One Cogniac client is Georgia Pacific, which is finalizing the deployment of a solution that simplifies processes around the company's mill operations. Another is Bobcat, which says it's implementing Cogniac's platform within manufacturing warehouse kitting inspection workflows across its Otsego, Minnesota facilities. (Kitting refers to compiling products into a single kit that's then shipped to a customer.)
More recently, Cogniac announced a partnership with Trimac Transportation, a transportation service company based in North America, to deploy the startup's technology throughout Trimac's document identification and filing processes.

On the subject of bias that might arise in Cogniac's models from imbalanced datasets, Kish says the company employs a process in which multiple people review uncertain data to establish a consensus. The company's system acts as a source of record for managing assets, ensuring biases inherent in the visual data are spotlighted so they can be addressed through feedback.

"We're at a key inflection point for AI vision adoption in the industrial and manufacturing sectors," Myers said in a statement. "Our product's efficacy and ease of implementation offer our customers significant and material improvement to their workstreams and processes. This funding allows us to scale our operations to meet the needs of this currently nascent but massively important and growing space. AI vision will serve as the foundation of safety and efficiency for the future of logistics and manufacturing, and we're leading the creation of that infrastructure and operation standard."

To date, Cogniac has raised over $30 million in venture capital.
Getting the most from your data-driven transformation: 10 key principles
By Janice Zdankus and Anthony Delli Colli | MIT Technology Review | 2021-10-14
https://www.technologyreview.com/2021/10/14/1037054/getting-the-most-from-your-data-driven-transformation-10-key-principles/

The importance of data to today's businesses can't be overstated. Studies show data-driven companies are 58% more likely to beat revenue goals than non-data-driven companies and 162% more likely to significantly outperform laggards. Data analytics are helping nearly half of all companies make better decisions about everything, from the products they deliver to the markets they target. Data is becoming critical in every industry, whether it's helping farms increase the value of the crops they produce or fundamentally changing the game of basketball.

Used optimally, data is nothing less than a critically important asset. Problem is, it's not always easy to put data to work. The Seagate Rethink Data report, with research and analysis by IDC, found that only 32% of the data available to enterprises is ever used and the remaining 68% goes unleveraged. Executives aren't fully confident in their current ability, nor in their long-range plans, to wring optimal levels of value out of the data they produce, acquire, manage, and use.

What's the disconnect? If data is so important to a business's health, why is it so hard to master?

In the best-run companies, the systems that connect data producers and data consumers are secure and easy to deploy. But they're usually not.
Companies are challenged with finding data and leveraging it for strategic purposes. Sources of data are hard to identify and even harder to evaluate. Datasets used to train AI models for the automation of tasks can be hard to validate. Hackers are always looking to steal or compromise data. And finding quality data is a challenge for even the savviest data scientists. The lack of an end-to-end system for ensuring high-quality data and sharing it efficiently has indirectly delayed the adoption of AI.

Communication gaps can also derail the process of delivering impactful insights. Executives who fund data projects and the data engineers and scientists who carry them out don't always understand one another. These data practitioners can create a detailed plan, but if the practitioner doesn't frame the results properly, the business executive who requested them may say they were looking for something different. The project will be labeled a failure, and the chance to generate value out of the effort will fall by the wayside.

Companies encounter data issues, no matter where they are in terms of data maturity. They're trying to figure out ways to make data an important part of their future, but they're struggling to put plans into practice.

If you're in this position, what do you do?

Companies found themselves at a similar inflection point back in the 2010s, trying to sort out their places in the cloud. They took years developing their cloud strategies, planning their cloud migrations, choosing platforms, creating Cloud Business Offices, and structuring their organizations to best take advantage of cloud-based opportunities. Today, they're reaping the benefits: their moves to the cloud have enabled them to modernize their apps and IT systems.

Enterprises now have to make similar decisions about data. They need to consider many factors to make sure data is providing a foundation for their business going forward.
They should ask questions such as:

- Is the data the business needs readily available?
- What types of sources of data are needed? Are there distributed and diverse sets of data you don't know about?
- Is the data clean, current, reliable, and able to integrate with existing systems?
- Is the rest of the C-level on board with the chief data officer's approach?
- Are data scientists and end users communicating effectively about what's needed and what's being delivered?
- How is data being shared?
- How can I trust my data?
- Does every person and organization that needs access to the data have the right to use it?

This is about more than just business intelligence. It's about taking advantage of an opportunity that's taking shape. Data use is exploding, tools to leverage it are becoming more efficient, and data scientists' expertise is growing. But data is hard to master. Many companies aren't set up to make the best use of the data they have at hand. Enterprises need to make investments in the people, processes, and technologies that will drive their data strategies.

With all of this in mind, here are 10 principles companies should follow when developing their data strategies:

1. Understand how valuable your data really is

How much is your data worth to you? This can be measured in a number of ways. There are traditional metrics to consider, such as the costs of acquiring the data, the cost to store and transmit it, the uniqueness of the data being acquired, and the opportunity to use it to generate additional revenue. Marketplace metrics affect the value of the data, such as data quality, age of the data, and popularity of a data product.

Your data could also be valuable to others. For example, suppose a hospital collects patient datasets. That data could be of interest to disease researchers, drug manufacturers, insurance companies, and other potential buyers.
Is there a mechanism in place to anonymize, aggregate, control, and identify potential users of your data?

Opportunity, balanced by the cost it takes to deliver on it, is one way to determine the potential value of your data.

2. Determine what makes data valuable

While it may be hard to put an actual dollar value on your data, it's easier to define the elements that contribute to data having a high degree of value. It can be reduced to a simple thought equation:

Completeness + Validity = Quality
Quality + Format = Usability
Usable Data + A Data Practitioner Who Uses It Well = VALUE

Your data project can't proceed without good data. Is the quality of your data high enough to be worthwhile? That will depend, in part, on how complete the sample is that you've collected. Are data fields missing? Quality also depends on how valid the information is. Was it collected from a reliable source? Is the data current, or has time degraded its validity? Do you collect and store your data in accordance with industry and sector ontologies and standards?

Your data has to be usable for it to be worthy of investment. Setting up systems for data practitioners to use and analyze the data well and connect it with business leaders who can leverage the insights closes the loop.

3. Establish where you are on your data journey

Positioning a business to take full advantage of cloud computing is a journey. The same thinking should apply to data.

The decisions companies make about their data strategies depend largely on where they happen to be on their data journeys. How far along are you on your data journey? Assessment tools and blueprints can help companies pinpoint their positions. Assessments should go beyond identifying which tools are in a company's technology stack.
They should look at how data is treated across an organization in many ways, taking into account governance, lifecycle management, security, ingestion and processing, data architectures, consumption and distribution, data knowledge, and data monetization.

Consumption and distribution alone can be measured in terms of an organization's ability to apply services ranging from business intelligence to streaming data to self-service applications of data analytics. Has the company implemented support for data usage by individual personas? Is it supporting individual APIs? Looking at data knowledge as a category, how advanced are the company's data dictionaries, business glossaries, catalogs, and master data management plans?

Scoring each set of capabilities reveals a company's strengths and weaknesses in terms of data preparedness. Until the company takes a closer look, it may not realize how near or far it is from where it needs or wants to be.

4. Learn to deal with data from various sources

Data is coming into organizations from all directions: from inside the company, IoT devices and video surveillance systems at the edge, partners, customers, social media, and the web. The hundreds of zettabytes of worldwide data will have to be selectively managed, protected, and optimized for convenient, productive use.

This is a challenge for enterprises that haven't developed systems for data collection and data governance. Wherever the data comes from, there needs to be a mechanism for standardizing it so that the data will be usable for a greater benefit.

Different companies and different countries impose different rules on what and how information can be shared. Even individual departments within the same company can run afoul of corporate governance rules designating the paths certain datasets have to follow. That means enforcing data access and distribution policies.
To seize these data opportunities, companies need to engineer pathways to discover new datasets and impose governance rules to manage them.

In manufacturing, companies on a supply chain line measure the quality of their parts and suppliers. Often, the machinery and the robotics they're using are owned by the suppliers. Suppliers may want to set up contracts to see who has the right to use data to protect their own business interests, and manufacturers should define their data sharing requirements with their partners and suppliers up front.

5. Get a strategic commitment from the C-suite

Data benefits many levels of an organization, and personas at each of the affected levels will lobby for a particular aspect of the data value process. Data scientists want more high-powered, easy-to-use technology. Line-of-business leaders push for better, faster insights. At the top of the pyramid is the C-suite, which prioritizes the channeling of data into business value.

It's critical to get C-level executives on board with a holistic data strategy. Doing it right, after all, can be disruptive. Extracting maximum value from data requires an organization to hire staff with new skill sets, realign its culture, reengineer old processes, and rearchitect the old data platform. It's a transformation project that can't be done without getting buy-in from the top levels of a company.

The C-suite is increasingly open to expanding organizations' use of data. Next to customer engagement, the second-highest strategic area of interest at the board level is leveraging data and improving decision-making to remain competitive and exploit changing market conditions, according to the IDC report "Market Analysis Perspective: Worldwide Data Integration and Intelligence Software, 2021." In the same report, 83% of executives articulated the need to be more data driven than before the pandemic.

How should organizations ensure that the C-suite gets on board?
If you're a stakeholder without a C-level title, your job is to work with your peers to find an executive sponsor to carry the message to leaders who control the decision-making process. Data is a strategic asset that will determine a company's success in the long run, but it won't happen without endorsements at the highest levels.

6. In data we trust: Ensure your data is beyond reproach

As AI expands into almost every aspect of modern life, the risks of corrupt or faulty AI practices increase exponentially. This comes down to the quality of the data being used to train the AI models. How was the data produced? Was it based on a faulty sensor? Was a biased data origin built into the dataset? Did the selection of data come from one location instead of a statistically valid set of data?

Trustworthy AI depends on having trustworthy data that can be used to build transparent, trustworthy, unbiased, and robust models. If you know how a model is trained and you suspect you're getting faulty results, you can stop the process and retrain the model. Or, if someone questions the model, you can go back and explain why a particular decision was made, but you need to have clean, validated data to reference.

Governments are often asked by policy watchdogs to support how they're using AI and to prove that their analyses are not built on biased data. The validity of the algorithms used has sparked debates about efforts to rely on machine learning to guide sentencing decisions and make decisions about welfare benefit claims or other government activities.

The training of the model takes place in steps. You build a model based on data. Then you test the model and gather additional data to retest it. If it passes, you turn it into a more robust production model.
The journey continues by adding more data, massaging it, and establishing over time whether your model stands up to scrutiny.

The lack of an end-to-end system for ensuring high-quality data and sharing it efficiently has indirectly delayed the adoption of AI. According to IDC, 52% of survey respondents believe that data quality, quantity, and access challenges are holding up AI deployments.

7. Seize upon the metadata opportunity

Metadata is defined elliptically as "data that provides information about other data." It's what gives data the context that users need to understand a piece of information's characteristics, so they can determine what to do with it in the future.

Metadata standards are commonly used for niche purposes, specific industry applications like astronomical catalogs, or data types like XML files. But there's also a case to be made for a stronger metadata framework in which we can not only define data in common ways but also tag useful data artifacts along its journey. Where did this piece of data originate? Who has viewed it? Who has used it? What has it been used for? Who has added what piece of the dataset? Has the data been verified? Is it prohibited from use in certain situations?

Developing this kind of metadata mechanism requires a technology layer that is open to contributions from those viewing and touching a particular piece of data. It also requires a commitment from broad sets of stakeholders who see the value of being able to share data strategically and transparently.

Creating an additional open metadata layer would be an important step toward allowing the democratization of access to data by enabling the transparent sharing of key data attributes necessary for access, governance, trust, and lineage. Hewlett Packard Enterprise's approach to dataspaces is to open up a universal metadata standard that would remove the current complexities associated with sharing diverse datasets.

8. Embrace the importance of culture

Organizations want to make sure they're getting the most out of the resources they're nourishing, and to do that, they need to create cultures that promote best practices for information sharing.

Do you have silos? Are there cultural barriers inside your organization that get in the way of the proper dissemination of information to the right sources at the right times? Do different departments feel they own their data and don't have to share it with others in the organization? Are individuals hoarding valuable data? Have you set up channels and procedures that promote frictionless data sharing? Have you democratized access to data, giving business stakeholders the ability to not only request data but participate in querying and sharing practices?

If any of these factors are blocking the free flow of data exchange, your organization needs to undergo a change management assessment focusing on its needs across people, processes, and technology.

9. Open things up, but trust no one

In all aspects of business, organizations balance the often conflicting concepts of promoting free and open sharing of resources and tightly controlled security. Achieving this balance is particularly important when dealing with data.

Data needs to be shared, but many data producers are uncomfortable doing so because they fear the loss of control: how their data could be used against them, how it could be changed, or how it could be used inappropriately.

Security needs to be a top priority. Data is coming from so many sources, some you control, some you don't, and being passed through so many hands. That means that security policies surrounding data need to be designed with a zero-trust model through every step of the process. Trust has to be established through the entire stack, from your infrastructure and operating systems to the workloads that sit on top of those systems, all the way down to the silicon level.

10. Create a fully functioning data services pipeline

Moving data among systems requires many steps, including moving data to the cloud, reformatting it, and joining it with other data sources. Each of these steps usually requires separate software.

Automating data pipelines is a critical best practice in the data journey. A fully automated data pipeline allows organizations to extract data at the source, transform it into a usable form, and integrate it with other sources.

The data pipeline is the sum of all these steps, and its job is to ensure that these steps happen reliably to all data. These processes should be automated, but most organizations need at least one or two engineers to maintain the systems, repair failures, and update according to the changing needs of the business.

Begin the data journey today

How well companies leverage their data, wherever it lives, will determine their success in the years to come. Constellation Research projects 90% of the current Fortune 500 will be merged, acquired, or bankrupt by 2050. If they don't start now, they'll be left behind. The clock is ticking.

Read the original article on Enterprise.nxt.

This content was produced by Hewlett Packard Enterprise. It was not written by MIT Technology Review's editorial staff.
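The extract/transform/integrate steps described in the last principle can be sketched with nothing but the standard library; the field names and sample data below are illustrative only, not from the article:

```python
import csv
import io
import json

def extract(raw_csv):
    """Extract: parse raw CSV rows from a source system."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Transform: normalize types and drop incomplete records."""
    out = []
    for r in rows:
        if not r.get("customer_id") or not r.get("amount"):
            continue  # incomplete record, skip it
        out.append({"customer_id": r["customer_id"],
                    "amount": float(r["amount"])})
    return out

def integrate(rows, reference):
    """Integrate: join with a second source keyed on customer_id."""
    return [dict(r, region=reference.get(r["customer_id"], "unknown"))
            for r in rows]

raw = "customer_id,amount\nc1,10.5\nc2,\nc3,7.0\n"
regions = {"c1": "EU", "c3": "US"}
pipeline_output = integrate(transform(extract(raw)), regions)
print(json.dumps(pipeline_output))
```

Real pipelines add scheduling, retries, and monitoring around exactly this shape, which is why the article notes that even automated pipelines still need engineers to maintain them.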
The State of AI in 2021: Language models, healthcare, ethics, and AI agnosticism
By George Anadiotis | ZDNet | 2021-10-25
https://www.zdnet.com/article/the-state-of-ai-in-2021-language-models-healthcare-ethics-and-ai-agnosticism/

AI is expanding in two key areas of human activity and market investment: health and language. Picking up the conversation from where we left off last week, we discussed AI applications and research in those areas with AI investors and authors of the State of AI 2021 report, Nathan Benaich and Ian Hogarth.

After releasing what probably was the most comprehensive report on the State of AI in 2020, Air Street Capital and RAAIS founder Nathan Benaich and AI angel investor and UCL IIPP visiting professor Ian Hogarth are back for more. Last week, we discussed AI's underpinning: machine learning in production, MLOps, and data-centric AI. This week we elaborate on specific areas of applications, investment, and growth.

AI in Healthcare

Last year, Benaich and Hogarth made the case that biology was experiencing its AI moment. This, they explained, reflects a huge inflection in published research that essentially tears out the old-school method of doing some kind of statistical analysis of biological experiments. The new method replaces statistical analysis with deep learning in most cases, and it yielded better results.

"There's a lot of low-hanging fruit within the biology domain that could fit into this paradigm," Benaich noted. Last year was the time when this sort of problem-solving approach of using machine learning for various things went into overdrive.
One of the outputs of this idea of using machine learning in biology is in the pharmaceutical industry.

"For decades we've all known and all suffered the fact that drugs take way too long to be discovered, to be tested, and then ultimately to be approved. That is, unless there is some immense cataclysmic pressure to do otherwise, which is what we saw with COVID-19 vaccines," Benaich went on to add. For many years, incumbent pharma and new age pharma were at odds:

"Incumbent pharma is very much driven by having a hypothesis a priori, saying for example, I think this gene is responsible for this disease, let's go prosecute it and figure out if that's true. Then there are the more software-driven folks who are on this new age pharma. They mostly look at large scale experiments, and they are asking many questions at the same time. In an unbiased way, they let the data draw the map of what they should focus on.

That's what progress in deep learning unlocked. So the new age pharma has largely said, well, the old pharma approach has been tried before. It sort of doesn't work. That's computational chemistry and physics. The only way to validate whether the new age pharma approach works is if they can generate drug candidates that are actually in the clinic and, ultimately, get those drugs approved," said Benaich.

The duo's report highlights two "new age pharma" IPOs that prove the point. The State of AI in 2020 predicted that "one of the leading AI-first drug discovery startups either IPOs or is acquired for >$1B." Recursion Pharmaceuticals IPO'd in April 2021, and Exscientia filed to IPO in September 2021. Exscientia is one of the companies in Air Street Capital's portfolio, so Benaich has one more reason to celebrate.

The duo think the two IPOs are a pretty big deal because they both have assets generated through their machine learning-based approach that are actually in the clinic.
Exscientia in particular is the first company that has generated and designed molecules using its machine learning system. The way it works is it takes a variety of different characteristics of a molecule and sets the task to the software to generate ideas of what a molecule could look like that fits those characteristics and meets the trade-off requirements, Benaich noted. It's the first company that had three of those drugs in clinical trials in the last twelve months. Their IPO documentation makes for an interesting read, because they show that the number of chemical ideas that the company needs to prosecute before it finds one that works is an order of magnitude lower than what you see for traditional pharmaceutical companies, Benaich went on to add. Benaich emphasized that even though this seems big to "technology folks like us", it's still very, very small in the overall context of the industry. These behemoth pharma companies are worth hundreds of billions of dollars, and together Recursion and Exscientia are worth at best $10 billion. Remembering what some other AI folks we spoke to earlier this year shared, we asked whether Benaich sees those practices being adopted in "old pharma" too. "Totally. Even locally in London, AstraZeneca and GSK are beefing up their machine learning teams quite a bit too. It's one of those examples of a mentality shift of how business is done. As younger generations who grew up with computers and writing code to solve their problems, as opposed to running more manual experiments in their spare time, end up in higher levels of those organizations, they just bring different problem-solving toolkits to the table," Benaich noted.

Large language models are a big deal
Change is inevitable. The question will ultimately be whether you can actually shift the cost curve, spend less money on fewer experiments, and have a higher hit rate. That will still take time, Benaich thinks.
Hogarth noted that's not the only frontier on which machine learning is impacting pharma companies, pointing to the example of how machine learning is also used to parse research literature. This touched upon our previous conversation with John Snow Labs CTO David Talby, as Natural Language Processing for the healthcare domain is John Snow Labs' core expertise. This, in turn, inevitably led the conversation to language models. Benaich and Hogarth point to language model advances in the research section of their report; however, we were drawn to the commercialization side of things. We focused on OpenAI's GPT-3, and how the company went from publishing its models in their entirety to making them commercially available through an API, partnering with Microsoft. This gave birth to an ecosystem of sorts. We have seen, and toyed with, many startup offerings leveraging GPT-3 to build consumer-facing products. Those startups offer copywriting services such as marketing copy, email and LinkedIn messages, and so on. We were not particularly impressed by them, and neither were Benaich and Hogarth. For Benaich, however, the main benefit that opening GPT-3 up as an API has generated is massive awareness of what language models could do if they get increasingly good. He thinks they're going to get increasingly good very quickly, especially as OpenAI starts to build offshoots of GPT-3, such as Codex. Judging from Codex, which was "a pretty epic product which has been crying out for somebody to build it", vertical-focused models based on GPT-3 will probably be excellent, Benaich and Hogarth think.
Investors are getting behind this too, as startups have raised close to $375 million in the last 12 months to bring LLM APIs and vertical software solutions to customers who cannot afford to directly compete with Big Tech. The other way to think about it is that there is a certain quality of fashion with what developers coalesce around, Hogarth noted. Having attention-drawing applications such as Codex, or previously Primer's attempt to use AI to address Wikipedia's gender imbalance, shows what's possible. Then eventually what was previously state of the art becomes mainstream and the bar on the state of the art moves. So-called large language models (LLMs) are beginning to make waves in ways that are not always anticipated. For example, they have given birth to a new programming paradigm, Software 3.0 or prompt programming. The idea there is to prompt LLMs in a way that triggers them to produce the results users are interested in. Even beyond that, we see similar language models getting used in other domains, noted Benaich. He referred to research published in Science magazine, in which a language model was reimplemented to learn the viral spike protein, and then determine which versions of the spike protein and COVID-19 were more or less virulent. This, in turn, was used to forecast potential evolutionary paths the virus would have to take in order to produce more or less potent versions, which could be used to proactively stockpile vaccines. Benaich believes that LLMs can internalize various basic forms of language, whether it's biology, chemistry, or human language. Hogarth chimed in, saying that this is in a way unsurprising, as language is so malleable and extensible, so we're only going to see unusual applications of language models grow.

AI Agnosticism
Of course, not everyone agrees with this view, and not everyone thinks everything about LLMs is wonderful. On the technical side of things, many people question the approach LLMs are taking.
This is something we have repeatedly referred to, and it is really a long-standing debate within the AI community. People in the AI community like Gary Marcus, whom we hosted in a conversation about the future of AI last year, or Walid Saba, whose aptly named contribution "Machine Learning Won't Solve Natural Language Understanding" was a runner-up for the Gradient Prize this year, have been vocal critics of the LLM approach. In what many people would claim resembles a religious debate in some ways, Hogarth is a fan of what he calls a more agnostic approach: "We have what you'd call the atheist view, which is -- these models aren't going to get us much further. They don't really understand anything. There's the true believer view, which is -- all we need to do is scale these up and they'll be completely sentient. There's a view in the middle, a slightly more agnostic view that says -- we've got a few more big things to discover, but these are part of it". Hogarth believes that the "agnostic view" has the right amount of deference for how much LLMs are able to do, but also captures the fact that they lack causal reasoning and other major blocks to be able to scale. Speaking of scale, the fact that LLMs are humongous also has humongous implications for the resources needed to train them, as well as their environmental footprint. Interestingly, after being in the eye of the storm on AI ethics with Timnit Gebru's firing last year, Google made the 2021 State of AI Report for work on a related topic. Even though more people tend to focus on the bias aspect of Gebru's work, for us the aspect of the environmental footprint of LLMs that this work touched upon is at least equally important. Major factors that drive the carbon emissions during model training are the choice of neural network (esp. dense or sparse), the geographic location of a data center, and the processors.
Optimizing these reduces emissions. Researchers from Google and Berkeley evaluated the energy and CO2 budget of five popular LLMs and proposed formulas for researchers to measure and report on these costs when publishing their work. Commenting on the high-profile Gebru incident, Hogarth commended Gebru for her contribution. At the same time, he noted that if you're going to start to put these LLMs into production through large search engines, there is more tension that arises when you start to question the bias within those systems or environmental concerns. Ultimately, that creates a challenge for the corporate parent to navigate in putting this research into production. For Hogarth, the most interesting response to that has been the rise of alternative governance structures. More specifically, he referred to EleutherAI, a collective of independent AI researchers who open-sourced their 6 billion parameter GPT-J LLM. "When EleutherAI launched, they explicitly said that they were trying to provide access to large pre-trained models, which would enable large swathes of research that would not be possible while such technologies are locked away behind corporate walls, because for-profit entities have explicit incentives to downplay risks and discourage security probing", Hogarth mentioned. EleutherAI means there is an open-source LLM alternative now. Interestingly, there is also what Benaich and Hogarth called a "3rd pole" in AGI research next to OpenAI and Google / DeepMind: Anthropic. The common thread Hogarth, who is an investor in Anthropic, found is governance. Hogarth is bullish on Anthropic's prospects, mainly due to the caliber of the early team: "The people who left OpenAI to create Anthropic have tried to pivot the governance structure by creating a public benefit corporation.
They won't hand control over the company to people who are not the company or its investors. I don't know how much progress is made towards that so far, but it's quite a fundamental governance shift, and I think that that allows for a new class of actors to come together and work on something", Hogarth said. As usual, both the conversation with Benaich and Hogarth and this write-up fall short of doing justice to the burgeoning domain that is AI today. Until we revisit it, even browsing through the 2021 State of AI Report should provide lots of material to think about and explore. | Content Synthesis/Decision Making | Healthcare Practitioners and Support | null | null | null | null | null | null
|
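The "Software 3.0" / prompt-programming paradigm described in the article above can be made concrete with a small sketch. The `complete` function below is only a stub standing in for a hosted LLM completion endpoint (a real system would POST the prompt to a provider's API); its behavior is hypothetical, and the few-shot prompt structure is the part being illustrated.

```python
# Prompt programming: instead of writing explicit classification logic,
# we compose a natural-language prompt that steers a language model
# toward the answer we want.

def build_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: task description, worked examples,
    then the new query the model should complete."""
    lines = [task, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

def complete(prompt: str) -> str:
    """Stub standing in for an LLM completion call; a real
    implementation would send `prompt` to a model endpoint and
    return the generated continuation."""
    last_input = prompt.splitlines()[-2]  # the query line
    return " positive" if "love" in last_input else " negative"

prompt = build_prompt(
    "Classify the sentiment of each review.",
    [("Great product, works perfectly.", "positive"),
     ("Broke after two days.", "negative")],
    "I love this thing.",
)
print(complete(prompt).strip())
```

Note the design point: the program's behavior is changed not by editing code but by editing the task description and examples inside the prompt, which is why this is discussed as a new programming paradigm.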
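The training-footprint discussion above reduces to a simple operational shape: energy drawn by the accelerators, inflated by datacenter overhead (PUE), times the local grid's carbon intensity. The sketch below illustrates that shape only; every number in it is a made-up placeholder, not a measurement from the Google/Berkeley work.

```python
# Back-of-envelope estimate of operational CO2e for one training run:
# energy (kWh) x datacenter overhead (PUE) x grid carbon intensity.

def training_co2e_kg(accel_count: int,
                     avg_power_watts: float,
                     training_hours: float,
                     pue: float,
                     grid_kgco2e_per_kwh: float) -> float:
    """Estimate operational CO2e (kg) for one training run."""
    energy_kwh = accel_count * avg_power_watts * training_hours / 1000.0
    return energy_kwh * pue * grid_kgco2e_per_kwh

# The same hypothetical job in two datacenters: only PUE and grid
# intensity change, which is why location is a major factor.
coal_heavy = training_co2e_kg(512, 300, 240, pue=1.5, grid_kgco2e_per_kwh=0.7)
hydro_rich = training_co2e_kg(512, 300, 240, pue=1.1, grid_kgco2e_per_kwh=0.05)
print(f"coal-heavy grid: {coal_heavy:,.0f} kg CO2e")
print(f"hydro-rich grid: {hydro_rich:,.0f} kg CO2e")
```

With these placeholder inputs the coal-heavy siting emits roughly an order of magnitude more CO2e than the hydro-rich one, matching the report's point that siting and hardware choices dominate.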
news | States bullish on chatbots, report finds | Digital assistants helped states deliver critical information to constituents during the pandemic. | https://gcn.com/articles/2021/10/28/states-ai-chatbot-survey.aspx | 2021-10-29T02:46:00Z | States bullish on chatbots, report finds
State governments' use of artificial intelligence tools such as chatbots increased last year, according to a new report that dubs 2020 the Year of the Bot. Overall, state officials are bullish on the technology, largely thanks to its role in addressing COVID-related challenges, states AI Meets the Moment, a report that the National Association of State Chief Information Officers (NASCIO), the Center for Digital Government (CDG) and IBM released Oct. 26. For instance, 60% of the 48 agency leaders and CIOs from 38 states who responded to the survey said they started using digital assistants, such as chatbots, to deliver critical information to constituents. Sixty-five percent of respondents said that the AI tools they put in place to address the pandemic have delivered results as promised, with 4% saying they exceeded expectations and 18% saying they underperformed or were significantly challenged. The survey compared AI adoption to the last time NASCIO canvassed states on their use of the technology in 2019 and found that while only 1% said AI was widely used across the state two years ago, 7% said it is now. In 2019, 13% said AI was in use but not a core line of business, compared to 60% now. The survey finds AI is driving significant results in areas such as improved service delivery and enhanced interactions with residents and constituents, the report states. However, despite improvements in decision-making and employee productivity, only 13 percent of respondents said AI generated cost savings. This underscores the challenges in establishing return on investment (ROI) in AI projects. The report points to Utah as a leader among states taking beginning steps toward AI adoption.
It has put in place initiatives for connected vehicles, image recognition for cattle branding and sensors that analyze air quality. "I think we're at the point of inflection here on some exciting things," Utah's Director of IT Services Michael Hussey said in the report. "I think you're going to see an explosion in the not-too-distant future." Arizona's Department of Economic Security rolled out an inward-facing chatbot to help employees find what they need faster as unemployment insurance claims poured in. "For them to be able to type questions into that chatbot and get referred to the appropriate sections and references was very helpful," J.R. Sloan, state CIO, said in the report. In Texas, an AI Center of Excellence is running more than 24 programs to test AI and plans to develop sandbox environments to help with training and hands-on experience. Asked what they need to support AI in the long term, 75% of respondents said a clear framework for use and governance, followed by a defined AI vision and strategy, a clearer understanding of vendor capabilities and a centralized approach to its adoption. The biggest bottleneck to AI adoption is a lack of skilled staff training in AI, according to 79% of respondents. Eighty-five percent said understanding agency or department use cases and workflows is most important for getting the most value out of AI. Machine learning (ML) and data analysis skills followed at 58%, and data engineering rounded out the top three at 44%. Sixty-five percent of respondents named legacy infrastructure the biggest bottleneck, and nearly half cited difficulties in identifying use cases. Privacy and ethical concerns ranked low. Still, optimism on AI's potential to transform organizations long term runs high, the report states. Confidence is strong in the next 12 to 18 months, with 63% of respondents saying they plan to use robotic process automation and nearly as many saying they plan to use ML.
More than half plan to deploy digital assistants and natural language processing. Overall, 23% of respondents said AI has the potential to transform their organization now and 56% said it would within one to three years. Call centers and data analytics tied for the top business processes states are considering for AI applications, followed by cybersecurity and health and human services. States can take advantage of this early stage of adoption and the momentum to lay a firm foundation grounded in sound strategy and savvy tactics for AI, the report states. Agencies must look at high-volume processes to see where there may be an opportunity for improvement, and then take a tactical approach to redefine the business process.
||
news | BS Reporter | Language translation engine Devnagri raises $600,000 in seed round | The platform combines Neural Machine Translation with ML to make the internet accessible to 90% of Indians and solve the problem of "lack of content in Indian languages" | https://www.business-standard.com/article/companies/language-translation-engine-devnagri-raises-600-000-in-seed-round-121101900721_1.html | 2021-10-19T08:54:00Z | B2B-focused Indian language translation engine Devnagri has raised $600,000 from Venture Catalyst, Inflection Point Ventures and other co-investors in a seed round. Other participants in the funding include Mitesh Shah (IPV-First Port Capital), Rohit Chanana (Sarcha Advisors), Nimesh Kampani (Trica), Sameer Karulkar (Coverpage Ventures Advisory LLP), Prashant Sharma (Facebook, Country Lead-Video), Karan Bhagi (HUL, GM-eCommerce) & Deepak Sharma (CDO, Kotak Mahindra Bank). A brainchild of Nakul Kundra and Himanshu Sharma, Devnagri was established with a vision to make the internet accessible to 90 per cent of Indians and solve the problem of "lack of content in Indian languages" using tech. The platform combines Neural Machine Translation with machine learning and a community to power translations. The company claimed that its AI-human combination can help businesses scale their operations anytime, in any language, with up to 50 per cent reduced cost, real-time delivery tracking, an 80 per cent reduction in translation time, and turnaround 5x faster than professional translation. Nakul Kundra, Founder of Devnagri, said, "There is a strong need for content in Indian languages to be available over the internet, which helps Indians to use technology (be it Entertainment, News, Education, Movies, etc.) in their respective language.
With our current round of funding, we intend to scale our operations to tap B2B customers and enable them to create more local (Indian Language) content to reach end-users from Tier-2 & Tier-3 cities." Sharing his insights, Apoorva Ranjan Sharma, President and Co-founder, Venture Catalysts, said, "The Indian vernacular language and translation market size is worth $53 billion, and currently features edTech, e-commerce, publishing and OTT Industries. With a billion Indians from Tier-2 and Tier-3 cities expected to join the internet (due to affordable smartphones and data), content availability in Indian languages is only 0.1 per cent and less than 10 per cent of Indians are conversant in English." Mitesh Shah, co-founder, Inflection Point Ventures, says, "Indian languages are missing from the Internet. In the past 10 years, many websites have started providing multi-lingual options for non-English users but it is still not enough. Devnagri has developed a model, which can scale the efforts to put Indian languages on the Internet map. Their application of ML and Neural machine translation will help companies use contextualized translation. We believe this would be a game changer for Indic languages on the internet with relevant use cases." With 5,000-plus translators, Devnagri has an extensive and incessantly growing community aiming for scalability, savings, and satisfaction for the businesses it serves.
| Content Synthesis/Information Retrieval Or Search | Business and Financial Operations/Education, Training, and Library/Arts, Design, Entertainment, Sports, and Media | null | null | null | null | null | null
|
news | Editorial Team | The Role of Artificial Intelligence in Compliance and Cybersecurity for Startups - insideBIGDATA | In this special guest feature, Justin Beals, CEO and cofounder of Strike Graph, outlines key considerations when using AI technologies to improve a startup's cybersecurity capabilities and manage cyber risk more efficiently and effectively. | https://insidebigdata.com/2021/11/02/the-role-of-artificial-intelligence-in-compliance-and-cybersecurity-for-startups/ | 2021-11-02T17:13:00Z | In this special guest feature, Justin Beals, CEO and cofounder of Strike Graph, outlines key considerations when using AI technologies to improve a startup's cybersecurity capabilities and manage cyber risk more efficiently and effectively. As a serial entrepreneur with expertise in AI, cybersecurity and governance, he started Strike Graph to eliminate the confusion related to cybersecurity audit and certification processes. He likes making arcane cybersecurity standards plain and simple to achieve. As the CEO, Justin organizes strategic innovations at the crossroads of cybersecurity and compliance and focuses on helping customers get outsized value from Strike Graph. Justin earned a BA in English and Theater from Fort Lewis College.

Driven by internal and external pressure to continuously evolve and mature their cybersecurity practices, organizations are actively exploring new technologies and opportunities wherever possible. Artificial intelligence (AI) is a hot topic in the boardroom and at the watercooler, sparking innovation in many business areas. Examples of AI include machine learning, natural language processing, speech recognition, computer vision, image recognition, and robotics. Adopting advanced analytics is critical to becoming an insight-driven organization.
Here are key considerations when using AI technologies to improve your cybersecurity capabilities and manage cyber risk more efficiently and effectively.

The benefits of smart cyber
By applying smart solutions like AI and advanced analytics to vast amounts of data, cyber technology can generate predictive, usable insights that help you make better IT security decisions and protect your organization from a myriad of cyber threats. Smart cyber solutions can also help detect and respond to threats faster by monitoring the cyber environment with the kind of speed and accuracy that is exclusive to machines. Most important, smart cyber helps you keep up with today's increasingly sophisticated attacks. Modern cyberattacks are carefully designed to circumvent traditional security controls by learning detection rules. By taking into account a wide range of data, smart detection platforms can learn and recognize normal behaviour, develop baselines, and detect outliers. Smart detection can also identify malicious actions that resemble previously seen events, and make predictions about previously invisible dangers.

Steps to implementing AI in a Startup
Step 1: Grok these two concepts
Before you begin developing an AI system, it's important to understand that the models behind it will predict the probability of the question you trained it to answer. For example, you can train a model to predict if a picture includes a cat. For each image, the model will produce a numerical score of whether it believes the image is a cat or not. The greater the number, the more sure the model is that the image includes a cat. Be prepared to test your model predictions for their accuracy. Learning the measures of accuracy and the differences between a false positive and a true negative is critical to deciding if your AI system can be accurate enough in its recommendations to be useful to your users.

Step 2: Prioritize the main driver(s) of value, then work backwards
It's important to understand how probability works in a real-life situation so that you can understand how your users will consume that probability value. Inevitably it will help them make some decisions. Focusing on this will keep you from building toys that look cool but don't provide real value.
Instead, figure out what predictions your customers wish they had. For example, our customers wished they had a way to answer lengthy security questionnaires. We discovered that we needed to predict the answer to each security questionnaire item from an established set of answers. We realized that this was a natural language processing problem since the input and output data would be narrative content. Once you define the problem you are ready to start collecting data.

Step 3: Build a cheap test. Rinse and repeat.
Once you have gathered data, begin training a model and testing its accuracy. The upfront work in Step 1 is essential for success in this step. For example, at Strike Graph we'd need many security questions and many security processes within the model to make effective predictions. Models are not difficult to build; however, the vast majority of them do not produce useful results. Be prepared to make many iterations through this process. Try different types of models, from Random Forest to Bayesian. Try different segmentations of the data. After each iteration, review the accuracy of the model's predictions. Once you have a model that meets your minimum requirements for accuracy, you can begin to prepare your AI.

Step 4: Bring Intelligence
Think of the AI as an interface or intelligent filter that allows many users to interact with increasingly complex technologies. You will need to design the screens and workflows within your technology so that users can receive the information produced by your model and make it useful. For example, machine learning and data science are the tools we use to develop extremely sophisticated statistical predictions, and that information is very complex for users to understand. The interface is there to make sense of the model or prediction. At my startup, once we had models that met our accuracy requirements, we moved on to designing the Security Reports feature within our platform.
General users would not understand the raw analysis produced by our models, so we needed to design an intelligent output. The result was an artificial intelligence that allows our users to provide the questions and then returns a Security Report answering each question with an appropriate response. With a basic understanding of our customers' cybersecurity practices, our AI is able to efficiently organize information into a report that can be used to help them win their deal.

In conclusion
Productizing machine learning and AI technologies is a vastly different process from typical software development. The inability to make accurate predictions is not just possible, it is likely. Begin by proving that a model can make accurate predictions. Once accomplished, take the time to wrap your predictions in an effective user experience that provides real value. | Process Automation/Decision Making | Business and Financial Operations/Computer and Mathematical | null | null | null | null | null | null
|
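A minimal, runnable sketch of Steps 1 and 3 from the article above: a deliberately simple "model" produces a score (Step 1's probability idea), the score is thresholded into a prediction, and accuracy plus confusion-matrix counts (true/false positives and negatives) are measured on held-out data. The centroid classifier and toy 2-D data are placeholders of my own; a real Step 3 iteration loop would swap in alternatives such as a random forest or a Bayesian model and re-measure.

```python
# Step 1: the model outputs a score, not a yes/no answer; we threshold
# it and then check accuracy and the confusion-matrix counts.

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def score(point, pos_centroid, neg_centroid):
    """Higher score = more confident the point is positive."""
    d_pos = sum((a - b) ** 2 for a, b in zip(point, pos_centroid))
    d_neg = sum((a - b) ** 2 for a, b in zip(point, neg_centroid))
    return d_neg - d_pos  # closer to the positive centroid -> larger score

# Toy training data: label 1 clusters near (2, 2), label 0 near (0, 0).
train = [((2, 2), 1), ((3, 2), 1), ((2, 3), 1),
         ((0, 0), 0), ((1, 0), 0), ((0, 1), 0)]
pos_c = centroid([p for p, y in train if y == 1])
neg_c = centroid([p for p, y in train if y == 0])

# Held-out test set, including one ambiguous point the model misclassifies.
test = [((2, 2), 1), ((0, 0), 0), ((3, 3), 1), ((1.5, 1.5), 0)]
tp = fp = tn = fn = 0
for point, label in test:
    pred = 1 if score(point, pos_c, neg_c) > 0 else 0
    if pred == 1 and label == 1: tp += 1
    elif pred == 1 and label == 0: fp += 1
    elif pred == 0 and label == 0: tn += 1
    else: fn += 1

accuracy = (tp + tn) / len(test)
print(f"accuracy={accuracy:.2f} tp={tp} fp={fp} tn={tn} fn={fn}")
```

Step 3's "rinse and repeat" then means re-running exactly this evaluation after each change of model or data segmentation, and keeping the variant whose accuracy and false-positive profile meet your minimum requirements.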
news | ET Bureau | Govt, tech industry partner to launch Future Skills Prime | This is aimed at skilling India's talent in the latest technologies such as artificial intelligence and machine learning and ensuring continuous reskilling of the country's vast technology workforce. | https://economictimes.indiatimes.com/tech/information-tech/govt-tech-industry-partner-to-launch-future-skills-prime/articleshow/87371297.cms | 2021-10-29T14:28:36Z | The government and the tech industry have partnered to launch Future Skills Prime, an initiative to accelerate India's journey towards becoming the digital talent nation of the world. This is aimed at skilling India's talent in the latest technologies such as artificial intelligence and machine learning and ensuring continuous reskilling of the country's vast technology workforce. The platform will focus on bridging the huge talent gap at the entry level and training the existing workforce in the technologies of the future. Launching the platform on Friday, Minister of State for Electronics and IT, and Skills and Entrepreneurship, Rajeev Chandrasekhar, said the government has committed about Rs 450 crore to the skilling initiative. IT industry body Nasscom, along with member companies, has also contributed to the initiative. Chandrasekhar called upon the private sector to engage deeply with the effort as financial and employment stakeholders. It should be seen not just as an entry-level skilling platform but as a continuous, dynamic ladder of skilling, upskilling and reskilling.
The outcome and measure of upskilling is really about career progression, he said. Technology adoption around the world and digitalization have been unprecedented, and India has to seize the moment as an inflection point to get ahead of the world, he added. The emergence of disruptive technologies such as AI, Big Data, Internet of Things, and blockchain has been creating millions of new jobs with a rapid growth in demand. The portal has conceptualized a framework for upskilling: a candidate can take a diagnostic test and make informed decisions about further skilling requirements based on their career aspirations. Since technological frameworks are evolving, the programme aims to develop an integrated high-quality system of training that constantly responds to these needs. The course content has been curated by industry experts and is coupled with machine-based learning tools which enable a goal-oriented learning, training and skilling path for candidates. Digital technologies present massive opportunities both for India's socio-economic growth, for local industry, and also for export-led revenues, said Rekha Menon, chairperson of Nasscom. According to an Accenture study, the use of AI could add $1 trillion to the Indian economy by 2035. And to realize all that potential, we need digital talent, Menon, who is also chairperson and senior managing director at Accenture in India, said. The skilling platform will also look to onboard specific skill sets and job roles from other verticals such as fintech, healthcare, etc., and cater to the new breed of hybrid professionals. During the conceptualization phase, several rounds of discussions were held with stakeholders, including online content providers and aggregators such as Microsoft, TCS, Coursera, Adobe, etc., as well as job placement agencies.
India had a unique opportunity to enable this by being the world's answer for digital talent, he said. There's an acute shortage of talent the world over, across companies and industries, and by some estimates, the shortage is seven to 10x. The global digital talent acquisition market is expected to grow from $22 billion in 2020 to $33 billion by 2025. And the digital talent pool in India is expected to grow to 3.5-3.7 million by 2024, he said. For a several-fold increase from the current numbers, the demand is enormous, and the shortage is acute. The good news is that companies and the government are working to address this demand, Premji added. | Content Synthesis/Personalization/Discovery | Computer and Mathematical | null | null | null | null | null | null
|
news | IBM and Deloitte Launch New AI Offering to Unlock Business Insights in Hybrid Cloud Environments | ARMONK, N.Y. and NEW YORK, Oct. 11, 2021 /PRNewswire/ -- IBM (NYSE: IBM) and Deloitte today announced a new offering—DAPPER, an AI-enabled managed analytics solution. The solution reinforces the two organizations' 21-year global alliance—which helps organizations accelerate the adoption... | https://www.prnewswire.com/news-releases/ibm-and-deloitte-launch-new-ai-offering-to-unlock-business-insights-in-hybrid-cloud-environments-301396739.html | 2021-10-11T12:00:00Z | ARMONK, N.Y. and NEW YORK, Oct. 11, 2021 /PRNewswire/ -- IBM (NYSE: IBM) and Deloitte today announced a new offering—DAPPER, an AI-enabled managed analytics solution. The solution reinforces the two organizations' 21-year global alliance—which helps organizations accelerate the adoption of hybrid cloud and AI across the enterprise—and 10 years of experience implementing the Deloitte Analytics Platform. DAPPER's end-to-end capabilities will allow organizations to gain confidence in the insights that their data provides via a secure, simple-to-consume managed service offering that aims to resolve the challenges of adopting AI. Relevant and actionable data can catapult companies to success in today's competitive, insights-driven business environment. Clients across industries report they are struggling to accelerate the value of AI and analytics—due to lack of trust in data, domain expertise, and the resources to create a solution that can work across business environments—while simultaneously meeting strict security and compliance requirements.
Gartner® recommends its clients "choose AI cloud services over building custom models to address a broader range of use cases and for quicker deployment and built-in scalability" (Hype Cycle™ for Cloud Computing, 2021, Published 14 July 2021 - ID G00747400, by David Smith, Ed Anderson). DAPPER is a result of the combined technology leadership, hands-on business experience, and industry experience of the Deloitte and IBM alliance. DAPPER combines the Deloitte Analytics Platform with IBM Cloud Pak for Data on Red Hat OpenShift to offer business users a fully-managed AI solution, designed to avoid needing to commit the resources and lengthy time associated with developing, implementing, and managing a bespoke solution. Built upon IBM's modern hybrid cloud architecture and AI technology, DAPPER brings organizations the ability to scale up operations, promote an organization's trust in its data, and enable smart reporting. DAPPER is a managed service run by Deloitte in the IBM Cloud, and available on premises or in multi-cloud environments. Its fully-managed analytics service provides a choice of service offerings and subscription model, and contains three core elements, built so that businesses can obtain maximum value out of their data: A cloud-enabled analytics platform: Security and analytics platform backed by IBM with streamlined administration, operations and maintenance provided by Deloitte, engineered to give business users a seamless, automated data analytics solution. Analytics development factory: Access to specialized analytics building blocks and methods engineered to streamline analytics development with operations, giving a DevOps experience to help accelerate the delivery of analytics assets.
Catalog of subscription service offerings: Combines simplicity with data integrations to offer a selection of analytics services including dashboards, enterprise reporting, data management tools, and orchestration and consumption of AI. With DAPPER, Deloitte and IBM can help data scientists and business users alike build, organize, and manage insight-providing assets, helping promote reporting that can be trusted within an organization, along with AI solutions, so that enterprises can focus on business outcomes. For example, with DAPPER, Deloitte and IBM can help enable a government organization to address fraud, such as tax fraud or fraud related to employment benefits, pensions, or money laundering. With DAPPER, an analytics cloud can collect and help business users report upon siloed data across disparate data centers, making data available for advanced AI algorithms and reports, operating associated use cases, and supporting clients' regulatory compliance requirements. Delivered as a managed service offering, all of this is designed to be deployed within weeks rather than months, and draws from real-time data across various business units, or, in this example, government agencies. "Today's announcement is another proof point of the companies' longtime collaboration to help clients modernize with advanced technology," said Evaristus Mainsah, GM, Hybrid Cloud and Edge Ecosystem, IBM. "Together, Deloitte and IBM have created a flexible hybrid cloud engine that can extract real value for clients by using AI to make the most of their data. DAPPER will drive transformation, spur innovation, and reshape business to pave a pathway to digital dominance." Richard Houston, Senior Partner and CEO, Deloitte North & South Europe, said, "DAPPER is built on 10 years of Deloitte's business and technology innovation.
It represents another market-leading example of how Deloitte and IBM have combined technology, experience and industry knowledge to help clients turn data into insights: quickly, reliably, and repeatedly." Deloitte, an IBM Platinum Business Partner, is part of IBM's partner ecosystem, an initiative to support partners of all types, whether they build on, service, or resell IBM technologies and platforms, to help clients manage and modernize workloads from the mainframe to the edge and everything in between using IBM solutions and Red Hat OpenShift, the industry's leading enterprise Kubernetes platform. For more information, visit https://www.stepinsidethefuture.com/. About IBM: IBM is the global leader in hybrid cloud and AI, serving clients in more than 170 countries. More than 3,200 clients use our hybrid cloud platform to accelerate their digital transformation journeys and, in total, more than 30,000 of them have turned to IBM to unlock value from their data; this client list includes nine out of ten of the world's largest banks. With this foundation, we continue to leverage Red Hat OpenShift as the leading platform to address our clients' business needs: A hybrid cloud platform that is open, flexible and secure. Guided by principles of trust, transparency and support for a more inclusive society, IBM also is committed to being a responsible steward of technology and a force for good in the world. For more information, visit: www.ibm.com. About Deloitte: Deloitte provides industry-leading audit, consulting, tax and advisory services to many of the world's most admired brands, including nearly 90% of the Fortune 500® and more than 7,000 private companies.
Our people come together for the greater good and work across the industry sectors that drive and shape today's marketplace, delivering measurable and lasting results that help reinforce public trust in our capital markets, inspire clients to see challenges as opportunities to transform and thrive, and help lead the way toward a stronger economy and a healthier society. Deloitte is proud to be part of the largest global professional services network serving our clients in the markets that are most important to them. Building on more than 175 years of service, our network of member firms spans more than 150 countries and territories. Learn how Deloitte's more than 345,000 people worldwide connect for impact at www.deloitte.com. Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee ("DTTL"), its network of member firms, and their related entities. DTTL and each of its member firms are legally separate and independent entities. DTTL (also referred to as "Deloitte Global") does not provide services to clients. In the United States, Deloitte refers to one or more of the US member firms of DTTL, their related entities that operate using the "Deloitte" name in the United States and their respective affiliates. Certain services may not be available to attest clients under the rules and regulations of public accounting. Please see www.deloitte.com/about to learn more about our global network of member firms. IBM Media Contact: Mike Sefanov, [email protected], 650-281-8099. Deloitte Media Contact: Patty Bloom, [email protected], 312-771-9179. SOURCE IBM. Related Links: www.ibm.com | Content Synthesis/Decision Making/Prediction | Business and Financial Operations | null | null | null | null | null | null
||
news | Matt Turck, John Wu | The 2021 machine learning, AI, and data landscape | The 2021 machine learning, AI, and data landscape is marked by explosive growth. Here are some key areas to watch. | https://venturebeat.com/2021/10/16/the-2021-machine-learning-ai-and-data-landscape/ | 2021-10-16T15:40:25Z | It's been a hot, hot year in the world of data, machine learning, and AI. Just when you thought it couldn't grow any more explosively, the data/AI landscape just did: the rapid pace of company creation, exciting new product and project launches, a deluge of VC financings, unicorn creation, IPOs, etc. It has also been a year of multiple threads and stories intertwining. One story has been the maturation of the ecosystem, with market leaders reaching large scale and ramping up their ambitions for global market domination, in particular through increasingly broad product offerings. Some of those companies, such as Snowflake, have been thriving in public markets (see our MAD Public Company Index), and a number of others (Databricks, Dataiku, DataRobot, etc.) have raised very large (or in the case of Databricks, gigantic) rounds at multi-billion valuations and are knocking on the IPO door (see our Emerging MAD company Index). But at the other end of the spectrum, this year has also seen the rapid emergence of a whole new generation of data and ML startups. Whether they were founded a few years or a few months ago, many experienced a growth spurt in the past year or so. Part of it is due to a rabid VC funding environment and part of it, more fundamentally, is due to inflection points in the market. In the past year, there's been less headline-grabbing discussion of futuristic applications of AI (self-driving vehicles, etc.), and a bit less AI hype as a result. Regardless, data and ML/AI-driven application companies have continued to thrive, particularly those focused on enterprise use cases.
Meanwhile, a lot of the action has been happening behind the scenes on the data and ML infrastructure side, with entirely new categories (data observability, reverse ETL, metrics stores, etc.) appearing or drastically accelerating. To keep track of this evolution, this is our eighth annual landscape and state of the union of the data and AI ecosystem, coauthored this year with my FirstMark colleague John Wu. (For anyone interested, here are the prior versions: 2012, 2014, 2016, 2017, 2018, 2019: Part I and Part II, and 2020.) For those who have remarked over the years how insanely busy the chart is, you'll love our new acronym: Machine learning, Artificial intelligence, and Data (MAD): this is now officially the MAD landscape! We've learned over the years that those posts are read by a broad group of people, so we have tried to provide a little bit for everyone: a macro view that will hopefully be interesting and approachable to most, and then a slightly more granular overview of trends in data infrastructure and ML/AI for people with a deeper familiarity with the industry. Quick notes: My colleague John and I are early-stage VCs at FirstMark, and we invest very actively in the data/AI space. Our portfolio companies are noted with an (*) in this post. Let's dig in. The macro view: Making sense of the ecosystem's complexity. Let's start with a high-level view of the market. As the number of companies in the space keeps increasing every year, the inevitable questions are: Why is this happening? How long can it keep going?
Will the industry go through a wave of consolidation? Rewind: The megatrend. Readers of prior versions of this landscape will know that we are relentlessly bullish on the data and AI ecosystem. As we said in prior years, the fundamental trend is that every company is becoming not just a software company, but also a data company. Historically, and still today in many organizations, data has meant transactional data stored in relational databases, and perhaps a few dashboards for basic analysis of what happened to the business in recent months. But companies are now marching towards a world where data and artificial intelligence are embedded in myriad internal processes and external applications, both for analytical and operational purposes. This is the beginning of the era of the intelligent, automated enterprise, where company metrics are available in real time, mortgage applications get automatically processed, AI chatbots provide customer support 24/7, churn is predicted, cyber threats are detected in real time, and supply chains automatically adjust to demand fluctuations. This fundamental evolution has been powered by dramatic advances in underlying technology; in particular, a symbiotic relationship between data infrastructure on the one hand and machine learning and AI on the other. Both areas have had their own separate history and constituencies, but have increasingly operated in lockstep over the past few years. The first wave of innovation was the Big Data era, in the early 2010s, where innovation focused on building technologies to harness the massive amounts of digital data created every day. Then, it turned out that if you applied big data to some decade-old AI algorithms (deep learning), you got amazing results, and that triggered the whole current wave of excitement around AI.
In turn, AI became a major driver for the development of data infrastructure: If we can build all those applications with AI, then we're going to need better data infrastructure, and so on and so forth. Fast-forward to 2021: The terms themselves (big data, AI, etc.) have experienced the ups and downs of the hype cycle, and today you hear a lot of conversations around automation, but fundamentally this is all the same megatrend. The big unlock. A lot of today's acceleration in the data/AI space can be traced to the rise of cloud data warehouses (and their lakehouse cousins; more on this later) over the past few years. It is ironic because data warehouses address one of the most basic, pedestrian, but also fundamental needs in data infrastructure: Where do you store it all? Storage and processing are at the bottom of the data/AI hierarchy of needs (see Monica Rogati's famous blog post): meaning, what you need to have in place before you can do any fancier stuff like analytics and AI. You'd figure that 15+ years into the big data revolution, that need had been solved a long time ago, but it hadn't. In retrospect, the initial success of Hadoop was a bit of a head-fake for the space: Hadoop, the OG big data technology, did try to solve the storage and processing layer. It did play a really important role in terms of conveying the idea that real value could be extracted from massive amounts of data, but its overall technical complexity ultimately limited its applicability to a small set of companies, and it never really achieved the market penetration that even the older data warehouses (e.g., Vertica) had a few decades ago. Today, cloud data warehouses (Snowflake, Amazon Redshift, and Google BigQuery) and lakehouses (Databricks) provide the ability to store massive amounts of data in a way that's useful, not completely cost-prohibitive, and doesn't require an army of very technical people to maintain.
In other words, after all these years, it is now finally possible to store and process big data. That is a big deal and has proven to be a major unlock for the rest of the data/AI space, for several reasons. First, the rise of data warehouses considerably increases market size, not just for its category, but for the entire data and AI ecosystem. Because of their ease of use and consumption-based pricing (where you pay as you go), data warehouses become the gateway to every company becoming a data company. Whether you're a Global 2000 company or an early-stage startup, you can now get started building your core data infrastructure with minimal pain. (Even FirstMark, a venture firm with several billion under management and 20-ish team members, has its own Snowflake instance.) Second, data warehouses have unlocked an entire ecosystem of tools and companies that revolve around them: ETL, ELT, reverse ETL, warehouse-centric data quality tools, metrics stores, augmented analytics, etc. Many refer to this ecosystem as the modern data stack (which we discussed in our 2020 landscape). A number of founders saw the emergence of the modern data stack as an opportunity to launch new startups, and it is no surprise that a lot of the feverish VC funding activity over the last year has focused on modern data stack companies.
Startups that were early to the trend (and played a pivotal role in defining the concept) are now reaching scale, including DBT Labs, a provider of transformation tools for analytics engineers (see our Fireside Chat with Tristan Handy, CEO of DBT Labs, and Jeremiah Lowin, CEO of Prefect), and Fivetran, a provider of automated data integration solutions that streams data into data warehouses (see our Fireside Chat with George Fraser, CEO of Fivetran), both of which raised large rounds recently (see Financing section). Third, because they solve the fundamental storage layer, data warehouses liberate companies to start focusing on high-value projects that appear higher in the hierarchy of data needs. Now that you have your data stored, it's easier to focus in earnest on other things like real-time processing, augmented analytics, or machine learning. This in turn increases the market demand for all sorts of other data and AI tools and platforms. A flywheel gets created where more customer demand creates more innovation from data and ML infrastructure companies. As they have such a direct and indirect impact on the space, data warehouses are an important bellwether for the entire data industry: as they grow, so does the rest of the space. The good news for the data and AI industry is that data warehouses and lakehouses are growing very fast, at scale. Snowflake, for example, showed a 103% year-over-year growth in their most recent Q2 results, with an incredible net revenue retention of 169% (which means that existing customers keep using and paying for Snowflake more and more over time). Snowflake is targeting $10 billion in revenue by 2028. There's a real possibility they could get there sooner.
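To make the net revenue retention (NRR) figure cited above concrete, here is a minimal sketch of how the metric is typically computed. The cohort figures below are invented for illustration; they are not Snowflake's actual numbers.

```python
# Minimal sketch of net revenue retention (NRR), the SaaS metric cited above.
# All cohort figures below are hypothetical, invented for illustration.

def net_revenue_retention(start_arr, expansion, contraction, churn):
    """NRR = (starting ARR + expansion - contraction - churn) / starting ARR."""
    return (start_arr + expansion - contraction - churn) / start_arr

# Example cohort (in $M): 100 starting ARR, 75 expansion from existing
# customers, 4 contraction, 2 churned. An NRR above 100% means existing
# customers pay more over time, before counting any new customers at all.
nrr = net_revenue_retention(100.0, 75.0, 4.0, 2.0)
print(f"{nrr:.0%}")  # prints "169%"
```

A consumption-priced product like a cloud data warehouse tends to show high NRR because billing grows automatically as customers load and query more data.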
Interestingly, with consumption-based pricing where revenues start flowing only after the product is fully deployed, the company's current customer traction could be well ahead of its more recent revenue numbers. This could certainly be just the beginning of how big data warehouses could become. Some observers believe that data warehouses and lakehouses, collectively, could get to 100% market penetration over time (meaning, every relevant company has one), in a way that was never true for prior data technologies like traditional data warehouses such as Vertica (too expensive and cumbersome to deploy) and Hadoop (too experimental and technical). While this doesn't mean that every data warehouse vendor and every data startup, or even market segment, will be successful, directionally this bodes incredibly well for the data/AI industry as a whole. The titanic shock: Snowflake vs. Databricks. Snowflake has been the poster child of the data space recently. Its IPO in September 2020 was the biggest software IPO ever (we had covered it at the time in our Quick S-1 Teardown: Snowflake). At the time of writing, and after some ups and downs, it is a $95 billion market cap public company. However, Databricks is now emerging as a major industry rival. On August 31, the company announced a massive $1.6 billion financing round at a $38 billion valuation, just a few months after a $1 billion round announced in February 2021 (at a measly $28 billion valuation). Up until recently, Snowflake and Databricks were in fairly different segments of the market (and in fact were close partners for a while). Snowflake, as a cloud data warehouse, is mostly a database to store and process large amounts of structured data, meaning data that can fit neatly into rows and columns. Historically, it's been used to enable companies to answer questions about past and current performance ("which were our top fastest-growing regions last quarter?"), by plugging in business intelligence (BI) tools.
Like other databases, it leverages SQL, a very popular and accessible query language, which makes it usable by millions of potential users around the world. Databricks came from a different corner of the data world. It started in 2013 to commercialize Spark, an open source framework to process large volumes of generally unstructured data (any kind of text, audio, video, etc.). Spark users used the framework to build and process what became known as data lakes, where they would dump just about any kind of data without worrying about structure or organization. A primary use of data lakes was to train ML/AI applications, enabling companies to answer questions about the future ("which customers are the most likely to purchase next quarter?", i.e., predictive analytics). To help customers with their data lakes, Databricks created Delta, and to help them with ML/AI, it created MLflow. For the whole story on that journey, see my Fireside Chat with Ali Ghodsi, CEO, Databricks. More recently, however, the two companies have converged towards one another. Databricks started adding data warehousing capabilities to its data lakes, enabling data analysts to run standard SQL queries, as well as adding business intelligence tools like Tableau or Microsoft Power BI. The result is what Databricks calls the lakehouse: a platform meant to combine the best of both data warehouses and data lakes. As Databricks made its data lakes look more like data warehouses, Snowflake has been making its data warehouses look more like data lakes. It announced support for unstructured data such as audio, video, PDFs, and imaging data in November 2020 and launched it in preview just a few days ago. And where Databricks has been adding BI to its AI capabilities, Snowflake is adding AI to its BI capabilities. Snowflake has been building close partnerships with top enterprise AI platforms. Snowflake invested in Dataiku, and named it its Data Science Partner of the Year.
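To make the warehouse-style workload concrete, here is a hedged sketch of the kind of analytical SQL query alluded to above ("which were our top fastest-growing regions last quarter?"), using Python's built-in sqlite3 as a tiny stand-in for a cloud data warehouse. The table, columns, and figures are invented for illustration.

```python
# Sketch of a BI-style analytical SQL query against structured (rows-and-
# columns) data, of the kind a cloud data warehouse serves. The schema and
# numbers are hypothetical; sqlite3 stands in for the warehouse.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE revenue (region TEXT, quarter TEXT, amount REAL);
    INSERT INTO revenue VALUES
        ('EMEA', '2021Q1', 100), ('EMEA', '2021Q2', 130),
        ('APAC', '2021Q1',  80), ('APAC', '2021Q2', 120),
        ('AMER', '2021Q1', 200), ('AMER', '2021Q2', 210);
""")

# Rank regions by quarter-over-quarter revenue growth, fastest first.
rows = conn.execute("""
    SELECT cur.region,
           ROUND((cur.amount - prev.amount) / prev.amount, 2) AS growth
    FROM revenue cur
    JOIN revenue prev
      ON cur.region = prev.region
     AND cur.quarter = '2021Q2' AND prev.quarter = '2021Q1'
    ORDER BY growth DESC
""").fetchall()
print(rows)  # APAC grew fastest quarter-over-quarter in this toy dataset
```

The Spark/data-lake workload described in the same paragraph differs mainly in that the inputs are unstructured files rather than neat tables like this one.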
It also invested in ML platform rival DataRobot. Ultimately, both Snowflake and Databricks want to be the center of all things data: one repository to store all data, whether structured or unstructured, and run all analytics, whether historical (business intelligence) or predictive (data science, ML/AI). Of course, there's no lack of other competitors with a similar vision. The cloud hyperscalers in particular have their own data warehouses, as well as a full suite of analytical tools for BI and AI, and many other capabilities, in addition to massive scale. For example, listen to this great episode of the Data Engineering Podcast about GCP's data and analytics capabilities. Both Snowflake and Databricks have had very interesting relationships with cloud vendors, both as friend and foe. Famously, Snowflake grew on the back of AWS (despite AWS's competitive product, Redshift) for years before expanding to other cloud platforms. Databricks built a strong partnership with Microsoft Azure, and now touts its multi-cloud capabilities to help customers avoid cloud vendor lock-in.
For many years, and still to this day to some extent, detractors emphasized that both Snowflake's and Databricks' business models effectively resell underlying compute from the cloud vendors, which put their gross margins at the mercy of whatever pricing decisions the hyperscalers would make. Watching the dance between the cloud providers and the data behemoths will be a defining story of the next five years. Bundling, unbundling, consolidation? Given the rise of Snowflake and Databricks, some industry observers are asking if this is the beginning of a long-awaited wave of consolidation in the industry: functional consolidation as large companies bundle an increasing amount of capabilities into their platforms and gradually make smaller startups irrelevant, and/or corporate consolidation, as large companies buy smaller ones or drive them out of business. Certainly, functional consolidation is happening in the data and AI space, as industry leaders ramp up their ambitions. This is clearly the case for Snowflake and Databricks, and the cloud hyperscalers, as just discussed. But others have big plans as well. As they grow, companies want to bundle more and more functionality; nobody wants to be a single-product company. For example, Confluent, a platform for streaming data that just went public in June 2021, wants to go beyond the real-time data use cases it is known for, and unify the processing of data in motion and data at rest (see our Quick S-1 Teardown: Confluent). As another example, Dataiku* natively covers all the functionality otherwise offered by dozens of specialized data and AI infrastructure startups, from data prep to machine learning, DataOps, MLOps, visualization, AI explainability, etc., all bundled in one platform, with a focus on democratization and collaboration (see our Fireside Chat with Florian Douetteau, CEO, Dataiku). Arguably, the rise of the modern data stack is another example of functional consolidation.
At its core, it is a de facto alliance among a group of companies (mostly startups) that, as a group, functionally cover all the different stages of the data journey, from extraction to the data warehouse to business intelligence, the overall goal being to offer the market a coherent set of solutions that integrate with one another. For the users of those technologies, this trend towards bundling and convergence is healthy, and many will welcome it with open arms. As it matures, it is time for the data industry to evolve beyond its big technology divides: transactional vs. analytical, batch vs. real-time, BI vs. AI. These somewhat artificial divides have deep roots, both in the history of the data ecosystem and in technology constraints. Each segment had its own challenges and evolution, resulting in a different tech stack and a different set of vendors. This has led to a lot of complexity for the users of those technologies. Engineers have had to stitch together suites of tools and solutions and maintain complex systems that often end up looking like Rube Goldberg machines. As they continue to scale, we expect industry leaders to accelerate their bundling efforts and keep pushing messages such as "unified data analytics". This is good news for Global 2000 companies in particular, which have been the prime target customer for the bigger, bundled data and AI platforms.
Those companies have both a tremendous amount to gain from deploying modern data infrastructure and ML/AI, and at the same time much more limited access to top data and ML engineering talent needed to build or assemble data infrastructure in-house (as such talent tends to prefer to work either at Big Tech companies or promising startups, on the whole). However, as much as Snowflake and Databricks would like to become the single vendor for all things data and AI, we believe that companies will continue to work with multiple vendors, platforms, and tools, in whichever combination best suits their needs. The key reason: The pace of innovation is just too explosive in the space for things to remain static for too long. Founders launch new startups; Big Tech companies create internal data/AI tools and then open-source them; and for every established technology or product, a new one seems to emerge weekly. Even the data warehouse space, possibly the most established segment of the data ecosystem currently, has new entrants like Firebolt, promising vastly superior performance. While the big bundled platforms have Global 2000 enterprises as their core customer base, there is a whole ecosystem of tech companies, both startups and Big Tech, that are avid consumers of all the new tools and technologies, giving the startups behind them a great initial market. Those companies do have access to the right data and ML engineering talent, and they are willing and able to do the stitching of best-of-breed new tools to deliver the most customized solutions. Meanwhile, just as the big data warehouse and data lake vendors are pushing their customers towards centralizing all things on top of their platforms, new frameworks such as the data mesh emerge, which advocate for a decentralized approach, where different teams are responsible for their own data product.
While there are many nuances, one implication is to evolve away from a world where companies just move all their data to one big central repository. Should it take hold, the data mesh could have a significant impact on architectures and the overall vendor landscape (more on the data mesh later in this post). Beyond functional consolidation, it is also unclear how much corporate consolidation (M&A) will happen in the near future. We're likely to see a few very large, multi-billion-dollar acquisitions as big players are eager to make big bets in this fast-growing market to continue building their bundled platforms. However, the high valuations of tech companies in the current market will probably continue to deter many potential acquirers. For example, everybody's favorite industry rumor has been that Microsoft would want to acquire Databricks. However, because the company could fetch a $100 billion or more valuation in public markets, even Microsoft may not be able to afford it. There is also a voracious appetite for buying smaller startups throughout the market, particularly as later-stage startups keep raising and have plenty of cash on hand. However, there is also voracious interest from venture capitalists to continue financing those smaller startups. It is rare for promising data and AI startups these days to not be able to raise the next round of financing. As a result, comparatively few M&A deals get done these days, as many founders and their VCs want to keep "turning the next card," as opposed to joining forces with other companies, and have the financial resources to do so. Let's dive further into financing and exit trends. Financings, IPOs, M&A: A crazy market. As anyone who follows the startup market knows, it's been crazy out there. Venture capital has been deployed at an unprecedented pace, surging 157% year-on-year globally to $156 billion in Q2 2021, according to CB Insights.
Ever higher valuations led to the creation of 136 newly minted unicorns just in the first half of 2021, and the IPO window has been wide open, with public financings (IPOs, DLs, SPACs) up +687% (496 vs. 63) in the January 1 to June 1, 2021 period vs. the same period in 2020. In this general context of market momentum, data and ML/AI have been hot investment categories once again this past year. Public markets. Not so long ago, there were hardly any pure-play data/AI companies listed in public markets. However, the list is growing quickly after a strong year for IPOs in the data/AI world. We started a public market index to help track the performance of this growing category of public companies: see our MAD Public Company Index (update coming soon). On the IPO front, particularly noteworthy were UiPath, an RPA and AI automation company, and Confluent, a data infrastructure company focused on real-time streaming data (see our Confluent S-1 teardown for our analysis). Other notable IPOs were C3.ai, an AI platform (see our C3 S-1 teardown), and Couchbase, a NoSQL database. Several vertical AI companies also had noteworthy IPOs: SentinelOne, an autonomous AI endpoint security platform; TuSimple, a self-driving truck developer; Zymergen, a biomanufacturing company; Recursion, an AI-driven drug discovery company; and Darktrace, a world-leading AI for cyber-security company. Meanwhile, existing public data/AI companies have continued to perform strongly. While they're both off their all-time highs, Snowflake is a formidable $95 billion market cap company, and, for all the controversy, Palantir is a $55 billion market cap company, at the time of writing. Both Datadog and MongoDB are at their all-time highs. Datadog is now a $45 billion market cap company (an important lesson for investors). MongoDB is a $33 billion company, propelled by the rapid growth of its cloud product, Atlas. Overall, as a group, data and ML/AI companies have vastly outperformed the broader market.
And they continue to command high premiums: out of the top 10 companies with the highest market capitalization-to-revenue multiples, 4 of them (including the top 2) are data/AI companies. Above: Source: Jamin Ball, Clouded Judgement, September 24, 2021. Another distinctive characteristic of public markets in the last year has been the rise of SPACs as an alternative to the traditional IPO process. SPACs have proven a very beneficial vehicle for the more frontier-tech portion of the AI market (autonomous vehicles, biotech, etc.). Some examples of companies that have either announced or completed SPAC (and de-SPAC) transactions include Ginkgo Bioworks, a company that engineers novel organisms to produce useful materials and substances, now a $24B public company at the time of writing; autonomous vehicle companies Aurora and Embark; and Babylon Health. Private markets. The frothiness of the venture capital market is a topic for another blog post (just a consequence of macroeconomics and low interest rates, or a reflection of the fact that we have truly entered the deployment phase of the internet?). But suffice it to say that, in the context of an overall booming VC market, investors have shown tremendous enthusiasm for data/AI startups. According to CB Insights, in the first half of 2021, investors had poured $38 billion into AI startups, surpassing the full 2020 amount of $36 billion with half a year to go. This was driven by 50+ mega-sized $100 million-plus rounds, also a new high. Forty-two AI companies reached unicorn valuations in the first half of the year, compared to only 11 for the entirety of 2020. One inescapable feature of the 2020-2021 VC market has been the rise of crossover funds, such as Tiger Global, Coatue, Altimeter, Dragoneer, or D1, and other mega-funds such as SoftBank or Insight.
While those funds have been active across the Internet and software landscape, data and ML/AI have clearly been a key investing theme. As an example, Tiger Global seems to love data/AI companies. Just in the last 12 months, the New York hedge fund has written big checks into many of the companies appearing on our landscape, including, for example, Deep Vision, Databricks, Dataiku*, DataRobot, Imply, Prefect, Gong, PathAI, Ada*, Vast Data, Scale AI, Redis Labs, 6sense, TigerGraph, UiPath, Cockroach Labs*, Hyperscience*, and a number of others. This exceptional funding environment has mostly been great news for founders. Many data/AI companies found themselves the object of preemptive rounds and bidding wars, giving full power to founders to control their fundraising processes. As VC firms competed to invest, round sizes and valuations escalated dramatically. Series A round sizes used to be in the $8-$12 million range just a few years ago. They are now routinely in the $15-$20 million range.
Series A valuations that used to be in the $25-$45 million (pre-money) range now often reach $80-$120 million, levels that would have been considered a great series B valuation just a few years ago. On the flip side, the flood of capital has led to an ever-tighter job market, with fierce competition for data, machine learning, and AI talent among many well-funded startups, and corresponding compensation inflation. Another downside: as VCs aggressively invested in emerging sectors up and down the data stack, often betting on future growth over existing commercial traction, some categories went from nascent to crowded very rapidly: reverse ETL, data quality, data catalogs, data annotation, and MLOps. Regardless, since our last landscape, an unprecedented number of data/AI companies became unicorns, and those that were already unicorns became even more highly valued, with a couple of decacorns (Databricks, Celonis). Some noteworthy unicorn-type financings (in rough reverse chronological order): Fivetran, an ETL company, raised $565 million at a $5.6 billion valuation; Matillion, a data integration company, raised $150 million at a $1.5 billion valuation; Neo4j, a graph database provider, raised $325 million at a more than $2 billion valuation; Databricks, a provider of data lakehouses, raised $1.6 billion at a $38 billion valuation; Dataiku*, a collaborative enterprise AI platform, raised $400 million at a $4.6 billion valuation; DBT Labs (fka Fishtown Analytics), a provider of open-source analytics engineering tools, raised a $150 million series C; DataRobot, an enterprise AI platform, raised $300 million at a $6 billion valuation; Celonis, a process mining company, raised a $1 billion series D at an $11 billion valuation; Anduril, an AI-heavy defense technology company, raised a $450 million round at a $4.6 billion valuation; Gong, an AI platform for sales team analytics and coaching, raised $250 million at a $7.25 billion valuation; Alation, a data discovery and
governance company, raised a $110 million series D at a $1.2 billion valuation; Ada*, an AI chatbot company, raised a $130 million series C at a $1.2 billion valuation; Signifyd, an AI-based fraud protection software company, raised $205 million at a $1.34 billion valuation; Redis Labs, a real-time data platform, raised a $310 million series G at a $2 billion valuation; Sift, an AI-first fraud prevention company, raised $50 million at a valuation of over $1 billion; Tractable, an AI-first insurance company, raised $60 million at a $1 billion valuation; SambaNova Systems, a specialized AI semiconductor and computing platform, raised $676 million at a $5 billion valuation; Scale AI, a data annotation company, raised $325 million at a $7 billion valuation; Vectra, a cybersecurity AI company, raised $130 million at a $1.2 billion valuation; Shift Technology, an AI-first software company built for insurers, raised $220 million; Dataminr, a real-time AI risk detection platform, raised $475 million; Feedzai, a fraud detection company, raised a $200 million round at a valuation of over $1 billion; Cockroach Labs*, a cloud-native SQL database provider, raised $160 million at a $2 billion valuation; Starburst Data, an SQL-based data query engine, raised a $100 million round at a $1.2 billion valuation; K Health, an AI-first mobile virtual healthcare provider, raised $132 million at a $1.5 billion valuation; Graphcore, an AI chipmaker, raised $222 million; and Forter, a fraud detection software company, raised a $125 million round at a $1.3 billion valuation.

Acquisitions

As mentioned above, acquisitions in the MAD space have been robust but haven't spiked as much as one would have guessed, given the hot market.
The unprecedented amount of cash floating in the ecosystem cuts both ways: more companies have strong balance sheets to potentially acquire others, but many potential targets also have access to cash, whether in private/VC markets or in public markets, and are less likely to want to be acquired. Of course, there have been several very large acquisitions: Nuance, a public speech and text recognition company (with a particular focus on healthcare), is in the process of being acquired by Microsoft for almost $20 billion (making it Microsoft's second-largest acquisition ever, after LinkedIn); Blue Yonder, an AI-first supply chain software company for retail, manufacturing, and logistics customers, was acquired by Panasonic for up to $8.5 billion; Segment, a customer data platform, was acquired by Twilio for $3.2 billion; Kustomer, a CRM that enables businesses to effectively manage all customer interactions across channels, was acquired by Facebook for $1 billion; and Turbonomic, an AI-powered Application Resource Management company, was acquired by IBM for between $1.5 billion and $2 billion. There were also a couple of take-private acquisitions of public companies by private equity firms: Cloudera, a formerly high-flying data platform, was acquired by Clayton Dubilier & Rice and KKR, perhaps the official end of the Hadoop era; and Talend, a data integration provider, was taken private by Thoma Bravo. Some other notable acquisitions of companies that appeared on earlier versions of this MAD landscape: ZoomInfo acquired Chorus.ai and Everstring; DataRobot acquired Algorithmia; Cloudera acquired Cazena; Relativity acquired Text IQ*; Datadog acquired Sqreen and Timber*; SmartEye acquired Affectiva; Facebook acquired Kustomer; ServiceNow acquired Element AI; Vista Equity Partners acquired Gainsight; AVEVA acquired OSIsoft; and American Express acquired Kabbage.

What's new for the 2021 MAD landscape

Given the explosive pace of innovation, company creation, and funding in
2020-21, particularly in data infrastructure and MLOps, we've had to change things around quite a bit in this year's landscape. One significant structural change: as we couldn't fit it all in one category anymore …
The AI Revolution is Just Beginning (with Nathan Benaich and Ian Hogarth)

AI is hitting an inflection point. Azeem Azhar explores progress in two very different fields: defense and medical biology.
https://hbr.org/podcast/2021/11/the-ai-revolution-is-just-beginning-with-nathan-benaich-and-ian-hogarth
Published November 17, 2021

AZEEM AZHAR: Hi there, I'm Azeem Azhar, and you're listening to the Exponential View Podcast. Now, every week, I have a deep conversation with a brilliant mind, exploring how exponential technologies are shaping our near future. That subject forms the basis of my new book, The Exponential Age, or Exponential, if you're outside of the US and Canada. The book has been, thankfully, very well-received. And if you are in the UK, I would strongly recommend heading over to Amazon UK straight after this podcast. The Kindle version of my book is currently on sale for just 99p, instead of 20 pounds, which is a fantastic bargain for UK listeners. Now, back onto today's podcast. In the past 10 or so years, artificial intelligence has really changed the way whole industries work. And that technology still has so much more room to run. Capital is streaming both into AI research and startups, and in established firms as well. And we could be, perhaps we've even reached the cusp of an inflection point, a point of no return, that means this technology is deeply embedded in the way we live, and the way we work. Now, it's not all sunshine and flowers, though. A race for the best AI and machine learning talent means there's a growing shortage of capable engineers, and great scientists and academics to do the research, and that has led to a constant fight between industry and academia. More seriously, AI is starting to be deployed for military purposes, raising complicated ethical questions, and setting up some kind of competition between different countries across the globe.
Now, my guests today are two of the most clued-up people on these issues, and the authors of one of my favorite reports, the annual State of AI Report, which looks at the most pressing trends in this domain, from the research through to the policy through to what companies are doing with it. In the first part of this conversation, I speak to Nathan Benaich. Nathan is the founder and general partner of Air Street Capital, a VC firm that invests in AI-first technology and life sciences companies. He also runs the Research and Applied AI Summit, and is behind Spinout.fyi, a program designed to improve how universities spin out some of their most interesting technologies. Ian Hogarth is an entrepreneur, angel investor, and a visiting professor at University College London, where he works on the intersection between the state and technology. He founded the concert discovery service Songkick, which scaled to 15 million monthly users, and has since invested in more than 150 companies. Now, before we get started, we use a few terms of art during this conversation, and I just want to spell them out for you. MLOps, or Machine Learning Operations, is an emerging discipline which deals with how we run systems that use machine learning at industrial scales. NLP is Natural Language Processing, a set of approaches to helping computers deal with human written and spoken communications. GPT-3 is a particular AI model which is very good at dealing with human language, and FHI is the Future of Humanity Institute, a research group at Oxford University. Finally, we talk about Isomorphic, a brand new business unit at Alphabet, the parent company of Google and DeepMind, which is being headed up by Demis Hassabis, the founder of DeepMind. Now, despite having brilliant technologists on this podcast, we did have some technology issues with the recording, so please forgive some of the lapses in sound quality. The conversation is well worth listening to.
Nathan, Ian, welcome to Exponential View.

NATHAN BENAICH: Hey Azeem, it's good to be here, thanks.

IAN HOGARTH: Thanks Azeem, great to be here.

AZEEM AZHAR: Nathan's got the slight American accent, and Ian has a more British accent, so we can use that to distinguish the responses that we get. It is a really fantastic, stellar piece of work. It's 188 pages. Nathan, how does it feel, when you finally hit publish, on that day?

NATHAN BENAICH: It's incredibly stressful. Feels like the closest a VC gets to a product launch. It's incredibly exciting. It's the fruits of many months of work over the summer, and many more months of work over the year, just tracking progress in the field, that Ian and I do together.

AZEEM AZHAR: Nathan, you are a VC, but you haven't always been a VC. Just help us understand your background.

NATHAN BENAICH: I started in biology. I was actually doing cancer research, in metastasis, which is the spread of cancer around the body, and pursued that in undergrad, up until I finished a PhD. At that point, I got much more excited about moving into the technology industry. My particular focus is on businesses that make use of machine learning, in any industry. For the moment, with my venture fund, Air Street Capital, I invest in vertical software companies and life science companies that build machine learning based products.

AZEEM AZHAR: A few days before we recorded this podcast, Demis Hassabis, who's the founder of DeepMind and one of the leading lights in the current wave of AI development, announced he was creating a new company, within Alphabet, the parent of DeepMind, that was going to exactly tackle this intersection between artificial intelligence and biology. And that theme happens to be one of the major themes, I think, that comes out of your report this year.
How excited should we be about an announcement like that?

NATHAN BENAICH: I think we should be super excited about an announcement like this, because it really shows that one of the premier research organizations in machine learning, responsible for a lot of the major breakthroughs of the last couple of years, feels that the next major breakthrough will not only be occurring in biology, but also be transforming the industry to which it's relevant. In the State of AI Report, what we profiled last year was that in many ways, biology was experiencing its AI moment, and we forecasted this year that we'd see potentially one, maybe more, acquisitions or IPOs of a major AI-first drug discovery company, and we saw not just one but two happen in the last few months. Overall, we should be super excited.

AZEEM AZHAR: You said that biology was having its AI moment. What is this moment that we should be excited about?

NATHAN BENAICH: Around 2013 to 2015 or so, there was a major breakthrough that machines could understand images better than humans could. And this was around the publication of deep learning models applied to ImageNet, the major computer vision competition. And in many ways, back then, it felt like computer vision was going through its AI moment, being completely transformed with deep learning methods. And so, hearkening back to how ImageNet plus deep learning was sort of the major breakthrough that yielded lots of companies and research papers thereafter, we see much of the same dynamics in biology now, because we have access to huge amounts of data, whether it's imaging or sequencing or patient data. And then extremely performant models and compute, which together look and smell very similar to computer vision and ImageNet of a couple years ago.

AZEEM AZHAR: So the AI moment for vision was deep learning. There was a kind of technological driver.
When we come to biology, what is happening now, in the last year or so, that is actually allowing us, either from our understanding of biology, or from our understanding of AI, or the capabilities we have, to make this something of a reality?

NATHAN BENAICH: I would say there's two domains. One is in industry, largely drug discovery, and the second is more in R&D, how research actually gets done, and how we learn new things about biological systems. It's clear that for many years we've had problems such as: drugs cost billions to get to market; running experiments is incredibly expensive and time-consuming; there are many systematic problems with data collection and data quality in the drug discovery and development funnel. But the promise of bringing software, repeatability, robotics, and machine learning into what is essentially a search for a needle in a haystack is that you can eliminate some of these errors that humans introduce, and create a much more repeatable discovery process, which ultimately will reduce the amount of money that it costs to run a particular experiment, reduce the amount of time that it takes to take a drug into a preclinical experiment, and then into the clinic, and then ideally get better drugs to patients faster. On the R&D side, a lot of the innovation has actually been taking state of the art models for language or vision, training them on essentially completely different data sets that are not really biological in nature, such as human written language, and then essentially seeing if those models can translate into another language. In this case, the language of proteins or the language of mRNA. I think it's just the surprise of how many baseline similarities there are between human language and biological language, that a model that was trained to do machine translation can actually translate the sequence of a protein into its function.
That's opening a ton of possibilities around simulating biology, generating proteins with new kinds of functionality that we want but don't exist in nature.

AZEEM AZHAR: For a listener who perhaps doesn't quite realize where biology has got to today: I think for many of us, biology was throwing a quadrat into a field, and counting the number of bugs in a square-yard area. Of course, it's moved on a great deal from that. We also think of AI systems just needing a lot of structured data in some form. What's happening in the theory and the practice of biology that is taking us from the quadrat in the field, to the ones and zeros that your machine learning systems need?

NATHAN BENAICH: Yeah. One of the most elegant framings of this was hypothesis-driven experiments, which is: I might have an idea that this particular gene causes this disease, and then I do something to that gene, and see if that hypothesis remains true. And then you have the power of computation, [inaudible 00:10:32] experiments, which basically means testing many samples at the same time. Essentially, doing hypothesis-free experimentation, otherwise known as hypothesis-generating experiments. Generate tons of data, then see what the data tells you, and use that to navigate around. I think that's one of the major differences with how we do science, which is driven by the availability of compute, robotics, machine learning, et cetera. Perhaps another, more basic comparison of the difference this makes is: 10 years ago, we would take pictures of cells, and those pictures would describe, for example, what a cell looks like in response to getting treated with a drug. And then we would stain the sample to basically know what we should count to describe that change. We would do that manually, by pointing and clicking, many, many, many times. Instead of doing that nowadays, we'd use a software system that would programmatically calculate those changes.
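The programmatic counting Nathan describes can be sketched in miniature. The sketch below thresholds a tiny invented intensity "image" and counts connected bright blobs as cells; the grid, threshold, and blob shapes are all illustrative placeholders (real pipelines work on microscopy images with libraries such as scikit-image or CellProfiler):

```python
# Toy "count the stained cells" pipeline: threshold an intensity grid,
# then flood-fill to count connected bright regions. All values invented.
GRID = [
    [0, 9, 9, 0, 0, 0],
    [0, 9, 9, 0, 8, 8],
    [0, 0, 0, 0, 8, 8],
    [7, 7, 0, 0, 0, 0],
]
THRESHOLD = 5  # intensity above which a pixel counts as "stained"

def count_cells(grid, threshold):
    rows, cols = len(grid), len(grid[0])
    seen = set()
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] >= threshold and (r, c) not in seen:
                count += 1                 # found a new blob
                stack = [(r, c)]           # flood-fill the whole blob
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen:
                        continue
                    seen.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] >= threshold
                                and (ny, nx) not in seen):
                            stack.append((ny, nx))
    return count

print(count_cells(GRID, THRESHOLD))  # three bright blobs in this toy image
```

The same count, run over every well of a plate and every drug condition, is what turns "pointing and clicking many, many times" into a batch job.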
And that very simple idea is what Recursion Pharmaceuticals was originally built on, back in 2013, and it has since scaled to an organization that's processing billions or probably even more data points a day, systematically, in a way that humans could never accomplish.

AZEEM AZHAR: We've got the theoretical underpinnings; we understand some of the mechanisms within biology better. We also understand the limits of current approaches. We've got these new AI systems that can automate parts of the process. We have robots that can automate experiments with a greater degree of precision than human experimenters, and do many more of them. So what does that actually give us? What is the result of all of that?

NATHAN BENAICH: I think it gives us the tools with which to build a better map of biology and understand how systems work and how they fail. More importantly, we can stop being in an era where we need to constantly run physical experiments to understand what we might want to do next, if we want to try and figure out how to cure a disease. We can start actually using this data-driven map of biology to infer what we might want to do next. I think this combination brings us to a much more efficient way of doing science, one that's more repeatable and higher quality.

AZEEM AZHAR: So was there a hallmark piece of research, a breakthrough in the last 12 months, that felt a little bit like a Sputnik moment, or a moon landing, in this intersection?

NATHAN BENAICH: I think the overall challenge of predicting the structure and function of a protein or an mRNA molecule purely from its sequence (that overall topic, under which AlphaFold2 falls, along with work from other groups like the Baker Lab, a group at Stanford, and Salesforce Research) is, I think, one of the most exciting things. So to dig into it a little bit:
There's some work from Salesforce Research, which was around using language models, the same ones that, stylistically, look something like GPT-3: essentially taking a string of words, and then trying to predict the next word in that sequence, using a method called self-attention. Which is essentially getting a model to figure out which prior words are the most relevant ones to focus on, in order to figure out what the next word in the sequence will be. So, using models like that, and training them on protein language, which is basically a string of amino acids, and using that model to generate entirely new protein sequences, or versions of sequences that we have in the world today, but where we want to optimize or introduce a new functionality. They were able to show, not just computationally, but also by generating or synthesizing those proteins in cells, and running experiments that interrogate their functions, that these language models can generate artificial proteins with new function.

AZEEM AZHAR: So essentially, there are about 20 amino acids. There's an alphabet of 20 letters, and as we put them together, you create these proteins, bigger and bigger proteins, which then might be molecular machines that actually do some things within an organism. And we can perhaps create proteins to make nice materials, like bioplastics, or we can create proteins that might be therapeutic. That's the sort of rough idea, is that right?

NATHAN BENAICH: Exactly.

AZEEM AZHAR: And there was something that you said that I found particularly fascinating. You said there's the Baker Lab, which is a university research lab. That seems to make sense. There's DeepMind, which is this quite powerful research group in AI. But the thing that suddenly stuck out to me that you mentioned: Salesforce Research. Now, Salesforce, for those who don't know, is a company that makes software for salespeople, known as CRM software.
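The "predict the next token" idea over the 20-letter amino-acid alphabet can be illustrated with something far simpler than the self-attention transformers the Salesforce work actually used: a bigram counter that predicts the most frequent next residue. The training sequences here are invented toy strings, not real proteins:

```python
from collections import Counter, defaultdict

# The 20 standard amino acids, one letter each: the "alphabet" of proteins.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# Tiny invented "corpus"; real protein language models train on millions
# of sequences and use self-attention, not bigram counts.
SEQUENCES = ["MKTAYIAK", "MKTLLAYG", "MKVAYIAK"]

def train_bigrams(sequences):
    """Count, for each residue, which residue tends to follow it."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, prefix):
    """Most frequent amino acid observed after the last residue of prefix."""
    options = counts.get(prefix[-1])
    if not options:
        return None
    return options.most_common(1)[0][0]

model = train_bigrams(SEQUENCES)
print(predict_next(model, "MK"))  # after K the corpus has T twice, V once -> T
```

A transformer replaces the single-previous-residue lookup with attention over the whole prefix, which is what lets it capture the long-range dependencies that determine protein structure and function.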
I found it quite remarkable that that research lab is involved in what is quite fundamental biological research. Was it surprising to you that Salesforce Research is doing this kind of fundamental science research as well?

IAN HOGARTH: It is pretty surprising, to be honest. They do have a pretty large base of pharma clients, so at some level this kind of research probably acts as a warm introduction to a large category of buyer for Salesforce's core products. I think that's in some ways the interesting thing about these corporate research labs. They're quite hard to direct. You often end up with research directions quite orthogonal to the company's interest.

NATHAN BENAICH: But I think it also reflects what we discussed earlier, about how surprisingly easy it is to translate billions in R&D on NLP for human language into a new domain, without having to entirely retool your company, or entirely relearn a new field.

AZEEM AZHAR: So in the case of this [inaudible 00:16:21] proteins, what does that actually mean for us? What does that mean in terms of an industrial application at the other end?

NATHAN BENAICH: I think the most exciting implication of this is to hopefully accelerate our transition away from a petrochemicals-driven world, where most of the materials that we create and consume around us are based off of a very small number of chemical backbones. Instead, moving towards a world where we either use microbes or various living organisms, or perhaps even what are called cell-free systems, where there is no cell, it's just a tube with some liquids in it, to create the materials that we need to replace petrochemicals, but potentially even go into new areas of functionality that petrochemicals cannot accomplish.

AZEEM AZHAR: A lot of the original research that came out of deep learning had the aspect of being a little bit like, almost like a surface-level toy kind of project.
A lot of the initial machine vision models, and the generative models, would create these sort of fantastic, Van Gogh-like renditions. You would see them showing up in filters on social media products. Is there a direct line between the research and the work that happened in those areas, and the kinds of breakthroughs that we're seeing now, in these more fundamental areas?

NATHAN BENAICH: I think yes and no. There's certainly computational [primitives 00:17:55] that are shared between processing images on the internet, and then processing images of cells, and we see that in the form of how … In a way, no, because we'd be remiss not to use or encode or represent some of the immense background information that we have on biological problems into learning systems, as opposed to trying to learn everything from data, or learn everything from scratch. I think that's where there's increasing entrepreneurial appetite, actually, to unite those two fields in a way that, for example, Big Pharma hasn't been doing for a very long time. In a way that Big Petrochemical and Big Chemical companies haven't been doing for a very long time. Because it's not just technical capability, but it's also the philosophical approach to problem-solving, and the way that teams are put together, and where resources are prioritized or deprioritized. So those similarities and differences are almost what creates a new, unique opportunity for entrepreneurship in this field, that will actually have a shot at making a big impact, and not necessarily being caught up very quickly by incumbents.

AZEEM AZHAR: What about an example of a firm that's done this and has actually got the results? Beyond the theory, who is making money with these technologies right now?

NATHAN BENAICH: In the pharma space, the business that has demonstrated the most in this field is UK-based Exscientia, which is a drug discovery company.
It focuses on lots of different data, depending on what's available, whether it's protein 3D structure or a high-throughput experiment of some form, or sequencing data, and effectively tries to generate new kinds of drugs that have capabilities exceeding what chemists have generated today. We highlighted it because the number one criterion to assess whether any company in this domain has what it takes to be a long-term player in the pharmaceutical industry is whether its software has generated drugs that are actually getting tested in people. It's the only company that has generated not one, not two, but three assets in the last 12 to 18 months that are all in clinical trials, in different therapeutic areas, like neurological diseases and cancer.

AZEEM AZHAR: Mm-hmm (affirmative).

IAN HOGARTH: Can I just give you a juicy thing on the biology stuff? The thing that we highlighted around the ImageNet moment for biology is that you've got this huge number of research papers at the intersection of machine learning and biology. We talked a lot about drug discovery just now, but there's this huge workflow around producing a new drug, that includes lab work, it includes other forms of research within pharma companies, and then finally, it includes things like actually manufacturing and producing the drugs. And there's a whole host of companies springing up around that process, that are doing interesting things. We profiled one called PostEra, that is doing medicinal chemistry as a service, using transformers. They're tackling some of the stuff that happens later in the process. You've got a company like Causaly, that much earlier in the process is helping people inside pharma companies to more effectively analyze all the literature out there around a given target. And then you've got all these lab robots popping up, who are basically automating parts of the lab work, whether it's the vision piece or actually the robotics of manipulating samples.
So you've got this interesting thing where, throughout the entire value chain, machine learning is being applied, which I think in the medium term will cause a real productivity explosion within pharma, because all these different aspects of the value chain are being disrupted simultaneously.

AZEEM AZHAR: Is that something that happens through many different players in the value chain, or are new entrants going to be full stack in their ability to take up that opportunity?

IAN HOGARTH: The way it appears right now to me is almost like a constellation of mid-size startups sprouting up around the larger pharma companies. So you've got the likes of Exscientia and Recursion, who are sort of de novo machine learning plus biology challenger pharmas. But you've also got this wave of new service providers, who are coming in, offering quite a narrow service, with machine learning as a major enabler of that technology, and selling it in to all the largest pharma companies in the world. It feels like a much larger landscape of machine learning enabled software companies, sitting around all these big biotechs.

AZEEM AZHAR: Let's turn to the question of talent as well. There has been a lot of discussion about the shortage of talent in the AI domain, and that there is a talent war that's going on not just between companies, but between countries. Ian, I'd love to hear your thoughts on how real that talent war is.

IAN HOGARTH: In the day to day, if I talk to the 50-plus companies I've invested in that have machine learning as a core part of their offering, they are all experiencing it. Anecdotally, we see it every single day across the companies we invest in. I'd say the talent dimension is kind of complicated, because now you have so many different ways you could apply that talent. If you gravitate towards frontier research, you can work at an organization like DeepMind or OpenAI or Anthropic.
If you're much more interested in applied work, and you have a specific domain interest, you can go work on a food delivery robotics company, or one of the machine learning meets biology companies we've just been discussing. As the machine learning space industrializes significantly, there are just more and more niches to apply that talent. DeepMind are not really competing for the same talent that a number of the startups I've backed are competing for, because they're interested in different problems. The short answer is yes, there's still a big gap between the number of machine learning engineers one could hire, and the number that are looking for jobs. But I think it's also become a world of niches now, where you're really looking for someone who's highly specialized or interested in your domain.

AZEEM AZHAR: There's a data point in your report that I found particularly interesting, which was that technical talent in the US with AI skills had grown by about 26, 27% in a short three-and-a-half to four-year period. Is that sufficient growth for what the industry needs?

IAN HOGARTH: The view I've taken for a long time is that AI remains under-hyped. If you think about the discussion we've had on biology, the same thing is happening across almost every area of human activity where machine learning is starting to find an application and starting to create interesting companies. So you've got machine learning creeping into almost every facet of human life. As a result of that, the number of jobs for talented machine learning engineers is only going to keep growing, as the applications grow. It's a sign of, I think, what's going to happen over the next decade or two.

AZEEM AZHAR: How does that desire for talent actually play itself out? There are so many different dynamics going on here. We've got every industry seemingly needing to use this technology. We've got the fact that this technology is general, but it's also increasingly specialist.
You also have competition between countries for that talent, and for their own homegrown talent. How does it evolve? Is it simply a case of salaries going up? Or are there interesting things that nations are doing to tackle what could be a burgeoning shortage of this skill base?

IAN HOGARTH: Back in 2018, I wrote an essay called AI Nationalism, essentially predicting how increasing capabilities of AI systems would lead to nation states playing a different role. The most obvious thing that a nation state can do when it comes to talent is just incentivizing more training, so you get funding of PhD programs, things like that. That’s all happened over the last few years, with all these national AI strategies trying to do that sort of thing, including in the UK and the US. I think the big dislocation we’ve experienced is, first of all, most of these machine learning jobs are now remote jobs. This is now a global competition. Secondly, you have machines competing for human jobs, within machine learning. So there are all these machine learning systems designed to basically reduce the number of humans needed to build a machine learning system, AutoML being an example of that. I think the two ways in which this extreme sort of supply-demand imbalance gets resolved are hiring people in other geographies to be part of your machine learning team, and using machine learning to not have to have as large a machine learning team. And those will be the two things that bring down the pressure on the supply side over the next few years.

AZEEM AZHAR: We are really automating the automators, in order to shore up the labor shortage.

IAN HOGARTH: We’re trying to, yeah. I think that in some ways you can look at these large pretrained models as a good example of that.
If some people work on a very, very large pretrained model that has an ability to generalize, and then they make that available for everyone to use, that means a startup doesn’t have to build their own large pretrained model; they can use it and gain those kinds of performance benefits. I think we’re finding different ways to basically take machine learning engineers out of the loop, because they are so scarce and so expensive.

AZEEM AZHAR: It actually does remind me of when I read the history of the Ford production system, back at the turn of the 20th century. There were a very small number of engineers in any given factory, and a very large number of operators who worked on the factory floor, handling each of the production steps. Additionally, outside of the factory, of course, there were a number of mechanics who could maintain the internal combustion engine itself. It almost seems like we’ve seen this picture play out before with technologies that are sophisticated, which is: only a small number of people can really design them and create them and instantiate them. And then there is a wider spectrum of lower-skilled, lower-trained people who can keep them running.

IAN HOGARTH: Exactly. If you think about software development, the last couple of decades, it’s just gotten vastly more productive. You’ve got specialized languages, you’ve got so much custom infrastructure around writing software, and machine learning hasn’t really had that. Actually, I would say the space has been kind of not very productive, because the infrastructure around being a machine learning engineer hasn’t really evolved. But in the last few years, it’s really changed, so there’s this whole wave of companies doing MLOps, which is all the infrastructure that makes you more productive when you are building a machine learning system.
I think that Nathan and I found the year before this, in the State of AI Report, that 25% of the fastest-growing machine learning GitHub repos were for MLOps, and we’ve seen across our investing that there’s a burgeoning number of companies building tools for machine learning engineers to be more productive. So I think you’re spot on with your analogy to Ford, Azeem, as kind of this pre-productivity phase where you have a factory with a lot of people in there, and it’s all a bit messy. As things industrialize, you have a much more streamlined factory with a lot more tooling that saves work.

AZEEM AZHAR: It speaks perhaps to just where we are in the evolution of this technology: although we’re a decade into this current wave, from the academic breakthroughs of 2010 and 2011, and we’ve seen AI seemingly appear everywhere, and tens of billions of dollars have been invested, it still feels that from a practical deployment of the technology, we are, to use an American baseball phrase, in the early innings of the game. Is that right?

IAN HOGARTH: I think that’s completely right. There’s a brilliant economist, that I know you’re familiar with, Azeem: Carlota Perez.

AZEEM AZHAR: Mm.

IAN HOGARTH: And she really charts the interplay between financial capital and technological revolutions. I’ve wondered for a long time what phase of her cycle we’re in with AI. I think we’re actually in the speculation phase, where we still are speculating and pouring money into trying to find use cases, in the bubble phase, if you’d like, of AI, where we have not even vaguely reached the top of it yet. I think the number of attempts to invest in really, really frontier projects is a good example of that, whether it’s the funding of projects like OpenAI, or DeepMind, or Anthropic. You have people putting large sums of money into very, very speculative, long-term research directions.
There’s tons of discoveries happening, a huge amount of capital coming in, but it’s still insanely early, if you think about the aspirations that these researchers have, and how intelligent they want to make these systems in the long run.

AZEEM AZHAR: That lays out, actually, a question about what the nature of this technology really is. Again, going back to fundamentals, if we think about other general purpose technologies, like the telephone or electricity, or the internal combustion engine, there was a period where they were largely figured out. When we take that analogy and we look at where we are with AI, one of the things that strikes me is that a lot of the hay that has been made over the last 10 years has been, in a way, tweaking with and exploiting a lot of the same type of approach. There is still significant research, significant theoretical work that can be done, that elevates the capabilities of this technology. We could be in one of two different places. One is, we’ve sort of figured it out, and we just have to keep going the way we are going, and we will create enormous industries. The other is, actually, there are some significant theoretical breakthroughs that might still be required, and that are outside the purview of pure commercial exploitation.

NATHAN BENAICH: Doing this report, it’s always fascinating to see where research is and where industry is. In many cases, they’re actually sometimes quite far apart, in the sense that industry is applying, from a research standpoint, fairly basic techniques, but still exploiting a lot of economic value, and creating a lot of economic value for their customers. Solutions like fraud detection, which protect payments online, are probably not as complicated, technically speaking, as multi-agent reinforcement learning, or pick whatever sexy buzzwords you want from your favorite arXiv paper, but they create a significant amount of economic value.
So, in that sense, we still have a lot more to run on just implementing good, reliable machine learning systems that are not going to hit the cover of Nature, but are going to create a lot of economic value for consumers and businesses. Then we have domains which might need more breakthroughs in fundamental learning capabilities, which are potentially the more scientific domains or the more, quote unquote, deep tech domains, or areas like autonomous driving, et cetera, that are still further out and need more development.

AZEEM AZHAR: Apart from autonomous driving, what can’t we do with the AI technologies that we have today?

NATHAN BENAICH: It’s hard to know what
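The AutoML idea raised earlier in the conversation, letting a search loop rather than an engineer pick the model, can be reduced to a minimal toy sketch. The data and candidate grid below are invented for illustration; real AutoML systems search whole architectures and pipelines, not a single slope:

```python
# Toy "AutoML": the machine searches the model space instead of an
# engineer hand-tuning it. A grid search picks the slope of a
# one-parameter model y = slope * x that best fits the data.
data = [(1, 2.1), (2, 3.9), (3, 6.2)]

def loss(slope):
    """Sum of squared errors for y = slope * x on the data."""
    return sum((y - slope * x) ** 2 for x, y in data)

# Candidate slopes 0.1, 0.2, ..., 5.0; the loop, not a human, picks.
grid = [round(0.1 * k, 1) for k in range(1, 51)]
best_slope = min(grid, key=loss)
```

The same generate-candidates-then-score loop, scaled up, is what lets a smaller team ship a working model.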
news | Cormac Ó Conaire | Instead of taking our jobs, what if AI just lets us give our lives more purpose? | Emerging technology offers the real potential for humans to achieve a post-professional era where we are not defined by our jobs, but a new purpose to enhance the human condition.The concept of robots taking our jobs goes far beyond comic books and sci-fi movies. From the first textile machine in the 18th century, to the internet in the 1990s, machines have been making our professions obsolete for centuries.Read Full Story | https://www.fastcompany.com/90702904/instead-of-taking-our-jobs-what-if-ai-just-lets-us-give-our-lives-more-purpose?partner=feedburner&utm_source=feedburner&utm_medium=feed&utm_campaign=feedburner+fastcompany&utm_content=feedburner | 2021-12-06T06:00:48Z |

The concept of robots taking our jobs goes far beyond comic books and sci-fi movies. From the first textile machine in the 18th century, to the internet in the 1990s, machines have been making our professions obsolete for centuries.

Artificial intelligence (AI) is the latest technology on the block threatening to replace human labor. Whether it’s self-checkout systems in supermarkets, or customer service chatbots, AI has already replaced a number of highly repetitive jobs. Despite promising a more streamlined and efficient workforce, the more immersive this emergent technology becomes, the greater the undercurrent of distrust. Indeed, if this accelerates, robots could replace over two million workers in manufacturing alone by 2025, according to a study by economists at MIT and Boston University. But as this technology becomes increasingly intelligent, there is a higher chance for AI to outsmart humans and make even the highest-skilled professions obsolete.

But would that really be the end of the world?
In fact, might emerging technologies unlock a potential future that leaps beyond the status quo?

In a future where machines could replace our need to contribute to the labor market, there is potential for us not to be defined by our jobs, but by a newer purpose to enhance humanity and the planet.

History tells us that industrial revolutions are always met with trepidation. Workers revolted against the emergence of steam power in the 18th century, whereas internet skeptics were innumerable thirty years ago. Whenever groundbreaking new technology or processes come along, it is natural for people to fear for their livelihoods.

But as technology evolves, our role on this planet evolves with it. We have already adapted and developed in tandem with emergent AI systems and embedded these into our daily lives. Take the driverless transportation systems shuttling us between airport terminals, or the dating algorithms helping us find ‘the one’.

Looking ahead, AI has the potential to revolutionize our collective future even further. Its promise is in its ability to make unprecedented analyses of vast amounts of data, which could achieve anything from identifying and preventing early-stage breast cancer, to studying factory operations in real-time to fix machines remotely.

AI has the greatest potential when it augments human capabilities, enabling us to work with greater speed, agility and efficiency. Imagine AI co-workers that facilitate creative brainstorming, or digital life-coaches that empower us to make more informed decisions based on an analysis of our data. A more extreme example is Neuralink, the controversial brain-machine interface in development that could enable humans to control machines with their minds.

These developments embody what we call Industry 4.0: the fourth and current industrial revolution which focuses on the enhancement of computers through smart and autonomous systems, fueled by data and machine learning.
We are well in the throes of carving out a more optimized workforce with the help of AI, in which humans and machines become increasingly intertwined.

But what if we went even further and redefined our jobs to encompass deeper meaning and purpose? And built a workforce that is more conscious of the long-term needs of society?

A professional world where the lines between human and business value are blurred is already on the horizon. We are currently heading towards Industry 5.0, which aims beyond efficiency and productivity as the sole goals, and prioritizes the contribution of industry to society. It centralizes worker well-being, uses new technologies to provide prosperity beyond jobs and growth whilst considering the environmental and social impact on industries.

Look at Iceland, where the four-day working week has been an overwhelming success, or Microsoft Japan’s shorter work-week trial which resulted in happier staff and a 40% increase in productivity. Governments and companies around the world are already questioning what life looks like with less emphasis on work, while the gig economy has made it more mainstream to work where and when you want, adding to that freedom for people to define themselves beyond just their professional lives.

In this new working culture, developing AI could focus on upgrading our mental and physical abilities to empower us to work smarter, not harder. The more “productive” we become in a shorter period of time, the more time we can dedicate to purpose-driven endeavors that prioritize planet and people over profit.

Alternatively, imagine a scenario where not enough workers are available to provide real-time care for the ageing population. Following the current and relentless pressures on care workers we saw during the pandemic, it’s not unbelievable. Emotive AI and computers with empathy would be able to provide an emotional response to real human needs.
By freeing people from necessary and important roles such as these, we could once again carve out more time towards greater responsibilities that become increasingly urgent as the climate crisis approaches a tipping point.

Industry 5.0 is a potential stepping stone towards a more sustainable, human-centric society. Emerging technology offers the real potential for humans to achieve a post-professional era where we are not defined by our jobs, but a new purpose to enhance the human condition and the world.

But to achieve this future, issues of trust, risk and regulation of course need to be handled with care and forethought. Stanford’s one-hundred-year study on AI tells us that we are at an inflection point where the promise and perils of this technology are becoming real. While it is bringing innovations and benefits that we once could only dream about five years ago, using AI technology to replace human decision-making will inevitably create risks and unforeseen consequences.

AI is also created by humans and trained on historical data which is not devoid of discrimination and inherent biases, meaning that inequality could be amplified if there is unequal access to information and participation in AI. What’s more, humans are inherently productive creatures. Many of us like our jobs—they form a large part of our identity—and would view the concept of a post-profession society with skepticism if not trepidation, as an alternative future has not yet been imagined.

Overall, the question isn’t whether machines will act with good intent, or eventually become more intelligent than people. In some contexts, they already are. The focus today should be what we can do now to make sure the path is set towards an optimistic future.

With emerging technology like AI promising to replace repetitive work, a new future is unlocked where human purpose is the driving force behind our professional lives.
As our impact on the planet and collective responsibility to foster a more inclusive society reaches a tipping point, the value and potential of emerging technology has never been more important. It is the duty of designers, innovators, and businesses to conceive the ideal future we want, and design tangible stepping stones to getting there. We must not limit our ambition when painting this future; what is now achievable was once only imagined. At this critical juncture in defining the role and responsibility of emerging technologies, now is the time to imagine a future in which we align our work to serve people and planet, both on an individual and a business level.

Cormac Ó Conaire is the chief design officer at Design Partners.
news | werner@allthingsdistributed.com (Dr. Werner Vogels) | Tech Predictions for 2022 and Beyond | 2022 will be an exciting year for technology, with it pushing all of us, and our planet, forward in the process. | https://www.allthingsdistributed.com/2021/12/tech-prediction-for-2022-and-beyond.html | 2021-12-03T00:37:50Z |

We have reached an inflection point. After AWS pioneered cloud technology more than 15 years ago, cloud infrastructure has evolved to a place where we are seeing all parts of the cloud reach practically anywhere on the planet, and even into space. The cloud has allowed what was once science fiction to become science fact. Models and techniques in the artificial intelligence (AI) and machine learning (ML) realm have gotten better and better, so much so that we see glimpses of new kinds of use cases emerging that we previously only imagined in movies and comics. We are entering a phase where data is abundant, access to it is almost instantaneous, and our ability to make sense of it in new and subtle ways is practically automatic. But this technology is not replacing humans; it is augmenting how we engage with the world. 2022 will be an exciting year for technology, with it pushing all of us, and our planet, forward in the process.

Prediction One: AI-supported software development takes hold

Software development is a creative process, but one that has many repetitive tasks. In 2022, ML will begin to play a major role in augmenting software developers’ workstreams, helping them create more secure and reliable code.

Since the advent of the cloud, we’ve seen companies across the world bring new ideas to their customers at scale faster than ever.
However, even with this acceleration in product delivery, people still spend a disproportionate amount of time in one area of technology: software development.

While there have absolutely been improvements in programming languages, software development kits (SDKs), and other tools that enable developers to be more efficient, these have all been minor evolutions. There haven’t been the major leaps forward that we’ve seen in other areas of technology yet. Over the past few years, we’ve started to see the foundations laid for what will become the future of software development. Tools like Amazon DevOps Guru, Amazon CodeGuru, GitHub Copilot, and GPT-3 are the first steps in what I see as the future of development, where ML is used in code development and software operations workstreams to help developers become more effective. In the coming years, I believe that we are going to see an explosion of capabilities in this area.

ML will free developers from the mundane parts of their jobs, such as code reviews and bug fixes, the undifferentiated heavy lifting of their world, and allow them to focus more on creating. The same technology will help us write sophisticated systems faster than ever and in ways that open the door to a new class of developers. Imagine a scenario where a builder describes how they want an app to operate, and then the tools interpret the request through natural language processing and deliver back the fully functional code. On the backend, ML techniques will also check for software bugs and continuously verify that the software is doing what it is supposed to do. This kind of ML-supported software development will be a game-changer by allowing more people across an organization to help define and build software and software-driven products. Longer term, this same approach of ML freeing us up to create more will be used in all kinds of other areas, including media creation.
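The "continuously verify that the software is doing what it is supposed to do" loop can be sketched as generate-and-test: candidate implementations are checked against an executable specification, and the first one that passes is accepted. Here the candidate list is a fixed stand-in for a code-generation model's output; all names are illustrative, not any real tool's API:

```python
def spec(add):
    """Executable spec: the function must behave like integer addition."""
    return add(2, 3) == 5 and add(-1, 1) == 0 and add(0, 7) == 7

# Stand-in for generated candidates; a real system would sample these
# from a code-generation model rather than a hard-coded list.
candidates = [
    lambda a, b: a * b,   # plausible but wrong
    lambda a, b: a - b,   # also wrong
    lambda a, b: a + b,   # correct
]

def first_passing(candidates, spec):
    """Return the first candidate that satisfies the spec, or None."""
    for f in candidates:
        try:
            if spec(f):
                return f
        except Exception:
            continue  # broken candidates are simply rejected
    return None

chosen = first_passing(candidates, spec)
```

The human's job shifts from writing the implementation to writing the spec that the machine checks against.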
We will see generative AI techniques increasingly create movies, music, and literature. Just as importantly, in a similar way, this technology will also start to play a role in detecting fake content, scams, and fraud. 2022 is the year where AI/ML takes on the heavy lifting in the lives of developers.

Prediction two: The everywhere cloud has an edge

The cloud will extend into every locale via purpose-built devices and specialized solutions. In 2022, we’ll see these solutions bring all the muscle of the cloud to transform warehouses, restaurants, retail stores, farms, and more.

When talking about supply chain and transportation, we refer to the last leg of a journey as the last mile. It’s that final trip to your door. In ecommerce, it is a challenging part of the journey as there are many variables depending on the location. For instance, think of the difference between delivering a package to a customer on a crowded street in Tokyo and delivering it on a rural road in the United States. At Amazon, we’re developing specialized solutions for this last mile delivery, with innovations like Amazon Scout, a fully electric autonomous delivery device. The cloud has its own challenges in its last mile, and I predict a number of innovations are coming to help address it.

Over the last 15 years of AWS, we’ve built out an impressive global infrastructure that spans over 25 geographic regions and 81 Availability Zones across the globe, with more than 310 points of presence to serve over 245 countries and territories. But now, we are seeing cloud services that are pushing beyond the bounds of our traditional AWS Regions and out to the edges of the network, or, like in ecommerce, the last mile.

We’ve already seen the cloud go practically everywhere. The shift we’ll witness in 2022 is the cloud becoming highly specialized at the edges of the network.
To fully realize the benefits of the cloud in workshops and warehouses, in restaurants and retail stores, or out in remote locations, there must be tailored solutions at the edge. The parallels to Amazon Scout in the cloud are devices like Amazon Monitron and AWS Panorama, purpose-built devices that bring cloud capabilities to the edges of the network to do a specific job. They bring all of the high security, advanced features, and speed of delivery of the cloud, but they can be placed almost anywhere in the world. Yet, rather than isolated boxes sitting someplace, these devices become true extensions of the cloud with a link back to all of its core capabilities.

What we will see in 2022, and even more so in the years to come, is the cloud accelerating beyond the traditional centralized infrastructure model and into unexpected environments where specialized technology is needed. The cloud will be in your car, your tea kettle, and your TV. The cloud will be in everything from trucks driving down the road, to the ships and planes that transport goods. The cloud will be globally distributed, and connected to almost any digital device or system on Earth, and even in space.

Prediction three: The rise of smart spaces, especially in senior care

In 2022, our homes and buildings will become better assistants and more attentive companions to truly help with our most human needs. The greatest impact in the next few years will be with the elderly.

One of the things that inspires me the most about technology is its role in solving truly hard human problems. After years of meeting with customers across the globe that are using the cloud to do just that, we had an idea to begin documenting their stories through a show called Now Go Build.
One of the inspiring customers we featured on the show was a company in Japan named Z-Works, which focuses on improving elder care and helping scale it through technology.

The problem Z-Works faced was how to offer smart and attentive care for seniors in Japan when there are fewer and fewer people available to do the job. The solution this company arrived at was to arrange sensors in beds and throughout rooms in senior homes and connect all of them to the cloud for continuous data analysis. The sensor arrays don’t just monitor vital signs. Because they run machine learning models trained in the cloud, the sensors can also tell if a person goes to use the bathroom and simply doesn’t return. In that case, the system can notify someone on duty to check on the resident’s well-being. In essence, it is a very human response made possible by a very smart space. People would do that intrinsically, if they were aware of the need. In this case, ML models, fed by a private and secure stream of data, arrive at a similarly intelligent action to signal people. We are approaching a point where concepts like ambient computing, collections of IoT sensors, remote/mobile data collection and processing at the edge, and smart devices like Amazon Alexa will have the positive impact we always knew they would.

Over the next several years, we’ll see smart spaces come to life in a number of settings, but none with higher impact than elder care. It will be a combination of the simple tasks you would expect, from dimming lights, locking doors, and switching off the oven if someone forgets, to the more contextual and proactive things that technology can do: asking questions when normal living patterns diverge and enacting common sense solutions when necessary.
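The "left and didn't return" pattern just described reduces to a small piece of logic over a stream of sensor events. The event names and the 20-minute threshold below are illustrative assumptions, not details of Z-Works' actual system:

```python
from datetime import datetime, timedelta

def absence_alert(events, now, max_away=timedelta(minutes=20)):
    """events: (timestamp, kind) pairs, kind in {"left_bed", "in_bed"}.

    Returns the time the resident left if they have been away longer
    than max_away with no return event, else None (no alert needed).
    """
    away_since = None
    for ts, kind in sorted(events):
        if kind == "left_bed":
            away_since = ts
        elif kind == "in_bed":
            away_since = None  # resident came back; clear the timer
    if away_since is not None and now - away_since > max_away:
        return away_since
    return None

# Resident left bed at 02:00 and has not returned by 02:30 -> alert staff.
events = [(datetime(2022, 1, 1, 2, 0), "left_bed")]
alert = absence_alert(events, now=datetime(2022, 1, 1, 2, 30))
```

The hard part in practice is not this rule but the ML that turns raw sensor readings into reliable "left_bed"/"in_bed" events.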
It will result in taking better care of people, and in the case of an aging population, it means that we will create a new class of homes so people can actually stay at home.

Prediction four: Sustainability gets its own architecture

In 2022, developers will begin to make sustainability-conscious decisions about the systems and applications they are building. They will seek new approaches to cloud architectures that optimize for the needs of the planet as well as the needs of end users.

As developers, we are trained to think about how to optimize our architectures for factors like security, performance, reliability, and cost. In 2022, you can add sustainability to that list. What we will begin to see in the coming years is developers taking an active role in building sustainability-conscious architectures that take into account not just the problems they are solving, but the planet as well.

As consumers, we have long wanted our videos and music streamed in higher quality, our webpages to load faster, and the ability to store more and more photos, but people are also beginning to realize the negative impacts this convenience can have on the environment. Do consumers truly need a download to happen as fast as absolutely possible, or can we help them make informed decisions about storing an asset in a more carbon-friendly way with a minor penalty in download speed? Likewise, can we provide the ability to stream a video in slightly lower quality than 4K to reduce our carbon footprint and, by doing this at scale, have a positive impact on the planet? Sustainable architectures will take decisions like these into account.

Developers will take an active role in reducing the carbon footprints of their applications. This will happen in a variety of areas, like taking into account where in the world they choose to run their applications to take advantage of green energy in the grid, considering the time needed to process a task, or even specifying the chipset they use.
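One concrete form such a sustainability-conscious decision could take: pick the deployment region with the lowest grid carbon intensity that still meets a latency budget. The region names are real AWS identifiers, but the carbon and latency numbers below are made-up illustrations, not actual figures:

```python
# Hypothetical per-region grid carbon intensity (gCO2e/kWh) and
# user-observed latency; illustrative values only.
REGIONS = {
    "eu-north-1": {"carbon": 30, "latency_ms": 60},
    "eu-west-1":  {"carbon": 280, "latency_ms": 45},
    "us-east-1":  {"carbon": 380, "latency_ms": 20},
}

def greenest_region(max_latency_ms):
    """Lowest-carbon region whose latency fits the budget."""
    ok = [r for r, v in REGIONS.items() if v["latency_ms"] <= max_latency_ms]
    if not ok:
        raise ValueError("no region meets the latency budget")
    return min(ok, key=lambda r: REGIONS[r]["carbon"])
```

With a relaxed budget the greenest grid wins; as the latency requirement tightens, the choice degrades gracefully toward the nearest region, making the performance-versus-carbon trade-off explicit in code.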
When operating at web scale, small savings can scale out to have a large impact. We will also start to see developers further consider the power of switching off resources. “Always on” is a mantra that many build to, but it comes with a cost. When considering the idle resources needed for an always-on architecture, we may start to see a new mantra emerge: that the greenest energy is the energy we don’t use. This doesn’t mean we don’t architect for high availability; it just means being more sustainability-conscious in our architectural decisions.

Let’s be clear: technology uses energy, and at AWS, we are committed to making smart infrastructure choices. We’re on track to run on 100% renewables by 2025, five years ahead of our original goal of 2030. But making the cloud run on wind, sun, or hydropower is only part of the sustainability responsibility that we developers, and really anyone running a company, must shoulder. It is a shared responsibility and a genuine effort that customers, employees, and potential employees will increasingly demand to see.

Prediction five: A new wave of connectivity will bring about a new class of applications

Low Earth orbit (LEO) satellites are set to bring affordable broadband to every corner of the planet. This will change the lives of billions of people as teachers, students, small businesses, and virtually anyone gets online.

Over the next five years, more than 20,000 satellites are going to fan out in LEO above our planet. Among them will be about 1,500 from Amazon’s Project Kuiper, a network of satellites with the goal of delivering fast, affordable broadband to unserved and underserved communities around the world. (The first are planned to go into orbit in the fall of 2022.) What I see coming along with this planet-scale broadband are a whole new class of applications that will benefit from it.

Today, most digital applications are constrained by the existing network, designed for low bitrates or intermittent connectivity.
In some cases, we have digital applications and systems that are designed to be operated offline, but these typically go out of date quickly or have limited functionality compared to their connected counterparts. Think about a traditional GPS navigator as compared to using a mobile app on your phone. But what happens when you are no longer constrained by connectivity, bandwidth, or high latency? A world of untapped possibilities will become reality when affordable connectivity reaches these places. Enter LEO satellites.

With ubiquitous connectivity, we start to unlock use cases that simply aren’t possible today. Try to imagine what happens in schools when every kid can use the same learning tools, or when small and medium-sized businesses get hold of digital tools they need to win more customers, grow their businesses, and create jobs in rural and remote communities around the world. We can more readily monitor reforestation efforts in remote locations, and we can better track and take quicker action on disaster situations such as fires and floods. Large enterprises with remote assets, such as solar installations, heavy equipment, or far-flung buildings, will be able to better optimize the use and maintenance of those assets. Transportation companies with vehicles, planes, and vessels on the move will have access to continuous data streams uploaded to the cloud, and regular updates downloaded to vehicles and vessels on the ground, in the air, and on the water. Ubiquitous connectivity will take us from intelligent spaces to intelligent cities, intelligent countries, and finally, toward an intelligent world.
news | Sam Guzik | Better paywalls won’t save us from what’s coming | In 2022, emerging technologies will change how users find and consume the news. The confluence of new search algorithms based on natural language models and consumer adoption of wearable devices will challenge the stability of the reader revenue models we’ve worked so hard to build over the past few years. Whether the coming inflection point... | https://www.niemanlab.org/2021/12/better-paywalls-wont-save-us-from-whats-coming/ | 2021-12-07T19:36:05Z |

In 2022, emerging technologies will change how users find and consume the news. The confluence of new search algorithms based on natural language models and consumer adoption of wearable devices will challenge the stability of the reader revenue models we’ve worked so hard to build over the past few years. Whether the coming inflection point represents an existential threat or an opportunity for growth depends on how we, as an industry, respond.

Throughout the pandemic, we’ve celebrated the wins news organizations have realized in maturing their subscription and membership strategies. That success was driven by a surge in demand for reliable information in an exceedingly uncertain time and increasingly sophisticated tactics for deploying paywalls and marketing messaging.

But while we’ve focused on optimizing subscription flows, the technological and economic landscape has shifted. Over the summer, researchers at Google published a paper laying out a vision for a new type of search engine. Instead of delivering users a list of links in response to their query, a natural language model would directly summarize information from multiple sources on the Internet.

That aligns with a broader shift in how users are searching for information. More than 40% of internet users around the world say that they use voice search, whether deployed in AI assistants or as a feature in browser-based search engines.
That suggests that consumers are getting more comfortable interacting with their devices by speaking commands (and hearing the results). As we contend with how natural language search interfaces will upend what we know about audience strategy, we also need to prepare for a world where users increasingly consume news on wearable devices. The evidence tells us that these trends will continue in 2022. Users will spend more time with devices without screens. They will get information directly from AI assistants that can summarize information without sending the user to a news website. The question for us is: What are we going to do about it? How will we fund our newsrooms if users’ browsing habits change and they don't hit paywalls as they do today? What’s the value of news if users engage with devices that give them an always-on stream of information? How will the value of our newsgathering change if users spend more time on immersive digital platforms that record their interactions automatically? In 2022, newsrooms can take the time to think about where there is opportunity to grow in the midst of this uncertainty. Now is the moment to think about what kinds of skills we'll need in the newsroom and on the business side to stay competitive. There is time to build partnerships, to develop new products, and to reconsider how our journalism creates value. Yes, news is essential. Yes, our products deliver immense value to the communities they operate in. But news is just one of many services consumers subscribe to. Being essential to democracy is great, but we need to provide a product that people will invest in. Unless we build a future for our organizations, there's no guarantee we'll survive. Sam Guzik leads product strategy for WNYC. | Content Synthesis/Information Retrieval Or Search | Business and Financial Operations | null | null | null | null | null | null |
|
news | Ray Wang | Catalog and analyze Application Load Balancer logs more efficiently with AWS Glue custom classifiers and Amazon Athena | You can query Application Load Balancer (ALB) access logs for various purposes, such as analyzing traffic distribution and patterns. You can also easily use Amazon Athena to create a table and query against the ALB access logs on Amazon Simple Storage Service (Amazon S3). (For more information, see How do I analyze my Application Load […] | https://aws.amazon.com/blogs/big-data/catalog-and-analyze-application-load-balancer-logs-more-efficiently-with-aws-glue-custom-classifiers-and-amazon-athena/ | 2021-11-16T17:06:15Z | You can query Application Load Balancer (ALB) access logs for various purposes, such as analyzing traffic distribution and patterns. You can also easily use Amazon Athena to create a table and query against the ALB access logs on Amazon Simple Storage Service (Amazon S3). (For more information, see How do I analyze my Application Load Balancer access logs using Amazon Athena? and Querying Application Load Balancer Logs.) All queries are run against the whole table because it doesn't define any partitions. If you have several years of ALB logs, you may want to use a partitioned table instead for better query performance and cost control. In fact, partitioning data is one of the Top 10 performance tuning tips for Athena. However, because ALB log files aren't stored in a Hive-style prefix (such as /year=2021/), the process of creating thousands of partitions using ALTER TABLE ADD PARTITION in Athena is cumbersome. 
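Note that ALB does embed the date in each object key, just not in Hive style: keys end in .../<region>/yyyy/mm/dd/<file>. As a rough illustration of what the crawler has to recover, here is a small Python sketch (a hypothetical helper, not part of the post's solution) that pulls the partition values out of a key:

```python
import re

def alb_key_partitions(key: str) -> dict:
    """Pull year/month/day partition values out of an ALB access-log S3 key.

    ALB stores logs under .../elasticloadbalancing/<region>/yyyy/mm/dd/,
    so the date appears as a plain /yyyy/mm/dd/ prefix, not /year=yyyy/.
    """
    m = re.search(r"/(\d{4})/(\d{2})/(\d{2})/", key)
    if not m:
        raise ValueError(f"no date prefix in key: {key}")
    year, month, day = m.groups()
    return {"year": int(year), "month": int(month), "day": int(day)}

# Illustrative key modeled on the ALB naming convention.
key = ("AWSLogs/123456789012/elasticloadbalancing/us-east-2/2020/12/29/"
      "123456789012_elasticloadbalancing_us-east-2_"
      "app.my-loadbalancer.50dc6c495c0c9188_20201229T0000Z_x.log.gz")
print(alb_key_partitions(key))  # {'year': 2020, 'month': 12, 'day': 29}
```

Because the date is positional rather than key=value, Athena cannot infer partitions from the path on its own, which is exactly the gap the Glue crawler below fills.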
This post shows a way to create and schedule an AWS Glue crawler with a Grok custom classifier that infers the schema of all ALB log files under the specified Amazon S3 prefix and automatically populates the partition metadata (year, month, and day) in the AWS Glue Data Catalog.

Prerequisites

To follow along with this post, complete the following prerequisites:
- Enable access logging of the ALBs, and have the files already ingested in the specified S3 bucket.
- Set up the Athena query result location. For more information, see Working with Query Results, Output Files, and Query History.

Solution overview

The following diagram illustrates the solution architecture. To implement this solution, we complete the following steps:
1. Prepare the Grok pattern for our ALB logs, and cross-check it with a Grok debugger.
2. Create an AWS Glue crawler with a Grok custom classifier.
3. Run the crawler to prepare a table with partitions in the Data Catalog.
4. Analyze the partitioned data using Athena and compare query speed vs. a non-partitioned table.

Prepare the Grok pattern for our ALB logs

As a preliminary step, locate the access log files on the Amazon S3 console, and manually inspect the files to observe the format and syntax. To allow an AWS Glue crawler to recognize the pattern, we need to use a Grok pattern to match against an expression and map specific parts into the corresponding fields. 
Approximately 100 sample Grok patterns are available in the Logstash Plugins GitHub, and we can write our own custom pattern if it's not listed.

The basic syntax format for a Grok pattern is %{PATTERN:FieldName}.

The following is an example of an ALB access log:

http 2018-07-02T22:23:00.186641Z app/my-loadbalancer/50dc6c495c0c9188 192.168.131.39:2817 10.0.0.1:80 0.000 0.001 0.000 200 200 34 366 "GET http://www.example.com:80/ HTTP/1.1" "curl/7.46.0" - - arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/73e2d6bc24d8a067 "Root=1-58337262-36d228ad5d99923122bbe354" "-" "-" 0 2018-07-02T22:22:48.364000Z "forward" "-" "-" "10.0.0.1:80" "200" "-" "-"
https 2018-07-02T22:23:00.186641Z app/my-loadbalancer/50dc6c495c0c9188 192.168.131.39:2817 10.0.0.1:80 0.086 0.048 0.037 200 200 0 57 "GET https://www.example.com:443/ HTTP/1.1" "curl/7.46.0" ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/73e2d6bc24d8a067 "Root=1-58337281-1d84f3d73c47ec4e58577259" "www.example.com" "arn:aws:acm:us-east-2:123456789012:certificate/12345678-1234-1234-1234-123456789012" 1 2018-07-02T22:22:48.364000Z "authenticate,forward" "-" "-" "10.0.0.1:80" "200" "-" "-"
h2 2018-07-02T22:23:00.186641Z app/my-loadbalancer/50dc6c495c0c9188 10.0.1.252:48160 10.0.0.66:9000 0.000 0.002 0.000 200 200 5 257 "GET https://10.0.2.105:773/ HTTP/2.0" "curl/7.46.0" ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/73e2d6bc24d8a067 "Root=1-58337327-72bd00b0343d75b906739c42" "-" "-" 1 2018-07-02T22:22:48.364000Z "redirect" "https://example.com:80/" "-" "10.0.0.66:9000" "200" "-" "-"
ws 2018-07-02T22:23:00.186641Z app/my-loadbalancer/50dc6c495c0c9188 10.0.0.140:40914 10.0.1.192:8010 0.001 0.003 0.000 101 101 218 587 "GET http://10.0.0.30:80/ HTTP/1.1" "-" - - arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/73e2d6bc24d8a067 "Root=1-58337364-23a8c76965a2ef7629b185e3" "-" "-" 1 2018-07-02T22:22:48.364000Z "forward" "-" "-" "10.0.1.192:8010" "101" "-" "-"
wss 2018-07-02T22:23:00.186641Z app/my-loadbalancer/50dc6c495c0c9188 10.0.0.140:44244 10.0.0.171:8010 0.000 0.001 0.000 101 101 218 786 "GET https://10.0.0.30:443/ HTTP/1.1" "-" ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-targets/73e2d6bc24d8a067 "Root=1-58337364-23a8c76965a2ef7629b185e3" "-" "-" 1 2018-07-02T22:22:48.364000Z "forward" "-" "-" "10.0.0.171:8010" "101" "-" "-"
http 2018-11-30T22:23:00.186641Z app/my-loadbalancer/50dc6c495c0c9188 192.168.131.39:2817 - 0.000 0.001 0.000 200 200 34 366 "GET http://www.example.com:80/ HTTP/1.1" "curl/7.46.0" - - arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/73e2d6bc24d8a067 "Root=1-58337364-23a8c76965a2ef7629b185e3" "-" "-" 0 2018-11-30T22:22:48.364000Z "forward" "-" "-" "-" "-" "-" "-"
http 2018-11-30T22:23:00.186641Z app/my-loadbalancer/50dc6c495c0c9188 192.168.131.39:2817 - 0.000 0.001 0.000 502 - 34 366 "GET http://www.example.com:80/ HTTP/1.1" "curl/7.46.0" - - arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/73e2d6bc24d8a067 "Root=1-58337364-23a8c76965a2ef7629b185e3" "-" "-" 0 2018-11-30T22:22:48.364000Z "forward" "-" "LambdaInvalidResponse" "-" "-" "-" "-"

To map the first field, the Grok pattern might look like the following code:

%{DATA:type}\s

The pattern includes the following components:
- DATA maps to .*?
- type is the column name
- \s is the whitespace character

To map the second field, the Grok pattern might look like the following:

%{TIMESTAMP_ISO8601:time}\s

This pattern has the following elements:
- TIMESTAMP_ISO8601 maps to %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}?
- time is the column name
- \s is the whitespace character

When writing Grok patterns, we should also consider corner cases. 
For example, the following is a normal case:

%{BASE10NUM:target_processing_time}\s

But when considering the possibility of a null value, we should replace the pattern with the following:

%{DATA:target_processing_time}\s

When our Grok pattern is ready, we can test it with sample input using a third-party Grok debugger. The following pattern is a good start, but always remember to test it with the actual ALB logs.

%{DATA:type}\s+%{TIMESTAMP_ISO8601:time}\s+%{DATA:elb}\s+%{DATA:client}\s+%{DATA:target}\s+%{BASE10NUM:request_processing_time}\s+%{DATA:target_processing_time}\s+%{BASE10NUM:response_processing_time}\s+%{BASE10NUM:elb_status_code}\s+%{DATA:target_status_code}\s+%{BASE10NUM:received_bytes}\s+%{BASE10NUM:sent_bytes}\s+\"%{DATA:request}\"\s+\"%{DATA:user_agent}\"\s+%{DATA:ssl_cipher}\s+%{DATA:ssl_protocol}\s+%{DATA:target_group_arn}\s+\"%{DATA:trace_id}\"\s+\"%{DATA:domain_name}\"\s+\"%{DATA:chosen_cert_arn}\"\s+%{DATA:matched_rule_priority}\s+%{TIMESTAMP_ISO8601:request_creation_time}\s+\"%{DATA:actions_executed}\"\s+\"%{DATA:redirect_url}\"\s+\"%{DATA:error_reason}\"\s+\"%{DATA:target_list}\"\s+\"%{DATA:target_status_code_list}\"\s+\"%{DATA:classification}\"\s+\"%{DATA:classification_reason}\"

Keep in mind that when you copy the Grok pattern from your browser, in some cases there are extra spaces at the end of the lines. Make sure to remove these extra spaces.

Create an AWS Glue crawler with a Grok custom classifier

Before you create your crawler, you first create a custom classifier. 
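Besides an online Grok debugger, a rough local sanity check is possible in Python. The sketch below is an assumption-laden simplification: it only translates the base patterns used here (DATA, BASE10NUM, and a cut-down TIMESTAMP_ISO8601), turning each %{PATTERN:field} token into a named regex group, then parses the first few fields of a sample line:

```python
import re

# Simplified stand-ins for the Grok base patterns used in this post.
# TIMESTAMP_ISO8601 here is a reduced approximation, not the full Grok rule.
GROK_BASE = {
    "DATA": r".*?",
    "BASE10NUM": r"[+-]?\d+(?:\.\d+)?",
    "TIMESTAMP_ISO8601": r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.\d+)?Z?",
}

def grok_to_regex(pattern: str) -> str:
    """Turn each %{PATTERN:field} token into a named capture group."""
    return re.sub(
        r"%\{(\w+):(\w+)\}",
        lambda m: f"(?P<{m.group(2)}>{GROK_BASE[m.group(1)]})",
        pattern,
    )

# First few fields of the full ALB pattern from the text.
grok = r"%{DATA:type}\s+%{TIMESTAMP_ISO8601:time}\s+%{DATA:elb}\s+%{DATA:client}\s"
line = ("http 2018-07-02T22:23:00.186641Z app/my-loadbalancer/50dc6c495c0c9188 "
        "192.168.131.39:2817 10.0.0.1:80 0.000 0.001 0.000 200 200 34 366 ...")
m = re.match(grok_to_regex(grok), line)
print(m.group("type"), m.group("time"), m.group("client"))
```

A check like this catches missing separators or misnamed fields quickly, but the classifier itself should still be validated with the debugger and real log files as described above.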
Complete the following steps:
1. On the AWS Glue console, under Crawler, choose Classifiers.
2. Choose Add classifier.
3. For Classifier name, enter alb-logs-classifier.
4. For Classifier type, select Grok.
5. For Classification, enter alb-logs.
6. For Grok pattern, enter the pattern from the previous section.
7. Choose Create.

Now you can create your crawler.
1. Choose Crawlers in the navigation pane.
2. Choose Add crawler.
3. For Crawler name, enter alb-access-log-crawler.
4. For Selected classifiers, enter alb-logs-classifier.
5. Choose Next.
6. For Crawler source type, select Data stores.
7. For Repeat crawls of S3 data stores, select Crawl new folders only.
8. Choose Next.
9. For Choose a data store, choose S3.
10. For Crawl data in, select Specified path in my account.
11. For Include path, enter the path to your ALB logs (for example, s3://alb-logs-directory/AWSLogs/<ACCOUNT-ID>/elasticloadbalancing/<REGION>/).
12. Choose Next.
13. When prompted to add another data store, select No and choose Next.
14. Select Create an IAM role, and give it a name such as AWSGlueServiceRole-alb-logs-crawler.
15. For Frequency, choose Daily.
16. Indicate your start hour and minute.
17. Choose Next.
18. For Database, enter elb-access-log-db.
19. For Prefix added to tables, enter alb_logs_.
20. Expand Configuration options.
21. Select Update all new and existing partitions with metadata from the table.
22. Keep the other options at their defaults.
23. Choose Next.
24. Review your settings and choose Finish.

Run your AWS Glue crawler

Next, we run our crawler to prepare a table with partitions in the Data Catalog.
1. On the AWS Glue console, choose Crawlers.
2. Select the crawler we just created.
3. Choose Run crawler.

When the crawler is complete, you receive a notification indicating that a table has been created. Next, we review and edit the schema.
1. Under Databases, choose Tables.
2. Choose the table alb_logs_<region>.
3. Cross-check the column names and corresponding data types. The table has three partition columns: partition_0, partition_1, and partition_2.
4. Choose Edit schema.
5. Rename those columns to year, month, and day.
6. Choose Save.

Analyze the data using Athena

Next, we analyze our data by querying the access logs. We compare the query speed between the following tables:
- Non-partitioned table: All data is treated as a single table
- Partitioned table: Data is partitioned by year, month, and day

Query the non-partitioned table

With the non-partitioned table, if we want to query access logs on a specific date, we have to write the WHERE clause using the LIKE operator, because the time column was interpreted as a string. See the following code:

SELECT COUNT(1) FROM "elb-access-log-db"."alb_logs" WHERE type='h2' AND time LIKE '2020-12-29%';

The query takes 5.25 seconds to complete, with 3.15 MB of data scanned.

Query the partitioned table

With the year, month, and day columns as partitions, we can use the following statement to query access logs on the same day:

SELECT COUNT(1) FROM "elb-access-log-db"."alb_logs" WHERE type='h2' AND year=2020 AND month=12 AND day=29;

This time the query takes only 1.89 seconds to complete, with 25.72 KB of data scanned. This query is faster and costs less (because less data is scanned) due to partition pruning.

Clean up

To avoid incurring future charges, delete the resources created in the Data Catalog, and delete the AWS Glue crawler.

Summary

In this post, we illustrated how to create an AWS Glue crawler that automatically populates ALB log metadata in the AWS Glue Data Catalog with partitions by year, month, and day. With partition pruning, we can improve query performance and reduce associated costs in Athena. If you have questions or suggestions, please leave a comment.

About the Authors

Ray Wang is a Solutions Architect at AWS. With 8 years of experience in the IT industry, Ray is dedicated to building modern solutions on the cloud, especially in big data and machine learning. As a hungry go-getter, he passed all 12 AWS certification exams to make his technical field not only deep but wide. He loves to read and watch sci-fi movies in his spare time. Corvus Lee is a Data Lab Solutions Architect at AWS. 
He enjoys all kinds of data-related discussions with customers, from high-level topics like whiteboarding a data lake architecture to the details of data modeling, writing Python/Spark code for data processing, and more. | Content Synthesis/Information Retrieval Or Search | Computer and Mathematical | null | null | null | null | null | null |
|
news | How Smart Tech Is Transforming Nonprofits - Harvard Business Review | Automation and AI can free up staff in fundraising, communications, finance, and service delivery efforts. | https://hbr.org/2021/12/how-smart-tech-is-transforming-nonprofits | 2021-12-09T14:11:34Z | Covid-19 created cascades of shortages, disruptions, and problems that rolled downhill and landed in the most vulnerable neighborhoods. In these neighborhoods, it's often nonprofit organizations that provide services to members of the community. While the pandemic accelerated the need for digital transformation throughout the economy, the nonprofit sector was not immune to the need for nearly overnight innovation. As experts on the use of technology for social good, we've observed the many ways that nonprofits have been adopting smart tech to further social change in the wake of the pandemic, which we chronicle in our upcoming book, The Smart Nonprofit. We use smart tech as an umbrella term for advanced digital technologies that make decisions for people. It includes artificial intelligence (AI) and its subsets and cousins, such as machine learning, natural language processing, smart forms, chatbots, robots, and more. The use of smart tech by social service agencies and other nonprofits exploded during the pandemic. For example, food banks deployed robots to pack meals; homeless services agencies used chatbots to give legal and mental health advice; and fundraising departments turned to AI-powered software to identify potential donors. When the pandemic began and schools switched to remote learning, many students who relied on school lunches were not able to receive them. Here's where nonprofits stepped in to use smart technologies for social good. 
For example, researchers at Carnegie Mellon University used machine learning to flip the system on its head; instead of using buses to deliver children to schools, new bus routes were created to bring meals to children in the Pittsburgh area in the most efficient way. The use of chatbots to provide support and deliver services to vulnerable populations increased tremendously during the pandemic. For instance, the Rentervention chatbot was developed by the legal aid nonprofits in Illinois to help tenants navigate eviction and other housing issues they were experiencing due to Covid-19. It also directs renters to pro bono legal advice. At many nonprofits, smart tech is becoming integrated into internal workflows, fundraising, communications, finance operations, and service delivery efforts. Smart tech is currently best used for rote tasks in nonprofit organizations, such as reconciling expense reports and answering the same questions online using a chatbot (e.g., Is my contribution tax-deductible?), freeing up staff to focus on other activities. We call this benefit the dividend of time, which can be used to, say, reduce staff burnout, get to know clients on a deeper, more human level, and focus on deeper societal changes that need to be made, such as addressing the root causes of homelessness in addition to serving homeless people. For example, when Covid-19 hit, Doctors Without Borders/Médecins Sans Frontières (MSF), the international humanitarian group dedicated to providing medical care to people in distress, created an online chatbot to answer common questions about the pandemic. This freed up staff to respond to a huge increase in conversations within their social media community around mental health, anxiety, and other well-being issues. The development of technology often putters along quietly until an inflection point is reached, where the cost of the technology suddenly decreases while its commercial applications increase. 
This adoption process is often drawn in the shape of a hockey stick. We are sitting at the heel of that stick right now as the use of smart tech has begun to skyrocket. And as more and more nonprofits use smart tech, nonprofit leaders will need to have their eyes wide open about both the benefits and the risks of using new smart technologies. People tend to think of the work done by computers and robots as incapable of being swayed by emotions, and therefore incapable of being biased or sexist or unfair. However, the code that powers smart tech was at some point created by people, and carries forward their opinions, assumptions, and biases, whether implicit or explicit. As the renowned data scientist Cathy O'Neil says, "Algorithms are opinions embedded in code." We call the bias created inside of smart tech systems embedded bias. There are two main reasons embedded bias is prevalent. First, programmers, who continue to overwhelmingly be white men, make literally thousands of choices beneath the hood of smart tech that the rest of us can't see. Second, smart tech requires massive data sets to learn to recognize patterns and make decisions. Many large data sets in social service areas like housing or hiring were racist by design. In using these data sets to teach smart tech matching patterns, organizations were unwittingly paying forward historic racism. Once bias is baked into smart tech, not only is it likely to stay there forever; it becomes self-reinforcing as the tech system looks for the same patterns over time. Making strategic decisions about when and how to use smart tech is therefore a leadership challenge, not a technical problem. There are consequences to automating systems and processes that range from losing the ability to make judgement calls (e.g. giving unusual job candidates a chance) to introducing flat-out bias against people of color (e.g. risk assessment tools used by judges and parole boards ranking black defendants at much higher risk for recidivism than white defendants). Nonprofit leaders need to make a pledge to do no harm using smart tech and not to wait for something bad to happen before looking for warning signs. We call the nonprofit organizations that are using smart tech responsibly smart nonprofits. The kind of leadership required to lead these organizations is:
Human-Centered: These leaders take a human-centered approach to adopting new technology by finding the sweet spot between people and smart tech, while ensuring that people are always in charge of the technology.
Prepared: These leaders must actively reduce bias embedded in smart tech code and systems. A thoughtful, participatory process is required to select values-aligned systems, vendors, and consultants.
Knowledgeable and Reflective: These leaders make learning about what smart tech is and what it does an ongoing process in the boardroom, the C-suite, and among the staff. Once automated systems are in place, leaders need to be vigilant about whether the technology is performing as intended, or whether unintended consequences have arisen, and how clients and end users ultimately feel about the systems.
While smart tech helped scores of nonprofits to pivot to suddenly remote and digital delivery of programs and services at the start of the pandemic, it may also enable us to turn the page on an era of frantic busyness and scarcity mindsets to one in which nonprofit organizations have the time to think and plan and even dream. We have a once-in-a-generation opportunity to remake work and focus on social change, and it requires people and organizations who are thoughtful and knowledgeable about the use of smart tech. | Process Automation/Content Synthesis/Content Creation | Management/Business and Financial Operations | null | null | null | null | null | null |
news | PR Newswire | PathAI Reports that Its ML-based NASH Drug Discovery Tool May Identify Clinical Trial Responders Based on Post-Hoc Analysis of Bristol Myers Squibb's FALCON 1 Study at The Liver Meeting 2021 | PathAI, a global provider of AI-powered technology applied to pathology, will announce results from a retrospective analysis of liver biopsy specimens from... | https://finance.yahoo.com/news/pathai-reports-ml-based-nash-183000851.html | https://s.yimg.com/uu/api/res/1.2/ZlNKOsRCq_pVmJAhNB8pbQ--~B/aD0xNTM7dz00MDA7YXBwaWQ9eXRhY2h5b24-/https://media.zenfs.com/en/prnewswire.com/bd4ffac1fd0e6a5edc90c846d9dbb5c5 | 2021-11-12T18:30:00Z | Post-hoc evaluation of liver biopsies from patients in the Bristol Myers Squibb-sponsored FALCON 1 study by the AI-based histologic measurement of NASH (nonalcoholic steatohepatitis; AIM-NASH) drug development tool (DDT) suggests that clinical trial endpoints may have been met and shows treatment-associated improvements in key liver tissue features not identified by manual assessment. BOSTON, Nov. 12, 2021 /PRNewswire/ -- PathAI, a global provider of AI-powered technology applied to pathology, will announce results from a retrospective analysis of liver biopsy specimens from Bristol Myers Squibb's FALCON 1 study, a Phase 2b, randomized, multicenter, placebo-controlled study assessing the efficacy and safety of pegbelfermin (PGBF) as a treatment for non-alcoholic steatohepatitis (NASH) at The Liver Meeting, November 12-15, 2021 (NCT03486899). This exploratory post hoc analysis compared machine learning (ML)-based quantification of histological features with traditional pathology scoring methods, and the results will be presented in the poster Shevell et al., "Comparison of manual vs machine learning approaches to liver biopsy scoring for NASH and fibrosis: a post hoc analysis of the FALCON 1 study." 
PathAI has developed the AI-based histologic measurement of NASH Drug Development Tool (AIM-NASH DDT), which has been accepted into the FDA Biomarker Qualification Program. The AIM-NASH DDT is intended for use in assessment of endpoints in clinical trials as well as clinical trial enrollment after FDA qualification. AIM-NASH has been trained to detect and quantify the key histological features required to score NASH disease severity using the standard NASH CRN scoring system, and it generates slide-level scores for those features (lobular inflammation, ballooning, steatosis, and fibrosis), mirroring the standard pathology workflow. In this study, biopsy slides, collected from clinical trial participants within 6 months prior to or during the screening period and after 24 weeks of PGBF treatment, were digitized into whole slide images and evaluated using AIM-NASH. The clinical study central pathologist manually scored these same biopsy samples during the study period. The FALCON 1 trial had 197 participants randomized to four arms: placebo, plus three treatment arms of PGBF dosed at 10mg, 20mg, and 40mg. In evaluating the primary clinical trial endpoint of a 1-stage NASH CRN fibrosis improvement without NASH worsening, or NASH improvement without fibrosis worsening, at 24 weeks, the AIM-NASH DDT identified a statistically significant proportion of responders in the treatment arms (p=0.013) that was not reported by manual assessment (p=0.148). AIM-NASH-based and manual scores for all CRN components showed distinct trends of improvement in all PGBF arms compared to placebo. The AIM-NASH DDT CRN scoring revealed significant improvements in ballooning (p=0.033) and lobular inflammation (p=0.019) in the treatment arms compared with placebo that were not seen by manual scoring (ballooning p=0.274; lobular inflammation p=0.716). Conversely, manual methods showed significant improvements in steatosis for treated patients (p=0.0022) that AIM-NASH did not (p=0.106). 
Treatment-associated improvements in fibrosis were not seen using either method. Additional assessment by AIM-NASH using a continuous scoring method showed significant differences between placebo and PGBF-treated patients for ballooning (p=0.0014), lobular inflammation (p=0.05), and steatosis (p=0.001). While this study suggests that AIM-NASH-based pathologic assessment of tissue may be more sensitive than manual assessment and may capture changes in histology that could be indicative of drug efficacy, further analyses with larger tissue datasets are required to support these claims. About PathAI: PathAI is a leading provider of AI-powered research tools and services for pathology. PathAI's platform promises substantial improvements to the accuracy of diagnosis and the efficacy of treatment of diseases like cancer, leveraging modern approaches in machine and deep learning. Based in Boston, PathAI works with leading life sciences companies and researchers to advance precision medicine. To learn more, visit pathai.com. View original content to download multimedia: https://www.prnewswire.com/news-releases/pathai-reports-that-its-ml-based-nash-drug-discovery-tool-may-identify-clinical-trial-responders-based-on-post-hoc-analysis-of-bristol-myers-squibbs-falcon-1-study-at-the-liver-meeting-2021-301423231.html SOURCE PathAI | Detection and Monitoring/Prediction | Life, Physical, and Social Science/Healthcare Practitioners and Support | null | null | null | null | null | null |
news | Cliff Edwards | Majority Report: 2022 Predictions on How AI Will Impact Global Industries | There’s an old axiom that the best businesses thrive during periods of uncertainty. No doubt, that will be tested to the limits as 2022 portends upheaval on a grand scale. Pandemic-related supply chain disruptions are affecting everything from production of cars and electronics to toys and toilet paper. At the same time, global food prices | https://blogs.nvidia.com/blog/2021/12/07/2022-predictions-ai-global-industries/ | 2021-12-07T16:54:11Z | There's an old axiom that the best businesses thrive during periods of uncertainty. No doubt, that will be tested to the limits as 2022 portends upheaval on a grand scale. Pandemic-related supply chain disruptions are affecting everything from production of cars and electronics to toys and toilet paper. At the same time, global food prices have jumped to their highest level in more than a decade as worker shortages, factory closures and high commodity prices shred plans at even the most sophisticated forecasting and logistics operations. Last year, we asked some of our top experts at NVIDIA what 2021 would bring for the world of AI and accelerated computing. They predicted each would move from planning to production as businesses seek new avenues for product forecasting, supply chain management and scientific research. Headlines over the course of the year proved them correct: To save Christmas, retailers Home Depot, Target and Walmart chartered their own cargo ships to deliver goods to their stores around the world. To speed time to market, BMW, Ericsson and other companies began using digital twin technologies to simulate real-world environments. AI adoption isn't limited to big names. 
Indeed, a midyear 2021 PWC survey of more than 1,000 businesses across nine sectors, including banking, health and energy, found that 86 percent of them were poised to make AI a mainstream technology. This year, we went back to our experts at NVIDIA and asked them where enterprises will focus their AI efforts as they parse big data and look for new revenue opportunities. Here's what they had to say:

BRYAN CATANZARO
Vice President of Applied Deep Learning Research

Conversational AI: Last year, I predicted conversational AI will be used to make video games more immersive by allowing real-time interaction to flesh out character-driven approaches. This year, conversational AI is all work and no play. Companies will race to deploy new conversational AI tools that allow us to work more efficiently and effectively using natural language processing. Speech synthesis is poised to become just as emotive and persuasive as the human voice in 2022, which will help industries like retail, banking and healthcare better understand and better serve their customers.

Know Your Customer: Moving beyond natural language processing, companies using both speech and text for interaction with other businesses and customers will employ AI as they move to understand the context or sentiment in what a person might be saying. Is the customer frustrated? Is your boss being sarcastic? The adoption of tools like OpenAI's GitHub Copilot, which helps programmers be more effective at their work, will accelerate.

SARAH TARIQ
Vice President of Automotive

Programmable Cars: The days of a car losing value once you drive it off the lot will soon be gone. We'll see more automakers moving to reinvent the driving experience by creating software-defined architectures with headroom to support new applications and services via automatic over-the-air updates. 
Vehicles will get better and safer over time.

De-Stressing the Commute: The move to a software-defined approach also will help remove the stress and hassle of everyday driving. AI assistants will serve as your personal concierge, enhancing the vehicle journey for a safer, more convenient and enjoyable experience. Vehicle occupants will have access to intelligent services that are always on, allowing them to use real-time conversational AI for recommendations, alerts, vehicle controls and more.

Designing for the Long Haul: Automakers will begin to invest heavily in the use of simulation and digital twins to validate more of the end-to-end stack, and in training of deep neural network models. AI and data analytics will help train and validate self-driving cars for a broad range of driving conditions, delivering everyday safety that's designed for the long haul.

REV LEBAREDIAN
Vice President of Simulation Technology, Omniverse Engineering

Emerging Standard for 3D: We'll see advancing 3D standards for describing virtual worlds. Building accurate and rich digital counterparts to everything in the real world is one of the grandest challenges in computer science. Developers, enterprises and individual users will contribute to foundational open standards analogous to the early days of the internet and the web. Standards such as Universal Scene Description (USD) and glTF will rapidly evolve to meet the foundational needs of Web3 and digital twins.

Synthetic 3D Data for the Next Era of AI: The rate of innovation in AI has been accelerating for the better part of a decade, but AI cannot advance without large amounts of high-quality and diverse data. Today, data captured from the real world and labeled by humans is insufficient both in terms of quality and diversity to jump to the next level of artificial intelligence. 
In 2022, we will see an explosion in synthetic data generated from virtual worlds by physically accurate world simulators to train advanced neural networks. Re-Imagining Industry through Simulation: Many industries are starting to examine and adopt digital twins and virtual worlds, thanks to the potential for operational efficiencies and cost savings. Digital representations of everything we build in the real world must have a counterpart in the virtual world: airplanes, cars, factories, bridges, cities and even Earth itself. Applying high-fidelity simulations to digital twins allows us to experience, test and optimize complex designs well before we commit to building them in the real world. KIMBERLY POWELL, Vice President & General Manager of Healthcare. AI Generates Million-X Drug Discovery: Simultaneous breakthroughs of AlphaFold and RoseTTAFold creating a thousandfold explosion of known protein structures, and AI that can generate a thousand more potential chemical compounds, have increased the opportunity to discover drugs by a million times. Molecular simulations help to model target and drug interactions completely in silico. To keep up with the million-x opportunity, AI is helping to introduce a new class of molecular simulations, from system size and timescale to quantum accuracy. AI Creates SaaS Medical Devices: The medical device industry has a game-changing opportunity, enabled by AI, to miniaturize and reduce costs, to automate and increase accessibility, and to continuously deliver innovation over the life of the product. Medical device companies will evolve from delivering hardware to providing software-as-a-service systems that can be upgraded remotely to keep devices usable after deployment. AI 2.0 with Federated Learning: To help AI application developers industrialize their AI technology and expand the application's business benefit, AI must be trained and validated on data that resides outside the possession of their group, institution and geography. 
Federated learning is the key to collaboratively building robust AI models, and validating models in the wild, without sharing sensitive data. ANIMA ANANDKUMAR, Director of ML Research, and Bren Professor at Caltech. AI4Science: This area will continue to mature significantly and yield real-world impact. AI will deeply integrate with HPC at supercomputing scale and make scientific simulations and modeling possible at an unprecedented scale and fidelity in areas such as weather and climate models. AI will lead to breakthroughs in the discovery of new drugs and treatments and revolutionize healthcare. Federated learning and differential privacy will be widely adopted, making healthcare and other sensitive data-sharing seamless. Algorithmic Development: Expect massive advancements in the algorithmic development that underlies simulations, as well as in the capabilities of GPUs to handle reinforcement learning at scale. RONNIE VASISHTA, Senior Vice President of Telecoms. AI Moves to the Telco Edge: The promise of 5G will open new opportunities for edge computing. Key benefits will include network slicing that allows customers to assign dedicated bandwidth to specific applications, ultra-low latency in non-wired environments, as well as improved security and isolation. AI-on-5G will unlock new edge AI use cases. These include Industry 4.0 use cases such as plant automation, factory robots, monitoring and inspection; automotive systems like toll road and vehicle telemetry applications; as well as smart spaces in retail, cities and supply chain applications. Convergence of AI and OT Solutions: New edge AI applications are driving the growth of intelligent spaces, including the intelligent factory. These factories use cameras and other sensors for inspection and predictive maintenance. 
However, detection is just step one; once detected, action must be taken. This requires a connection between the AI application doing the inference and the monitoring-and-control, or OT, systems that manage the assembly lines, robotic arms or pick-and-place machines. Today, integration between these two applications relies on custom development work. This year, expect to see more integration of AI and traditional OT management solutions that simplify the adoption of edge AI in industrial environments. AZITA MARTIN, Vice President & General Manager of Artificial Intelligence for Retail and Consumer Products Group. AI Addresses Labor Shortages: Amid a shortage of labor and increased customer demand for faster service, quick-service restaurants will employ AI for automated order taking. Thanks to advancements in natural language understanding and speech, combined with recommendation systems, fast food restaurants will roll out automated order taking to speed drive-through times and improve recommendations. In supermarkets and big-box stores, retailers will increase their use of intelligent video analytics and computer vision to create automated checkouts and autonomous or cashier-less shopping. Enterprises Tap AI to Optimize Logistics: AI's greatest power is found in simplifying incredibly complex problems. Supply chain optimization will become a critical area for retailers to meet customer demands for product availability and faster delivery. AI can enable more frequent and more accurate forecasting, ensuring the right product is at the right store at the right time. Computer vision and robotics will add AI intelligence to distribution centers. Solutions like autonomous forklifts, robots and intelligent multi-shuttle cabinets will reduce conveyor starvation and downtime and automate pick-and-pack of items to double throughput. 
Last-mile delivery will leverage data science for dynamic rerouting, simulations and sub-second solver response times. Becoming One with the Customer: Retailers sit on massive amounts of data but often have trouble processing it in real time. AI lets retailers parse the data in near real time to gain a 360-degree view of their customers, in order to provide more personalized offers and recommendations that drive revenue and customer satisfaction. In 2022, you'll see many retailers offering hyper-personalized shopping experiences. KEVIN LEVITT, Director of Industry and Business Development for Financial Services. Your Voice Is Your ID: Financial institutions will invest heavily in AI to fight fraud and adhere to compliance regulations such as KYC (Know Your Customer) and AML (Anti-Money Laundering). Some are using a customer's unique voice to authenticate online transactions, while others are turning to eye biometrics for authentication. Graph neural networks are at the forefront of the new techniques AI researchers and practitioners at financial institutions are using to understand relationships across entities and data points. They'll become critical to enhancing fraud prevention and to mapping relationships to fight fraud more effectively. AI for ESG: Consumers and government entities increasingly will hold enterprises accountable for environmental, social and corporate governance (ESG) performance. Companies will invest in significant computational power to run AI models, including deep learning and natural language processing models, that analyze all the data necessary to track company performance relative to ESG. 
It also will be needed to analyze the externally available data to measure which companies are excelling or failing relative to ESG benchmarks. CHARLIE BOYLE, Vice President & General Manager, NVIDIA DGX Systems. Enterprises deploy large language models to advance understanding: In 2022, we'll see accelerated growth in adapting large language models (LLMs) to serve more industries and use cases. Trained on massive amounts of general or industry-specific data, LLMs are able to answer deep domain questions, translate languages, comprehend and summarize documents, write stories and compose programs, all without specialized training or supervision. Already, LLMs are being used to build language- and domain-specific AI chatbots and services that improve connection and communication around the world. Enterprises' next data centers will belong to someone else: Many businesses turned away from owning their own data centers when they moved to cloud computing, so, in 2022, companies will realize it's time to start leveraging colocation services for high-performance AI infrastructure. The ease of deployment and access to infrastructure experts who can help ensure 24/7/365 uptime will enable more enterprises to benefit from on-demand resources delivered securely, wherever and whenever they're needed. KEVIN DEIERLING, Senior Vice President of Networking. Data Center Is the New Unit of Computing: Applications that previously ran on a single computer don't fit into a single box anymore. The new world of computing increasingly will be software-defined and hardware-accelerated. As applications become disaggregated and leverage massive data sets, the network will be seen as the fast lane between many servers acting together as a computer. 
Software-defined data processing units will serve as distributed switches, load balancers, firewalls, and virtualized storage devices that stitch this data-center-scale computer together. Growing Trust in Zero Trust: As applications and devices move seamlessly between the data center and the edge, enterprises will have to validate and compose applications from microservices. Zero trust assumes that everything and everyone connected to a company system must be authenticated and monitored to verify bad actors aren't attempting to penetrate the network. Everything has to become protected, both at the edge and on every node within the network. Data will need to be encrypted using IPsec and TLS encryption, and every node protected with advanced routers and firewalls. SCOTT MCCLELLAN, Senior Director of the Data Science Product Group. Accelerated Data Science Platforms Thaw Enterprise Data Lakes: Much has been written about data lakes forming the foundation for enterprise big data strategies. Enterprise data lakes are effective for large-scale data processing, but their broader usefulness has been largely frozen for the past few years, as they are isolated and decoupled from machine learning training and inference pipelines. In 2022, data lakes will finally modernize through end-to-end data pipelines because of three inflection points: centralized infrastructure, the agility of Kubernetes-based applications, and best-in-class, fit-to-task storage. Mainstream AI Adoption Triggers MLOps Growth: The world's AI pioneers built bespoke MLOps solutions to help them manage development and production AI workflows. Many early adopters that chose a cloud-based development path have been able to delay adding MLOps expertise. Enterprises are now uncovering a gap as companies expand their use of AI and bring their accelerated infrastructure on-prem. Addressing this need will trigger broad adoption of MLOps solutions in the year ahead. 
There's no doubt the continuing pandemic has created an era of accelerated invention and reinvention for many businesses and scientific organizations. The goal is to create short-term measures that meet the needs of the day while building for long-term gains and radical change. Will 2022 be another year of living dangerously, or smoother sailing for those businesses that tackle the uncertainty with a firmer embrace of AI? | Digital Assistance/Process Automation/Decision Making/Content Synthesis/Recommendation | Management/Business and Financial Operations/Healthcare Practitioners and Support/Sales and Related | null | null | null | null | null | null |
|
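Kimberly Powell's federated learning prediction above rests on a simple mechanism: each site trains on its own private data, and only model weights, never the raw data, are shared and averaged by a server. A minimal sketch of federated averaging (FedAvg) in Python; the two-site setup, one-parameter linear model, and data are hypothetical illustrations, not NVIDIA's implementation:

```python
# Minimal federated averaging (FedAvg) sketch: two sites train a
# one-parameter model y = w * x on private data; only weights are shared.

def local_train(w, data, lr=0.01, epochs=50):
    # Plain gradient descent on mean squared error, using only local data.
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(local_weights):
    # The server never sees raw data, only the clients' model weights.
    return sum(local_weights) / len(local_weights)

# Hypothetical private datasets; both are drawn from y = 3x.
site_a = [(1.0, 3.0), (2.0, 6.0)]
site_b = [(3.0, 9.0), (4.0, 12.0)]

w_global = 0.0
for _ in range(5):  # five communication rounds
    w_a = local_train(w_global, site_a)
    w_b = local_train(w_global, site_b)
    w_global = fed_avg([w_a, w_b])

print(round(w_global, 2))  # converges toward the true slope, 3.0
```

Production systems add secure aggregation, client weighting by dataset size, and differential privacy on top of this loop, but the data-stays-local principle is the same.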
news | Dave Mosley, Forbes Councils Member, Dave Mosley, Forbes Councils Member https://www.forbes.com/sites/forbestechcouncil/people/davemosley/ | Every Buzzword Needs Data | Without data that is stored and activated in a way that enables privacy, democratization and movement within storage infrastructures, we don't have a very helpful strategy at all. | https://www.forbes.com/sites/forbestechcouncil/2021/11/16/every-buzzword-needs-data/ | 2021-11-16T12:45:00Z | CEO of Seagate Technology. Many of today's hottest technologies have become buzzwords. Artificial intelligence (AI), machine learning (ML) and deep learning promise significant advancements to business, but business leaders discuss them mostly conceptually. They understand at some level what these technologies can offer, but not necessarily how to deploy them to deliver maximum ROI, something we ourselves at Seagate are still optimizing. This, coupled with the meteoric rise of software companies that are often being put front and center of the story, means the importance of data and how it is managed is still underestimated and often misunderstood. The foundation powering all modern technologies is data. It enables all great insights and innovations. The Currency Of Data: On average, people individually create 1.7 megabytes (MB) of data every second (63 million MB a year). That may seem like a big number, but the amount of data used by enterprise businesses is even more significant, and, thanks to technologies such as AI, ML and deep learning, it's exploding. In fact, according to Rethink Data, a report commissioned by Seagate and conducted by the research firm IDC, enterprise data volume is expected to grow at an average annual rate of 42.2% from 2020 to just over 2 petabytes (PB) by 2022. In case you didn't know, 2 PB equals a little over 2 billion MB, so that 2 MB every second seems pretty small now, right? 
The saying that "data is the new oil" has been around for some time now, but to my mind, data really has become business currency, literally, with the rise of cryptocurrency and blockchain. We've reached an inflection point where data is the critical element capable of transforming our society. As Google's director of research, Peter Norvig, famously sums it up, "We don't have better algorithms. We just have more data." But still, data is too often just a member of the cast rather than the star of the show. Maximizing Data Across Environments: IDC expects the amount of data around the globe to reach 175 zettabytes (ZB) by 2025, and the past 18 months have dramatically accelerated the need for AI, ML, automation and the like. Also, the ability to easily move data between on-premises and cloud environments is more important than ever for enterprises. However, without proper organization, there will likely be a massive amount of data that goes unused, representing wasted opportunity and potentially even lost revenue. In fact, according to the IDC research, 68% of enterprise data goes unused. What about the data being lost? So, does it really matter that more than two-thirds of enterprise data is going unused? We create so much data anyway that a third is still a lot. While it's difficult to put a dollar amount on it, and believe me, we're trying, a lack of a data strategy can be detrimental to any business. Take a consumer-facing business, for example. Every piece of data is an insight into that customer, their preferences and their needs. If you were to lose two-thirds of the data for a specific customer, you'd only have part of the story. And so, you'd only be able to deliver part of the resolution they need. Ultimately, properly stored and managed data can help deliver optimized customer service. It can help to get driverless cars on the road quicker. It can help factories become more productive. 
And it can help cut costs and enable data monetization, which, in today's economy, should be front and center for any C-suite. Data Strategy: In order to arrive at an effective data strategy, follow these steps. 1. Understand where your data currently resides and how best to store it for effective management and utilization. Don't just follow the crowds; understand what is suited to your business and find the right vendor that can act as a partner through the process. 2. Make sure everyone in your organization understands where data resides and make certain everyone has access. Data isn't as effective in the hands of just a few, with multiple business units being able to uncover different insights based on their own perspectives and approaches. 3. Finally, don't let your data go to waste. Once you realize what you have, you'll be asking yourself why you didn't look sooner. Data Appreciation: Hard drive storage capacity has improved 20,000 times in the past 50 years, outpacing other technological innovations. Having been at Seagate for more than half of the company's history, I can attest that the right data storage is lying in wait, ready to help businesses take advantage of all the data out there. With my background in solid-state physics, I have always appreciated the scientific value of data. I invite you to join me in harnessing the powerful business value of data as well. It is not so much an AI or automation strategy that matters, but a data strategy, because, without data that is stored and activated in a way that enables privacy, democratization and movement within storage infrastructures, we don't have a very helpful strategy at all. Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify? | Information Retrieval Or Search/Content Synthesis/Decision Making | Management/Business and Financial Operations | null | null | null | null | null | null |
|
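The petabyte-to-megabyte claim in the Seagate piece is easy to check. A quick sketch, assuming the decimal (SI) convention the article implies, where each unit step is a factor of 1,000:

```python
# Decimal (SI) storage units: 1 MB = 10^6 bytes, 1 PB = 10^15 bytes,
# so 1 PB = 10^9 MB, i.e. one billion megabytes.
MB = 10**6   # bytes
PB = 10**15  # bytes

enterprise_data = 2 * PB            # the article's 2022 enterprise figure
print(enterprise_data // MB)        # 2 PB expressed in MB
print(f"{enterprise_data / MB:.1e}")  # scientific notation: 2.0e+09
```

The result, 2,000,000,000 MB, matches the article's "a little over 2 billion MB" (binary units, where 1 PiB = 2^50 bytes, would give a slightly larger number).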
news | Uskompuf | (PR) NVIDIA Officially Announces DLSS 2.3 | NVIDIA DLSS adoption is growing rapidly, accelerated by easy-to-use Unreal Engine plugins and native support in Unity. This month 10 new games get DLSS, and a number of others, including Cyberpunk 2077, get upgraded to the new NVIDIA DLSS 2.3 SDK for even better image quality. With an AI model that continuously improves through training on NVIDIA's supercomputer, NVIDIA DLSS offers the best image quality and performance for over 130 games and applications. To deliver a level of performance and image quality far above that of traditional upscaling technologies, NVIDIA DLSS utilizes an AI model trained on supercomputers, deep game integrations, and GeForce RTX Tensor Cores. Our DLSS AI model is always learning, and our newest update, NVIDIA DLSS 2.3, is available now. It makes smarter use of a game's motion vectors to improve object detail in motion, particle reconstruction, ghosting, and temporal stability. Read full story | https://www.techpowerup.com/289038/nvidia-officially-announces-dlss-2-3 | 2021-11-16T23:44:43Z | NVIDIA DLSS adoption is growing rapidly, accelerated by easy-to-use Unreal Engine plugins and native support in Unity. This month 10 new games get DLSS, and a number of others, including Cyberpunk 2077, get upgraded to the new NVIDIA DLSS 2.3 SDK for even better image quality. With an AI model that continuously improves through training on NVIDIA's supercomputer, NVIDIA DLSS offers the best image quality and performance for over 130 games and applications. To deliver a level of performance and image quality far above that of traditional upscaling technologies, NVIDIA DLSS utilizes an AI model trained on supercomputers, deep game integrations, and GeForce RTX Tensor Cores. Our DLSS AI model is always learning, and our newest update, NVIDIA DLSS 2.3, is available now. 
It makes smarter use of a game's motion vectors to improve object detail in motion, particle reconstruction, ghosting, and temporal stability. In Cyberpunk 2077, which updates to NVIDIA DLSS 2.3 today, it more smartly uses motion vectors to reduce ghosting on fine detail in motion. Take a look at an example below, on the car's side mirror. DOOM Eternal, id Software's much-loved first-person shooter, recently updated to NVIDIA DLSS 2.3, bringing improvements to particle rendering, which reduces streaking that obfuscated the appearance of embers and other small particle effects. NVIDIA DLSS 2.3 is available now in 16 titles, including: Baldur's Gate 3; Bright Memory: Infinite; Crysis 2 Remastered; Crysis 3 Remastered; Cyberpunk 2077; Deathloop; DOOM Eternal; Grand Theft Auto III - Definitive Edition; Grand Theft Auto: San Andreas - Definitive Edition; Grand Theft Auto: Vice City - Definitive Edition; Jurassic World Evolution 2; Marvel's Guardians of the Galaxy; Rise of the Tomb Raider; Shadow of the Tomb Raider; and Sword & Fairy 7. And DLSS 2.3 comes to Farming Simulator 22 on November 22nd when the game launches. | Prediction/Content Synthesis | Computer and Mathematical/Arts, Design, Entertainment, Sports, and Media | null | null | null | null | null | null |
|
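The motion vectors the DLSS announcement keeps mentioning are what any temporal technique uses to find where a pixel was last frame: step backwards along the vector, fetch the history sample, then blend it with the newly rendered frame. A toy 1D sketch of that reprojection idea; the arrays, blend factor, and disocclusion handling are illustrative assumptions, not NVIDIA's actual algorithm:

```python
# Toy temporal reprojection in 1D: reuse last frame's shading by stepping
# backwards along each pixel's motion vector, then blending with the new frame.

def reproject(history, motion, current, blend=0.9):
    out = []
    for i, cur in enumerate(current):
        src = i - motion[i]                 # where this pixel was last frame
        if 0 <= src < len(history):
            out.append(blend * history[src] + (1 - blend) * cur)
        else:
            out.append(cur)                 # disocclusion: no valid history
    return out

history = [0.0, 1.0, 0.0, 0.0]   # bright pixel at index 1 last frame
motion  = [0, 0, 1, 0]           # pixel 2 moved right by one since then
current = [0.0, 0.0, 0.5, 0.0]   # noisy new frame

print(reproject(history, motion, current))
```

Bad motion vectors make the history fetch land on the wrong pixel, which is exactly the ghosting and smearing the 2.3 update says it reduces.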
news | Jeremy Hull | AI and Social Commerce: Digital Marketing Trends to Watch For in 2022 | "Unprecedented" has become a clichéd term to describe the past two years, but that doesn't diminish its accuracy. The global pandemic upended patterns, accelerated adoption of emerging consumer behaviors, and disrupted the carefully laid plans of even the most savvy brands. Agile marketers pivoted quickly, implementing new experiences to address changing consumer needs. The one […] The post AI and Social Commerce: Digital Marketing Trends to Watch For in 2022 appeared first on Brainlabs. | https://www.brainlabsdigital.com/blog/ai-and-social-commerce-digital-marketing-trends-to-watch-for-in-2022/ | 2021-12-03T14:48:58Z | "Unprecedented" has become a clichéd term to describe the past two years, but that doesn't diminish its accuracy. The global pandemic upended patterns, accelerated adoption of emerging consumer behaviors, and disrupted the carefully laid plans of even the most savvy brands. Agile marketers pivoted quickly, implementing new experiences to address changing consumer needs. The one question on every digital marketer's mind is: what will next year bring? It's becoming apparent that while there's a clear point of inflection at the start of the pandemic, moving forward will be a gradual process. 2022 will bring new opportunities for digital marketers across a variety of fronts, ranging from innovative ways to connect with consumers in new spaces to the infusion of AI into more and more marketing practices. Here are two of my predictions for what's to come next year. AI-driven automation becomes standard operating procedure: After being heralded for many years as an upcoming revolution, artificial intelligence-driven marketing has finally arrived, and that arrival has been surprisingly subtle. 
The AI-powered automation Google launched over the past few years is focused on individual features within existing marketing activities, such as auction-time bidding and responsive search ads. Some of these features are even baked right into the functionality of the platform, such as Google's improved Broad match relevance launched last February. We already know that Responsive Search Ads will become the primary ad format in 2022, and we can expect more in-platform AI-driven automation to become the default (and eventually only) option throughout 2022 across Google, Facebook, Microsoft, and other advertising ecosystems. Next year will also see the emergence of Performance Max as a powerful cross-channel marketing solution from Google, enabling advertisers to access more inventory across a variety of ad types in a streamlined fashion. We've been testing Performance Max, and the results have been impressive thus far, although the real test will come in the first half of next year when Google incorporates Smart Shopping campaigns, truly unlocking this campaign type for ecommerce brands. These features are powerful and effective, but only when fueled by accurate marketing data. Digital marketers will need to not only understand how to strategically implement AI-driven automation, but also ensure they're feeding the machine with the most relevant KPIs. The advertisers who see the most benefit from automation in 2022 will be the ones who take the time to ensure robust conversion data gives these tools the best first-party audience and conversion information to incorporate into their calculations. The continued rise of social commerce and livestream shopping: Today's consumers move seamlessly between devices and experiences throughout their day. They are in control of where they consume media, and they're watching more and more short-form video across multiple interaction points, especially on social platforms. 
Savvy brands have built video content strategies to reach consumers on social with powerful video experiences, creating both owned assets and influencer amplification. All of these experiences include a subtle (or not-so-subtle) call to action, attempting to influence viewers to make a purchase or engage in some other way. However, the combination of consumer control and an endless feed of content often interrupts this opportunity. Shoppable videos aim to change all of that. While many platforms have experimented with solutions that integrate the point of conversion into the video experience itself, often they're done as an afterthought, overlaid on top of pre-existing video assets. In the coming year there is a massive opportunity for brands to integrate commerce into social video from the ground up, building experiences that enable viewers to seamlessly purchase within the video itself. Livestream ecommerce in China has demonstrated the potential, and is projected to drive more than $300 billion in sales in that market in 2021. Social video has the opportunity to move from a push branding play to a pull conversion tactic, but only when marketers craft it from the start with that purpose in mind. 2022 will see an explosion of platforms and tools that enable seamless social commerce, and the brands that lean into the opportunity will reap the benefits. For more of our predictions about privacy, the rise of the metaverse, and the final death of third-party cookies, be sure to tune in Wednesday, December 8th, to the Brainlabs webinar "Digital Marketing in 2022: Industry Trends to Watch For." I hope to see you there! | Content Creation/Process Automation/Decision Making/Recommendation | Management/Business and Financial Operations/Arts, Design, Entertainment, Sports, and Media | null | null | null | null | null | null |
|
news | MetricNet, LLC | MetricNet Delivers Groundbreaking Presentation at the 2021 SupportWorld Live Conference | MetricNet Logo MetricNet Logo INCLINE VILLAGE, Nev., Nov. 16, 2021 (GLOBE NEWSWIRE) -- Jeff Rumburg, co-founder and Managing Partner of MetricNet, presented ... | https://finance.yahoo.com/news/metricnet-delivers-groundbreaking-presentation-2021-080000529.html | https://s.yimg.com/ny/api/res/1.2/Z5bInRkZDOb.n.73v2Lh.Q--/YXBwaWQ9aGlnaGxhbmRlcjt3PTEyMDA7aD0zNjc-/https://s.yimg.com/uu/api/res/1.2/6UOa2cVpaa2yaCgJzcEFrg--~B/aD0xMTk3O3c9MzkxOTthcHBpZD15dGFjaHlvbg--/https://media.zenfs.com/en/globenewswire.com/ce95dbab4f0b7b18b7d7289fe0267922 | 2021-11-16T08:00:00Z | MetricNet Logo. INCLINE VILLAGE, Nev., Nov. 16, 2021 (GLOBE NEWSWIRE) -- Jeff Rumburg, co-founder and Managing Partner of MetricNet, presented Session 606, "Artificial Intelligence In Service And Support: The Future Is Finally Here!," at the Annual SupportWorld Live Conference in Orlando, Florida. This marks the 29th consecutive year Mr. Rumburg has been selected as a speaker for the Annual SupportWorld Conference & Expo. In this presentation, Mr. Rumburg shared how one large insurance company leveraged AI to shift left, improve the customer experience, and dramatically reduce their costs. Additionally, he defined metrics for AI, provided industry benchmarks for the metrics of AI, and shared a template for measuring the ROI of AI in service and support. "Promises of bot-powered agentless support, problems that automatically correct themselves, and preemptive problem resolution, the holy grail of service and support, have gone largely unfulfilled until recently," said Jeff Rumburg. "But artificial intelligence in IT service and support is at an inflection point. 
There are now numerous examples of large enterprises that have harnessed the power of AI to reduce ticket volumes and resolution times, and improve the customer experience, all while getting smarter over time through machine learning." Those interested in this presentation can find a copy of the slide deck on MetricNet's website. Mr. Rumburg also facilitated 'Succeeding with Metrics,' an interactive, consultative HDI Training Workshop, on Sunday, Nov. 7, 2021, and Monday, Nov. 8, 2021. Mr. Rumburg is the winner of the Ron Muns Lifetime Achievement Award, was named to HDI's Top 25 Thought Leaders list for four consecutive years, and has served on HDI's Strategic Advisory Board. He is co-founder and Managing Partner of MetricNet, LLC, where he is responsible for global strategy, product development, and client engagement delivery. As a leading expert in benchmarking and re-engineering, Mr. Rumburg authored a best-selling book on benchmarking, and has been retained as a benchmarking expert by such iconic companies as American Express, USAA, Coca-Cola, and Emory Healthcare. If you would like more information about benchmarking your service and support functions, please visit http://www.metricnet.com or e-mail MetricNet at info(at)metricnet(dot)com. About MetricNet: MetricNet is the global leader in IT service and support benchmarking. More than half of the FORTUNE 500 rely on MetricNet benchmarks to improve and optimize their performance. MetricNet is the first, and still the only, company to offer downloadable service desk and desktop support benchmarks from their website. 
With a global benchmarking database of nearly 4,000 IT service and support benchmarks, MetricNet has the most comprehensive database of process and performance metrics in the industry. Related Images: Image 1: MetricNet Logo. This content was issued through the press release distribution service at Newswire.com. Attachment | Decision Making/Process Automation | Management/Business and Financial Operations | null | null | null | null | null | null |
news | BrainChip Quadruples U.S. Headquarter Space to Facilitate Growth as Company Accelerates Commercialization | ALISO VIEJO, Calif.--(BUSINESS WIRE)--BrainChip Quadruples U.S. Headquarter Space to Facilitate Growth as Company Accelerates Commercialization | https://www.businesswire.com/news/home/20211110006527/en/BrainChip-Quadruples-U.S.-Headquarter-Space-to-Facilitate-Growth-as-Company-Accelerates-Commercialization | http://www.businesswire.com/images/bwlogo_square.png | 2021-11-11T03:23:47Z | ALISO VIEJO, Calif.--(BUSINESS WIRE)--BrainChip Holdings Ltd (ASX: BRN), (OTCQX: BCHPY), a leading provider of ultra-low power, high-performance artificial intelligence technology and the world's first commercial producer of neuromorphic AI chips, today announced it is moving its U.S. headquarters to larger facilities in order to support expected customer growth as the company continues to move toward commercialization of its Akida event domain neural processor and comprehensive development environment. BrainChip's new 10,000 sq. ft. (929 sq. m) headquarters are located in Laguna Hills, California. The new offices will house the company's development, engineering, sales, finance, and marketing teams. By expanding its physical footprint now, BrainChip is well positioned to scale the services and processes it is putting into place to satisfy expected customer and support infrastructure needs. The date of the move is November 15. With the recent announcement of the availability of development kits for its Akida advanced neural networking processor, BrainChip is anticipating a ramp-up to production status of neuromorphic-based edge AI applications. 
The company's expansion of office space reflects the expansion of strategic alliances and business development underway to accelerate commercialization of BrainChip IP. "We've reached an inflection point in which the need to find additional space to support our growth could no longer be ignored," said Anil Mankar, BrainChip co-founder and chief development officer. "Whereas the trend for many businesses today is to shrink their physical footprint, it just makes sense to move our headquarters to a facility that can support the growth of our development, production and sales teams. We look forward to having the room necessary to support our goals of delivering the next generation of AI at the edge to markets throughout the world." BrainChip's Akida brings artificial intelligence to the edge in a way that existing technologies are not capable of. The solution is high-performance, small, ultra-low power and enables a wide array of edge capabilities. The Akida (NSoC) and intellectual property can be used in applications including Smart Home, Smart Health, Smart City and Smart Transportation. These applications include but are not limited to home automation and remote controls, industrial IoT, robotics, security cameras, sensors, unmanned aircraft, autonomous vehicles, medical instruments, object detection, sound detection, odor and taste detection, gesture control and cybersecurity. About BrainChip Holdings Ltd (ASX: BRN, OTCQX: BCHPY): BrainChip is a global technology company that is producing a groundbreaking neuromorphic processor that brings artificial intelligence to the edge in a way that is beyond the capabilities of other products. The chip is high performance, small, ultra-low power and enables a wide array of edge capabilities that include on-chip training, learning and inference. The event-based neural network processor is inspired by the spiking nature of the human brain and is implemented in an industry standard digital process. 
By mimicking brain processing, BrainChip has pioneered a processing architecture, called Akida, which is both scalable and flexible to address the requirements in edge devices. At the edge, sensor inputs are analyzed at the point of acquisition rather than through transmission via the cloud to a data center. Akida is designed to provide a complete ultra-low power and fast AI Edge Network for vision, audio, olfactory and smart transducer applications. The reduction in system latency provides faster response and a more power efficient system that can reduce the large carbon footprint of data centers. Additional information is available at https://www.brainchipinc.com Follow BrainChip on Twitter: https://www.twitter.com/BrainChip_inc Follow BrainChip on LinkedIn: https://www.linkedin.com/company/7792006 | Unknown | Unknown | null | null | null | null | null | null |
|
news | Dan Leibson | SEO The LSG Way: Earn Your Knowledge | I love this scene from Jurassic Park. People always remember this scene for the could/should line but I think that really minimizes Malcolm's holistically excellent speech. Specifically, this scene is an amazing analogy for Machine Learning/AI technology right now. I’m not going to dive too much into the ethics piece here as Jamie Indigo […]The post SEO The LSG Way: Earn Your Knowledge appeared first on Local SEO Guide. | https://www.localseoguide.com/seo-the-lsg-way-earn-your-knowledge/ | 2021-11-17T15:26:23Z | I love this scene from Jurassic Park. People always remember this scene for the could/should line, but I think that really minimizes Malcolm's holistically excellent speech. Specifically, this scene is an amazing analogy for Machine Learning/AI technology right now. I'm not going to dive too much into the ethics piece here, as Jamie Indigo has a couple of amazing pieces on that already, and established academics and authors like Dr. Safiya Noble and Ruha Benjamin best deal with the ethics teardown of search technology. I'm here to talk about how we here at LSG earn our knowledge and some of what that knowledge is. "I'll tell you the problem with the scientific power that you are using here; it didn't require any discipline to attain it. You read what others had done and you took the next step." I feel like this scenario described in the screenshot (poorly written GPT-3 content that needs human intervention to fix) is a great example of the mindset described in the Jurassic Park quote. This mindset is rampant in the SEO industry at the moment. The proliferation of programmatic sheets and collab notebooks and code libraries that people can run without understanding them should need no further explanation to establish. Just a basic look at the SERPs will show a myriad of NLP and forecasting tools that are released while being easy to access and use without any understanding of the underlying maths and methods.
$SEMR just deployed their own keyword intent tool, totally flattening a complex process without their end-users having any understanding of what is going on (but more on this another day). These maths and methods are absolutely critical to be able to responsibly deploy these technologies. Let's use NLP as a deep dive, as this is an area where I think we have earned our knowledge. "You didn't earn the knowledge for yourselves, so you don't take any responsibility for it." The responsibility here is not ethical, it's outcome-oriented. If you are using ML/NLP, how can you be sure it's being used for client success? There is an old data-munging adage, "Garbage In, Garbage Out," that is about illustrating how important initial data is: https://xkcd.com/1838/ The stirring here just really makes this comic. It's what a lot of people do when they don't understand the maths and methods of their machine learning and call it "fitting the data". This can also be extrapolated from data science to general logic, e.g. the premise of an argument. For instance, if you are trying to use a forecasting model to predict a traffic increase, you might assume that "the traffic went up, so our predictions are likely true," but you literally can't understand that without understanding exactly what the model is doing. If you don't know what the model is doing, you can't falsify it or engage in other methods of empirical proof/disproof. HUH? Exactly, so let's use an example. Recently Rachel Anderson talked about how we went about trying to understand the content on a large number of pages, at scale, using various clustering algorithms. The initial goal of using the clustering algorithms was to scrape content off a page, gather all this similar content over the entire page type on a domain, and then do it for competitors. Then we would cluster the content and see how it grouped it in order to better understand the important things people were talking about on the page.
Now, this didn't work out at all. We went through various methods of clustering to see if we could get the output we were looking for. Of course, we got them to execute, but they didn't work. We tried DBSCAN, NMF-LDA, Gaussian Mixture Modelling, and KMeans clustering. These things all do functionally the same thing, cluster content. But the actual method of clustering is different: https://scikit-learn.org/stable/modules/clustering.html We used the scikit-learn library for all our clustering experiments, and you can see here in their knowledge base how different clustering algorithms group the same content in different ways. In fact, they even break down some potential use cases and scalability: https://scikit-learn.org/stable/modules/clustering.html Not all of these ways are likely to lead to positive search outcomes, which is what it means to "work" when you do SEO. It turns out we weren't actually able to use these clustering methods to get what we wanted. We decided to move to BERT to solve some of these problems, and more or less this is what led to Jess Peck joining the team to own our ML stack so they could be developed in parallel with our other engineering projects. But I digress. We built all these clustering methods, we knew what worked and didn't work with them, was it all a waste? Hell no, Dan! One of the things I noticed in my testing was that KMeans clustering works incredibly well with lots of concise chunks of data. Well, in SEO we work with keywords, which are lots of concise chunks of data. So after some experiments with applying the clustering method to keyword data sets, we realized we were on to something. I won't bore you on how we completely automated the KMeans clustering process we now use, but understanding the ways various clustering maths and processes worked let us use earned knowledge to turn a failure into success. The first success is allowing the rapid ad-hoc clustering/classification of keywords.
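The ad-hoc keyword clustering described here can be sketched roughly as follows. This is a minimal illustration with scikit-learn, not LSG's actual pipeline: the sample keywords, the TF-IDF featurization, and the cluster count of 2 are all assumptions made for the demo.

```python
# Sketch of KMeans keyword clustering (illustrative only, not LSG's production setup).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

keywords = [
    "buy running shoes", "best running shoes", "running shoes sale",
    "python tutorial", "learn python online", "python for beginners",
]

# Turn each keyword into a TF-IDF vector so KMeans has numeric features to work on.
vectors = TfidfVectorizer().fit_transform(keywords)

# Cluster into 2 groups; a fixed random_state keeps the run reproducible.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for kw, label in zip(keywords, labels):
    print(label, kw)
```

With clearly separable vocabularies like these, the shoe keywords and the Python keywords fall into different clusters; on real keyword sets the cluster count is the main knob to tune.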
It takes about 1hr to cluster a few hundred thousand keywords, and smaller amounts than hundreds of thousands are lightning-fast. Neither of these companies are clients, just used them to test, but of course if either of you wants to see the data just HMU. We recently redeveloped our own dashboarding system using GDS so that it can be based around our more complicated supervised keyword classification OR using KMeans clustering in order to develop keyword categories. This gives us the ability to categorize a client's keywords even on a smaller budget. Here is Heckler and I testing out using our slackbot Jarvis to KMeans cluster client data in BigQuery and then dump the output in a client-specific table. This gives us an additional product that we can sell, and offer more sophisticated methods of segmentation to businesses that wouldn't normally see the value in expensive big data projects. This is only possible through earning the knowledge, through understanding the ins and outs of specific methods and processes, to be able to use them in the best possible way. This is why we have spent the last month or so with BERT, and are going to spend even more additional time with it. People may deploy things that hit BERT models, but for us, it's about a specific function of the maths and processes around BERT that makes it particularly appealing. How is this another responsibility of SEOs? Thanks, random internet stranger, it's not. The problem is with any of this ever being an SEO's responsibility in the first place. Someone who writes code and builds tools to solve problems is called an engineer; someone who ranks websites is an SEO. The Discourse often forgets this key thing. This distinction is a core organizing principle that I baked into the cake here at LSG and is reminiscent of an ongoing debate I used to have with Hamlet Batista. It goes a little something like this: Should we be empowering SEOs to solve these problems with python and code etc.?
Is this a good use of their time, versus engineers who can do it quicker/better/cheaper? I think empowering SEOs is great! I don't think giving SEOs a myriad of responsibilities that are best handled by several different SMEs is very empowering though. This is why we have a TechOps team that is 4 engineers strong in a 25-person company. I just fundamentally don't believe it's an SEO's responsibility to learn how to code, to figure out what clustering methods are better and why, or to learn how to deploy at scale and make it accessible. When it is, then they get shit done (yay) standing on the shoulders of giants and using unearned knowledge they don't understand (boo). The rush to get things done the fastest while leveraging others' earned knowledge (standing on the shoulders of giants) leaves people behind. And SEOs take no responsibility for that either. Leaving your Team Behind. A thing that often gets lost in this discussion is that when information gets siloed in particular individuals or teams, then the benefit of said knowledge isn't generally accessible. Not going to call anyone out here, but before I built out our TechOps structure I did a bunch of "get out of the building" research, talking to other people at other orgs to see what did or did not work about their organizing principles. Basically what I heard fit into one of two buckets: (1) Specific SEOs learn how to develop advanced cross-disciplinary skills (coding, data analysis etc.) and the knowledge and utility of said knowledge aren't felt by most SEOs and clients. (2) The information gets siloed off in a team, e.g. Analytics or Dev/ENG team, and then gets sold as an add-on, which means said knowledge and utility aren't felt by most SEOs and clients. That's it, that's how we get stuff done in our discipline. I thought this kinda sucked. Without getting too much into it here, we have a structure that is similar to a DevOps model.
We have a team that builds tools and processes for the SMEs that execute on SEO, Web Intelligence, Content, and Links to leverage. The goal is specifically to make the knowledge and utility accessible to everyone, and all our clients. This is why I mentioned how KMeans and owned knowledge helped us continue to work towards this goal. I'm not going to get into Jarvis stats (obviously we measure usage), but suffice it to say it is a hard-working bot. That is because a team is only as strong as the weakest link, so rather than burden SEOs with additional responsibility, orgs should focus on earning knowledge in a central place that can best drive positive outcomes for everyone. | Content Synthesis/Decision Making | Business and Financial Operations/Computer and Mathematical | null | null | null | null | null | null
|
news | Hilbert Hagedoorn | NVIDIA has officially announced the release of DLSS 2.3. | The adoption of NVIDIA DLSS is increasing at a quick pace, aided by the availability of simple-to-use Unreal Engine plugins and native Unity compatibility.... | https://www.guru3d.com/news-story/nvidia-has-officially-announced-the-release-of-dlss-2-3.html | https://www.guru3d.com/news_teaserimage/1514 | 2021-11-17T08:06:08Z | The adoption of NVIDIA DLSS is increasing at a quick pace, aided by the availability of simple-to-use Unreal Engine plugins and native Unity compatibility.This month 10 new games get DLSS, and a number of others, including Cyberpunk 2077, get upgraded to the new NVIDIA DLSS 2.3 SDK for even better image quality. With an AI model that continuously improves through training on NVIDIA's supercomputer, NVIDIA DLSS offers the best image quality and performance for over 130 games and applications. To deliver a level of performance and image quality far above that of traditional upscaling technologies, NVIDIA DLSS utilizes an AI model trained on supercomputers, deep game integrations, and GeForce RTX Tensor Cores. Our DLSS AI model is always learning, and our newest update, NVIDIA DLSS 2.3, is available now. It makes smarter use of a game's motion vectors to improve object detail in motion, particle reconstruction, ghosting, and temporal stability. In Cyberpunk 2077, which updates to NVIDIA DLSS 2.3 today, it more smartly uses motion vectors to reduce ghosting on fine detail in motion. Take a look at an example below, on the car's side mirror.DOOM Eternal, id Software's much-loved first-person shooter, recently updated to NVIDIA DLSS 2.3, bringing improvements to particle rendering, which reduces streaking that obfuscated the appearance of embers and other small particle effects. 
NVIDIA DLSS 2.3 is available now in 16 titles, including: Baldur's Gate 3, Bright Memory: Infinite, Crysis 2 Remastered, Crysis 3 Remastered, Cyberpunk 2077, Deathloop, DOOM Eternal, Grand Theft Auto III - Definitive Edition, Grand Theft Auto: San Andreas - Definitive Edition, Grand Theft Auto: Vice City - Definitive Edition, Jurassic World Evolution 2, Marvel's Guardians of the Galaxy, Rise of the Tomb Raider, Shadow of the Tomb Raider, and Sword & Fairy 7. And DLSS 2.3 comes to Farming Simulator 22 on November 22nd when the game launches. | Content Synthesis/Image Analysis/Prediction | Unknown | null | null | null | null | null | null
news | Mr. HIStalk | Readers Write: Filling the Healthcare Data Glass: The Glass Doesn’t Need to Stay Half Empty | Filling the Healthcare Data Glass: The Glass Doesn’t Need to ... | https://histalk2.com/2021/11/29/readers-write-filling-the-healthcare-data-glass-the-glass-doesnt-need-to-stay-half-empty/ | 2021-11-29T16:03:20Z | Filling the Healthcare Data Glass: The Glass Doesn't Need to Stay Half Empty. By Alex MacLeod. Alex MacLeod is director of healthcare commercial initiatives for InterSystems of Cambridge, MA. In recent years, there has been a lot of talk about the unfulfilled promises of artificial intelligence (AI) in healthcare and concerns about how to effectively incorporate it into practice and realize immediate value. There is a real "glass half empty" mentality at play due to false starts and over-ambitious expectations for AI adoption and commercialization. But that doesn't need to, and shouldn't, be the case. Google's hospital partnership to collaborate on algorithm development using patient records for AI development is a strong sign of healthcare AI's imminent proliferation. Gone is the barrier of highly fragmented patient data. This is a significant market shift, and other giants in tech and healthcare will follow Google's lead. The question now is, what can and should the healthcare IT industry do to prepare? We will answer that by looking at three core areas: data, patterns, and areas of caution. AI in healthcare has had positive growth in recent years, but the meaningful application of AI products (FDA-approved AI products) and the widespread application of data to the decision-making process has lagged, according to a recent study published in the Medical Futurist Institute. There have been major recent advances in sensor technology, allowing for a broad range of devices that help inform patients about their health or fitness and warn about risks.
The sensors generate raw data, but the interpretation of it is based on AI analysis, which hasn't developed at the same rapid pace. IT departments, payers, providers, and patients are overwhelmed with the high volume of data generated on a daily basis and need to better articulate their end goal for its use. To do so, they need to pay close attention to their current processes and determine what can be done differently and what needs to change in order to be able to analyze data and apply it to future decisions. The biggest questions those in healthcare face in regard to health information are: What do we do with all this data? What is most important to analyze? How can it be made actionable? (i.e., can it be used to become compliant with regulations?) To answer those questions, we need to start by understanding what the data represents and asking a few more questions. Is the data set composed of lab results, physician-collected, or patient-submitted data? Why was it generated and collected in the first place? The answers are typically more straightforward in other industries than healthcare. That's why it is important to take a close look at the data and identify patterns and similarities. Analysis in healthcare AI is different from other consumer-facing algorithms. Healthcare AI has less algorithm-friendly base data compared to social media or online shopping, for example. Healthcare algorithms work with complicated inputs of clinical notes, medical imaging, and sensor readings. Outcomes are relatively well defined in non-healthcare AI settings, most commonly in terms of attention or purchase. In healthcare, outcomes have time and severity dimensions on top of opportunity for interference with other effects, not all of which can be stratified through raw statistics. Current effective applications of AI in healthcare include the use of ML tools in triage practices and administration.
For example, what makes it effective in triage is how AI nuances the health system's basic risk scoring systems as a way to identify patients who need immediate attention or who require higher acuity resources and pathways. That said, patients must consent to their data being applied to healthcare AI algorithms, and to provide value, the data must be made actionable. It must be clean, comprehensive, and normalized data where there are no duplicate records, formatting errors, incorrect information, or mismatched terminology. This gives those analyzing the data complete confidence in how and why it was curated. Collecting data always introduces the risk of the information being repurposed, a possibility spotlighted when fitness tracking app Strava released a dataset of 3 trillion distinct GPS readings that inadvertently exposed US military bases in Afghanistan. Modern bots, and to some extent even legitimate social media marketing tools, are making efficient use of analytics and AI to game the platforms' algorithms in order to attract more views, clicks, and likes. But when such technology ends up in the wrong hands, the focus may be on spreading misinformation rather than the intended use. As with most technology, discretion is key. Collect and analyze only the minimum necessary. Don't invite scrutiny over private data or enable access to it. Remain diligent in your data practices. It's understandable why people see the glass as half empty, but we have reached an inflection point in healthcare AI, a point at which we can add water to the glass. To add to the glass and fully benefit from the anticipated results, we should embrace incoming regulation and think hard about self-regulation measures. Healthcare IT practitioners should closely monitor how laws and oversight will adapt in real time, similar to what we have seen with the FDA Digital Health Innovation Action Plan.
As Google's big step forward in healthcare AI development signals a new level of digitization of health, we can expect changing attitudes towards healthcare AI, including an uptick in trustworthiness and increasing differentiation from other categories of consumer AI. AI in healthcare has strong potential if we harness it correctly. In the right scenarios, AI augments the work of healthcare providers and doesn't replace them, as long as we maintain a little bit of human intelligence to complement the artificial. | Content Synthesis/Detection and Monitoring/Discovery | Healthcare Practitioners and Support/Computer and Mathematical | null | null | null | null | null | null
|
news | ExchangeWire PressBox | How Deep Learning Turns Contextual Targeting Into a Cutting-Edge Tool | RTB House, a market leader in deep learning-powered marketing technologies, are proud to announce the release of its globally available industry-leading Context AI solution. The solution will play a key role in helping advertisers navigate the cookieless landscape. Google’s privacy-positive decision [...]The post How Deep Learning Turns Contextual Targeting Into a Cutting-Edge Tool appeared first on ExchangeWire.com. | https://www.exchangewire.com/blog/2021/11/12/deep-learning-contextual-targeting-cutting-edge-tool/ | 2021-11-12T11:53:26Z | RTB House, a market leader in deep learning-powered marketing technologies, are proud to announce the release of its globally available industry-leading Context AI solution. The solution will play a key role in helping advertisers navigate the cookieless landscape. Google's privacy-positive decision to retire third-party tracking cookies means that individual identity-based targeting methods will soon become ineffective. This means that forward-looking MarTech companies need to work on anonymous targeting solutions. One of the best cookieless targeting solutions is also one of the industry's oldest: contextual targeting. This method fell out of favour when third-party tracking cookies became popular, but breakthroughs in deep learning are making contextual targeting more viable than ever. "We have always believed that privacy-preserving advertising is the future; the writing was on the wall for third-party tracking cookies long before Google made their announcement in early 2021," said Łukasz Wodarczyk, VP of programmatic ecosystem growth and innovation. "It seems that other tracking methods perceived as surveillance-based, such as fingerprinting, will be phased out in the next decade. This highlights the need for novel approaches that enable advertisers to reach the right audience without identifying individual consumers."
RTB House's Context AI solution is the result of four years of active development and takes advantage of other innovations, such as a Brand Safety solution, in order to maximise results. Key features of the Context AI solution include: - Context AI is scanning 95% of the internet, which translates into 1.5 million articles scanned every hour in over 40 languages. - Granular contextual targeting, including phrases and inflection, allows for the creation of contextual audiences of users interested in a given topic. - RTB House's Deep Learning solutions are able to deliver up to 50% more value within the same marketing budget compared to legacy solutions. "Our algorithms have enabled us to gather and interpret data from 95% of the web, in over 40 languages," explained Mateusz Jędrocha, head of upper-funnel solutions development. "This rich data is then run through our Natural Language Processing algorithms, which use a layer-based approach to understand inflection and context, enabling us to ensure that content is placed on the most relevant sites to reach quality users." While this solution has been developed to thrive in the cookieless future, we believe it is very relevant today. Our Three Pillars marketing approach is designed to maximise the benefits of all existing targeting solutions: - Group-based targeting - that leverages 1st party data, supported by browsers, such as Google's Privacy Sandbox - Individual-based targeting - that uses subsidiary technologies to 3rd-party cookies - Contextual targeting - that uses advertising relevant to a platform's audience. Context AI enables us to take our contextual targeting game to the next level, and can already provide real benefits for any agency or brand. When done right, contextual targeting allows for the creation of contextual audiences of users interested in a given topic. Those audiences are then reached in the most efficient way, thanks to the deep learning algorithms responsible for bidding.
The same technology decides what ad format and ad content to display, and then creates a personalised experience while fully respecting user privacy. To find out more visit RTB House blog | Content Synthesis/Decision Making/Recommendation | Management/Business and Financial Operations | null | null | null | null | null | null |
|
news | MartianMan | 99° - Free Udemy Courses: SEO & Copywriting, MS Excel, Google Ads, YouTube Ads, Public Speaking, Computer Basics, Adobe Lightroom & More | Have fun reading. All these codes will expire soon and are limited to a maximum of 1k coupon redemptions per course as per new Udemy policy.
[Best Seller] [7h 15m] BEST of Content Marketing, SEO & Copywriting + 23 Templates Code=1NOV111
[4.2] [6h 11m] Google Ads, YouTube Ads & Google Retargeting Made FAST & FUN Code=1NOV111
[4.1] [3h 38m] The Complete Computer Basics Guide for Beginners: 2021 Code=C3FD1FEEA229AE134175
[4.4] [4h 28m] Microsoft Excel - Basic Excel/Advanced Excel Formulas Code=F3B7CF8E9A984BB10FDF
[4.5] [8h 0m] Adobe Lightroom Masterclass - Beginner to Expert Code=YOUACCEL48154
[4.3] [1h 28m] Modern JavaScript for React JS - ES6 Code=FREENOV3
[3.9] [45m] Introduction to Microservices Code=FREENOV3
[4.5] [45m] Utreon Masterclass - Ultimate Guide to YouTube Alternative Code=6135403D2834313CDFDB
[4.4] [1h 20m] Stop Blushing Fast with Hypnosis Mind Programming Code=A2AE42E52426061B32ED
[4.1] [25h 40m] Adobe Creative Cloud 2021 Ultimate Course Code=TRY10FREE112102
[3.9] [5h 0m] Microsoft Teams Ultimate Course 2021 Code=TRY10FREE112102
[New] [1h 15m] Digishock 3.0: The Future of Artificial Intelligence (AI) Code=SHOCKZDIGI
[4.9] [1h 14m] Sales Skills Training: Explode Your Sales with Online Video Code=2B2D79484CC8098B5A80
[4.6] [1h 2m] Body Language: Appear Confident and Poised When You Speak Code=767FD473CF518377C057
[4.6] [1h 23m] YouTube Marketing: Become a Digital TV Star in Your Niche Code=6A91F46A37C27BAFE51B
[4.6] [1h 23m] You Can Deliver a TED-Style Talk Presentation (Unofficial) Code=2E45F59CE46E312121FC
[4.2] [4h 36m] Public Speaking for People Who Hate Public Speaking Code=C3F49370839A75BCAB99
Popular Discounted Courses
[4.4] [95h 23m] Ultimate PHP, Laravel, CSS & Sass! Learn PHP, Laravel & Sass $9.99 Code=LEARNWEBDEV
[4.2] [42h 32m] Project Management Professional Certification Program (PMP) $12.99 Code=BLACKFRIDAY
[Hot & New] [67h 26m] Digital Marketing Course as PROJECT & FUN: 88 DAYS CHALLENGE $9.99 Code=2NOV111
[4.3] [95h 23m] Ultimate SEO, Social Media, & Digital Marketing Course 2022 $12.99 Code=2NOV111
[4.4] [30h 49m] Communication Skills for Beginners $12.99 Code=THANKS09
[Best Seller] [31h 21m] Soft Skills: The 11 Essential Career Soft Skills $12.99 Code=THANKS09
[4.7] [21h 33m] NEW AWS Certified Solutions Architect Associate 2021 SAA-C02 $11.99 Code=AWSNOV21
[4.7] [12h 5m] AWS Certified Cloud Practitioner - Complete NEW Course 2021 $11.99 Code=AWSNOV21
[4.3] [28h 48m] Complete Personal Development Personal Transformation Course $12.99 Code=THANKS09
[4.4] [22h 16m] The Complete Job Interviewing Skills Masterclass Course $12.99 Code=THANKS09 | https://www.hotukdeals.com/deals/free-udemy-courses-seo-copywriting-ms-excel-google-ads-youtube-ads-public-speaking-computer-basics-adobe-lightroom-more-3825537 | 2021-11-16T14:19:41Z | hotukdeals.com - The Largest Deal Community in the UK. hotukdeals is a community for deal seekers. Find and share the best deals, promotional codes and vouchers from on and off the web. *So you can easily identify outgoing links on our site, we've marked them with an "*" symbol. Links on our site are monetised, it means we can keep our platform free to use, without compromising on impartiality. Whether or not we're paid doesn't change how hot a deal can potentially get - that's only up to hotukdeals members. We believe it should always be up to our community to decide which deals fly and which fall. Read more about how hotukdeals makes money. hotukdeals is a credit broker, not a lender.
This is in relation to some products falling under Broadband and Phone Contracts, Finance & Insurance and Car & Motorcycle.hotukdeals is a trading name of Pepper Deals Ltd (FRN 798319) an Appointed Representative of Funding Falcon (FRN: 743100) who is authorised and regulated by the Financial Conduct Authority (FCA). Pepper Deals Ltd is registered England and Wales. Number 9729292. Registered office: First Floor, 2-4 Holywell Lane, Hackney, London EC2A 3ET.Note: products under: Home & Living, Electronics, Culture & Leisure, Fashion & Accessories, Family & Kids, Sports & Outdoors, Groceries, Gaming, Travel, Health & Beauty, Services, Garden & DIY are not regulated by the FCA. If you wish to make a complaint or contact us, you can use the address above, or follow our complaints procedure. | Personalization/Recommendation | Education, Training, and Library/Arts, Design, Entertainment, Sports, and Media | null | null | null | null | null | null |
|
news | Bloomberg | Did that artificially-intelligent chatbot just crack a rude joke? | The training of AI systems ought to include an ethics programme | https://www.livemint.com/opinion/online-views/did-that-artificially-intelligent-chatbot-just-crack-a-rude-joke-11638117649443.html | 2021-11-28T16:44:47Z | PolyAI Ltd is an ambitious startup that creates artificial voices to replace call centre operators. Based in London, it has raised $28 million to bring AI-powered customer service to Metro Bank, BP and others. The idea is that instead of the nightmare of dialling random digits in a decision tree, you can instead ask to, say, book a table, and a voice, with just the slightest inflection of its machine-learning origins, responds with great civility. That's nice. But there was a brief moment two years ago when it wasn't polite at all. A software developer with PolyAI who was testing the system asked about booking a table for himself and a Serbian friend. "Yes, we allow children at the restaurant," the voice bot replied, according to PolyAI founder Nikola Mrksic. Seemingly out of nowhere, the bot was trying to make an obnoxious joke about people from Serbia. When it was asked about bringing a Polish friend, it replied, "Yes, but you can't bring your own booze." Mrksic, who is Serbian, admits that the system appeared to think people from Serbia were immature. "Maybe we are," he says. He told his team to recalibrate the system to prevent it from stereotyping again. Now, he says, the problem has been fixed for good and the bots won't veer off into anything beyond narrow topics of booking tables and cancelling mobile phone subscriptions. But Mrksic also doesn't know why the bot came out with the answers it did.
Perhaps it was because PolyAI's language model, like many others being used today, was trained by processing millions of conversations on Reddit, the popular forum that sometimes veers into misogyny and general hotheadedness. Regardless, his team's discovery also highlights a disconcerting trend in AI: it's being built with relatively little ethical oversight. In a self-regulated industry taking on greater decision-making roles, that raises the risk of bias and intrusion, or worse, if AI ever surpasses human intelligence. AI systems are finding their way into more applications each year. In 2021, the hot new areas were autonomous vehicles and cyber security, according to a report this week from market research firm Pitchbook, which tracks venture capital deal flows. Future growth areas will be lending analytics, drug discovery and sales and marketing. AI startups are also going public or selling themselves at high valuations. After growth faltered in 2019, investors are now seeing outsized returns on AI startups. Globally, they've produced $166.2 billion in exit capital so far in 2021, more than tripling disclosed deal values for all of last year, according to Pitchbook. The great allure of AI, the basic pitch of investors like Cathie Wood of Ark Invest, is that algorithms are so cheap to implement that their marginal cost over time will be practically nil. But what if there's a cost to human wellbeing? How is that measured? And if software designers can't tell how a chatbot came up with a rude joke, how might they investigate high-stakes systems that crash cars or make bad lending decisions? One answer is to build in ethical oversight of such systems from the start, like the independent committees used by hospitals and some governments. That would mean more investment in ethics research, which is currently at inadequate levels. 
A survey published this year by British tech investors Ian Hogarth and Nathan Benaich showed there aren't enough people working on safety at top AI companies. They queried firms like OpenAI, the AI research lab co-founded five years ago by Elon Musk, and typically found just a handful of safety researchers at each company. In May, some of OpenAI's top researchers in future AI safety also left. OpenAI and Alphabet's AI lab DeepMind are racing to develop artificial general intelligence, or AGI, a hypothetical landmark for the day computers surpass humans in broad cognitive abilities, including spatial, numerical, mechanical and verbal skills. Computer scientists who believe that will happen often say the stakes are astronomically high. "If this thing is smarter than us, then how do we know it's aligned with our goals as a species?" says investor Hogarth. Another answer for current uses of AI is to train algorithms more carefully by using repositories of clean, unbiased data (and not just pilfering from Reddit). A project called Big Science is training one such language model with the help of 700 volunteer researchers around the world. "We are putting in thousands of hours into curating and filtering data," says Sasha Luccioni, a research scientist at language-processing startup Hugging Face, which is helping organize the project. That could be an important alternative for companies building chatbots, but it also shouldn't be left to volunteers to pick up the slack. AI companies big and small must invest in ethics, too. Parmy Olson is a Bloomberg Opinion columnist covering technology. | Digital Assistance/Process Automation | Business and Financial Operations/Sales and Related/Office and Administrative Support | null | null | null | null | null | null
|
news | PR Newswire | Odyssey Announces Agreement to Combine with BenevolentAI | Odyssey Acquisition S.A. ("Odyssey"), a Euronext Amsterdam-listed special-purpose acquisition company, and BenevolentAI, a leading clinical-stage AI drug... | https://finance.yahoo.com/news/odyssey-announces-agreement-combine-benevolentai-060000260.html | https://s.yimg.com/uu/api/res/1.2/BcbD6Untvv5c6VWKxFGh1A--~B/aD04MTt3PTQwMDthcHBpZD15dGFjaHlvbg--/https://media.zenfs.com/en/prnewswire.com/7ccf5db736c007b5379bd46b1115c5dc | 2021-12-06T06:00:00Z | BenevolentAI is a leading, clinical-stage AI drug discovery company that combines advanced AI and machine learning with cutting-edge science to discover and develop novel and more effective medicines. The combination of BenevolentAI with Odyssey, a €300m Euronext Amsterdam-listed special-purpose acquisition company focused on European healthcare and technology, represents the largest European SPAC merger to date and one of the largest Euronext Amsterdam biotech listings ever. The combination values BenevolentAI at a pre-money valuation of €1.1 billion and a post-money valuation of €1.5 billion prior to any redemptions; completion is expected in Q1 2022. A €135m fully committed private placement in Odyssey Class A ordinary shares (the "PIPE") comes from existing BenevolentAI shareholder Temasek, BenevolentAI strategic partner AstraZeneca, healthcare experts Ally Bridge Group and Invus, as well as a number of leading institutional investors. AstraZeneca is to expand its existing collaboration with BenevolentAI, further validating the scientific leadership of BenevolentAI's platform. Net transaction proceeds of up to €390m[1] include the €135m fully-committed PIPE and €300m of gross cash held in escrow by Odyssey. The transaction will enable BenevolentAI to continue investing in its innovative technology platform, accelerate the scale-up of its clinical pipeline, consolidate its leadership position in AI-enabled drug discovery and deliver multiple value inflection points in the near future. Odyssey believes that BenevolentAI, thanks to its ground-breaking AI-based platform, is uniquely positioned to benefit from the increasing focus of established pharma companies on AI-augmented drug discovery. Olivier Brandicourt, former CEO of Sanofi, and Jean Raby, former CEO of Natixis Investment Managers, are to join the BenevolentAI Board upon completion of the Combination. Post-combination, BenevolentAI will continue to be a UK-headquartered company growing its team and operations in the United Kingdom and the United States. LONDON and AMSTERDAM, Dec. 6, 2021 /PRNewswire/ -- Odyssey Acquisition S.A. ("Odyssey"), a Euronext Amsterdam-listed special-purpose acquisition company, and BenevolentAI, a leading clinical-stage AI drug discovery company, today announced that they have entered into a definitive agreement for a business combination (the "Combination"). The terms of the Combination value BenevolentAI at a pre-money valuation of €1.1 billion and a post-money valuation of up to €1.5 billion. Net transaction proceeds are expected to be up to €390m[1] including the €135m fully-committed PIPE and €300m of gross cash held in escrow by Odyssey. The funds will be used to accelerate BenevolentAI's development, scale up its clinical pipeline, continue investment in its technology platform, consolidate its leadership position in AI-enabled drug discovery and deliver multiple value inflection points in the near future. MANAGEMENT COMMENTARY: Joanna Shields, Chief Executive Officer, BenevolentAI, said: "We have built BenevolentAI into a category-defining business by pioneering a revolutionary approach to drug discovery and development. Our AI platform empowers scientists to leverage biomedical and experimental data at scale to understand the underlying causes of disease and develop more effective medicines, faster. 
Every drug in our pipeline of 20+ programmes has been generated by the Benevolent Platform which has a proven track record of scientifically validated discoveries, both in-house and in partnership with leading pharmaceutical companies. The combination with Odyssey will allow us to scale our vision and ambition of uniting purposeful technology and cutting-edge science to discover life-changing medicines."Dr François Nader, Chairman of BenevolentAI, said: "BenevolentAI stands out at the forefront of the emerging sector of AI-driven drug discovery, with its innovative R&D platform and focus on mechanism-based drug discovery. Through its impressive track record, BenevolentAI has demonstrated the value of its platform in facilitating new discoveries for challenging diseases that have defied conventional research efforts. BenevolentAI is a company with significant growth potential, and the combination with Odyssey will further accelerate the company's ambitious plans to scale its platform and broaden its pipeline. Investments from existing shareholder Temasek, our strategic partner AstraZeneca and new shareholders Ally Bridge and Invus are a clear vote of confidence in BenevolentAI and its growth prospects."Michael Zaoui, Chairman of Odyssey, said: "We established Odyssey with the objective of supporting and bringing to the European capital markets promising European growth companies in the healthcare and/or technology sectors. BenevolentAI is positioned at the convergence of these two sectors, is a proven leader in the emerging area of AI-driven drug discovery, and is led by a very experienced management team. We are convinced BenevolentAI offers superior growth prospects in a sector which itself is at an inflection point. 
We see our combination with BenevolentAI as a compelling investment opportunity and are convinced that this transaction will create value for all our shareholders." Dr Olivier Brandicourt, healthcare expert to Odyssey, said: "BenevolentAI, leveraging its Knowledge Graph technology, is very well positioned to unlock a new phase of growth in the pharmaceutical R&D industry. BenevolentAI has already produced substantial evidence of the potency and efficiency of its platform, notably through the quality of its existing pipeline of candidates and its strong and expanding collaboration with AstraZeneca. We are confident in the ability of BenevolentAI to sustainably generate a pipeline of first- and best-in-class programmes and to become a key player in the global drug discovery sector." TRANSACTION HIGHLIGHTS: The combination of Odyssey, a €300m Euronext Amsterdam-listed special-purpose acquisition company focused on European healthcare and TMT growth companies, and BenevolentAI, a leading clinical-stage AI drug discovery company, represents the largest European SPAC merger announced to date and one of the largest Euronext Amsterdam biotech listings ever. Founded in 2013, BenevolentAI has built a proprietary AI-based drug discovery platform that, combined with the scientific expertise of approximately 300 world-class scientists and technologists and full wet-lab capabilities, enables the delivery of novel drug candidates with a higher probability of clinical success than those developed using traditional methods. BenevolentAI has a consistently proven track-record of scientifically validated discoveries. AI-driven drug discovery is at an inflection point and represents a significant growth opportunity. 
AI-led drug discovery is becoming a powerful tool to accelerate biomedical innovations and discoveries, with the potential to achieve materially higher clinical success rates and drive efficiencies across the drug discovery process through the use of data-driven insights and analysis. The sector is now at an inflection point and is increasingly becoming a strategic focus area for pharma companies, attracting significant capital and investment. BenevolentAI is a leader in AI-enabled drug discovery with a highly promising pipeline. BenevolentAI is a recognised industry leader in AI-enabled drug discovery; its existing pipeline alone could potentially address a patient base of over 260 million people with a current market opportunity above $30 billion. Through the combined capabilities of its leading Knowledge Graph, AI-enabled BenevolentAI Platform and wet-lab facilities, BenevolentAI is well-positioned not only to identify new drug targets for complex and intractable diseases, with higher success rates than traditional drug discovery methods, but also to develop these targets at pace while generating experimental data at scale to fuel continuous innovation. BenevolentAI has a proven scientific and commercial track-record. All of BenevolentAI's 20+ in-house drug programmes are platform-generated, discovered and developed using the company's AI and machine learning tools. This includes a novel target for treating ulcerative colitis and an atopic dermatitis programme in the clinic. BenevolentAI's multi-target commercial collaboration with AstraZeneca delivered the first novel AI-generated target for chronic kidney disease into AstraZeneca's portfolio, and this collaboration is now being expanded. 
BenevolentAI also successfully identified Eli Lilly's baricitinib as a treatment for COVID-19, which is now FDA emergency-use approved. BenevolentAI has a highly versatile and diversified business model. BenevolentAI is highly versatile and diversified, combining work across multiple therapeutic areas with the ability to develop pre-clinical and early-stage clinical assets in-house, to out-license or to collaborate with partners on new drug discovery and development. BenevolentAI has an experienced management team supported by industry-leading Board members and scientific advisors. BenevolentAI is led by an experienced management team with an outstanding track record in healthcare and technology, supported by industry-leading Board members and scientific advisors. Highly attractive value proposition with significant and tangible upside. Odyssey believes the investment opportunity represents an attractive value proposition with significant upside, as evidenced by the extensive pipeline of drug candidates and the potential of the BenevolentAI Platform. TRANSACTION OVERVIEW: Odyssey has agreed to combine with BenevolentAI at a pre-money valuation of €1.1 billion and a post-money valuation of up to €1.5 billion, prior to any redemptions. The combination will be effected by way of a share exchange. BenevolentAI shareholders will receive Class A ordinary shares (the "Ordinary Shares") of Odyssey in exchange for their shares of BenevolentAI. Their BenevolentAI options and RSUs will convert into options and RSUs of Odyssey. As a result of the share exchange, BenevolentAI will become a wholly-owned subsidiary of Odyssey, which, following the closing of the transaction, will change its name to BenevolentAI. BenevolentAI is expected to benefit from a strong cash position, including the €135m fully-committed PIPE, €300m of gross cash held in escrow by Odyssey (prior to any redemptions) and an estimated €56m of cash on BenevolentAI's balance sheet as at 30 November 2021. 
Assuming that no Odyssey shareholders elect to redeem their Ordinary Shares in connection with the transaction, current BenevolentAI shareholders are expected to own 67.4% of the combined company (including vesting RSUs and options), Odyssey shareholders (including the Sponsor) 23.5%, and PIPE investors 9.1% of the post-transaction pro-forma equity, respectively. None of the current shareholders of BenevolentAI will sell stock as part of the transaction, and BenevolentAI's core shareholders and current members of the board of directors of BenevolentAI, representing collectively approximately 86% of the current capital of BenevolentAI, will be subject to a lock-up of 180 days, subject to limited market standard exceptions. The Board of Directors of Odyssey and the Board of Directors of BenevolentAI have each unanimously approved the proposed transaction. The closing of the transaction is subject to the satisfaction or waiver of customary closing conditions, including the approval by a general meeting of Odyssey's shareholders and a minimum cash balance at closing (see "Conditions to Closing" below), and is expected to close in Q1 2022. Following the closing of the transaction, the combined company will be listed on Euronext Amsterdam and will trade under the ticker symbol "BAI". The combined company will be led by the current CEO of BenevolentAI, Joanna Shields, alongside BenevolentAI's industry-leading management and research team. Olivier Brandicourt, healthcare expert to Odyssey, will join the combined company's Board of Directors alongside Jean Raby, Odyssey's current co-CEO. Dr François Nader, current Chairman of the Board of Directors of BenevolentAI, will assume the role of Chairman of the Board of Directors of the combined company. The combined company will be headquartered in London. INVESTOR PRESENTATION: Odyssey and BenevolentAI will host an investor conference call today at 14:00 CET via webcast to discuss the proposed transaction. 
The webcast is accessible as an audio-only livestream via BenevolentAI's website (www.benevolent.com/investors); alternatively, investors can join via phone to listen to the webcast and participate in a Q&A with management. Participant Dial-in Numbers: UK: +44 (0) 33 0551 0200; UK Toll Free: 0808 109 0700; USA: +1 212 999 6659; USA Toll Free: 1 866 966 5335. Access reference: Odyssey and BenevolentAI Investor Call. Further information about the transaction is available on BenevolentAI's website (www.benevolent.com) and on the Odyssey website (www.odyssey-acquisition.com). ADVISORS: Goldman Sachs International is serving as exclusive financial advisor to BenevolentAI in connection with the business combination. Goldman Sachs International is also acting as placement agent on the PIPE. Latham & Watkins (London) LLP, NautaDutilh Avocats Luxembourg S.à r.l. and NautaDutilh N.V. are serving as legal advisors to BenevolentAI. J.P. Morgan AG and Zaoui & Co are serving as financial advisors to Odyssey. J.P. Morgan AG is also acting as placement agent on the PIPE. Skadden, Arps, Slate, Meagher & Flom (UK) LLP, ELVINGER HOSS PRUSSEN, société anonyme and Stibbe N.V. are serving as legal advisors to Odyssey. Odyssey was supported in its due diligence by Oliver Wyman (commercial advisors), Accuracy (accounting advisors), Arsène-Taxand (tax advisors) and former DeepMind Health Research Lead Trevor Back (AI expert). Linklaters LLP is serving as legal advisor to the placement agents on the PIPE. [1] Prior to any redemptions, excluding €56m of cash on BenevolentAI's balance sheet estimated as at 30 November 2021 and including transaction expenses. About BenevolentAI: BenevolentAI is a leading, clinical-stage AI drug discovery company. Through the combined capabilities of its AI platform, scientific expertise and wet-lab facilities, BenevolentAI is well-positioned to deliver novel drug candidates with a higher probability of clinical success than those developed using traditional methods. 
BenevolentAI has a consistently proven track-record of scientifically validated discoveries. The BenevolentAI Platform powers a growing in-house pipeline of over 20 drug programmes, spanning from target discovery to clinical studies, and it maintains successful commercial collaborations with leading pharmaceutical companies. BenevolentAI also identified Eli Lilly's baricitinib as a repurposing drug candidate for COVID-19, which has been authorised for emergency use by the FDA. BenevolentAI is headquartered in London, with a research facility in Cambridge (UK) and a further office in New York. About Odyssey: Odyssey is a special-purpose acquisition company incorporated in Luxembourg for the purpose of completing a merger, share exchange, asset acquisition, share purchase, reorganisation or similar business combination with a European business with principal activities in healthcare or TMT. On 2 July 2021, Odyssey raised €300 million for this purpose in its initial public offering. Odyssey is led by Chairman Michael Zaoui, co-CEOs Yoël Zaoui and Jean Raby, alongside industry experts Dr Olivier Brandicourt and Michel Combes. Contacts for BenevolentAI - Media: Rajin Kang, VP Communications, rajin.kang@benevolent.ai; Brunswick Group: Diana Vaughton / Ayesha Bharmal / Samantha Chiene, BenevolentAI@brunswickgroup.com, +44 (0) 20 7404 5959. Investors: Julia Balanova, Head of Investor Relations, julia.balanova@benevolent.ai. Contacts for Odyssey: info@odyssey-acquisition.com. KEY TERMS OF THE AGREEMENT. Business Combination Agreement: The combination will be effected by way of a share exchange. BenevolentAI shareholders will receive Ordinary Shares of Odyssey in exchange for their shares of BenevolentAI based on a consideration exchange multiple. Their BenevolentAI options and RSUs will convert into options and RSUs of Odyssey. 
As a result of the share exchange, BenevolentAI will become a wholly-owned subsidiary of Odyssey, which, following the closing of the transaction, will be considered the combined company. Consideration Exchange Multiple: Subject to the terms and conditions of the business combination agreement, the aggregate consideration to be received by the BenevolentAI shareholders in exchange for their BenevolentAI shares in connection with the business combination will be the number of Ordinary Shares equal to (i) €1,100,000,000 less (a) €50,000,000 and (b) €45,800,000, divided by (ii) €10.00 (such number of Ordinary Shares, the "Total Consideration Shares"). Accordingly, each BenevolentAI shareholder will receive the number of Ordinary Shares that is equal to (i) such shareholder's number of BenevolentAI shares (other than BenevolentAI G2 Growth Shares) multiplied by (ii) the Consideration Exchange Multiple (as defined below). The "Consideration Exchange Multiple" means the quotient of (i) the Total Consideration Shares divided by (ii) the Fully Diluted BenevolentAI Share Number. The "Fully Diluted BenevolentAI Share Number" means the number of BenevolentAI shares (other than BenevolentAI G2 Growth Shares) in issue immediately before the closing of the share exchange plus the number of Ordinary Shares that would be issued upon (i) the exercise of certain in-the-money vested options and (ii) the settlement of certain vested RSUs, if such exercise and settlement took place immediately before the closing of the share exchange. Representations and Warranties. Representations and Warranties of BenevolentAI: Under the business combination agreement, BenevolentAI made customary warranties to Odyssey relating to, among other things, organisation and standing; relevant securities; authority; binding agreement, governmental approvals, UK Takeover Code waiver, non-contravention, Odyssey's subsidiaries, records, accounts, additional financial matters, position since the reference date, compliance with 
law, data protection, litigation, material contracts and other obligations, intellectual property rights, information technology, insurance, anti-corruption; anti-money laundering; sanctions, employees and consultants, benefit plans, pensions, environmental matters, tax, properties, finders and brokers and information supplied. Representations and Warranties of BenevolentAI's Shareholders: BenevolentAI shareholders made customary warranties to Odyssey and BenevolentAI relating to, among other things, organisation and standing, authorisation; binding agreement and ownership of BenevolentAI shares. Representations and Warranties of Odyssey: Odyssey made customary warranties to BenevolentAI and the BenevolentAI shareholders relating to, among other things, organisation, authorisation; binding agreement, governmental approvals, non-contravention, capitalisation, Euronext Amsterdam and other regulatory filings; Odyssey financials; internal controls, absence of certain changes, compliance with laws, actions; orders; permits, taxes and returns; employees and employee benefit plans, properties, material contracts, transactions with affiliates, finders and brokers, anti-corruption; anti-money laundering; sanctions, insurance, subscription agreements, information supplied, escrow account and warranties. Representations and Warranties of Odyssey's Dutch Subsidiary: Odyssey Acquisition Subsidiary B.V. ("Odyssey's Dutch Subsidiary") made customary warranties to BenevolentAI and BenevolentAI shareholders relating to, among other things, organisation, authorisation; binding agreement, non-contravention; capitalisation, activities of Odyssey's Dutch Subsidiary, compliance with laws and finders and brokers. Material Adverse Effect: Under the business combination agreement, certain warranties of BenevolentAI, the BenevolentAI shareholders, Odyssey and Odyssey's Dutch Subsidiary are qualified in whole or in part by materiality thresholds. 
In addition, certain warranties of BenevolentAI, the BenevolentAI shareholders, Odyssey and Odyssey's Dutch Subsidiary are qualified in whole or in part by a material adverse effect standard for purposes of determining whether a breach of such warranties has occurred. Pursuant to the business combination agreement, material adverse effect means, with respect to any specified person, any state of facts, development, change, circumstance, occurrence, event or effect, that, individually or in the aggregate, (a) has had a material adverse effect on the business, assets, liabilities, condition (financial or otherwise), results of operations or prospects of such person and its subsidiaries; or (b) would reasonably be expected to prevent or materially delay or materially impede the ability of such person or any of its subsidiaries to consummate the transactions contemplated by the business combination agreement on a timely basis, in each case subject to certain customary exceptions. Covenants: The business combination agreement includes customary covenants of the parties with respect to business operations prior to consummation of the transactions contemplated thereby and efforts to satisfy conditions to consummation of the combination. Subject to certain exceptions, during the period between signing and closing, BenevolentAI and Odyssey will, and will cause their respective subsidiaries to, except as expressly contemplated by the business combination agreement or any ancillary document, as required by applicable law (including in respect of any COVID-19 measures) or as consented to by the other party, or as reasonably necessary in light of COVID-19 to protect the wellbeing of their respective employees generally or to mitigate the impact on BenevolentAI or Odyssey, as applicable, and their respective operations: (i) conduct their respective businesses, in all material respects, in the ordinary course of business consistent with past practice and (ii) comply with all laws 
applicable to BenevolentAI or Odyssey, as applicable, and its respective businesses, assets and employees. Conditions to Closing. Conditions to Each Party's Obligation to Close: The obligations of each party to consummate the transactions under the business combination agreement are in all respects subject to the satisfaction or written waiver (where permissible) by BenevolentAI and Odyssey of the following conditions: the receipt of the approval of Odyssey's shareholders meeting and such approval to be in full force and effect; that no law or order has been issued which has the effect of making the transactions under the business combination agreement illegal or void or which otherwise prevents or prohibits consummation of the transactions in whole or in part; the receipt of necessary consents of or with a governmental authority and such consent to be in full force and effect; the approval of the prospectus in connection with the combination by the Luxembourg financial supervisory authority (Commission de surveillance du secteur financier) (the "CSSF"), with such approval to be in full force and effect, and the CSSF's passporting of such prospectus to the Netherlands Authority for the Financial Markets (Stichting Autoriteit Financiële Markten); admission to listing and trading on Euronext Amsterdam of the Ordinary Shares issued in connection with the transactions; the combined entity's board of directors to be comprised, with effect from the effective time of the closing, exclusively by the list of nominees agreed to by BenevolentAI and Odyssey, and proposed by Odyssey upon such closing; Odyssey having at least an aggregate of €250 million of cash after taking into account payments by Odyssey for the shareholder redemption, the PIPE investment amount, and net of the deferred underwriting commission in connection with Odyssey's initial public offering (but before payment of any of BenevolentAI's or Odyssey's transaction expenses); BenevolentAI shareholders shall have performed 
in all material respects all of their respective obligations and complied in all material respects with all of their respective agreements and covenants under the business combination agreement to be performed or complied with by them; and if and to the extent that the United Kingdom's National Security and Investment Act 2021 (the "NSI Act") comes into force prior to the closing and the Investment Security Unit of the Department for Business, Energy and Industrial Strategy (the "ISU") indicates, in response to the consultation provided for in the business combination agreement, that the transactions contemplated thereby would or could potentially constitute a notifiable acquisition under the NSI Act, (A) the Secretary of State confirming that no further action will be taken under the NSI Act in relation to the share exchange and the other transactions contemplated by the business combination agreement, or (B) if the Secretary of State issues a call-in notice under the NSI Act in relation to such transactions (a "Call-In Notice"): (i) the parties receiving a final notification that no further action in relation to the Call-In Notice is to be taken under the NSI Act; or (ii) the Secretary of State making a final order in relation to such transactions under the NSI Act which permits such transactions to be completed subject to the provisions of such final order, and, to the extent relevant, all conditions, provisions or obligations contained in such final order necessary for completion of such transactions having been satisfied or complied with. Conditions to BenevolentAI's Obligation to Close: The obligations of BenevolentAI to consummate the transactions under the business combination agreement are subject to the satisfaction or written waiver (by BenevolentAI) of the following conditions: no Odyssey material adverse effect has occurred; (i) the Odyssey and Odyssey's Dutch Subsidiary fundamental warranties (i.e., the warranties with regard to organisation, authorisation 
and binding agreement, governmental approvals, non-contravention, and finder and broker fees) and Odyssey's Dutch Subsidiary fundamental warranties (i.e., the warranties with regard to organisation, authorisation; binding agreement and Odyssey's Dutch Subsidiary activities) are true and correct in all respects on and as at the date of the business combination agreement and as at the date of closing as if made on such date, except for those Odyssey and Odyssey's Dutch Subsidiary fundamental warranties that address matters only as at a particular date (which have been true and correct as at such date), (ii) the Odyssey and Odyssey's Dutch Subsidiary warranties with regard to capitalisation are true and correct in all respects (except for de minimis inaccuracies) on and as at the date of the business combination agreement and on and as at closing as if made on such closing date, except for those warranties that address matters only as at a particular date (which have been true and correct as at such date), (iii) all other Odyssey and Odyssey's Dutch Subsidiary warranties are true and correct in all respects on and as at the date of the business combination agreement and on and as at the closing date as if made on such date, except for those warranties that address matters only as at a particular date (which have been true and correct as at such date) and except for any failures to be true and correct that (without giving effect to any qualifications or limitations as to materiality or material adverse effect), individually or in the aggregate, have not had and would not reasonably be expected to have a material adverse effect in respect of Odyssey or Odyssey's Dutch Subsidiary, as applicable; and Odyssey and Odyssey's Dutch Subsidiary have performed in all material respects all of their respective obligations and complied in all material respects with all of their respective agreements and covenants under the business combination agreement at or prior to the closing 
date.

Conditions to Odyssey's Obligation to Close

The obligations of Odyssey to consummate the transactions contemplated by the business combination agreement are subject to the satisfaction or written waiver (by Odyssey) of the following conditions:

no BenevolentAI material adverse effect has occurred;

(i) the BenevolentAI fundamental warranties (i.e., the warranties with regard to organisation, standing, authority, governmental approvals, non-contravention, BenevolentAI's subsidiaries, and finder and broker fees) and the BenevolentAI shareholders fundamental warranties (i.e., the warranties with regard to organisation, standing, authorisation, binding agreement and ownership of shares) are true and correct in all respects on and as at the date of the business combination agreement and on and as at the closing date, as if made on such closing date, except for those BenevolentAI and BenevolentAI shareholder fundamental warranties that address matters only as at a particular date (which have been true and correct as at such date), (ii) the BenevolentAI warranties with regard to relevant securities are true and correct in all respects (except for de minimis inaccuracies) on and as at the date of the business combination agreement and on and as at the closing date as if made on such closing date, except for those warranties that address matters only as at a particular date (which have been true and correct as at such date), and (iii) all other warranties of BenevolentAI and BenevolentAI shareholders are true and correct in all respects on and as at the date of the business combination agreement and on and as at the closing date as if made on the closing date, except for those warranties that address matters only as at a particular date (which have been true and correct as at such date) and except for any failures to be true and correct that (without giving effect to any qualifications or limitations as to materiality or material adverse effect), individually or in the
aggregate, have not had and would not reasonably be expected to have a material adverse effect in respect of BenevolentAI or BenevolentAI shareholders, as applicable; and

BenevolentAI has performed in all material respects all of its obligations and complied in all material respects with all of its agreements and covenants under the business combination agreement at or prior to the closing date.

Termination

The business combination agreement may be terminated, and the transactions contemplated by the business combination agreement may be abandoned, at any time prior to closing as follows:

by mutual written consent of Odyssey and BenevolentAI;

by Odyssey or BenevolentAI, if any of the conditions to closing set forth in the business combination agreement have not been satisfied or waived by 6 June 2022 (the "Outside Date"), provided the parties shall use all reasonable endeavours to ensure the closing occurs before such date;

by Odyssey or BenevolentAI, if a governmental authority has issued an order or taken any other action permanently enjoining, restraining or otherwise prohibiting the transactions contemplated by the business combination agreement and such order or other action has become final and non-appealable, unless the failure to comply with any provision of the business combination agreement has been a substantial cause of such action by such governmental authority;

by BenevolentAI, if

by Odyssey, if

by either Odyssey or BenevolentAI, if Odyssey's extraordinary general meeting has been held (including any adjournment thereof) and concluded, the Odyssey shareholders have duly voted, and the approval of Odyssey's general meeting of shareholders was not obtained; or

by BenevolentAI, if Odyssey's board has changed its recommendation regarding the business combination agreement.

Support Agreement

In connection with the transactions, BenevolentAI, Odyssey, the Ordinary Shareholders, Odyssey Sponsor and certain shareholders of Odyssey Sponsor have entered into a
support agreement (the "Support Agreement"), pursuant to which the Ordinary Shareholders and Odyssey Sponsor have agreed to (i) vote all Odyssey shares held by them in favour of approval of entry into the business combination agreement and the ancillary documents, and the transactions contemplated thereby, including the matters to be approved by Odyssey's shareholders at the Business Combination EGM, and (ii) not redeem any of their shares in connection with the transactions. Under the Support Agreement, the Sponsor also waived any adjustment to the conversion ratio or any other anti-dil | Process Automation/Decision Making/Discovery | Life, Physical, and Social Science/Computer and Mathematical | null | null | null | null | null | null |
news | Susan Galer | Can Artificial Intelligence Take the Guesswork Out of the Customer Journey? | Artificial intelligence (AI) is shedding light on one of the most examined yet least understood experiences of modern life: the customer journey. From shopping malls... | https://news.sap.com/2021/12/can-artificial-intelligence-take-the-guesswork-out-of-the-customer-journey/ | 2021-12-07T13:15:19Z | Artificial intelligence (AI) is shedding light on one of the most examined yet least understood experiences of modern life: the customer journey. From shopping malls and sports arenas to train stations and city streets, C2RO is an AI-powered video analytics platform that captures anonymized data about people's movement so organizations can improve the customer experience.

"We analyze human behavior in physical spaces and transfer it into actionable data," said Tim Heaney, vice president of Sales at C2RO. "With a fact-based understanding of the number of people coming into a space, how they move through it and with whom, where they linger, and what they touch and eventually purchase, organizations can manage physical environments more efficiently to improve the customer experience and business results."

Real-Time Data Eases Commuter Experience

A transportation organization used data from C2RO to improve subway train and bus terminal planning.

"Based on the number of people boarding and leaving trains at certain times and days, as well as which direction they're headed at terminals, transportation managers could alleviate traffic crunches," said Heaney. "They could reverse escalator directions to help people more easily exit or enter terminals. Aligning vehicle dispatches to actual passenger traffic flows reduced wasted trips."
It also means fewer people waiting in the cold or forced to stand in crowded cars and buses.

Real-time data is particularly valuable for managing other operations, such as shopping malls and entertainment venues, during peak usage times like rush hour, holiday shopping, or performances. Companies can also use historical data for future planning.

Heaney added that monitoring in-person occupancy levels has helped organizations meet social distancing mandates during the pandemic.

Secure, Anonymized Data Powers Informed Decisions

C2RO's platform was designed to be fully GDPR compliant. The metadata that customers receive on dashboards through a browser or mobile device is anonymized for privacy protection. Companies decide on the frequency of data updates, from real time to hourly, daily, or weekly.

Revenue Growth from AI-Based Algorithms

There's nothing trailblazing about counting customers, Heaney acknowledged; companies have been doing this for decades. However, C2RO provides highly accurate, granular, data-driven insights based on an organization's KPIs.

"The data shows when someone came inside the store after looking at a promotion," said Heaney. "Did they visit the promotional display, examine the item, and visit the POS? Which entrance and exit did they use? This is valuable data to help companies quickly evaluate promotions and make adjustments if something isn't working as expected."

A case in point was a convenience store that used C2RO to determine the relationship between high-traffic displays, such as coffee, and other areas where product sales were lower. After discovering that 25% of the store's space had no traffic at all, management made layout changes to attract customers to overlooked areas and increase sales.

"Many customers tell us that their physical spaces are a blind spot, and our data opens their eyes to the metrics of what's going on in their space," said Heaney.
Mall owners can quantify the number of people visiting anchor tenants and identify cross-shopping patterns in other stores. With these statistics, they can sell space at a proven premium and help tenants drive stronger sales.

SAP.iO Grows Market Opportunities with Expertise

Heaney expected C2RO's growth trajectory to continue, thanks in part to support from SAP that began during the company's participation in the data and analytics cohort of SAP.iO Foundry Berlin.

"SAP's brand is strong worldwide, and providing us with introductions to their customers is a huge opportunity," he said. "Our customer behavioral data is complementary to SAP solutions like SAP Analytics Cloud and SAP Data Warehouse Cloud. We also valued SAP's sales and marketing expertise, helping us with our product messaging, value proposition, and packaging."

C2RO is available on SAP Store.

AI Intelligence Addresses Competitive Market Demands

Originally founded as a cloud robotics startup, C2RO evolved to take advantage of existing video camera infrastructure worldwide. Heaney brings over 30 years of business experience to his position, having worked extensively in telecom, IT, electro-optics, and computer vision.

Besides retail and transportation, he saw potential value for C2RO in industrial manufacturing for production efficiency improvements, as well as sports and entertainment venues, where fans could time food and drink breaks to avoid peak crowds at the refreshment stand.

"Tough markets demand smarter, more scientific solutions to understand space management and visitor behaviors," he said. "We're at an inflection point where we can train intelligent algorithms much faster and have secure servers on-site while taking advantage of the cloud. AI-based intelligence is revealing the customer journey to achieve business results never imagined before."

Follow me @smgaler.

This blog also appeared on SAP BrandVoice on Forbes.
| Detection and Monitoring/Discovery/Prediction/Decision Making | Business and Financial Operations/Sales and Related | null | null | null | null | null | null |
|
news | Ayushman Baruah | Deep-tech startups become hot targets for global firms | Startups focusing on disruptive tech that can solve complex global problems are in demand | https://www.livemint.com/companies/start-ups/deeptech-startups-become-hot-targets-for-global-firms-11637003764885.html | 2021-11-15T19:20:30Z | BENGALURU: Several deep-tech startups in India are being acquired by larger global companies to gain access to niche technologies and quality talent, indicating a strong demand for startups focusing on disruptive tech that can solve complex global problems.

In June, KPIT Technologies Ltd announced the signing of a definitive pact to buy a controlling stake in Bengaluru-based PathPartner Technology for about ₹191 crore. The latter is a specialist in providing product engineering services to customers across automotive, consumer electronics, broadcast, medical and IoT domains. PathPartner has an offshore research facility in California besides a presence in Germany and Japan.

In May, General Atlantic-owned KFin Technologies, a Hyderabad-based registry services firm, took a 17% stake in insurtech startup Artivatic.ai for an undisclosed sum. The acquirer also has an option to raise its stake to a majority holding in Artivatic.ai. The investment is expected to help KFintech venture into insurance tech as it looks to diversify its offerings.

In September, US-based cloud video surveillance firm Eagle Eye Networks acquired Bengaluru-based AI startup Uncanny Vision to boost its capabilities in artificial intelligence (AI) and analytics. Its deep learning algorithms enable recognition, identification and prediction to improve business operations, customer service, and site safety.

A month later, Accenture agreed to buy BRIDGEi2i, an AI and analytics firm based in Bengaluru. The deal is expected to add about 800 tech professionals to Accenture's Applied Intelligence practice, strengthening its global capabilities in data science, machine learning and AI-based insights.
Deep-tech startups are those whose business models are based on high-tech innovation in engineering or significant scientific advances. India had about 2,100 deep-tech startups as of 2020, as per industry body Nasscom.

To be sure, Nasscom launched the second edition of its deep-tech mentoring programme, called Deep Tech Club (DTC) 2.0, earlier this year. The initiative aims to nurture and promote Indian deep-tech startups and discover disruptive startups enabled by AI, ML, augmented reality, virtual reality, internet of things (IoT), robotics, blockchain, natural language processing (NLP) and similar technologies.

Deep-tech companies are increasingly targeted for acquisitions for two possible reasons, said industry experts and investors. "One is, large companies may want to fill certain technology gaps in their portfolio. The other reason is a very good tech team of a startup may be working on the wrong problem. So, that startup might get aqui-hired to bolster their tech talent," said Vinay Bansal, founder and chief executive officer, Inflection Point Ventures, an angel investment firm.

Huddle, an accelerator and fund for early-stage ventures, concurs with the view. "Many large companies look to acquire niche deep-tech startups to gain access to a technology stack that they may not be able to build on their own, as many of them may not be digital natives. Often, such acquisitions also offer quality talent at an affordable cost," said Sanil Sachar, founding partner at Huddle.

Kishor Patil, CEO of KPIT Technologies, said his company acquired PathPartner because it realized that the latter's competence in operating system software and low-level software, and its existing semiconductor partnerships for early access to platforms, are key to delivering complex production programs.
"Together, we (KPIT Technologies and PathPartner) can offer differentiated offerings for new-age vehicle architectures and strengthen our software integration expertise," he added.

Likewise, one of Huddle's portfolio companies, NeuroPixel.AI, a deep-tech AI/ML startup, has already seen large companies showing interest in its solutions and potential partnerships, said Ishaan Khosla, founding partner at Huddle.

Interestingly, many of these startups are being acquired in their early stages, when a majority of the fundamental suite of technologies is built. Khosla concluded, "Larger companies that are not traditionally tech-first look to acquire or partner with these deep-tech firms to access technology or talent." | Unknown | Management/Business and Financial Operations | null | null | null | null | null | null |