SAP taps Microsoft’s generative AI technologies
By Ryan Daws | May 17, 2023
Categories: Applications, Artificial Intelligence, Companies, Enterprise, Microsoft
About the author: Ryan is a senior editor at TechForge Media with over a decade of experience covering the latest technology and interviewing leading industry figures. He can often be sighted at tech conferences with a strong coffee in one hand and a laptop in the other. If it's geeky, he’s probably into it. Find him on Twitter (@Gadget_Ry) or Mastodon (@gadgetry@techhub.social)
SAP and Microsoft have announced a new collaboration aimed at utilising generative AI technology to address fundamental business challenges.
The partnership will integrate SAP SuccessFactors solutions with Microsoft 365 Copilot and Copilot in Viva Learning, as well as Microsoft’s Azure OpenAI Service, to harness the power of language models for analysing and generating natural language. These integrations will enable organisations to enhance their recruitment, talent development, and learning processes.
The skills gap is a significant issue faced by companies worldwide. Organisations struggle to bridge the divide between the skills they currently possess and the skills required for the future. Meeting this challenge means optimising recruitment strategies to attract the right talent and developing effective programmes to nurture employee growth. However, these tasks often involve manual and repetitive work, resulting in inefficiencies and missed opportunities.
Through their collaboration, SAP and Microsoft seek to streamline recruitment and employee learning processes using generative AI technology. By leveraging the Azure OpenAI Service API and SAP SuccessFactors data, SAP will create compelling and highly targeted job descriptions.
The integration between SAP SuccessFactors Recruiting solution and Microsoft 365 will enable people leaders to fine-tune job descriptions using Copilot in Microsoft Word, ensuring market competitiveness and detecting bias. The final job descriptions will seamlessly flow into SAP SuccessFactors solutions, eliminating disruptions to workflow. Additionally, SAP will utilise the Azure OpenAI Service API to provide interviewers with prompts and suggested questions based on candidate resumes and job descriptions within Microsoft Teams.
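Neither company has published the code behind these integrations, but the underlying pattern, prompting the Azure OpenAI Service with structured role data pulled from an HR system such as SAP SuccessFactors, can be sketched with the Azure OpenAI Python SDK. Everything below (endpoint, deployment name, role fields, prompts) is an illustrative placeholder rather than SAP's actual implementation.

```python
# Illustrative sketch only: generate a draft job description from structured role
# data using the Azure OpenAI Service (openai>=1.0 Python SDK). The endpoint,
# deployment name, and role fields are placeholders, not SAP's integration.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",  # placeholder
    api_key="YOUR_AZURE_OPENAI_KEY",                             # placeholder
    api_version="2024-02-01",
)

# Role data as it might be exported from an HR system such as SAP SuccessFactors.
role = {
    "title": "Data Engineer",
    "skills": ["Python", "SQL", "Azure Data Factory"],
    "location": "Berlin (hybrid)",
    "seniority": "mid-level",
}

response = client.chat.completions.create(
    model="gpt-35-turbo",  # name of the Azure deployment, chosen when the model is deployed
    temperature=0.4,
    messages=[
        {"role": "system",
         "content": "You write concise, inclusive job descriptions and avoid biased language."},
        {"role": "user",
         "content": f"Draft a job description for this role: {role}"},
    ],
)

print(response.choices[0].message.content)
```

The same call shape, with a different system prompt and the candidate's CV plus the job description as input, would cover the interview-question suggestions described above.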
Furthermore, the collaboration will facilitate integration between SAP SuccessFactors solutions and Microsoft Viva Learning. Employees will have the ability to use Copilot in Viva Learning to conduct natural language queries and receive personalised learning recommendations aligned with their career goals. As learning activities are completed, the SAP SuccessFactors portfolio will automatically update, providing organisations with an up-to-date view of the skills landscape within their workforce.
The partnership between SAP and Microsoft not only aims to revolutionise recruitment and employee development but also serves as a model for enhancing the capabilities of large language models across various industries. SAP’s extensive global data estate presents an opportunity to amplify the potential of AI tools in other fields.
SAP says that it adheres to industry standards and emphasises transparency, privacy, and unbiased decision-making. The company has established guiding principles for the use of AI in its software and collaborates with ethics experts to ensure ethical AI deployment.
By combining their respective strengths, SAP and Microsoft are paving the way for AI-powered innovations that will drive productivity and transform human resources.
Through the integration of generative AI technology into SAP SuccessFactors solutions and Microsoft’s productivity tools, organisations can effectively bridge the skills gap and empower their workforce to thrive in a rapidly-changing business landscape.
(Photo by Google DeepMind on Unsplash)
Meta’s open-source speech AI models support over 1,100 languages
By Ryan Daws | May 23, 2023
Categories: Applications, Artificial Intelligence, Companies, Development, Meta (Facebook), Voice Recognition
Advancements in machine learning and speech recognition technology have made information more accessible to people, particularly those who rely on voice to access information. However, the lack of labelled data for numerous languages poses a significant challenge in developing high-quality machine-learning models.
In response to this problem, the Meta-led Massively Multilingual Speech (MMS) project has made remarkable strides in expanding language coverage and improving the performance of speech recognition and synthesis models.
By combining self-supervised learning techniques with a diverse dataset of religious readings, the MMS project has expanded speech recognition coverage from the roughly 100 languages supported by existing models to over 1,100 languages.
Breaking down language barriers
To address the scarcity of labelled data for most languages, the MMS project utilised religious texts, such as the Bible, which have been translated into numerous languages.
These translations provided publicly available audio recordings of people reading the texts, enabling the creation of a dataset comprising readings of the New Testament in over 1,100 languages.
By including unlabeled recordings of other religious readings, the project expanded language coverage to recognise over 4,000 languages.
Despite the dataset’s specific domain and predominantly male speakers, the models performed equally well for male and female voices. Meta also says that training on religious texts did not introduce any religious bias into the models.
Overcoming challenges through self-supervised learning
Training conventional supervised speech recognition models with just 32 hours of data per language is inadequate.
To overcome this limitation, the MMS project leveraged the benefits of the wav2vec 2.0 self-supervised speech representation learning technique.
By training self-supervised models on approximately 500,000 hours of speech data across 1,400 languages, the project significantly reduced the reliance on labelled data.
The resulting models were then fine-tuned for specific speech tasks, such as multilingual speech recognition and language identification.
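The fine-tuned checkpoints Meta released can be loaded through the Hugging Face Transformers library. The sketch below assumes the multilingual ASR checkpoint "facebook/mms-1b-all", a local 16 kHz-capable audio file, and French ("fra") as the target language; all three are illustrative choices, not anything prescribed by the MMS project.

```python
# Minimal sketch: multilingual transcription with a released MMS checkpoint
# ("facebook/mms-1b-all") via Hugging Face Transformers. The audio path and the
# target language code ("fra" = French) are illustrative assumptions.
import torch
import torchaudio
from transformers import AutoProcessor, Wav2Vec2ForCTC

processor = AutoProcessor.from_pretrained("facebook/mms-1b-all")
model = Wav2Vec2ForCTC.from_pretrained("facebook/mms-1b-all")

# MMS swaps small per-language adapter weights rather than loading a new model.
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")

# Load a local recording and resample it to the 16 kHz mono input the model expects.
waveform, sample_rate = torchaudio.load("speech_sample.wav")  # placeholder path
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000).mean(dim=0)

inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)[0]
print(processor.decode(predicted_ids))
```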
Impressive results
Evaluation of the models trained on the MMS data revealed impressive results. In a comparison with OpenAI’s Whisper, the MMS models exhibited half the word error rate while covering 11 times more languages.
Furthermore, the MMS project successfully built text-to-speech systems for over 1,100 languages. Despite the limitation of having relatively few different speakers for many languages, the speech generated by these systems exhibited high quality.
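The text-to-speech side was released as one VITS-style checkpoint per language, also loadable through Transformers. The checkpoint name ("facebook/mms-tts-eng", i.e. English) and output path below are illustrative assumptions.

```python
# Illustrative sketch: synthesising speech with one of the per-language MMS TTS
# checkpoints ("facebook/mms-tts-eng" for English) via Hugging Face Transformers.
import torch
import scipy.io.wavfile
from transformers import VitsModel, AutoTokenizer

model = VitsModel.from_pretrained("facebook/mms-tts-eng")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-eng")

inputs = tokenizer("Speech technology is becoming more accessible.", return_tensors="pt")
with torch.no_grad():
    waveform = model(**inputs).waveform  # shape: (batch, samples)

# Write the result at the model's native sampling rate (16 kHz for the MMS TTS models).
scipy.io.wavfile.write("mms_tts_sample.wav",
                       rate=model.config.sampling_rate,
                       data=waveform[0].numpy())
```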
While the MMS models have shown promising results, it is essential to acknowledge their imperfections. Mistranscriptions or misinterpretations by the speech-to-text model could result in offensive or inaccurate language. The MMS project emphasises collaboration across the AI community to mitigate such risks.
You can read the MMS paper here or find the project on GitHub.
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Tags: ai, artificial intelligence, meta, meta mms, mms, speech recognition, text-to-speech, voice recognition
Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training
Hong Liu, Zhiyuan Li, David Hall, Percy Liang, Tengyu Ma
Stanford University
{hliu99, zhiyuanli, dlwh, pliang, tengyuma}@cs.stanford.edu
Abstract
Given the massive cost of language model pre-training, a non-trivial improvement of the optimization algorithm would lead to a material reduction in the time and cost of training. Adam and its variants have been state-of-the-art for years, and more sophisticated second-order (Hessian-based) optimizers often incur too much per-step overhead. In this paper, we propose Sophia, Second-order Clipped Stochastic Optimization, a simple scalable second-order optimizer that uses a lightweight estimate of the diagonal Hessian as the pre-conditioner. The update is the moving average of the gradients divided by the moving average of the estimated Hessian, followed by element-wise clipping. The clipping controls the worst-case update size and tames the negative impact of non-convexity and rapid change of the Hessian along the trajectory. Sophia only estimates the diagonal Hessian every handful of iterations, which has negligible average per-step time and memory overhead. On language modeling with GPT-2 models of sizes ranging from 125M to 770M, Sophia achieves a 2x speed-up compared with Adam in the number of steps, total compute, and wall-clock time. Theoretically, we show that Sophia adapts to the curvature in different components of the parameters, which can be highly heterogeneous for language modeling tasks. Our run-time bound does not depend on the condition number of the loss.
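The update described above can be written compactly. The following is a minimal illustrative sketch, not the authors' released implementation: it assumes a fresh diagonal Hessian estimate (for example from a Hutchinson-style estimator) is supplied only every handful of steps, and the hyperparameter values are placeholders.

```python
# Minimal illustrative sketch of the update described in the abstract: an
# exponential moving average (EMA) of the gradient is divided by an EMA of a
# diagonal Hessian estimate, and the ratio is clipped element-wise so the
# worst-case step size stays bounded. Hyperparameter values are illustrative.
import torch

def sophia_like_step(param, grad, m, h, hessian_estimate=None, *,
                     lr=1e-4, beta1=0.96, beta2=0.99, rho=0.05, eps=1e-12):
    """All tensors share param's shape. `hessian_estimate` is a fresh diagonal
    Hessian estimate; it is only computed, and hence only passed in, every
    handful of steps."""
    m.mul_(beta1).add_(grad, alpha=1 - beta1)                      # EMA of gradients
    if hessian_estimate is not None:
        h.mul_(beta2).add_(hessian_estimate, alpha=1 - beta2)      # EMA of diagonal Hessian
    ratio = m / torch.clamp(rho * h, min=eps)                      # pre-conditioned direction
    param.add_(torch.clamp(ratio, min=-1.0, max=1.0), alpha=-lr)   # element-wise clipping
    return param, m, h
```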
1 Introduction
Language models (LLMs) have gained phenomenal capabilities as their scale grows (Radford et al., 2019; Kaplan et al., 2020; Brown et al., 2020; Zhang et al., 2022; Touvron et al., 2023; OpenAI, 2023). However, pre-training LLMs is incredibly time-consuming due to the massive datasets and model sizes—hundreds of thousands of updates to the model parameters are required. For example, PaLM
